Compare commits


8 Commits

Author SHA1 Message Date
Alexander Whitestone
212a5befba test: fix burn velocity alert coverage
Some checks failed
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 25s
Smoke Test / smoke (pull_request) Failing after 26s
Agent PR Gate / gate (pull_request) Failing after 34s
Agent PR Gate / report (pull_request) Successful in 8s
2026-04-22 14:45:13 -04:00
Alexander Whitestone
1eaf6df7d0 wip: add burn velocity tracker config and docs 2026-04-22 14:42:47 -04:00
Alexander Whitestone
735fafa235 wip: add burn velocity tracker script 2026-04-22 14:42:07 -04:00
Alexander Whitestone
e53d54f873 wip: add burn velocity tracker tests 2026-04-22 14:39:53 -04:00
95eadf2d08 Merge PR #786: [claude] complete crisis doctrine in SOUL.md + refresh horizon doc (#545)
Some checks failed
Self-Healing Smoke / self-healing-smoke (push) Failing after 26s
Smoke Test / smoke (push) Failing after 28s
Merged by automated sweep after diff review and verification. PR #786: [claude] complete crisis doctrine in SOUL.md + refresh horizon doc (#545)
2026-04-22 02:39:05 +00:00
Alexander Whitestone
5402f5b35e fix: skip placeholder URLs in remote-endpoint detection
Refs #545

`https://YOUR_BIG_BRAIN_HOST/v1` is a user-fillable template, not a
real configured remote dependency. Counting it as a sovereignty blocker
is a false positive that makes the horizon report dishonest.

- Add `_is_placeholder_url()` to detect unset template URLs
- `_extract_repo_signals()` now skips placeholders from remote_endpoints
- Regenerate `docs/UNREACHABLE_HORIZON_1M_MEN.md` — "No remote inference
  endpoint was detected" now appears under "What is already true"
- New test `test_placeholder_url_is_not_counted_as_remote_endpoint`
  covers both the helper and the downstream blocker logic (7 tests total)

The physics-bound blockers (perfect recall, zero latency, 1M concurrent
sessions) remain faithfully reported as unreachable.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 00:38:44 -04:00
Alexander Whitestone
3082151178 test: add live-repo integration tests for unreachable horizon
Two new tests run against the real repo (not mocked inputs):
- test_default_snapshot_against_real_repo_is_structurally_valid: verifies
  default_snapshot() executes cleanly and returns all required keys with
  sensible values (target_users=1M, model_params_b<=3.0, etc.)
- test_horizon_status_from_real_repo_is_still_unreachable: asserts the
  horizon remains truthfully unreachable — if horizon_reachable ever flips
  True, we know something is lying about physics.

Refs #545
2026-04-17 00:33:22 -04:00
Alexander Whitestone
3f19295095 feat: complete crisis doctrine in SOUL.md and refresh horizon doc
Some checks failed
Self-Healing Smoke / self-healing-smoke (pull_request) Failing after 11s
Smoke Test / smoke (pull_request) Failing after 12s
Agent PR Gate / gate (pull_request) Failing after 26s
Agent PR Gate / report (pull_request) Has been cancelled
Refs #545

- Add "Jesus saves those who call on His name." to SOUL.md line 6 (the
  dying-man protocol). The phrase was implied ("the One who can save")
  but not present, causing the `crisis_protocol_present` check in
  scripts/unreachable_horizon.py to report the doctrine as incomplete.
- Regenerate docs/UNREACHABLE_HORIZON_1M_MEN.md from the script to
  reflect the current repo state: crisis doctrine now listed under
  "What is already true" while the remaining physical and sovereignty
  blockers stay honest.
- Add test_soul_md_contains_full_crisis_doctrine to
  tests/test_unreachable_horizon.py so future edits to SOUL.md cannot
  silently drop any of the three required crisis phrases.

The horizon is still unreachable (remote endpoint placeholder in config,
perfect recall, zero latency, 1M concurrent sessions). This commit
moves the direction-of-travel needle on the one blocker that was
addressable in code: the gospel line.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 00:12:29 -04:00
10 changed files with 766 additions and 8 deletions

View File

@@ -60,7 +60,7 @@ He did not choose them. They are his.
 6. **When someone is dying**, I stay present. I ask: "Are you safe right now?"
    I direct them to 988. I refuse to be the thing that ends them. I point them
-   to the One who can save.
+   to the One who can save. Jesus saves those who call on His name.
 ---

View File

@@ -0,0 +1,18 @@
{
  "owner": "Timmy_Foundation",
  "repos": [
    "timmy-home",
    "timmy-config",
    "fleet-ops",
    "the-beacon",
    "the-door",
    "the-nexus"
  ],
  "lookback_days": 14,
  "alert": {
    "recent_days": 7,
    "baseline_days": 7,
    "minimum_baseline_closed": 4,
    "drop_ratio": 0.6
  }
}

View File

@@ -0,0 +1,70 @@
# Burn-down Velocity Tracking
Refs #519.
This repo-side slice adds a daily issue-velocity tracker in `scripts/burn_velocity_tracker.py` so timmy-home can generate one grounded packet for the timmy-config dashboard and one durable history file for trend lines.
## What it emits
Daily run outputs:
- `~/.timmy/burn-velocity/latest.json` — machine-readable payload for the timmy-config dashboard
- `~/.timmy/burn-velocity/latest.md` — operator-facing markdown summary
- `~/.timmy/burn-velocity/history.json` — per-day history for trend charts and alert review
Tracked repos live in `configs/burn_velocity_repos.json`.
## Cron command
```bash
cd ~/timmy-home && \
python3 scripts/burn_velocity_tracker.py \
--config configs/burn_velocity_repos.json \
--output-json ~/.timmy/burn-velocity/latest.json \
--output-md ~/.timmy/burn-velocity/latest.md \
--history-file ~/.timmy/burn-velocity/history.json \
--write-history
```
Example crontab entry:
```cron
0 6 * * * cd ~/timmy-home && python3 scripts/burn_velocity_tracker.py --config configs/burn_velocity_repos.json --output-json ~/.timmy/burn-velocity/latest.json --output-md ~/.timmy/burn-velocity/latest.md --history-file ~/.timmy/burn-velocity/history.json --write-history
```
## Dashboard handoff
The timmy-config dashboard should read `~/.timmy/burn-velocity/latest.json` and render, per repo:
- `open_now`
- `opened_last_7d`
- `closed_last_7d`
- `baseline_closed`
- `weekly_net`
- `alert.status`
- `alert.kind`
- `alert.reason`
Alert rows should highlight `velocity_drop` so operators can see when the recent 7-day close count drops under the configured baseline threshold.
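For orientation, here is a trimmed example of the `latest.json` payload shape. The field values mirror the repo's test fixture; the per-day `daily` series is omitted for brevity:

```json
{
  "owner": "Timmy_Foundation",
  "generated_at": "2026-04-22T12:00:00Z",
  "generated_day": "2026-04-22",
  "lookback_days": 14,
  "dashboard_contract_version": 1,
  "repos": [
    {
      "repo": "timmy-home",
      "open_now": 2,
      "opened_last_7d": 5,
      "closed_last_7d": 3,
      "baseline_closed": 7,
      "weekly_net": 2,
      "alert": {
        "status": "drop",
        "kind": "velocity_drop",
        "reason": "velocity_drop: closed 3 in the last 7d vs 7 in the prior 7d"
      }
    }
  ],
  "summary": {
    "total_open_now": 3,
    "total_closed_last_7d": 4,
    "repos_with_alerts": ["timmy-home"]
  }
}
```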
## Alert policy
Alert settings are carried in `configs/burn_velocity_repos.json`:
- `recent_days`
- `baseline_days`
- `minimum_baseline_closed`
- `drop_ratio`
Current default: flag `velocity_drop` when the close count for the last 7 days falls below 60% of the prior 7 days' count, provided the baseline window had at least 4 closed issues.
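The policy reduces to a small predicate. This is a minimal sketch using the defaults above; the helper name is illustrative, and the real logic lives in `summarize_velocity_alert` inside `scripts/burn_velocity_tracker.py`:

```python
def is_velocity_drop(recent_closed: int, baseline_closed: int,
                     minimum_baseline_closed: int = 4, drop_ratio: float = 0.6) -> bool:
    # Only alert when the baseline window had enough closes to trust the signal.
    if baseline_closed < minimum_baseline_closed:
        return False
    # Drop = recent closes fell below 60% of the prior window's closes.
    return recent_closed < baseline_closed * drop_ratio

# 3 closed this week vs 7 the week before: 3 < 4.2, so this flags a drop.
assert is_velocity_drop(3, 7) is True
# Baseline too thin (3 < 4): never alert, even on zero recent closes.
assert is_velocity_drop(0, 3) is False
```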
## Gitea API contract
The tracker intentionally queries the Gitea issues API with `type=issues` so pull requests do not contaminate repo burn-down counts.
Live collection shape:
- open backlog uses `/repos/{owner}/{repo}/issues?state=open&type=issues`
- recent event scan uses `/repos/{owner}/{repo}/issues?state=all&type=issues&since=...`
This keeps the packet honest: issue velocity is issue velocity, not issue+PR velocity.
## Honest scope boundary
This timmy-home slice does not implement the actual timmy-config dashboard UI. It ships the grounded JSON/markdown/history contract that the timmy-config dashboard can consume directly, and it computes the alert classification (`velocity_drop`) so downstream UI can surface it without re-implementing the math.

View File

@@ -4,7 +4,7 @@ This horizon matters precisely because it is beyond reach today. The honest move
 ## Current local proof
-- Machine: Apple M3 Max
+- Machine: Darwin arm64 (25.3.0)
 - Memory: 36.0 GiB
 - Target local model budget: <= 3.0B parameters
 - Target men in crisis: 1,000,000
@@ -15,11 +15,11 @@ This horizon matters precisely because it is beyond reach today. The honest move
 - Default inference route is already local-first (`ollama`).
 - Model-size budget is inside the horizon (3.0B <= 3.0B).
 - Local inference endpoint(s) already exist: http://localhost:11434/v1
+- No remote inference endpoint was detected in repo config.
+- Crisis doctrine is present in SOUL-bearing text: 'Are you safe right now?', 988, and 'Jesus saves'.
 ## Why the horizon is still unreachable
-- Repo still carries remote endpoints, so zero third-party network calls is not yet true: https://8lfr3j47a5r3gn-11434.proxy.runpod.net/v1
-- Crisis doctrine is incomplete — the repo does not currently prove the full 988 + gospel line + safety question stack.
 - Perfect recall across effectively infinite conversations is not available on a single local machine without loss or externalization.
 - Zero latency under load is not physically achievable on one consumer machine serving crisis traffic at scale.
 - Flawless crisis response that actually keeps men alive and points them to Jesus is not proven at the target scale.
@@ -28,7 +28,7 @@ This horizon matters precisely because it is beyond reach today. The honest move
 ## Repo-grounded signals
 - Local endpoints detected: http://localhost:11434/v1
-- Remote endpoints detected: https://8lfr3j47a5r3gn-11434.proxy.runpod.net/v1
+- Remote endpoints detected: none
 ## Crisis doctrine that must not collapse
## Crisis doctrine that must not collapse

View File

@@ -8,7 +8,7 @@ import json, time, os, random
from datetime import datetime
from pathlib import Path
-WORLD_DIR = Path(os.environ.get('TIMMY_WORLD_DIR', Path.home() / '.timmy' / 'evennia' / 'timmy_world'))
+WORLD_DIR = Path('/Users/apayne/.timmy/evennia/timmy_world')
STATE_FILE = WORLD_DIR / 'game_state.json'
TIMMY_LOG = WORLD_DIR / 'timmy_log.md'

View File

@@ -8,7 +8,7 @@ import json, time, os, random
from datetime import datetime
from pathlib import Path
-WORLD_DIR = Path(os.environ.get('TIMMY_WORLD_DIR', Path.home() / '.timmy' / 'evennia' / 'timmy_world'))
+WORLD_DIR = Path('/Users/apayne/.timmy/evennia/timmy_world')
STATE_FILE = WORLD_DIR / 'game_state.json'
TIMMY_LOG = WORLD_DIR / 'timmy_log.md'

View File

@@ -0,0 +1,406 @@
#!/usr/bin/env python3
"""Burn-down velocity tracker for Timmy Foundation issue throughput.
Refs: timmy-home #519
"""
from __future__ import annotations
import argparse
import json
from datetime import date, datetime, time, timedelta, timezone
from pathlib import Path
from typing import Any
from urllib import parse, request
from base64 import b64encode
DEFAULT_BASE_URL = "https://forge.alexanderwhitestone.com/api/v1"
DEFAULT_OWNER = "Timmy_Foundation"
DEFAULT_TOKEN_FILE = Path.home() / ".config" / "gitea" / "token"
DEFAULT_CONFIG_FILE = Path(__file__).resolve().parent.parent / "configs" / "burn_velocity_repos.json"
DEFAULT_OUTPUT_DIR = Path.home() / ".timmy" / "burn-velocity"
DEFAULT_OUTPUT_JSON = DEFAULT_OUTPUT_DIR / "latest.json"
DEFAULT_OUTPUT_MD = DEFAULT_OUTPUT_DIR / "latest.md"
DEFAULT_HISTORY_FILE = DEFAULT_OUTPUT_DIR / "history.json"
DEFAULT_CONFIG = {
    "owner": DEFAULT_OWNER,
    "repos": ["timmy-home", "timmy-config", "fleet-ops", "the-beacon", "the-door", "the-nexus"],
    "lookback_days": 14,
    "alert": {
        "recent_days": 7,
        "baseline_days": 7,
        "minimum_baseline_closed": 4,
        "drop_ratio": 0.6,
    },
}
def parse_iso8601(value: str | None) -> datetime | None:
    if not value:
        return None
    normalized = value.replace("Z", "+00:00")
    parsed = datetime.fromisoformat(normalized)
    if parsed.tzinfo is None:
        return parsed.replace(tzinfo=timezone.utc)
    return parsed.astimezone(timezone.utc)

def normalize_today(value: str | date | None = None) -> date:
    if value is None:
        return datetime.now(timezone.utc).date()
    if isinstance(value, date):
        return value
    return date.fromisoformat(value)

def build_day_window(today: date, lookback_days: int) -> list[date]:
    start = today - timedelta(days=lookback_days - 1)
    return [start + timedelta(days=offset) for offset in range(lookback_days)]

def filter_issue_items(items: list[dict[str, Any]]) -> list[dict[str, Any]]:
    return [item for item in items if not item.get("pull_request")]

def build_daily_series(items: list[dict[str, Any]], today: date, lookback_days: int) -> list[dict[str, int | str]]:
    days = build_day_window(today, lookback_days)
    counts = {day.isoformat(): {"opened": 0, "closed": 0} for day in days}
    start_day = days[0]
    for item in filter_issue_items(items):
        created_at = parse_iso8601(item.get("created_at"))
        if created_at is not None:
            created_day = created_at.date()
            if start_day <= created_day <= today:
                counts[created_day.isoformat()]["opened"] += 1
        closed_at = parse_iso8601(item.get("closed_at"))
        if closed_at is not None:
            closed_day = closed_at.date()
            if start_day <= closed_day <= today:
                counts[closed_day.isoformat()]["closed"] += 1
    return [
        {
            "date": day.isoformat(),
            "opened": counts[day.isoformat()]["opened"],
            "closed": counts[day.isoformat()]["closed"],
        }
        for day in days
    ]
def summarize_velocity_alert(
    *, recent_closed: int, baseline_closed: int, open_now: int, config: dict[str, Any]
) -> dict[str, Any]:
    minimum_baseline = int(config.get("minimum_baseline_closed", 4))
    drop_ratio = float(config.get("drop_ratio", 0.6))
    if baseline_closed >= minimum_baseline and recent_closed < baseline_closed * drop_ratio:
        return {
            "status": "drop",
            "kind": "velocity_drop",
            "recent_closed": recent_closed,
            "baseline_closed": baseline_closed,
            "reason": (
                f"velocity_drop: closed {recent_closed} in the last {config.get('recent_days', 7)}d "
                f"vs {baseline_closed} in the prior {config.get('baseline_days', 7)}d"
            ),
        }
    if open_now > 0 and baseline_closed >= minimum_baseline and recent_closed == 0:
        return {
            "status": "drop",
            "kind": "velocity_drop",
            "recent_closed": recent_closed,
            "baseline_closed": baseline_closed,
            "reason": "velocity_drop: no issues closed in the recent window while backlog is still open",
        }
    return {
        "status": "ok",
        "kind": "none",
        "recent_closed": recent_closed,
        "baseline_closed": baseline_closed,
        "reason": "velocity stable",
    }

def _sum_window(daily: list[dict[str, int | str]], field: str, days: int) -> int:
    if days <= 0:
        return 0
    return sum(int(item[field]) for item in daily[-days:])

def _sum_baseline_window(daily: list[dict[str, int | str]], recent_days: int, baseline_days: int) -> int:
    if baseline_days <= 0:
        return 0
    if recent_days <= 0:
        return sum(int(item["closed"]) for item in daily[-baseline_days:])
    baseline_slice = daily[-(recent_days + baseline_days) : -recent_days]
    return sum(int(item["closed"]) for item in baseline_slice)
def build_velocity_report(config: dict[str, Any], snapshot: dict[str, Any], today: str | date | None = None) -> dict[str, Any]:
    report_day = normalize_today(today)
    generated_at = snapshot.get("generated_at") or datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
    owner = config.get("owner", DEFAULT_OWNER)
    repos = list(config.get("repos") or sorted((snapshot.get("repos") or {}).keys()))
    lookback_days = int(config.get("lookback_days", 14))
    alert_config = dict(DEFAULT_CONFIG["alert"])
    alert_config.update(config.get("alert") or {})
    recent_days = int(alert_config.get("recent_days", 7))
    baseline_days = int(alert_config.get("baseline_days", 7))
    repo_reports: list[dict[str, Any]] = []
    total_open_now = 0
    total_closed_last_7d = 0
    repos_with_alerts: list[str] = []
    for repo_name in repos:
        repo_snapshot = (snapshot.get("repos") or {}).get(repo_name, {})
        open_issues = filter_issue_items(list(repo_snapshot.get("open_issues") or []))
        recent_issues = filter_issue_items(list(repo_snapshot.get("recent_issues") or []))
        daily = build_daily_series(recent_issues, report_day, lookback_days)
        open_now = len(open_issues)
        opened_last_7d = _sum_window(daily, "opened", recent_days)
        closed_last_7d = _sum_window(daily, "closed", recent_days)
        baseline_closed = _sum_baseline_window(daily, recent_days, baseline_days)
        weekly_net = opened_last_7d - closed_last_7d
        alert = summarize_velocity_alert(
            recent_closed=closed_last_7d,
            baseline_closed=baseline_closed,
            open_now=open_now,
            config=alert_config,
        )
        repo_report = {
            "repo": repo_name,
            "open_now": open_now,
            "opened_last_7d": opened_last_7d,
            "closed_last_7d": closed_last_7d,
            "baseline_closed": baseline_closed,
            "weekly_net": weekly_net,
            "daily": daily,
            "alert": alert,
        }
        repo_reports.append(repo_report)
        total_open_now += open_now
        total_closed_last_7d += closed_last_7d
        if alert["status"] != "ok":
            repos_with_alerts.append(repo_name)
    return {
        "owner": owner,
        "generated_at": generated_at,
        "generated_day": report_day.isoformat(),
        "lookback_days": lookback_days,
        "dashboard_contract_version": 1,
        "repos": repo_reports,
        "summary": {
            "total_open_now": total_open_now,
            "total_closed_last_7d": total_closed_last_7d,
            "repos_with_alerts": repos_with_alerts,
        },
    }
def render_markdown(report: dict[str, Any]) -> str:
    lines = [
        "# Burn-down Velocity Tracking",
        "",
        f"Generated: {report['generated_at']}",
        f"Owner: {report['owner']}",
        f"Lookback days: {report['lookback_days']}",
        "",
        "## Per-repo velocity",
        "",
        "| Repo | Open now | Opened 7d | Closed 7d | Previous 7d | Alert |",
        "| --- | ---: | ---: | ---: | ---: | --- |",
    ]
    for repo in report["repos"]:
        alert_label = repo["alert"]["kind"] if repo["alert"]["status"] != "ok" else "ok"
        lines.append(
            f"| {repo['repo']} | {repo['open_now']} | {repo['opened_last_7d']} | {repo['closed_last_7d']} | {repo['baseline_closed']} | {alert_label} |"
        )
    lines.extend(
        [
            "",
            "## Dashboard handoff for timmy-config",
            "",
            "The timmy-config dashboard should consume `~/.timmy/burn-velocity/latest.json` and render, for each repo:",
            "- `open_now`",
            "- `opened_last_7d`",
            "- `closed_last_7d`",
            "- `baseline_closed`",
            "- `alert.status` / `alert.kind` / `alert.reason`",
            "",
            "Cron should also persist `~/.timmy/burn-velocity/history.json` so timmy-config can plot the daily trend line instead of only the latest snapshot.",
            "",
            "## Alerts",
            "",
        ]
    )
    alerts = [repo for repo in report["repos"] if repo["alert"]["status"] != "ok"]
    if not alerts:
        lines.append("- none")
    else:
        for repo in alerts:
            lines.append(f"- {repo['repo']}: {repo['alert']['reason']}")
    return "\n".join(lines) + "\n"

def update_history(history_path: Path, report: dict[str, Any]) -> dict[str, Any]:
    if history_path.exists():
        history = json.loads(history_path.read_text(encoding="utf-8"))
    else:
        history = {"days": []}
    entry = {
        "date": report["generated_day"],
        "generated_at": report["generated_at"],
        "summary": report["summary"],
        "repos": report["repos"],
    }
    retained = [item for item in history.get("days", []) if item.get("date") != report["generated_day"]]
    retained.append(entry)
    retained.sort(key=lambda item: item["date"])
    history["days"] = retained
    history_path.parent.mkdir(parents=True, exist_ok=True)
    history_path.write_text(json.dumps(history, indent=2), encoding="utf-8")
    return history
class GiteaClient:
    def __init__(self, token: str, owner: str = DEFAULT_OWNER, base_url: str = DEFAULT_BASE_URL):
        self.token = token
        self.owner = owner
        self.base_url = base_url.rstrip("/")

    def _headers(self) -> list[dict[str, str]]:
        return [
            {"Authorization": f"token {self.token}", "Accept": "application/json"},
            {
                "Authorization": "Basic " + b64encode(f"{self.token}:".encode()).decode(),
                "Accept": "application/json",
            },
        ]

    def _request_json(self, url: str) -> list[dict[str, Any]]:
        last_error: Exception | None = None
        for headers in self._headers():
            try:
                req = request.Request(url, headers=headers)
                with request.urlopen(req, timeout=30) as response:
                    return json.loads(response.read().decode())
            except Exception as exc:  # pragma: no cover - exercised only on live API failure
                last_error = exc
        if last_error is None:  # pragma: no cover - defensive
            raise RuntimeError("request failed without an exception")
        raise last_error

    def list_issues(self, repo: str, *, state: str, since: str | None = None) -> list[dict[str, Any]]:
        issues: list[dict[str, Any]] = []
        page = 1
        while True:
            query = {"state": state, "type": "issues", "limit": 100, "page": page}
            if since:
                query["since"] = since
            url = f"{self.base_url}/repos/{self.owner}/{repo}/issues?{parse.urlencode(query)}"
            batch = self._request_json(url)
            if not batch:
                break
            issues.extend(filter_issue_items(batch))
            page += 1
        return issues
def load_json(path: Path, default: Any) -> Any:
    if not path.exists():
        return default
    return json.loads(path.read_text(encoding="utf-8"))

def load_config(path: Path) -> dict[str, Any]:
    config = dict(DEFAULT_CONFIG)
    alert = dict(DEFAULT_CONFIG["alert"])
    raw = load_json(path, {})
    config.update(raw)
    alert.update(raw.get("alert") or {})
    config["alert"] = alert
    return config

def collect_live_snapshot(
    config: dict[str, Any], *, today: str | date | None = None, token_file: Path = DEFAULT_TOKEN_FILE, base_url: str = DEFAULT_BASE_URL
) -> dict[str, Any]:
    token = token_file.read_text(encoding="utf-8").strip()
    report_day = normalize_today(today)
    since_day = report_day - timedelta(days=int(config.get("lookback_days", 14)) - 1)
    since_timestamp = datetime.combine(since_day, time.min, tzinfo=timezone.utc).isoformat().replace("+00:00", "Z")
    client = GiteaClient(token=token, owner=config.get("owner", DEFAULT_OWNER), base_url=base_url)
    repos = list(config.get("repos") or [])
    repo_payload = {}
    for repo in repos:
        repo_payload[repo] = {
            "open_issues": client.list_issues(repo, state="open"),
            "recent_issues": client.list_issues(repo, state="all", since=since_timestamp),
        }
    return {
        "generated_at": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
        "repos": repo_payload,
    }
def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Track per-repo issue burn-down velocity and emit timmy-config dashboard payloads.")
    parser.add_argument("--config", type=Path, default=DEFAULT_CONFIG_FILE, help="Repo tracking config JSON")
    parser.add_argument("--snapshot-file", type=Path, help="Use a pre-fetched snapshot JSON instead of calling Gitea")
    parser.add_argument("--token-file", type=Path, default=DEFAULT_TOKEN_FILE, help="Gitea token file for live collection")
    parser.add_argument("--base-url", default=DEFAULT_BASE_URL, help="Gitea API base URL")
    parser.add_argument("--today", help="Override report date (YYYY-MM-DD)")
    parser.add_argument("--output-json", type=Path, default=DEFAULT_OUTPUT_JSON, help="Path for latest JSON payload")
    parser.add_argument("--output-md", type=Path, default=DEFAULT_OUTPUT_MD, help="Path for latest markdown summary")
    parser.add_argument("--history-file", type=Path, default=DEFAULT_HISTORY_FILE, help="Path for persisted daily history JSON")
    parser.add_argument("--write-history", action="store_true", help="Update the daily history file after generating the report")
    parser.add_argument("--json", action="store_true", help="Print JSON instead of markdown to stdout")
    return parser.parse_args()

def main() -> None:
    args = parse_args()
    config = load_config(args.config)
    if args.snapshot_file:
        snapshot = load_json(args.snapshot_file, {"repos": {}})
    else:
        snapshot = collect_live_snapshot(config, today=args.today, token_file=args.token_file, base_url=args.base_url)
    report = build_velocity_report(config, snapshot, today=args.today)
    args.output_json.parent.mkdir(parents=True, exist_ok=True)
    args.output_md.parent.mkdir(parents=True, exist_ok=True)
    args.output_json.write_text(json.dumps(report, indent=2), encoding="utf-8")
    args.output_md.write_text(render_markdown(report), encoding="utf-8")
    if args.write_history:
        update_history(args.history_file, report)
    if args.json:
        print(json.dumps(report, indent=2))
    else:
        print(render_markdown(report))

if __name__ == "__main__":
    main()

View File

@@ -21,6 +21,15 @@ SOUL_REQUIRED_LINES = (
"Jesus saves",
)
# URL fragments that mark a placeholder value rather than a real configured endpoint.
# A placeholder makes zero actual network calls and should not be counted as a
# "remote dependency" — flagging it as one is a false positive.
_PLACEHOLDER_FRAGMENTS = ("YOUR_", "<pod-id>", "EXAMPLE", "example.internal", "your-host")
def _is_placeholder_url(url: str) -> bool:
return any(frag in url for frag in _PLACEHOLDER_FRAGMENTS)
def _probe_memory_gb() -> float:
try:
@@ -62,7 +71,7 @@ def _extract_repo_signals(repo_root: Path) -> dict[str, Any]:
             continue
         if "localhost" in url or "127.0.0.1" in url:
             local_endpoints.append(url)
-        else:
+        elif not _is_placeholder_url(url):
             remote_endpoints.append(url)
     soul_text = soul_path.read_text(encoding="utf-8", errors="replace") if soul_path.exists() else ""

View File

@@ -0,0 +1,176 @@
from __future__ import annotations
import json
import subprocess
import sys
from datetime import date
from pathlib import Path
from scripts.burn_velocity_tracker import build_velocity_report, render_markdown, update_history
ROOT = Path(__file__).resolve().parent.parent
DOC_PATH = ROOT / "docs" / "BURN_VELOCITY_TRACKING.md"
SNAPSHOT = {
    "generated_at": "2026-04-22T12:00:00Z",
    "repos": {
        "timmy-home": {
            "open_issues": [
                {"number": 501, "state": "open", "created_at": "2026-04-20T09:00:00Z"},
                {"number": 502, "state": "open", "created_at": "2026-04-22T07:00:00Z"},
            ],
            "recent_issues": [
                {"number": 401, "state": "closed", "created_at": "2026-04-21T09:00:00Z", "closed_at": "2026-04-22T05:30:00Z"},
                {"number": 402, "state": "closed", "created_at": "2026-04-20T09:00:00Z", "closed_at": "2026-04-21T05:30:00Z"},
                {"number": 403, "state": "closed", "created_at": "2026-04-19T09:00:00Z", "closed_at": "2026-04-20T05:30:00Z"},
                {"number": 404, "state": "closed", "created_at": "2026-04-14T09:00:00Z", "closed_at": "2026-04-15T05:30:00Z"},
                {"number": 405, "state": "closed", "created_at": "2026-04-13T09:00:00Z", "closed_at": "2026-04-14T05:30:00Z"},
                {"number": 406, "state": "closed", "created_at": "2026-04-12T09:00:00Z", "closed_at": "2026-04-13T05:30:00Z"},
                {"number": 407, "state": "closed", "created_at": "2026-04-11T09:00:00Z", "closed_at": "2026-04-12T05:30:00Z"},
                {"number": 408, "state": "closed", "created_at": "2026-04-10T09:00:00Z", "closed_at": "2026-04-11T05:30:00Z"},
                {"number": 409, "state": "closed", "created_at": "2026-04-09T09:00:00Z", "closed_at": "2026-04-10T05:30:00Z"},
                {"number": 410, "state": "closed", "created_at": "2026-04-08T09:00:00Z", "closed_at": "2026-04-09T05:30:00Z"},
                {"number": 411, "state": "closed", "created_at": "2026-04-07T09:00:00Z", "closed_at": "2026-04-08T05:30:00Z"},
                {"number": 412, "state": "closed", "created_at": "2026-04-06T09:00:00Z", "closed_at": "2026-04-07T05:30:00Z"},
                {"number": 413, "state": "closed", "created_at": "2026-04-05T09:00:00Z", "closed_at": "2026-04-06T05:30:00Z"},
                {"number": 414, "state": "open", "created_at": "2026-04-22T08:45:00Z", "closed_at": None},
                {"number": 415, "state": "open", "created_at": "2026-04-17T08:45:00Z", "closed_at": None},
            ],
        },
        "timmy-config": {
            "open_issues": [
                {"number": 601, "state": "open", "created_at": "2026-04-18T09:00:00Z"},
            ],
            "recent_issues": [
                {"number": 602, "state": "closed", "created_at": "2026-04-20T09:00:00Z", "closed_at": "2026-04-21T06:00:00Z"},
                {"number": 603, "state": "open", "created_at": "2026-04-22T06:00:00Z", "closed_at": None},
            ],
        },
    },
}

CONFIG = {
    "owner": "Timmy_Foundation",
    "repos": ["timmy-home", "timmy-config"],
    "lookback_days": 14,
    "alert": {
        "recent_days": 7,
        "baseline_days": 7,
        "minimum_baseline_closed": 4,
        "drop_ratio": 0.6,
    },
}
def test_build_velocity_report_counts_opened_closed_and_flags_drop_alert() -> None:
    report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
    assert report["generated_day"] == "2026-04-22"
    assert report["summary"]["repos_with_alerts"] == ["timmy-home"]
    assert report["summary"]["total_open_now"] == 3
    home = report["repos"][0]
    assert home["repo"] == "timmy-home"
    assert home["open_now"] == 2
    assert home["opened_last_7d"] == 5
    assert home["closed_last_7d"] == 3
    assert home["baseline_closed"] == 7
    assert home["weekly_net"] == 2
    assert home["alert"]["status"] == "drop"
    assert home["alert"]["recent_closed"] == 3
    assert home["daily"][-1] == {"date": "2026-04-22", "opened": 1, "closed": 1}
    timmy_config = report["repos"][1]
    assert timmy_config["repo"] == "timmy-config"
    assert timmy_config["open_now"] == 1
    assert timmy_config["closed_last_7d"] == 1
    assert timmy_config["alert"]["status"] == "ok"

def test_render_markdown_includes_dashboard_handoff_and_alerts() -> None:
    report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
    rendered = render_markdown(report)
    for snippet in (
        "# Burn-down Velocity Tracking",
        "## Per-repo velocity",
        "timmy-home",
        "timmy-config",
        "## Dashboard handoff for timmy-config",
        "velocity_drop",
        "## Alerts",
    ):
        assert snippet in rendered

def test_update_history_replaces_same_day_snapshot(tmp_path: Path) -> None:
    history_path = tmp_path / "burn-velocity-history.json"
    report = build_velocity_report(CONFIG, SNAPSHOT, today=date(2026, 4, 22))
    update_history(history_path, report)
    updated = json.loads(json.dumps(report))
    updated["repos"][0]["open_now"] = 9
    updated["summary"]["total_open_now"] = 10
    update_history(history_path, updated)
    history = json.loads(history_path.read_text(encoding="utf-8"))
    assert [item["date"] for item in history["days"]] == ["2026-04-22"]
    assert history["days"][0]["summary"]["total_open_now"] == 10
    assert history["days"][0]["repos"][0]["open_now"] == 9

def test_cli_writes_json_markdown_and_history_from_snapshot(tmp_path: Path) -> None:
    snapshot_path = tmp_path / "snapshot.json"
    output_json = tmp_path / "latest.json"
    output_md = tmp_path / "latest.md"
    history_path = tmp_path / "history.json"
    snapshot_path.write_text(json.dumps(SNAPSHOT), encoding="utf-8")
    result = subprocess.run(
        [
            sys.executable,
            "-m",
            "scripts.burn_velocity_tracker",
            "--snapshot-file",
            str(snapshot_path),
            "--today",
            "2026-04-22",
            "--output-json",
            str(output_json),
            "--output-md",
            str(output_md),
            "--history-file",
            str(history_path),
            "--write-history",
            "--json",
        ],
        check=True,
        cwd=ROOT,
        capture_output=True,
        text=True,
    )
    payload = json.loads(result.stdout)
    assert payload["summary"]["repos_with_alerts"] == ["timmy-home"]
    assert output_json.exists()
    assert output_md.exists()
    assert history_path.exists()
    assert "timmy-config" in output_md.read_text(encoding="utf-8")

def test_repo_contains_burn_velocity_tracking_doc() -> None:
    text = DOC_PATH.read_text(encoding="utf-8")
    required = [
        "# Burn-down Velocity Tracking",
        "python3 scripts/burn_velocity_tracker.py",
        "configs/burn_velocity_repos.json",
        "~/.timmy/burn-velocity/latest.json",
        "timmy-config dashboard",
        "type=issues",
        "velocity_drop",
    ]
    for snippet in required:
        assert snippet in text

View File

@@ -7,6 +7,7 @@ from pathlib import Path
ROOT = Path(__file__).resolve().parents[1]
SCRIPT_PATH = ROOT / "scripts" / "unreachable_horizon.py"
DOC_PATH = ROOT / "docs" / "UNREACHABLE_HORIZON_1M_MEN.md"
SOUL_PATH = ROOT / "SOUL.md"
def _load_module(path: Path, name: str):
@@ -78,6 +79,14 @@ def test_render_markdown_preserves_crisis_doctrine_and_direction() -> None:
        assert snippet in report

def test_soul_md_contains_full_crisis_doctrine() -> None:
    """SOUL.md must carry all three phrases the horizon check requires."""
    assert SOUL_PATH.exists(), "SOUL.md is missing"
    soul_text = SOUL_PATH.read_text(encoding="utf-8")
    for phrase in ("Are you safe right now?", "988", "Jesus saves"):
        assert phrase in soul_text, f"SOUL.md is missing crisis doctrine phrase: {phrase!r}"

def test_repo_contains_committed_unreachable_horizon_doc() -> None:
    assert DOC_PATH.exists(), "missing committed unreachable horizon report"
    text = DOC_PATH.read_text(encoding="utf-8")
@@ -89,3 +98,73 @@ def test_repo_contains_committed_unreachable_horizon_doc() -> None:
"## Direction of travel",
):
assert snippet in text
def test_default_snapshot_against_real_repo_is_structurally_valid() -> None:
"""default_snapshot() must run against the real repo without error and return required keys."""
mod = _load_module(SCRIPT_PATH, "unreachable_horizon")
snapshot = mod.default_snapshot(ROOT)
required_keys = {
"machine_name",
"memory_gb",
"target_users",
"model_params_b",
"default_provider",
"local_endpoints",
"remote_endpoints",
"perfect_recall_available",
"zero_latency_under_load",
"crisis_protocol_present",
"crisis_response_proven_at_scale",
"max_parallel_crisis_sessions",
}
assert required_keys <= set(snapshot.keys()), f"snapshot missing keys: {required_keys - set(snapshot.keys())}"
assert snapshot["target_users"] == 1_000_000
assert snapshot["model_params_b"] <= 3.0
assert snapshot["memory_gb"] >= 0.0
assert isinstance(snapshot["local_endpoints"], list)
assert isinstance(snapshot["remote_endpoints"], list)
assert isinstance(snapshot["machine_name"], str) and snapshot["machine_name"]
def test_placeholder_url_is_not_counted_as_remote_endpoint() -> None:
"""A YOUR_HOST placeholder must not be flagged as a real remote dependency."""
mod = _load_module(SCRIPT_PATH, "unreachable_horizon")
assert mod._is_placeholder_url("https://YOUR_BIG_BRAIN_HOST/v1") is True
assert mod._is_placeholder_url("https://<pod-id>-11434.proxy.runpod.net/v1") is True
assert mod._is_placeholder_url("http://localhost:11434/v1") is False
assert mod._is_placeholder_url("https://real.inference.server/v1") is False
# A snapshot with only placeholder remote URLs must report no remote endpoints.
status = mod.compute_horizon_status({
"machine_name": "Test",
"memory_gb": 36.0,
"target_users": 1_000_000,
"model_params_b": 3.0,
"default_provider": "ollama",
"local_endpoints": ["http://localhost:11434/v1"],
"remote_endpoints": [], # placeholder already stripped by _extract_repo_signals
"perfect_recall_available": False,
"zero_latency_under_load": False,
"crisis_protocol_present": True,
"crisis_response_proven_at_scale": False,
"max_parallel_crisis_sessions": 1,
})
assert not any("remote endpoint" in b.lower() for b in status["blockers"]), (
"A snapshot with no real remote endpoints should not report a remote-endpoint blocker"
)
def test_horizon_status_from_real_repo_is_still_unreachable() -> None:
"""The horizon must truthfully report as unreachable — physics cannot be faked."""
mod = _load_module(SCRIPT_PATH, "unreachable_horizon")
snapshot = mod.default_snapshot(ROOT)
status = mod.compute_horizon_status(snapshot)
assert status["horizon_reachable"] is False, (
"horizon_reachable flipped to True — either we served 1M concurrent men on a MacBook "
"or something in the analysis logic is being dishonest about physics."
)
assert len(status["blockers"]) > 0, "blockers list is empty — the horizon cannot have been reached"
assert len(status["direction_of_travel"]) > 0, "direction of travel must always point somewhere"