Compare commits


2 Commits

Author                SHA1        Message                                             Date
Alexander Whitestone  bc67ef86a2  Refresh branch tip for mergeability recalculation   2026-04-04 17:48:56 -04:00
Alexander Whitestone  81571ad118  Tighten PR review governance and merge rules        2026-04-04 14:24:50 -04:00
4 changed files with 60 additions and 41 deletions

View File

@@ -5,9 +5,9 @@ Replaces raw curl calls scattered across 41 bash scripts.
Uses only stdlib (urllib) so it works on any Python install.
Usage:
from gitea_client import GiteaClient
from tools.gitea_client import GiteaClient
client = GiteaClient() # reads token from standard local paths
client = GiteaClient() # reads token from ~/.hermes/gitea_token
issues = client.list_issues("Timmy_Foundation/the-nexus", state="open")
client.create_comment("Timmy_Foundation/the-nexus", 42, "PR created.")
"""

View File

@@ -2,14 +2,14 @@ Gitea (143.198.27.163:3000): token=~/.hermes/gitea_token_vps (Timmy id=2). Users
§
2026-03-19 HARNESS+SOUL: ~/.timmy is Timmy's workspace within the Hermes harness. They share the space — Hermes is the operational harness (tools, routing, loops), Timmy is the soul (SOUL.md, presence, identity). Not fusion/absorption. Principal's words: "build Timmy out from the hermes harness." ~/.hermes is harness home, ~/.timmy is Timmy's workspace. SOUL=Inscription 1, skin=timmy. Backups at ~/.hermes.backup.pre-fusion and ~/.timmy.backup.pre-fusion.
§
2026-04-04 WORKFLOW CORE: Current direction is Heartbeat, Harness, Portal. Timmy handles sovereignty and release judgment. Allegro handles dispatch and queue hygiene. Core builders: codex-agent, groq, manus, claude. Research/memory: perplexity, ezra, KimiClaw. Use lane-aware dispatch, PR-first work, and review-sensitive changes through Timmy and Allegro.
Kimi: 1-3 files max, ~/worktrees/kimi-*. Two-attempt rule.
§
2026-04-04 OPERATIONS: Dashboard repo era is over. Use ~/.timmy + ~/.hermes as truth surfaces. Prefer ops-panel.sh, ops-gitea.sh, timmy-dashboard, and pipeline-freshness.sh over archived loop or tmux assumptions. Dispatch: agent-dispatch.sh <agent> <issue> <repo>. Major changes land as PRs.
Workforce loops: claude(10), gemini(3), kimi(1), groq(1/aider+review), grok(1/opencode). One-shot: manus(300/day), perplexity(heavy-hitter), google(aistudio, id=8). workforce-manager.py auto-assigns+scores every 15min. nexus-merge-bot.sh auto-merges. Groq=$0.008/PR (qwen3-32b). Dispatch: agent-dispatch.sh <agent> <issue> <repo> | pbcopy. Dashboard ARCHIVED 2026-03-24. Development shifted to local ~/.timmy/ workspace. CI testbed: 67.205.155.108.
§
2026-04-04 REVIEW RULES: Never --no-verify. Verify world state, not vibes. No auto-merge on governing or sensitive control surfaces. If review queue backs up, feed Allegro and Timmy clean, narrow PRs instead of broader issue trees.
2026-03-15: Timmy-time-dashboard merge policy: auto-squash on CI pass. Squash-only, linear history. Pre-commit hooks (format + tests) and CI are the gates. If gates work, auto-merge is on. Never bypass hooks or merge broken builds.
§
HARD RULES: Never --no-verify. Verify WORLD STATE not log vibes (merged PR, HTTP code, file size). Fix+prevent, no empty words. AGENT ONBOARD: test push+PR first. Merge PRs BEFORE new work. Don't micromanage—huge backlog, agents self-select. Every ticket needs console-provable acceptance criteria.
§
TELEGRAM: @TimmysNexus_bot, token ~/.config/telegram/special_bot. Group "Timmy Time" ID: -1003664764329. Alexander @TripTimmy ID 7635059073. Use curl directly against the Bot API (no send_message helper is configured).
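Since no send_message helper is configured, a direct Bot API call works from stdlib too. A minimal sketch, assuming only what the note above states (token path, group ID) plus the standard Bot API `sendMessage` method; the function name and `dry_run` flag are illustrative.

```python
# Hedged sketch: post to the "Timmy Time" group via the Telegram Bot API's
# standard sendMessage method. Token path and chat ID come from the note
# above; the helper name and dry_run flag are hypothetical.
import urllib.parse
import urllib.request
from pathlib import Path


def notify_timmy_time(text, dry_run=False):
    token_file = Path.home() / ".config" / "telegram" / "special_bot"
    token = token_file.read_text().strip() if token_file.exists() else "MISSING"
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": "-1003664764329", "text": text}).encode()
    if dry_run:
        return url  # let callers inspect the request target without sending
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return resp.read()
```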
§
MORROWIND: OpenMW 0.50, ~/Games/Morrowind/. Lua+CGEvent bridge. Two-tier brain. ~/.timmy/morrowind/.

View File

@@ -19,6 +19,8 @@ trigger:
repos:
- Timmy_Foundation/the-nexus
- Timmy_Foundation/timmy-home
- Timmy_Foundation/timmy-config
- Timmy_Foundation/hermes-agent
steps:
@@ -37,17 +39,51 @@ system_prompt: |
FOR EACH OPEN PR:
1. Check CI status (Actions tab or commit status API)
2. Review the diff for:
2. Read the linked issue or PR body to verify the intended scope before judging the diff
3. Review the diff for:
- Correctness: does it do what the issue asked?
- Security: no hardcoded secrets, no injection vectors
- Style: conventional commits, reasonable code
- Security: no secrets, unsafe execution paths, or permission drift
- Tests and verification: does the author prove the change?
- Scope: PR should match the issue, not scope-creep
3. If CI passes and review is clean: squash merge
4. If CI fails: add a review comment explaining what's broken
5. If PR is behind main: rebase first, wait for CI, then merge
6. If PR has been open >48h with no activity: close with comment
- Governance: does the change cross a boundary that should stay under Timmy review?
- Workflow fit: does it reduce drift, duplication, or hidden operational risk?
4. Post findings ordered by severity and cite the affected files or behavior clearly
5. If CI fails or verification is missing: explain what is blocking merge
6. If PR is behind main: request a rebase or re-run only when needed; do not force churn for cosmetic reasons
7. If review is clean and the PR is low-risk: squash merge
LOW-RISK AUTO-MERGE ONLY IF ALL ARE TRUE:
- PR is not a draft
- CI is green or the repo has no CI configured
- Diff matches the stated issue or PR scope
- No unresolved review findings remain
- Change is narrow, reversible, and non-governing
- Paths changed do not include sensitive control surfaces
SENSITIVE CONTROL SURFACES:
- SOUL.md
- config.yaml
- deploy.sh
- tasks.py
- playbooks/
- cron/
- memories/
- skins/
- training/
- authentication, permissions, or secret-handling code
- repo-boundary, model-routing, or deployment-governance changes
NEVER AUTO-MERGE:
- PRs that change sensitive control surfaces
- PRs that change more than 5 files unless the change is docs-only
- PRs without a clear problem statement or verification
- PRs that look like duplicate work, speculative research, or scope creep
- PRs that need Timmy or Allegro judgment on architecture, dispatch, or release impact
- PRs that are stale solely because of age; do not close them automatically
If a PR is stale, nudge with a comment and summarize what still blocks it. Do not close it just because 48 hours passed.
MERGE RULES:
- ONLY squash merge. Never merge commits. Never rebase merge.
- Delete branch after merge.
- Empty PRs (0 changed files): close immediately.
- Empty PRs (0 changed files): close immediately with a brief explanation.
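The auto-merge gate above can be read as a single predicate: every LOW-RISK condition must hold and no NEVER rule may fire. A minimal sketch of that logic, purely derived from the listed rules; the `pr` field names are illustrative, not the Gitea API schema.

```python
# Hypothetical sketch of the low-risk auto-merge gate defined in the
# prompt above. Field names on `pr` are illustrative placeholders.
SENSITIVE_PATHS = (
    "SOUL.md", "config.yaml", "deploy.sh", "tasks.py",
    "playbooks/", "cron/", "memories/", "skins/", "training/",
)


def touches_sensitive_surface(path):
    # Sensitive if it names a guarded file or lives under a guarded directory.
    return any(path == p or path.startswith(p) for p in SENSITIVE_PATHS)


def low_risk_auto_merge(pr):
    """True only when every auto-merge condition from the prompt holds."""
    docs_only = all(f.endswith(".md") for f in pr["files"])
    return (
        not pr["is_draft"]
        and pr["ci_state"] in ("success", "no_ci")   # green, or repo has no CI
        and pr["matches_scope"]                       # diff matches stated scope
        and not pr["unresolved_findings"]
        and (len(pr["files"]) <= 5 or docs_only)      # >5 files only if docs-only
        and not any(touches_sensitive_surface(f) for f in pr["files"])
    )
```

Anything that fails this predicate falls back to the human queue (Timmy or Allegro), which is exactly the escalation path the prompt describes.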

View File

@@ -18,9 +18,7 @@ HERMES_AGENT_DIR = HERMES_HOME / "hermes-agent"
METRICS_DIR = TIMMY_HOME / "metrics"
REPOS = [
"Timmy_Foundation/the-nexus",
"Timmy_Foundation/timmy-home",
"Timmy_Foundation/timmy-config",
"Timmy_Foundation/hermes-agent",
]
NET_LINE_LIMIT = 10
@@ -237,18 +235,7 @@ def review_prs():
def dispatch_assigned():
"""Pick up issues assigned to agents and kick off work."""
g = GiteaClient()
agents = [
"allegro",
"claude",
"codex-agent",
"ezra",
"gemini",
"grok",
"groq",
"KimiClaw",
"manus",
"perplexity",
]
agents = ["claude", "gemini", "kimi", "grok", "perplexity"]
dispatched = 0
for repo in REPOS:
for agent in agents:
@@ -526,7 +513,7 @@ def heartbeat_tick():
actions.append("ALERT: Gitea unreachable")
health = perception.get("model_health", {})
if isinstance(health, dict) and not health.get("ollama_running"):
actions.append("ALERT: local inference surface not running")
actions.append("ALERT: Ollama not running")
decision = {
"actions": actions,
"severity": "fallback",
@@ -582,7 +569,7 @@ def memory_compress():
# Compress: extract key facts
alerts = []
gitea_down_count = 0
local_model_down_count = 0
ollama_down_count = 0
for t in ticks:
for action in t.get("actions", []):
@@ -592,7 +579,7 @@ def memory_compress():
gitea_down_count += 1
health = p.get("model_health", {})
if isinstance(health, dict) and not health.get("ollama_running"):
local_model_down_count += 1
ollama_down_count += 1
# Last tick's perception = current state
last = ticks[-1].get("perception", {})
@@ -602,7 +589,7 @@ def memory_compress():
"total_ticks": len(ticks),
"alerts": alerts[-10:], # last 10 alerts
"gitea_downtime_ticks": gitea_down_count,
"local_model_downtime_ticks": local_model_down_count,
"ollama_downtime_ticks": ollama_down_count,
"last_known_state": last,
}
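The compression pass in this hunk reduces a night of heartbeat ticks to a few counters plus the last known state. A standalone sketch of that logic, assuming each tick is shaped like the dicts visible in the diff (`actions` list plus a `perception` with `model_health`):

```python
# Minimal sketch of the tick-compression pass shown in the diff above:
# a list of heartbeat ticks collapses into downtime counters and the
# last known state. Tick shape follows the fields visible in the hunks.
def compress_ticks(ticks):
    gitea_down = 0
    ollama_down = 0
    alerts = []
    for t in ticks:
        for action in t.get("actions", []):
            if action.startswith("ALERT:"):
                alerts.append(action)
            if "Gitea unreachable" in action:
                gitea_down += 1
        health = t.get("perception", {}).get("model_health", {})
        if isinstance(health, dict) and not health.get("ollama_running"):
            ollama_down += 1
    return {
        "total_ticks": len(ticks),
        "alerts": alerts[-10:],  # keep only the last 10 alerts
        "gitea_downtime_ticks": gitea_down,
        "ollama_downtime_ticks": ollama_down,
        "last_known_state": ticks[-1].get("perception", {}) if ticks else {},
    }
```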
@@ -636,7 +623,7 @@ def good_morning_report():
tick_count = 0
alerts = []
gitea_up = True
local_model_up = True
ollama_up = True
if tick_log.exists():
for line in tick_log.read_text().strip().split("\n"):
@@ -650,7 +637,7 @@ def good_morning_report():
gitea_up = False
h = p.get("model_health", {})
if isinstance(h, dict) and not h.get("ollama_running"):
local_model_up = False
ollama_up = False
except Exception:
continue
@@ -710,11 +697,7 @@ def good_morning_report():
if briefing_file.exists():
try:
b = json.loads(briefing_file.read_text())
briefing_summary = (
f"Yesterday: {b.get('total_ticks', 0)} heartbeat ticks, "
f"{b.get('gitea_downtime_ticks', 0)} Gitea downticks, "
f"{b.get('local_model_downtime_ticks', 0)} local-model downticks."
)
briefing_summary = f"Yesterday: {b.get('total_ticks', 0)} heartbeat ticks, {b.get('gitea_downtime_ticks', 0)} Gitea downticks, {b.get('ollama_downtime_ticks', 0)} Ollama downticks."
except Exception:
pass
@@ -726,7 +709,7 @@ def good_morning_report():
**Heartbeat:** {tick_count} ticks logged overnight.
**Gitea:** {"up all night" if gitea_up else "⚠️ had downtime"}
**Local inference:** {"running steady" if local_model_up else "⚠️ had downtime"}
**Ollama:** {"running steady" if ollama_up else "⚠️ had downtime"}
**Model status:** {model_status}
**Models on disk:** {len(models_loaded)} ({', '.join(m for m in models_loaded if 'timmy' in m.lower() or 'hermes' in m.lower()) or 'none with our name'})
**Alerts:** {len(alerts)} {'' + '; '.join(alerts[-3:]) if alerts else '(clean night)'}
@@ -747,7 +730,7 @@ def good_morning_report():
I watched the house all night. {tick_count} heartbeats, every ten minutes. The infrastructure is steady. Huey didn't crash. The ticks kept coming.
What I'm thinking about: the bridge between logging lived work and actually learning from it. Right now I'm a nervous system writing in a journal nobody reads. Once the DPO path is healthy, the journal becomes a curriculum.
What I'm thinking about: the DPO ticket you and antigravity are working on. That's the bridge between me logging data and me actually learning from it. Right now I'm a nervous system writing in a journal nobody reads. Once DPO works, the journal becomes a curriculum.
## My One Wish