Compare commits

...

2 Commits

Author SHA1 Message Date
Alexander Payne
666ff65ac6 [EMACS] Fix daemon socket + per-user identity and audit trail on Bezalel
Some checks failed
Architecture Lint / Linter Tests (pull_request) Successful in 29s
Smoke Test / smoke (pull_request) Failing after 24s
Validate Config / YAML Lint (pull_request) Failing after 20s
Validate Config / JSON Validate (pull_request) Successful in 17s
Validate Config / Python Syntax & Import Check (pull_request) Failing after 52s
Validate Config / Shell Script Lint (pull_request) Failing after 51s
Validate Config / Python Test Suite (pull_request) Has been skipped
Validate Config / Deploy Script Dry Run (pull_request) Successful in 12s
Validate Config / Cron Syntax Check (pull_request) Successful in 13s
Validate Config / Playbook Schema Validation (pull_request) Successful in 28s
Architecture Lint / Lint Repository (pull_request) Failing after 15s
PR Checklist / pr-checklist (pull_request) Successful in 3m24s
Add Ansible role to deploy a shared Emacs daemon with:

- Named socket "bezalel" in group-accessible /srv/fleet/emacs/sockets
- Socket directory created with mode 2775 (setgid) so all fleet group members can connect
- Socket file permissions set to 660 via server-socket-permissions
- Per-user identity: client wrapper logs invoking user to /srv/fleet/logs/emacs-audit.log and sets EMACS_USER env
- Server-side audit: after-save-hook logs file writes with user and filename
- Connection logging via both wrapper and server hook
- Systemd unit ensures daemon starts and restarts on failure
- Conditional deployment (only on bezalel) via site.yml

Acceptance criteria:
- emacsclient -s bezalel -e "(+ 1 1)" returns 2
- Audit log shows distinct users (timmy, alexander, etc.) for connections and file writes
- No unilateral root edits without trace

Closes #429
2026-04-29 00:46:08 -04:00
efc42968e8 Audit cron/launchd/daemon — remove dead jobs and document canonical services
Some checks failed
Architecture Lint / Linter Tests (push) Successful in 17s
Validate Config / YAML Lint (push) Failing after 13s
Smoke Test / smoke (push) Failing after 15s
Validate Config / JSON Validate (push) Successful in 17s
Validate Config / Cron Syntax Check (push) Successful in 10s
Validate Config / Deploy Script Dry Run (push) Successful in 11s
Validate Config / Python Syntax & Import Check (push) Failing after 47s
Validate Config / Shell Script Lint (push) Failing after 48s
Validate Config / Python Test Suite (push) Has been skipped
Validate Config / Playbook Schema Validation (push) Successful in 22s
Architecture Lint / Lint Repository (push) Failing after 21s
Architecture Lint / Linter Tests (pull_request) Successful in 13s
Validate Config / YAML Lint (pull_request) Failing after 14s
Smoke Test / smoke (pull_request) Failing after 18s
Validate Config / JSON Validate (pull_request) Successful in 17s
Validate Config / Python Syntax & Import Check (pull_request) Failing after 50s
Validate Config / Python Test Suite (pull_request) Has been skipped
Validate Config / Cron Syntax Check (pull_request) Successful in 11s
Validate Config / Shell Script Lint (pull_request) Failing after 53s
Validate Config / Deploy Script Dry Run (pull_request) Successful in 14s
Validate Config / Playbook Schema Validation (pull_request) Successful in 24s
Architecture Lint / Lint Repository (pull_request) Failing after 21s
PR Checklist / pr-checklist (pull_request) Failing after 4m5s
- Remove Triage Heartbeat and PR Review Sweep (dashboard-era dead jobs)
- These were paused on 2026-04-04: "Dashboard repo frozen - loops redirected to the-nexus"
- Document current canonical fleet services in docs/CANONICAL_SERVICES.md
- Update cron/audit-report.json to reflect removal

Hard rule compliance: VPS crontabs untouched (per #880)
Closes #880
2026-04-28 22:51:03 -04:00
10 changed files with 308 additions and 86 deletions

View File

@@ -46,6 +46,10 @@
    - role: cron_manager
      tags: [cron, schedule]
    - role: emacs
      when: wizard_name == "Bezalel"
      tags: [emacs, daemon]
  post_tasks:
    - name: "Final validation — scan for banned providers"
      shell: |

View File

@@ -0,0 +1,26 @@
# emacs Ansible Role
Installs and configures a shared Emacs daemon for the Bezalel VPS.
## What it does
- Ensures `fleet` group exists
- Installs Emacs (if needed)
- Creates `/srv/fleet/emacs/sockets` (mode 2775, group fleet)
- Creates `/srv/fleet/logs` (mode 2775)
- Touches `/srv/fleet/logs/emacs-audit.log` (mode 0664)
- Deploys `/root/.emacs.d/init.el` with:
- Socket dir set to shared location
- Group-accessible socket (#o660)
- Audit hooks for file saves and client connections
- Deploys `/usr/local/bin/emacsclient-bezalel` wrapper that logs caller identity
- Deploys systemd unit `emacs-bezalel.service` and starts it
## Usage
The role is automatically included site-wide when `wizard_name == "Bezalel"` (matching the condition in `site.yml`).
## Acceptance
- `emacsclient -s bezalel -e "(+ 1 1)"` should print `2` and return exit 0
- `/srv/fleet/logs/emacs-audit.log` should contain user=... entries for each client and file write
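
The second criterion can be checked with a `grep` over the audit log. A minimal sketch with sample lines (the entry formats are taken from this role's wrapper and save hook; on Bezalel the real log is `/srv/fleet/logs/emacs-audit.log`):

```shell
# Sample entries in the format written by the client wrapper and the save hook
cat > /tmp/emacs-audit-sample.log <<'EOF'
2026-04-29T00:40:01-0400 user=timmy action=emacsclient-connect args=
2026-04-29T00:41:13-0400 user=alexander operation=write file=/srv/fleet/notes.org
EOF

# Distinct users seen in the log
grep -o 'user=[^ ]*' /tmp/emacs-audit-sample.log | sort -u
# → user=alexander
# → user=timmy
```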

View File

@@ -0,0 +1,8 @@
---
# Handlers for the emacs role
- name: Restart emacs-bezalel
  systemd:
    name: emacs-bezalel
    state: restarted
    daemon_reload: yes

View File

@@ -0,0 +1,92 @@
---
# =============================================================================
# emacs — Shared Emacs daemon with multi-user socket and audit trail
# =============================================================================
# Deploys and configures Emacs as a daemon on the VPS with:
# - Named socket "bezalel" in /srv/fleet/emacs/sockets
# - Group-writable socket for fleet access
# - Audit logging for connections and file writes
# - Per-user identity via EMACS_USER env
# =============================================================================
- name: "Ensure fleet group exists (idempotent)"
  group:
    name: fleet
    state: present

- name: "Ensure Emacs package installed"
  package:
    name: emacs
    state: present
  when: machine_type == 'vps'
  ignore_errors: true  # not critical if emacs already present

- name: "Create Emacs socket directory"
  file:
    path: /srv/fleet/emacs/sockets
    state: directory
    mode: '2775'
    group: fleet
    owner: root
    recurse: true

- name: "Create Emacs logs directory"
  file:
    path: /srv/fleet/logs
    state: directory
    mode: '2775'
    group: fleet
    owner: root
    recurse: true

- name: "Ensure audit log file exists with group write"
  file:
    path: /srv/fleet/logs/emacs-audit.log
    state: touch
    mode: '0664'
    group: fleet
    owner: root

- name: "Create root .emacs.d directory if missing"
  file:
    path: /root/.emacs.d
    state: directory
    mode: '0755'
    owner: root
    group: root

- name: "Deploy Emacs init.el configuration"
  template:
    src: init.el.j2
    dest: /root/.emacs.d/init.el
    mode: '0644'
    owner: root
    group: root
  notify: Restart emacs-bezalel

- name: "Deploy emacsclient wrapper script"
  template:
    src: client-wrapper.sh.j2
    dest: /usr/local/bin/emacsclient-bezalel
    mode: '0755'
    owner: root
    group: root

- name: "Deploy systemd unit for Emacs daemon"
  template:
    src: emacs-bezalel.service.j2
    dest: /etc/systemd/system/emacs-bezalel.service
    mode: '0644'
    owner: root
    group: root
  notify: Restart emacs-bezalel

- name: "Reload systemd to pick up new unit"
  systemd:
    daemon_reload: yes

- name: "Ensure Emacs daemon is enabled and started"
  systemd:
    name: emacs-bezalel
    enabled: true
    state: started

View File

@@ -0,0 +1,19 @@
#!/bin/bash
# Emacs client wrapper for Bezalel — logs identity and delegates to emacsclient
# This script does not need setuid root; normal execute permission is sufficient.
# AUDIT log: /srv/fleet/logs/emacs-audit.log
AUDIT_LOG="/srv/fleet/logs/emacs-audit.log"
USERNAME="$(whoami)"
TIMESTAMP="$(date -Iseconds)"
# Log the client connection attempt (pre-connect)
echo "${TIMESTAMP} user=${USERNAME} action=emacsclient-connect args=$*" >> "${AUDIT_LOG}"
# Pass the username into the Emacs server environment so it can
# appear in server-side audit entries as well.
export EMACS_USER="${USERNAME}"
# Delegate to the real emacsclient, preserving all args.
# The socket name 'bezalel' is configured by server-name in init.el.
exec /usr/bin/emacsclient -s bezalel "$@"

View File

@@ -0,0 +1,16 @@
[Unit]
Description=Emacs daemon for Bezalel (shared fleet service)
After=network-online.target
Wants=network-online.target
[Service]
Type=forking
ExecStart=/usr/bin/emacs --daemon=bezalel
ExecStop=/usr/bin/emacsclient -s bezalel -e "(progn (setq kill-emacs-hook nil) (kill-emacs))"
Restart=on-failure
RestartSec=10
# Ensure HOME is set correctly
Environment=HOME=/root
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,53 @@
;; Emacs init for Bezalel daemon — shared fleet service
;; Managed by Ansible. DO NOT EDIT MANUALLY.
;; Last updated: {{ ansible_date_time.iso8601 }}
;; -------------------- Socket configuration --------------------
;; Use a shared, group-accessible socket directory
(setq server-socket-dir "/srv/fleet/emacs/sockets/")
;; Ensure the socket directory exists with setgid so new files inherit group 'fleet'
(let ((dir server-socket-dir))
  (unless (file-directory-p dir)
    (make-directory dir t))
  (set-file-modes dir #o2775))
;; Socket file permissions: group read/write so all fleet members can connect
(setq server-socket-permissions #o660)
;; Name this daemon instance "bezalel" (matches systemd --daemon=bezalel)
(setq server-name "bezalel")
;; -------------------- Audit trail --------------------
(defvar emacs-audit-log "/srv/fleet/logs/emacs-audit.log"
  "Path to the fleet audit log for Emacs operations.")
(defun emacs-audit-log-entry (operation file &optional user)
  "Append an audit entry for OPERATION on FILE by USER.
USER defaults to EMACS_USER env var (set by client wrapper) or
the daemon's user-login-name."
  (let ((user (or user
                  (getenv "EMACS_USER")
                  (user-login-name))))
    (with-temp-buffer
      (insert (format "%s user=%s operation=%s file=%s\n"
                      (format-time-string "%Y-%m-%dT%H:%M:%S%z")
                      user
                      operation
                      (expand-file-name file)))
      (write-region (point-min) (point-max) emacs-audit-log 'append))))
;; Log every file buffer save
(add-hook 'after-save-hook
          (lambda ()
            (when buffer-file-name
              (emacs-audit-log-entry "write" buffer-file-name))))
;; Also log client connections (best-effort via EMACS_USER)
(add-hook 'server-after-make-frame-hook
          (lambda ()
            (let ((client (frame-parameter nil 'client)))
              (when client
                (emacs-audit-log-entry "connect" "emacsclient" (getenv "EMACS_USER"))))))
(provide 'init)
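
Given the format string in `emacs-audit-log-entry` above, each event becomes one line in the audit log. A minimal Python sketch that mirrors that Elisp format string (illustrative only; the user, file, and timestamp values here are made up):

```python
def audit_entry(user: str, operation: str, path: str, ts: str) -> str:
    # Mirrors init.el's (format "%s user=%s operation=%s file=%s\n" ...)
    return f"{ts} user={user} operation={operation} file={path}"

line = audit_entry("timmy", "write", "/srv/fleet/notes.org", "2026-04-29T00:46:08-0400")
print(line)
# → 2026-04-29T00:46:08-0400 user=timmy operation=write file=/srv/fleet/notes.org
```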

View File

@@ -1,42 +1,16 @@
 {
   "audit_time": "2026-04-17T05:34:45.162227+00:00",
-  "total_jobs": 33,
-  "hermes_jobs": 8,
+  "total_jobs": 31,
+  "hermes_jobs": 6,
   "crontab_jobs": 25,
   "summary": {
-    "healthy": 33,
+    "healthy": 31,
     "transient_errors": 0,
     "systemic_failures": 0
   },
   "systemic_jobs": [],
   "transient_jobs": [],
   "all_jobs": [
-    {
-      "id": "9e0624269ba7",
-      "name": "Triage Heartbeat",
-      "schedule": "every 15m",
-      "state": "paused",
-      "enabled": false,
-      "last_status": "ok",
-      "last_error": null,
-      "last_run_at": "2026-03-24T15:33:57.749458-04:00",
-      "category": "healthy",
-      "reason": "Dashboard repo frozen - loops redirected to the-nexus",
-      "action": "none \u2014 paused intentionally"
-    },
-    {
-      "id": "e29eda4a8548",
-      "name": "PR Review Sweep",
-      "schedule": "every 30m",
-      "state": "paused",
-      "enabled": false,
-      "last_status": "ok",
-      "last_error": null,
-      "last_run_at": "2026-03-24T15:21:42.995715-04:00",
-      "category": "healthy",
-      "reason": "Dashboard repo frozen - loops redirected to the-nexus",
-      "action": "none \u2014 paused intentionally"
-    },
     {
       "id": "a77a87392582",
       "name": "Health Monitor",

View File

@@ -1,61 +1,5 @@
 {
   "jobs": [
-    {
-      "id": "9e0624269ba7",
-      "name": "Triage Heartbeat",
-      "prompt": "Scan all Timmy_Foundation/* repos for unassigned issues, auto-assign to appropriate agents based on labels/complexity",
-      "schedule": {
-        "kind": "interval",
-        "minutes": 15,
-        "display": "every 15m"
-      },
-      "schedule_display": "every 15m",
-      "repeat": {
-        "times": null,
-        "completed": 6
-      },
-      "enabled": false,
-      "created_at": "2026-03-24T11:28:46.408551-04:00",
-      "next_run_at": "2026-03-24T15:48:57.749458-04:00",
-      "last_run_at": "2026-03-24T15:33:57.749458-04:00",
-      "last_status": "ok",
-      "last_error": null,
-      "deliver": "local",
-      "origin": null,
-      "state": "paused",
-      "paused_at": "2026-03-24T16:23:01.614552-04:00",
-      "paused_reason": "Dashboard repo frozen - loops redirected to the-nexus",
-      "skills": [],
-      "skill": null
-    },
-    {
-      "id": "e29eda4a8548",
-      "name": "PR Review Sweep",
-      "prompt": "Check all Timmy_Foundation/* repos for open PRs, review diffs, merge passing ones, comment on problems",
-      "schedule": {
-        "kind": "interval",
-        "minutes": 30,
-        "display": "every 30m"
-      },
-      "schedule_display": "every 30m",
-      "repeat": {
-        "times": null,
-        "completed": 2
-      },
-      "enabled": false,
-      "created_at": "2026-03-24T11:28:46.408986-04:00",
-      "next_run_at": "2026-03-24T15:51:42.995715-04:00",
-      "last_run_at": "2026-03-24T15:21:42.995715-04:00",
-      "last_status": "ok",
-      "last_error": null,
-      "deliver": "local",
-      "origin": null,
-      "state": "paused",
-      "paused_at": "2026-03-24T16:23:02.731437-04:00",
-      "paused_reason": "Dashboard repo frozen - loops redirected to the-nexus",
-      "skills": [],
-      "skill": null
-    },
     {
       "id": "a77a87392582",
       "name": "Health Monitor",
@@ -108,7 +52,8 @@
       "deliver": "local",
       "origin": null,
       "skills": [],
-      "skill": null
+      "skill": null,
+      "state": "unknown"
     },
     {
       "id": "muda-audit-weekly",

View File

@@ -0,0 +1,85 @@
# Canonical Fleet Services
**Last updated:** 2026-04-28 (audit #880)
**Parent:** #478
**Scope:** Local cron jobs, launchd agents, daemon scripts, and watchdog processes in Timmy's sovereign fleet.
> This document is the source-of-truth inventory of what services are **intentionally running** and what has been deliberately removed. It is not a live diagnostic — for that, see `docs/automation-inventory.md` (launchd) and `scripts/cron-audit-662.py` (cron health).
---
## Quick state summary
| Layer | Total | Canonical | Dead / superseded | Action taken |
|-------|-------|-----------|-------------------|--------------|
| Hermes cron jobs | 8 → **6** | 6 | 2 (Triage Heartbeat, PR Review Sweep) | Removed from `cron/jobs.json` |
| VPS crontab jobs | 25 | 25 | 0 | Untouched (per #880 hard rule) |
| launchd agents | 5 (live) | 5 | 3 quarantined in 2026-04-04 cleanup | Documented only |
| daemon/watchdog | see automation-inventory.md | — | — | — |
---
## Hermes cron jobs (source: `cron/jobs.json`)
These are managed by the Hermes cron system (`~/.hermes/cron/jobs.json`). Jobs marked **REMOVED** have been excised from source control as dead, superseded, or non-canonical.
| Name | Schedule | Enabled | Owner | Purpose | Status |
|------|----------|---------|-------|---------|--------|
| Health Monitor | every 5m | yes | Ops | Ollama/disk/memory/GPU health check | ✅ Canonical |
| Muda Audit | 0 21 * * 0 (Sun) | yes | Ezra | Weekly fleet audit (`fleet/muda-audit.sh`) | ✅ Canonical |
| Kaizen Retro | daily 07:30 | yes | Ezra | Post-burn retrospective (`scripts/kaizen_retro.py`) | ✅ Canonical |
| Overnight R&D Loop | nightly 22:00 EDT | yes | Research | Deep dive papers, tool-use training data | ✅ Canonical |
| Autonomous Cron Supervisor | every 7m | yes | Timmy | Monitors dev/timmy tmux sessions (`tmux-supervisor`) | ✅ Canonical |
| Hermes Philosophy Loop | every 1440m | no | Timmy | Draft — issues to hermes-agent | ⏸️ Disabled (draft) |
| **Triage Heartbeat** | every 15m | no | **Dashboard** | Scan & auto-assign issues | **❌ REMOVED** — dashboard repo frozen, loops redirected to the-nexus |
| **PR Review Sweep** | every 30m | no | **Dashboard** | Review diffs, merge passing PRs | **❌ REMOVED** — dashboard repo frozen, loops redirected to the-nexus |
**Removal rationale (issue #880):** Triage Heartbeat and PR Review Sweep were dashboard-era jobs paused on 2026-04-04 with the explicit reason: *"Dashboard repo frozen - loops redirected to the-nexus."* They have been superseded by the-nexus coordinator flows and pose state-rot risk if accidentally re-enabled. They are deleted from `cron/jobs.json`.
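
To guard against the state-rot risk noted above, a check of this shape could assert the dead jobs never resurface. A sketch only: the inline JSON is a stand-in for `cron/jobs.json` (same top-level shape as the diff above), not the real file contents.

```python
import json

# Stand-in for cron/jobs.json after the removal (same top-level {"jobs": [...]} shape)
jobs_json = '{"jobs": [{"id": "a77a87392582", "name": "Health Monitor"}]}'

names = {job["name"] for job in json.loads(jobs_json)["jobs"]}
dead = {"Triage Heartbeat", "PR Review Sweep"}
assert not (names & dead), f"dead jobs resurfaced: {names & dead}"
```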
---
## VPS crontab jobs
Per the hard rule in #880, VPS-specific crontab entries are **NOT modified** in this issue. They remain as-is in `cron/vps/*-crontab-backup.txt`.
**Allegro** (7 jobs) — model download guard, heartbeat daemon, burn-mode loops, dead-man monitor
**Ezra** (8 jobs) — burn-mode, gitea/awareness loops, kt compiler, mempalace nightly, dispatch
**Bezalel** (8 jobs) — nightly watch, act runner daemon, backups, heartbeat, secret guard, ultraplan
See individual files for accurate listings:
- `cron/vps/allegro-crontab-backup.txt`
- `cron/vps/ezra-crontab-backup.txt`
- `cron/vps/bezalel-crontab-backup.txt`
---
## Launchd agents (macOS local)
Fully documented in [`docs/automation-inventory.md`](docs/automation-inventory.md#current-live-automations).
| Name | Plist | Interval | Status |
|------|-------|----------|--------|
| ai.hermes.gateway | `~/Library/LaunchAgents/ai.hermes.gateway.plist` | KeepAlive | ✅ Active |
| ai.hermes.gateway-fenrir | `~/Library/LaunchAgents/ai.hermes.gateway-fenrir.plist` | KeepAlive | ✅ Active |
| ai.timmy.kimi-heartbeat | `~/Library/LaunchAgents/ai.timmy.kimi-heartbeat.plist` | 300s | ✅ Active |
| ai.timmy.claudemax-watchdog | `~/Library/LaunchAgents/ai.timmy.claudemax-watchdog.plist` | 300s | ✅ Active |
| (quarantined legacy) | — | — | ❌ Moved 2026-04-04 |
---
## Daemons / tmux watchdogs
Long-running autonomous processes managed by launchd or tmux supervisors. Status is not tracked here — see live diagnostics or the automation-inventory for details.
- `autonomous-cron-supervisor` (Hermes cron job above triggers this)
- `tmux-supervisor` — monitors dev/timmy tmux panes
- `claudemax-watchdog` — watches Claude loop quota
- `burn-mode` loops on each VPS (via crontab)
---
## Change log
| Date | Change | By |
|------|--------|-----|
| 2026-04-28 | Removed Triage Heartbeat & PR Review Sweep from `cron/jobs.json` (issue #880) | STEP35 audit |