Timmy-time-dashboard/Makefile
Alexander Whitestone 6303a77f6e Consolidate test & dev workflows into tox as single source of truth (#160)
* Centralize all Python environments on tox

tox.ini is now the single source of truth for how every Python
environment runs — tests, linting, formatting, dev server, and CI.
No more bare `poetry run` outside of tox.

- Expand tox.ini from 4 to 15 environments (lint, format, typecheck,
  unit, integration, functional, e2e, fast, ollama, ci, coverage,
  coverage-html, pre-commit, dev, all)
- Rewire all Makefile test/lint/format/dev targets to delegate to tox
- Update .githooks/pre-commit to run `tox -e pre-commit`
- Update .pre-commit-config.yaml to use tox instead of poetry run
- Update CI workflow (lint + test jobs) to use `tox -e lint` and
  `tox -e ci` instead of ad-hoc pytest/black/isort invocations
- Update CLAUDE.md to mandate tox usage and document all environments
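
The environment list above maps onto a tox.ini roughly like the following sketch. The pytest paths and tool commands here are illustrative assumptions, not the project's actual configuration:

```ini
[tox]
env_list = lint, format, typecheck, unit, integration, functional, e2e, fast, ollama, ci, coverage, coverage-html, pre-commit, dev, all

[testenv]
; shared setup inherited by every environment
commands_pre =
    poetry install --with dev

[testenv:unit]
commands = pytest tests/unit -q {posargs}

[testenv:lint]
commands =
    black --check .
    isort --check-only .
```

Each Makefile target then reduces to a one-line `$(TOX) -e <env>` delegation.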

https://claude.ai/code/session_01MTUpqms1fgezZFrodGA8H5

* refactor: modernize tox.ini for tox 4.x conventions

- Replace `skipsdist = true` (tox 3 alias) with `no_package = true`
- Use `poetry install --no-root --sync` for faster, cleaner dep installs
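
Sketched as a tox.ini fragment (the surrounding sections are assumed):

```ini
[tox]
; tox 4 spelling; `skipsdist = true` is the tox 3 alias
no_package = true

[testenv]
commands_pre =
    poetry install --no-root --sync
```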

https://claude.ai/code/session_01MTUpqms1fgezZFrodGA8H5

* fix(ci): drop poetry install from lint/format tox envs

Lint and format only need black, isort, and bandit — not the full
project dependency tree. Override commands_pre to empty and use tox
deps instead. Fixes CI failure where poetry is not on PATH.
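
A minimal sketch of that override; the `src` path is an assumption, and tool versions are left unpinned here for brevity:

```ini
[testenv:lint]
deps =
    black
    isort
    bandit
; intentionally empty: overrides the poetry install inherited from [testenv]
commands_pre =
commands =
    black --check .
    isort --check-only .
    bandit -r src
```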

https://claude.ai/code/session_01MTUpqms1fgezZFrodGA8H5

* fix(ci): remove poetry run wrapper from all tox commands

Since commands_pre runs poetry install into the tox-managed venv,
all tools (pytest, mypy, black, etc.) are already on the venv PATH.
The poetry run wrapper is redundant and fails in CI where poetry
may not be installed globally.
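
Sketched on one environment (env name and pytest path assumed):

```ini
[testenv:unit]
; before: commands = poetry run pytest tests/unit {posargs}
; after: deps are already installed into the tox venv, so call the tool directly
commands = pytest tests/unit {posargs}
```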

https://claude.ai/code/session_01MTUpqms1fgezZFrodGA8H5

* fix(ci): remove poetry dependency, align local and CI processes

- Replace `poetry install` with `pip install -e ".[dev]"` in tox
  commands_pre so all envs work without poetry installed
- Remove Poetry cache from GitHub Actions (only pip cache needed)
- Rename pre-commit env to pre-push: runs lint + full CI suite
  (same checks as GitHub Actions, reports generated locally)
- Update CLAUDE.md to reflect new pre-push workflow

The local `tox -e pre-push` now runs the exact same lint + test +
coverage checks as CI, so failures are caught before pushing.
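
A sketch of the poetry-free install step; the `dev` extra name is taken from the bullet above, everything else is assumed:

```ini
[testenv]
commands_pre =
    pip install -e ".[dev]"
```

With this, `tox -e pre-push` locally and `tox -e ci` on GitHub Actions resolve dependencies the same way.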

https://claude.ai/code/session_01MTUpqms1fgezZFrodGA8H5

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-03-10 15:54:09 -04:00

359 lines
15 KiB
Makefile

.PHONY: install install-bigbrain install-hooks dev nuke fresh ip watch clean help \
	test test-unit test-integration test-functional test-e2e test-fast test-ci \
	test-cov test-cov-html test-ollama \
	lint format type-check pre-commit-install pre-commit-run \
	up down logs \
	docker-build docker-up docker-prod docker-down docker-agent docker-logs docker-shell \
	test-docker test-docker-cov test-docker-functional test-docker-build test-docker-down \
	cloud-deploy cloud-up cloud-down cloud-logs cloud-status cloud-update \
	cloud-droplet cloud-scale cloud-pull-model \
	logs-up logs-down logs-kibana

TOX := tox

# ── Setup ─────────────────────────────────────────────────────────────────────
install:
	poetry install --with dev
	git config core.hooksPath .githooks
	@echo "✓ Ready. Git hooks active. Run 'make dev' to start the dashboard."

install-hooks:
	git config core.hooksPath .githooks
	@echo "✓ Git hooks active via core.hooksPath (works in worktrees too)."

install-bigbrain:
	poetry install --with dev --extras bigbrain
	@if [ "$$(uname -m)" = "arm64" ] && [ "$$(uname -s)" = "Darwin" ]; then \
		poetry run pip install --quiet "airllm[mlx]"; \
		echo "✓ AirLLM + MLX installed (Apple Silicon detected)"; \
	else \
		echo "✓ AirLLM installed (PyTorch backend)"; \
	fi
# ── Development ───────────────────────────────────────────────────────────────
dev: nuke
	PYTHONDONTWRITEBYTECODE=1 $(TOX) -e dev

# Kill anything on port 8000, stop Docker containers, clear stale state.
# Safe to run anytime — idempotent, never errors out.
nuke:
	@echo " Cleaning up dev environment..."
	@# Stop Docker containers (if any are running)
	@docker compose down --remove-orphans 2>/dev/null || true
	@# Kill any process holding port 8000 (errno 48 fix)
	@lsof -ti :8000 | xargs kill -9 2>/dev/null || true
	@# Purge stale bytecode caches to prevent loading old .pyc files
	@find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	@find . -name "*.pyc" -delete 2>/dev/null || true
	@# Brief pause to let the OS release the socket
	@sleep 0.5
	@echo " ✓ Port 8000 free, containers stopped, caches cleared"

# Full clean rebuild: wipe containers, images, volumes, rebuild from scratch.
# Ensures no stale code, cached layers, or old DB state persists.
fresh: nuke
	docker compose down -v --rmi local 2>/dev/null || true
	DOCKER_BUILDKIT=1 docker compose build --no-cache
	mkdir -p data
	docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d
	@echo ""
	@echo " ✓ Fresh rebuild complete — Timmy Time at http://localhost:8000"
	@echo " Hot-reload active. Logs: make logs"
	@echo ""
# Print the local IP addresses your phone can use to reach this machine.
# Connect your phone to the same hotspot your Mac is sharing from,
# then open http://<IP>:8000 in your phone browser.
# The dev server auto-reloads on Python/template changes.
# For CSS/static changes, just pull-to-refresh on your phone.
ip:
	@echo ""
	@echo " Open one of these on your phone: http://<IP>:8000"
	@echo ""
	@if [ "$$(uname -s)" = "Darwin" ]; then \
		ipconfig getifaddr en0 2>/dev/null | awk '{print " en0 (Wi-Fi): http://" $$1 ":8000"}' || true; \
		ipconfig getifaddr en1 2>/dev/null | awk '{print " en1 (Ethernet): http://" $$1 ":8000"}' || true; \
		ipconfig getifaddr en2 2>/dev/null | awk '{print " en2: http://" $$1 ":8000"}' || true; \
	fi
	@# Generic fallback — works on both macOS and Linux.
	@# Test for the command explicitly: a plain `ifconfig | ... || ip ...` never
	@# falls back, because the pipeline's exit status comes from `head`.
	@if command -v ifconfig >/dev/null 2>&1; then \
		ifconfig | awk '/inet / && !/127\.0\.0\.1/ && !/::1/{print " " $$2 " → http://" $$2 ":8000"}' | head -5; \
	elif command -v ip >/dev/null 2>&1; then \
		ip -4 addr show | awk '/inet / && !/127\.0\.0\.1/{split($$2,a,"/"); print " " a[1] " → http://" a[1] ":8000"}' | head -5; \
	fi || true
	@echo ""

watch:
	poetry run self-tdd watch --interval 60
# ── Testing (all via tox) ─────────────────────────────────────────────────────
test:
	$(TOX) -e all

test-unit:
	$(TOX) -e unit

test-integration:
	$(TOX) -e integration

test-functional:
	$(TOX) -e functional

test-e2e:
	$(TOX) -e e2e

test-fast:
	$(TOX) -e fast

test-ci:
	$(TOX) -e ci

test-cov:
	$(TOX) -e coverage

test-cov-html:
	$(TOX) -e coverage-html
	@echo "✓ HTML coverage report: open htmlcov/index.html"

test-ollama:
	$(TOX) -e ollama
# ── Docker test containers ───────────────────────────────────────────────────
# Clean containers from cached images; source bind-mounted for fast iteration.
# Rebuild only needed when pyproject.toml / poetry.lock change.

# Build the test image (cached — fast unless deps change)
test-docker-build:
	DOCKER_BUILDKIT=1 docker compose -f docker-compose.test.yml build

# Run all unit + integration tests in a clean container (default)
# Override: make test-docker ARGS="-k swarm -v"
test-docker: test-docker-build
	docker compose -f docker-compose.test.yml run --rm test \
		pytest tests/ -q --tb=short $(ARGS)
	docker compose -f docker-compose.test.yml down -v

# Run tests with coverage inside a container
test-docker-cov: test-docker-build
	docker compose -f docker-compose.test.yml run --rm test \
		pytest tests/ --cov=src --cov-report=term-missing -q $(ARGS)
	docker compose -f docker-compose.test.yml down -v

# Spin up the full stack (dashboard + optional Ollama) and run functional tests
test-docker-functional: test-docker-build
	docker compose -f docker-compose.test.yml --profile functional up -d --wait
	docker compose -f docker-compose.test.yml run --rm test \
		pytest tests/functional/ -v --tb=short $(ARGS) || true
	docker compose -f docker-compose.test.yml --profile functional down -v

# Tear down any leftover test containers and volumes
test-docker-down:
	docker compose -f docker-compose.test.yml --profile functional --profile ollama --profile agents down -v
# ── Code quality ──────────────────────────────────────────────────────────────
lint:
	$(TOX) -e lint

format:
	$(TOX) -e format

type-check:
	$(TOX) -e typecheck

pre-commit-install:
	pre-commit install

pre-commit-run:
	pre-commit run --all-files
# ── One-command startup ──────────────────────────────────────────────────────
# make up         build + start everything in Docker
# make up DEV=1   same, with hot-reload on Python/template/CSS changes
up:
	mkdir -p data
ifdef DEV
	DOCKER_BUILDKIT=1 docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build
	@echo ""
	@echo " ✓ Timmy Time running in DEV mode at http://localhost:8000"
	@echo " Hot-reload active — Python, template, and CSS changes auto-apply"
	@echo " Logs: make logs"
	@echo ""
else
	DOCKER_BUILDKIT=1 docker compose up -d --build
	@echo ""
	@echo " ✓ Timmy Time running at http://localhost:8000"
	@echo " Logs: make logs"
	@echo ""
endif

down:
	docker compose down

logs:
	docker compose logs -f
# ── Docker ────────────────────────────────────────────────────────────────────
docker-build:
	DOCKER_BUILDKIT=1 docker build -t timmy-time:latest .

docker-up:
	mkdir -p data
	docker compose up -d dashboard

docker-prod:
	mkdir -p data
	DOCKER_BUILDKIT=1 docker build -t timmy-time:latest .
	docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d dashboard

docker-down:
	docker compose down

# Spawn one agent worker connected to the running dashboard.
# Override name/capabilities: make docker-agent AGENT_NAME=Echo AGENT_CAPABILITIES=summarise
docker-agent:
	AGENT_NAME=$${AGENT_NAME:-Worker} \
	AGENT_CAPABILITIES=$${AGENT_CAPABILITIES:-general} \
	docker compose --profile agents up -d --scale agent=1 agent

docker-logs:
	docker compose logs -f

docker-shell:
	docker compose exec dashboard bash
# ── Cloud Deploy ─────────────────────────────────────────────────────────────
# One-click production deployment (run on your cloud server)
cloud-deploy:
	@bash deploy/setup.sh

# Start the production stack (Caddy + Ollama + Dashboard + Timmy)
cloud-up:
	docker compose -f docker-compose.prod.yml up -d

# Stop the production stack
cloud-down:
	docker compose -f docker-compose.prod.yml down

# Tail production logs
cloud-logs:
	docker compose -f docker-compose.prod.yml logs -f

# Show status of all production containers
cloud-status:
	docker compose -f docker-compose.prod.yml ps

# Pull latest code and rebuild
cloud-update:
	git pull
	docker compose -f docker-compose.prod.yml up -d --build

# Create a DigitalOcean droplet (requires doctl CLI)
cloud-droplet:
	@bash deploy/digitalocean/create-droplet.sh

# Scale agent workers in production: make cloud-scale N=4
cloud-scale:
	docker compose -f docker-compose.prod.yml --profile agents up -d --scale agent=$${N:-2}

# Pull a model into Ollama: make cloud-pull-model MODEL=llama3.2
cloud-pull-model:
	docker exec timmy-ollama ollama pull $${MODEL:-llama3.2}
# ── ELK Logging ──────────────────────────────────────────────────────────────
# Overlay on top of the production stack for centralised log aggregation.
# Kibana UI: http://localhost:5601
logs-up:
	docker compose -f docker-compose.prod.yml -f docker-compose.logging.yml up -d

logs-down:
	docker compose -f docker-compose.prod.yml -f docker-compose.logging.yml down

logs-kibana:
	@echo "Opening Kibana at http://localhost:5601 ..."
	@command -v open >/dev/null 2>&1 && open http://localhost:5601 || \
		command -v xdg-open >/dev/null 2>&1 && xdg-open http://localhost:5601 || \
		echo " → Open http://localhost:5601 in your browser"

# ── Housekeeping ──────────────────────────────────────────────────────────────
clean:
	find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	find . -type d -name "*.egg-info" -exec rm -rf {} + 2>/dev/null || true
	find . -name "*.pyc" -delete 2>/dev/null || true
	rm -rf .pytest_cache htmlcov .coverage coverage.xml
help:
	@echo ""
	@echo " Quick Start"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make up                         build + start everything in Docker"
	@echo "   make up DEV=1                   same, with hot-reload on file changes"
	@echo "   make down                       stop all containers"
	@echo "   make logs                       tail container logs"
	@echo ""
	@echo " Local Development"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make install                    install deps via Poetry"
	@echo "   make install-bigbrain           install with AirLLM (big-model backend)"
	@echo "   make dev                        clean up + start dashboard (auto-fixes errno 48)"
	@echo "   make nuke                       kill port 8000, stop containers, reset state"
	@echo "   make fresh                      full clean rebuild (no cached layers/volumes)"
	@echo "   make ip                         print local IP addresses for phone testing"
	@echo "   make test                       run all tests"
	@echo "   make test-cov                   tests + coverage report (terminal + XML)"
	@echo "   make test-cov-html              tests + HTML coverage report"
	@echo "   make watch                      self-TDD watchdog (60s poll)"
	@echo "   make lint                       lint code (black, isort, bandit via tox)"
	@echo "   make format                     format code (black, isort)"
	@echo "   make type-check                 run type checking (mypy)"
	@echo "   make pre-commit-run             run all pre-commit checks"
	@echo "   make test-unit                  run unit tests only"
	@echo "   make test-integration           run integration tests only"
	@echo "   make test-functional            run functional tests only"
	@echo "   make test-e2e                   run E2E tests only"
	@echo "   make test-fast                  run fast tests (unit + integration)"
	@echo "   make test-ci                    run CI tests (exclude skip_ci)"
	@echo "   make test-ollama                run Ollama-dependent tests"
	@echo "   make pre-commit-install         install pre-commit hooks"
	@echo "   make clean                      remove build artefacts and caches"
	@echo ""
	@echo " Docker Testing (Clean Containers)"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make test-docker                run tests in clean container"
	@echo "   make test-docker ARGS=\"-k swarm\"  filter tests in container"
	@echo "   make test-docker-cov            tests + coverage in container"
	@echo "   make test-docker-functional     full-stack functional tests"
	@echo "   make test-docker-build          build test image (cached)"
	@echo "   make test-docker-down           tear down test containers"
	@echo ""
	@echo " Docker (Advanced)"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make docker-build               build the timmy-time:latest image"
	@echo "   make docker-up                  start dashboard container"
	@echo "   make docker-prod                build + start production dashboard"
	@echo "   make docker-agent               add one agent worker (AGENT_NAME=Echo)"
	@echo "   make docker-down                stop all containers"
	@echo "   make docker-logs                tail container logs"
	@echo "   make docker-shell               open a bash shell in the dashboard container"
	@echo ""
	@echo " Cloud Deploy (Production)"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make cloud-deploy               one-click server setup (run as root)"
	@echo "   make cloud-up                   start production stack"
	@echo "   make cloud-down                 stop production stack"
	@echo "   make cloud-logs                 tail production logs"
	@echo "   make cloud-status               show container status"
	@echo "   make cloud-update               pull + rebuild from git"
	@echo "   make cloud-droplet              create DigitalOcean droplet (needs doctl)"
	@echo "   make cloud-scale N=4            scale agent workers"
	@echo "   make cloud-pull-model MODEL=llama3.2  pull LLM model"
	@echo ""
	@echo " ELK Log Aggregation"
	@echo " ─────────────────────────────────────────────────"
	@echo "   make logs-up                    start prod + ELK stack"
	@echo "   make logs-down                  stop prod + ELK stack"
	@echo "   make logs-kibana                open Kibana UI (http://localhost:5601)"
	@echo ""