This repository was archived on 2026-03-24. You can view and clone its files, but you cannot open issues, create pull requests, or push commits.
Files
Timmy-time-dashboard/tests/functional/conftest.py
Alexander Payne d8d976aa60 feat: complete Event Log, Ledger, Memory, Cascade Router, Upgrade Queue, Activity Feed
This commit implements six major features:

1. Event Log System (src/swarm/event_log.py)
   - SQLite-based audit trail for all swarm events
   - Task lifecycle tracking (created, assigned, completed, failed)
   - Agent lifecycle tracking (joined, left, status changes)
   - Integrated with coordinator for automatic logging
   - Dashboard page at /swarm/events
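
The SQLite audit-trail idea above can be sketched roughly as follows. This is a minimal illustration of the pattern (one append-only table of timestamped lifecycle events), not the actual `EventLog` API in src/swarm/event_log.py; class and column names here are assumptions.

```python
import sqlite3
import time

class EventLog:
    """Hypothetical sketch: append-only SQLite audit trail for swarm events."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS events ("
            " id INTEGER PRIMARY KEY AUTOINCREMENT,"
            " ts REAL NOT NULL,"        # event timestamp (epoch seconds)
            " kind TEXT NOT NULL,"      # e.g. task.created, agent.joined
            " subject TEXT NOT NULL,"   # task or agent identifier
            " detail TEXT)"
        )

    def record(self, kind, subject, detail=""):
        self.conn.execute(
            "INSERT INTO events (ts, kind, subject, detail) VALUES (?, ?, ?, ?)",
            (time.time(), kind, subject, detail),
        )
        self.conn.commit()

    def events_for(self, subject):
        cur = self.conn.execute(
            "SELECT kind, detail FROM events WHERE subject = ? ORDER BY ts",
            (subject,),
        )
        return cur.fetchall()

log = EventLog()
log.record("task.created", "task-1")
log.record("task.completed", "task-1", "ok")
print(log.events_for("task-1"))  # [('task.created', ''), ('task.completed', 'ok')]
```

In the real system the coordinator calls the logger automatically on each lifecycle transition, so individual agents never write to the table directly.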

2. Lightning Ledger (src/lightning/ledger.py)
   - Transaction tracking for Lightning Network payments
   - Balance calculations (incoming, outgoing, net, available)
   - Integrated with payment_handler for automatic logging
   - Dashboard page at /lightning/ledger
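
The four balance figures listed above follow mechanically from the transaction history. A hedged sketch of that arithmetic (the real Ledger in src/lightning/ledger.py may differ; the `reserved_msat` parameter and function name are illustrative):

```python
def balances(transactions, reserved_msat=0):
    """transactions: list of (direction, amount_msat), direction 'in' or 'out'."""
    incoming = sum(a for d, a in transactions if d == "in")
    outgoing = sum(a for d, a in transactions if d == "out")
    net = incoming - outgoing
    # Never report a negative spendable balance
    available = max(net - reserved_msat, 0)
    return {"incoming": incoming, "outgoing": outgoing,
            "net": net, "available": available}

txs = [("in", 5000), ("out", 1200), ("in", 300)]
print(balances(txs, reserved_msat=100))
# {'incoming': 5300, 'outgoing': 1200, 'net': 4100, 'available': 4000}
```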

3. Semantic Memory / Vector Store (src/memory/vector_store.py)
   - Embedding-based similarity search for Echo agent
   - Fallback to keyword matching if sentence-transformers unavailable
   - Personal facts storage and retrieval
   - Dashboard page at /memory
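
The graceful-degradation pattern described above (embedding search when sentence-transformers is importable, keyword overlap otherwise) might look like this. Function names are illustrative, not the real vector_store API, and the embedding branch is elided to keep the sketch dependency-free:

```python
try:
    from sentence_transformers import SentenceTransformer  # optional dependency
    _HAVE_EMBEDDINGS = True
except ImportError:
    _HAVE_EMBEDDINGS = False

def _keyword_score(query, fact):
    """Fraction of query words that also appear in the fact."""
    q, f = set(query.lower().split()), set(fact.lower().split())
    return len(q & f) / len(q) if q else 0.0

def search(query, facts, top_k=1):
    if _HAVE_EMBEDDINGS:
        # Real implementation would embed query and facts and rank by
        # cosine similarity; elided here.
        pass
    # Keyword fallback: rank facts by word overlap with the query
    ranked = sorted(facts, key=lambda f: _keyword_score(query, f), reverse=True)
    return ranked[:top_k]

facts = ["Timmy prefers short answers", "The dashboard listens on port 8000"]
print(search("what port is the dashboard on", facts))
```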

4. Cascade Router Integration (src/timmy/cascade_adapter.py)
   - Automatic LLM failover between providers (Ollama → AirLLM → API)
   - Circuit breaker pattern for failing providers
   - Metrics tracking per provider (latency, error rates)
   - Dashboard status page at /router/status
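
The failover-plus-circuit-breaker combination can be sketched as below: try providers in priority order, skip any whose breaker has tripped, and trip a breaker after consecutive failures. This is a minimal illustration; the actual adapter in src/timmy/cascade_adapter.py also tracks latency and error-rate metrics per provider.

```python
import time

class Breaker:
    """Trip after `threshold` consecutive failures; retry after `cooldown` seconds."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.failures, self.threshold, self.cooldown = 0, threshold, cooldown
        self.opened_at = None

    def available(self):
        if self.opened_at is None:
            return True
        if time.time() - self.opened_at >= self.cooldown:
            self.opened_at, self.failures = None, 0  # half-open: allow a retry
            return True
        return False

    def record(self, ok):
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()  # trip the breaker

def cascade(providers, breakers, prompt):
    """Try providers in order (e.g. Ollama -> AirLLM -> API), skipping tripped ones."""
    for name, call in providers:
        br = breakers[name]
        if not br.available():
            continue
        try:
            result = call(prompt)
            br.record(ok=True)
            return name, result
        except Exception:
            br.record(ok=False)
    raise RuntimeError("all providers failed or unavailable")

def flaky(prompt):
    raise RuntimeError("ollama down")

breakers = {"ollama": Breaker(threshold=1), "api": Breaker()}
name, out = cascade([("ollama", flaky), ("api", str.upper)], breakers, "hi")
print(name, out)  # api HI
```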

5. Self-Upgrade Approval Queue (src/upgrades/)
   - State machine for self-modifications: proposed → approved/rejected → applied/failed
   - Human approval required before applying changes
   - Git integration for branch management
   - Dashboard queue at /self-modify/queue
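
The state machine above admits only a handful of legal transitions, which a transition table captures directly. The table and class below are illustrative; the real implementation lives under src/upgrades/.

```python
# Legal transitions: proposed -> approved/rejected -> applied/failed
TRANSITIONS = {
    "proposed": {"approved", "rejected"},
    "approved": {"applied", "failed"},
    "rejected": set(),  # terminal
    "applied": set(),   # terminal
    "failed": set(),    # terminal
}

class Upgrade:
    def __init__(self, description):
        self.description = description
        self.state = "proposed"

    def transition(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

u = Upgrade("bump model context window")
u.transition("approved")  # in practice this requires a human approving via the queue UI
u.transition("applied")
print(u.state)  # applied
```

Making terminal states explicit (empty transition sets) means an already-applied or rejected change can never be silently re-run.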

6. Real-Time Activity Feed (src/events/broadcaster.py)
   - WebSocket-based live activity streaming
   - Bridges event_log to dashboard clients
   - Activity panel on /swarm/live

Tests:
- 101 unit tests passing
- 4 new E2E test files for Selenium testing
- Run with: SELENIUM_UI=1 pytest tests/functional/ -v --headed

Documentation:
- 6 ADRs (017-022) documenting architecture decisions
- Implementation summary in docs/IMPLEMENTATION_SUMMARY.md
- Architecture diagram in docs/architecture-v2.md
2026-02-26 08:01:01 -05:00


"""Shared fixtures for functional/E2E tests."""
import os
import subprocess
import sys
import time
import urllib.request

import pytest

# Default dashboard URL - override with DASHBOARD_URL env var
DASHBOARD_URL = os.environ.get("DASHBOARD_URL", "http://localhost:8000")


def is_server_running():
    """Check if dashboard is already running."""
    try:
        urllib.request.urlopen(f"{DASHBOARD_URL}/health", timeout=2)
        return True
    except Exception:
        return False


@pytest.fixture(scope="session")
def live_server():
    """Start the real Timmy server for E2E tests.

    Yields the base URL (http://localhost:8000).
    Kills the server after tests complete.
    """
    # Check if server already running
    if is_server_running():
        print(f"\n📡 Using existing server at {DASHBOARD_URL}")
        yield DASHBOARD_URL
        return

    # Start server in subprocess
    print(f"\n🚀 Starting server on {DASHBOARD_URL}...")
    env = os.environ.copy()
    env["PYTHONPATH"] = "src"
    env["TIMMY_ENV"] = "test"  # Use test config if available

    # Determine project root
    project_root = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
    proc = subprocess.Popen(
        [sys.executable, "-m", "uvicorn", "dashboard.app:app",
         "--host", "127.0.0.1", "--port", "8000",
         "--log-level", "warning"],
        cwd=project_root,
        env=env,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )

    # Wait for server to start
    max_retries = 30
    for i in range(max_retries):
        if is_server_running():
            print("✅ Server ready!")
            break
        time.sleep(1)
        print(f"⏳ Waiting for server... ({i+1}/{max_retries})")
    else:
        proc.terminate()
        proc.wait()
        raise RuntimeError("Server failed to start")

    yield DASHBOARD_URL

    # Cleanup
    print("\n🛑 Stopping server...")
    proc.terminate()
    try:
        proc.wait(timeout=5)
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
    print("✅ Server stopped")


# Add custom pytest option for headed mode
def pytest_addoption(parser):
    parser.addoption(
        "--headed",
        action="store_true",
        default=False,
        help="Run browser in non-headless mode (visible)",
    )


@pytest.fixture
def headed_mode(request):
    """Check if --headed flag was passed."""
    return request.config.getoption("--headed")