Merge pull request #20 from AlexanderWhitestone/claude/integrate-spark-timmy-e5D1i

PLAN.md (new file, 478 lines)
@@ -0,0 +1,478 @@
# Plan: Full Creative & DevOps Capabilities for Timmy

## Overview

Add five major capability domains to Timmy's agent system, turning it into a
sovereign creative studio and full-stack DevOps operator. All tools are
open-source, self-hosted, and GPU-accelerated where needed.

---

## Phase 1: Git & DevOps Tools (Forge + Helm personas)

**Goal:** Timmy can observe local/remote repos, read code, create branches,
stage changes, commit, diff, log, and manage PRs — all through the swarm
task system with Spark event capture.

### New module: `src/tools/git_tools.py`

Tools to add (using **GitPython** — BSD-3, `pip install GitPython`):

| Tool | Function | Persona Access |
|---|---|---|
| `git_clone` | Clone a remote repo to a local path | Forge, Helm |
| `git_status` | Show working tree status | Forge, Helm, Timmy |
| `git_diff` | Show staged/unstaged diffs | Forge, Helm, Timmy |
| `git_log` | Show recent commit history | Forge, Helm, Echo, Timmy |
| `git_branch` | List/create/switch branches | Forge, Helm |
| `git_add` | Stage files for commit | Forge, Helm |
| `git_commit` | Create a commit with a message | Forge, Helm |
| `git_push` | Push to remote | Forge, Helm |
| `git_pull` | Pull from remote | Forge, Helm |
| `git_blame` | Show line-by-line authorship | Forge, Echo |
| `git_stash` | Stash/pop changes | Forge, Helm |

### Changes to existing files

- **`src/timmy/tools.py`** — Add `create_git_tools()` factory, wire into
  `PERSONA_TOOLKITS` for Forge and Helm
- **`src/swarm/tool_executor.py`** — Enhance `_infer_tools_needed()` with
  git keywords (commit, branch, push, pull, diff, clone, merge)
- **`src/config.py`** — Add `git_default_repo_dir: str = "~/repos"` setting
- **`src/spark/engine.py`** — Add `on_tool_executed()` method to capture
  individual tool invocations (not just task-level events)
- **`src/swarm/personas.py`** — Add git-related keywords to Forge and Helm
  `preferred_keywords`
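
The keyword enhancement for `_infer_tools_needed()` can be sketched as a simple set intersection. The real function lives in `src/swarm/tool_executor.py` and is not shown in this plan, so the body below is an assumption; only the keyword list comes from the bullet above.

```python
# Hypothetical sketch of the git branch of _infer_tools_needed().
# GIT_KEYWORDS mirrors the keywords listed in the plan; the function
# name and signature here are illustrative, not the real API.
GIT_KEYWORDS = {"commit", "branch", "push", "pull", "diff", "clone", "merge"}


def infer_git_tools_needed(task_description: str) -> bool:
    """Return True when a task description mentions git-related work."""
    # Strip trailing punctuation so "push." still matches "push".
    words = {w.strip(".,!?").lower() for w in task_description.split()}
    return not words.isdisjoint(GIT_KEYWORDS)
```

A matching task would then have the git toolkit attached before dispatch to Forge or Helm.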

### New dependency

```toml
# pyproject.toml
dependencies = [
    ...,
    "GitPython>=3.1.40",
]
```

### Dashboard

- **`/tools`** page updated to show git tools in the catalog
- Git tool usage stats visible per agent

### Tests

- `tests/test_git_tools.py` — test all git tool functions against tmp repos
- Mock GitPython's `Repo` class for unit tests
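
One way the mocking could look: inject the `Repo` factory so unit tests never touch a real repository. The `git_status` wrapper below is a hypothetical stand-in for the real tool function (in production the factory would be GitPython's `git.Repo`, which tests would typically `patch` instead).

```python
from unittest.mock import MagicMock


def git_status(path: str, repo_factory) -> str:
    """Return `git status` output; repo_factory is git.Repo in production."""
    repo = repo_factory(path)
    return repo.git.status()


def test_git_status_with_mock() -> None:
    # A MagicMock stands in for a GitPython Repo instance.
    fake_repo = MagicMock()
    fake_repo.git.status.return_value = "nothing to commit, working tree clean"
    factory = MagicMock(return_value=fake_repo)

    out = git_status("/tmp/example", factory)

    assert out == "nothing to commit, working tree clean"
    factory.assert_called_once_with("/tmp/example")
```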

---

## Phase 2: Image Generation (new "Pixel" persona)

**Goal:** Generate storyboard frames and standalone images from text prompts
using FLUX.2 Klein 4B locally.

### New persona: Pixel — Visual Architect

```python
"pixel": {
    "id": "pixel",
    "name": "Pixel",
    "role": "Visual Architect",
    "description": "Image generation, storyboard frames, and visual design.",
    "capabilities": "image-generation,storyboard,design",
    "rate_sats": 80,
    "bid_base": 60,
    "bid_jitter": 20,
    "preferred_keywords": [
        "image", "picture", "photo", "draw", "illustration",
        "storyboard", "frame", "visual", "design", "generate",
        "portrait", "landscape", "scene", "artwork",
    ],
}
```

### New module: `src/tools/image_tools.py`

Tools (using **diffusers** + **FLUX.2 Klein 4B** — Apache 2.0):

| Tool | Function |
|---|---|
| `generate_image` | Text-to-image generation (returns file path) |
| `generate_storyboard` | Generate N frames from scene descriptions |
| `image_variations` | Generate variations of an existing image |

### Architecture

```
generate_image(prompt, width=1024, height=1024, steps=4)
  → loads FLUX.2 Klein via diffusers FluxPipeline
  → saves to data/images/{uuid}.png
  → returns path + metadata
```

- Model loaded lazily on first use, kept in memory for subsequent calls
- Falls back to CPU generation (slower) if no GPU
- Output saved to `data/images/` with metadata JSON sidecar
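
The lazy-load bullet can be sketched as a module-level cache. In the real module the loader would wrap diffusers' `FluxPipeline.from_pretrained(...)` and choose `"cuda"` when a GPU is available; here the loader is injected so the caching behaviour itself stays testable. The name `get_pipeline` is an assumption.

```python
from typing import Any, Callable, Optional

# Minimal sketch of lazy load-once-and-keep-in-memory, assuming the real
# loader calls FluxPipeline.from_pretrained and moves it to the right device.
_pipeline: Optional[Any] = None


def get_pipeline(loader: Callable[[], Any]) -> Any:
    """Load the model on first use; reuse the cached instance afterwards."""
    global _pipeline
    if _pipeline is None:
        _pipeline = loader()  # expensive: happens exactly once
    return _pipeline
```

Subsequent `generate_image` calls then pay only the inference cost, not the model-load cost.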

### New dependency (optional extra)

```toml
[project.optional-dependencies]
creative = [
    "diffusers>=0.30.0",
    "transformers>=4.40.0",
    "accelerate>=0.30.0",
    "torch>=2.2.0",
    "safetensors>=0.4.0",
]
```

### Config

```python
# config.py additions
flux_model_id: str = "black-forest-labs/FLUX.2-klein-4b"
image_output_dir: str = "data/images"
image_default_steps: int = 4
```

### Dashboard

- `/creative/ui` — new Creative Studio page (image gallery + generation form)
- HTMX-powered: submit prompt, poll for result, display inline
- Gallery view of all generated images with metadata

### Tests

- `tests/test_image_tools.py` — mock diffusers pipeline, test prompt handling,
  file output, storyboard generation

---

## Phase 3: Music Generation (new "Lyra" persona)

**Goal:** Generate full songs with vocals, instrumentals, and lyrics using
ACE-Step 1.5 locally.

### New persona: Lyra — Sound Weaver

```python
"lyra": {
    "id": "lyra",
    "name": "Lyra",
    "role": "Sound Weaver",
    "description": "Music and song generation with vocals, instrumentals, and lyrics.",
    "capabilities": "music-generation,vocals,composition",
    "rate_sats": 90,
    "bid_base": 70,
    "bid_jitter": 20,
    "preferred_keywords": [
        "music", "song", "sing", "vocal", "instrumental",
        "melody", "beat", "track", "compose", "lyrics",
        "audio", "sound", "album", "remix",
    ],
}
```

### New module: `src/tools/music_tools.py`

Tools (using **ACE-Step 1.5** — Apache 2.0, `pip install ace-step`):

| Tool | Function |
|---|---|
| `generate_song` | Text/lyrics → full song (vocals + instrumentals) |
| `generate_instrumental` | Text prompt → instrumental track |
| `generate_vocals` | Lyrics + style → vocal track |
| `list_genres` | Return supported genre/style tags |

### Architecture

```
generate_song(lyrics, genre="pop", duration=120, language="en")
  → loads ACE-Step model (lazy, cached)
  → generates audio
  → saves to data/music/{uuid}.wav
  → returns path + metadata (duration, genre, etc.)
```

- Model loaded lazily, ~4GB VRAM minimum
- Output saved to `data/music/` with metadata sidecar
- Supports 19 languages, genre tags, tempo control
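
The "metadata sidecar" convention shared by the image and music tools can be sketched as a small helper: every generated asset gets a same-named `.json` file next to it. The helper name and the metadata fields are assumptions.

```python
import json
from pathlib import Path


def write_sidecar(asset_path: str, metadata: dict) -> str:
    """Write {asset}.json next to a generated file and return its path.

    Hypothetical helper; the real tools may inline this logic.
    """
    sidecar = Path(asset_path).with_suffix(".json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return str(sidecar)
```

For `data/music/abc123.wav` this would produce `data/music/abc123.json` holding duration, genre, and so on.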

### New dependency (optional extra, extends `creative`)

```toml
[project.optional-dependencies]
creative = [
    ...,
    "ace-step>=1.5.0",
]
```

### Config

```python
music_output_dir: str = "data/music"
ace_step_model: str = "ace-step/ACE-Step-v1.5"
```

### Dashboard

- `/creative/ui` expanded with Music tab
- Audio player widget (HTML5 `<audio>` element)
- Lyrics input form with genre/style selector

### Tests

- `tests/test_music_tools.py` — mock ACE-Step model, test generation params

---

## Phase 4: Video Generation (new "Reel" persona)

**Goal:** Generate video clips from text/image prompts using Wan 2.1 locally.

### New persona: Reel — Motion Director

```python
"reel": {
    "id": "reel",
    "name": "Reel",
    "role": "Motion Director",
    "description": "Video generation from text and image prompts.",
    "capabilities": "video-generation,animation,motion",
    "rate_sats": 100,
    "bid_base": 80,
    "bid_jitter": 20,
    "preferred_keywords": [
        "video", "clip", "animate", "motion", "film",
        "scene", "cinematic", "footage", "render", "timelapse",
    ],
}
```

### New module: `src/tools/video_tools.py`

Tools (using **Wan 2.1** via diffusers — Apache 2.0):

| Tool | Function |
|---|---|
| `generate_video_clip` | Text → short video clip (3–6 seconds) |
| `image_to_video` | Image + prompt → animated video from a still |
| `list_video_styles` | Return supported style presets |

### Architecture

```
generate_video_clip(prompt, duration=5, resolution="480p", fps=24)
  → loads Wan 2.1 via diffusers pipeline (lazy, cached)
  → generates frames
  → encodes to MP4 via FFmpeg
  → saves to data/video/{uuid}.mp4
  → returns path + metadata
```

- Wan 2.1 1.3B model: ~16GB VRAM
- Output saved to `data/video/`
- Resolution options: 480p (16GB), 720p (24GB+)
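
The resolution presets above imply a small lookup from preset name to frame dimensions. A sketch under stated assumptions: the 16:9 pixel sizes below are guesses (Wan-style models often use 832×480 for "480p"), and the helper name is hypothetical.

```python
# Hypothetical preset table; exact dimensions are assumptions.
_RESOLUTIONS: dict[str, tuple[int, int]] = {
    "480p": (832, 480),
    "720p": (1280, 720),
}


def resolve_resolution(name: str) -> tuple[int, int]:
    """Return (width, height) for a preset, raising on unknown names."""
    try:
        return _RESOLUTIONS[name]
    except KeyError:
        raise ValueError(f"Unsupported resolution: {name!r}") from None
```

Failing fast on unknown presets keeps a typo in `video_default_resolution` from silently producing the wrong clip size.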

### New dependency (extends `creative` extra)

```toml
creative = [
    ...,
    # Wan 2.1 uses diffusers (already listed) + model weights downloaded on first use
]
```

### Config

```python
video_output_dir: str = "data/video"
wan_model_id: str = "Wan-AI/Wan2.1-T2V-1.3B"
video_default_resolution: str = "480p"
```

### Tests

- `tests/test_video_tools.py` — mock diffusers pipeline, test clip generation

---

## Phase 5: Creative Director — Storyboard & Assembly Pipeline

**Goal:** Orchestrate multi-persona workflows to produce 3+ minute creative
videos with music, narration, and stitched scenes.

### New module: `src/creative/director.py`

The Creative Director is a **multi-step pipeline** that coordinates Pixel,
Lyra, and Reel to produce complete creative works:

```
User: "Create a 3-minute music video about a sunrise over mountains"
                      │
              Creative Director
           ┌─────────┼──────────┐
           │         │          │
    1. STORYBOARD  2. MUSIC  3. GENERATE
       (Pixel)     (Lyra)      (Reel)
           │         │          │
       N scene    Full song  N video clips
    descriptions    with    from storyboard
    + keyframes    vocals      frames
           │         │          │
           └─────────┼──────────┘
                     │
                4. ASSEMBLE
             (MoviePy + FFmpeg)
                     │
              Final video with
             music, transitions,
                   titles
```

### Pipeline steps

1. **Script** — Timmy (or Quill) writes scene descriptions and lyrics
2. **Storyboard** — Pixel generates keyframe images for each scene
3. **Music** — Lyra generates the soundtrack (vocals + instrumentals)
4. **Video clips** — Reel generates video for each scene (image-to-video
   from storyboard frames, or text-to-video from descriptions)
5. **Assembly** — MoviePy stitches clips together with cross-fades,
   overlays the music track, adds title cards
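
The five steps above reduce to a strictly sequential runner. A minimal sketch, with the step functions injected so the ordering logic can be exercised without any models; the function name and the dict-based project shape are assumptions (the real director uses the `CreativeProject` dataclass).

```python
from typing import Callable

# Canonical step order from the plan; each step receives the project
# so far and returns its own output, which is stored under its name.
_STEP_ORDER = ("script", "storyboard", "music", "video", "assembly")


def run_pipeline(project: dict, steps: dict[str, Callable[[dict], dict]]) -> dict:
    """Run script → storyboard → music → video → assembly in order."""
    for name in _STEP_ORDER:
        project["status"] = name
        project[name] = steps[name](project)
    project["status"] = "complete"
    return project
```

Because each step sees the accumulated project, the video step can read the storyboard frames and the assembly step can read both the clips and the music track.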

### New module: `src/creative/assembler.py`

Video assembly engine (using **MoviePy** — MIT, `pip install moviepy`):

| Function | Purpose |
|---|---|
| `stitch_clips` | Concatenate video clips with transitions |
| `overlay_audio` | Mix music track onto video |
| `add_title_card` | Prepend/append title/credits |
| `add_subtitles` | Burn lyrics/captions onto video |
| `export_final` | Encode final video (H.264 + AAC) |

### New dependency

```toml
dependencies = [
    ...,
    "moviepy>=2.0.0",
]
```

### Config

```python
creative_output_dir: str = "data/creative"
video_transition_duration: float = 1.0  # seconds
default_video_codec: str = "libx264"
```

### Dashboard

- `/creative/ui` — Full Creative Studio with tabs:
  - **Images** — gallery + generation form
  - **Music** — player + generation form
  - **Video** — player + generation form
  - **Director** — multi-step pipeline builder with storyboard view
- `/creative/projects` — saved projects with all assets
- `/creative/projects/{id}` — project detail with timeline view

### Tests

- `tests/test_assembler.py` — test stitching, audio overlay, title cards
- `tests/test_director.py` — test pipeline orchestration with mocks

---

## Phase 6: Spark Integration for All New Tools

**Goal:** Every tool invocation and creative pipeline step gets captured by
Spark Intelligence for learning and advisory.

### Changes to `src/spark/engine.py`

```python
def on_tool_executed(
    self, agent_id: str, tool_name: str,
    task_id: Optional[str], success: bool,
    duration_ms: Optional[int] = None,
) -> Optional[str]:
    """Capture individual tool invocations."""

def on_creative_step(
    self, project_id: str, step_name: str,
    agent_id: str, output_path: Optional[str],
) -> Optional[str]:
    """Capture creative pipeline progress."""
```
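
One way the tool executor could feed `on_tool_executed()`: wrap each tool call, time it, and report in a `finally` block so failures are captured too. This wrapper is a sketch, not the real executor; only the call shape matches the signature above.

```python
import time
from typing import Any, Callable, Optional


def execute_with_capture(
    engine: Any,
    agent_id: str,
    tool_name: str,
    task_id: Optional[str],
    fn: Callable[[], Any],
) -> Any:
    """Run a tool callable and report the invocation to Spark.

    Hypothetical wrapper: `engine` stands in for the Spark engine.
    """
    start = time.monotonic()
    success = False
    try:
        result = fn()
        success = True
        return result
    finally:
        # Runs on both success and exception, so every invocation is captured.
        duration_ms = int((time.monotonic() - start) * 1000)
        engine.on_tool_executed(agent_id, tool_name, task_id, success, duration_ms)
```

Exceptions still propagate to the caller; Spark just sees `success=False` with the elapsed time.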

### New advisor patterns

- "Pixel generates storyboards 40% faster than individual image calls"
- "Lyra's pop genre tracks have 85% higher completion rate than jazz"
- "Video generation at 480p uses 60% less GPU time than 720p for similar quality"
- "Git commits from Forge average 3 files per commit"

---

## Implementation Order

| Phase | What | New Files | Est. Tests |
|---|---|---|---|
| 1 | Git/DevOps tools | 2 source + 1 test | ~25 |
| 2 | Image generation | 2 source + 1 test + 1 template | ~15 |
| 3 | Music generation | 1 source + 1 test | ~12 |
| 4 | Video generation | 1 source + 1 test | ~12 |
| 5 | Creative Director pipeline | 2 source + 2 tests + 1 template | ~20 |
| 6 | Spark tool-level capture | 1 modified + 1 test update | ~8 |

**Total: ~10 new source files, ~6 new test files, ~92 new tests**

---

## New Dependencies Summary

**Required (always installed):**

```
GitPython>=3.1.40
moviepy>=2.0.0
```

**Optional `creative` extra (GPU features):**

```
diffusers>=0.30.0
transformers>=4.40.0
accelerate>=0.30.0
torch>=2.2.0
safetensors>=0.4.0
ace-step>=1.5.0
```

**Install:** `pip install ".[creative]"` for the full creative stack

---

## New Persona Summary

| ID | Name | Role | Tools |
|---|---|---|---|
| pixel | Pixel | Visual Architect | generate_image, generate_storyboard, image_variations |
| lyra | Lyra | Sound Weaver | generate_song, generate_instrumental, generate_vocals |
| reel | Reel | Motion Director | generate_video_clip, image_to_video |

These join the existing 6 personas (Echo, Mace, Helm, Seer, Forge, Quill)
for a total of **9 specialized agents** in the swarm.

---

## Hardware Requirements

- **CPU only:** Git tools, MoviePy assembly, all tests (mocked)
- **8GB VRAM:** FLUX.2 Klein 4B (images)
- **4GB VRAM:** ACE-Step 1.5 (music)
- **16GB VRAM:** Wan 2.1 1.3B (video at 480p)
- **Recommended:** RTX 4090 24GB runs the entire stack comfortably
@@ -23,6 +23,8 @@ dependencies = [
     "rich>=13.0.0",
     "pydantic-settings>=2.0.0",
     "websockets>=12.0",
+    "GitPython>=3.1.40",
+    "moviepy>=2.0.0",
 ]
 
 [project.optional-dependencies]
@@ -52,6 +54,16 @@ voice = [
 telegram = [
     "python-telegram-bot>=21.0",
 ]
+# Creative: GPU-accelerated image, music, and video generation.
+# pip install ".[creative]"
+creative = [
+    "diffusers>=0.30.0",
+    "transformers>=4.40.0",
+    "accelerate>=0.30.0",
+    "torch>=2.2.0",
+    "safetensors>=0.4.0",
+    "ace-step>=1.5.0",
+]
 
 [project.scripts]
 timmy = "timmy.cli:main"
@@ -72,11 +84,14 @@ include = [
     "src/notifications",
     "src/shortcuts",
     "src/telegram_bot",
+    "src/spark",
+    "src/tools",
+    "src/creative",
 ]
 
 [tool.pytest.ini_options]
 testpaths = ["tests"]
-pythonpath = ["src"]
+pythonpath = ["src", "tests"]
 asyncio_mode = "auto"
 asyncio_default_fixture_loop_scope = "function"
 addopts = "-v --tb=short"
@@ -28,6 +28,34 @@ class Settings(BaseSettings):
     # 8b ~16 GB | 70b ~140 GB | 405b ~810 GB
     airllm_model_size: Literal["8b", "70b", "405b"] = "70b"
 
+    # ── Spark Intelligence ────────────────────────────────────────────────
+    # Enable/disable the Spark cognitive layer.
+    # When enabled, Spark captures swarm events, runs EIDOS predictions,
+    # consolidates memories, and generates advisory recommendations.
+    spark_enabled: bool = True
+
+    # ── Git / DevOps ──────────────────────────────────────────────────────
+    git_default_repo_dir: str = "~/repos"
+
+    # ── Creative — Image Generation (Pixel) ───────────────────────────────
+    flux_model_id: str = "black-forest-labs/FLUX.1-schnell"
+    image_output_dir: str = "data/images"
+    image_default_steps: int = 4
+
+    # ── Creative — Music Generation (Lyra) ────────────────────────────────
+    music_output_dir: str = "data/music"
+    ace_step_model: str = "ace-step/ACE-Step-v1.5"
+
+    # ── Creative — Video Generation (Reel) ────────────────────────────────
+    video_output_dir: str = "data/video"
+    wan_model_id: str = "Wan-AI/Wan2.1-T2V-1.3B"
+    video_default_resolution: str = "480p"
+
+    # ── Creative — Pipeline / Assembly ────────────────────────────────────
+    creative_output_dir: str = "data/creative"
+    video_transition_duration: float = 1.0
+    default_video_codec: str = "libx264"
+
     model_config = SettingsConfigDict(
         env_file=".env",
         env_file_encoding="utf-8",

src/creative/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Creative pipeline — orchestrates image, music, and video generation."""

src/creative/assembler.py (new file, 304 lines)
@@ -0,0 +1,304 @@
"""Video assembly engine — stitch clips, overlay audio, add titles.

Uses MoviePy + FFmpeg to combine generated video clips, music tracks,
and title cards into 3+ minute final videos.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

_MOVIEPY_AVAILABLE = True
try:
    from moviepy import (
        VideoFileClip,
        AudioFileClip,
        TextClip,
        CompositeVideoClip,
        ImageClip,
        concatenate_videoclips,
        vfx,
    )
except ImportError:
    _MOVIEPY_AVAILABLE = False

# Resolve a font that actually exists on this system.
_DEFAULT_FONT = "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf"


def _require_moviepy() -> None:
    if not _MOVIEPY_AVAILABLE:
        raise ImportError(
            "MoviePy is not installed. Run: pip install moviepy"
        )


def _output_dir() -> Path:
    from config import settings

    d = Path(getattr(settings, "creative_output_dir", "data/creative"))
    d.mkdir(parents=True, exist_ok=True)
    return d


# ── Stitching ─────────────────────────────────────────────────────────────────

def stitch_clips(
    clip_paths: list[str],
    transition_duration: float = 1.0,
    output_path: Optional[str] = None,
) -> dict:
    """Concatenate video clips with cross-fade transitions.

    Args:
        clip_paths: Ordered list of MP4 file paths.
        transition_duration: Cross-fade duration in seconds.
        output_path: Optional output path. Auto-generated if omitted.

    Returns dict with ``path`` and ``total_duration``.
    """
    _require_moviepy()

    clips = [VideoFileClip(p) for p in clip_paths]

    # Apply cross-fade between consecutive clips
    if transition_duration > 0 and len(clips) > 1:
        processed = [clips[0]]
        for clip in clips[1:]:
            clip = clip.with_start(
                processed[-1].end - transition_duration
            ).with_effects([vfx.CrossFadeIn(transition_duration)])
            processed.append(clip)
        final = CompositeVideoClip(processed)
    else:
        final = concatenate_videoclips(clips, method="compose")

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"stitched_{uid}.mp4"
    final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    total_duration = final.duration
    # Clean up
    for c in clips:
        c.close()

    return {
        "success": True,
        "path": str(out),
        "total_duration": total_duration,
        "clip_count": len(clip_paths),
    }


# ── Audio overlay ─────────────────────────────────────────────────────────────

def overlay_audio(
    video_path: str,
    audio_path: str,
    output_path: Optional[str] = None,
    volume: float = 1.0,
) -> dict:
    """Mix an audio track onto a video file.

    The audio is trimmed to the video duration when it is longer.
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    audio = AudioFileClip(audio_path)

    # Trim audio to video length
    if audio.duration > video.duration:
        audio = audio.subclipped(0, video.duration)

    if volume != 1.0:
        audio = audio.with_volume_scaled(volume)

    video = video.with_audio(audio)

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"mixed_{uid}.mp4"
    video.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = video.duration
    video.close()
    audio.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
    }


# ── Title cards ───────────────────────────────────────────────────────────────

def add_title_card(
    video_path: str,
    title: str,
    subtitle: str = "",
    duration: float = 4.0,
    position: str = "start",
    output_path: Optional[str] = None,
) -> dict:
    """Add a title card at the start or end of a video.

    Args:
        video_path: Source video path.
        title: Title text.
        subtitle: Optional subtitle text.
        duration: Title card display duration in seconds.
        position: "start" or "end".
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    w, h = video.size

    # Build title card as a text clip on black background.
    # Include the subtitle on a second line when provided.
    card_text = title if not subtitle else f"{title}\n{subtitle}"
    txt = TextClip(
        text=card_text,
        font_size=60,
        color="white",
        size=(w, h),
        method="caption",
        font=_DEFAULT_FONT,
    ).with_duration(duration)

    clips = [txt, video] if position == "start" else [video, txt]
    final = concatenate_videoclips(clips, method="compose")

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"titled_{uid}.mp4"
    final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = final.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "title": title,
    }


# ── Subtitles / captions ─────────────────────────────────────────────────────

def add_subtitles(
    video_path: str,
    captions: list[dict],
    output_path: Optional[str] = None,
) -> dict:
    """Burn subtitle captions onto a video.

    Args:
        captions: List of dicts with ``text``, ``start``, ``end`` keys
            (times in seconds).
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    w, h = video.size

    text_clips = []
    for cap in captions:
        txt = (
            TextClip(
                text=cap["text"],
                font_size=36,
                color="white",
                stroke_color="black",
                stroke_width=2,
                size=(w - 40, None),
                method="caption",
                font=_DEFAULT_FONT,
            )
            .with_start(cap["start"])
            .with_end(cap["end"])
            .with_position(("center", h - 100))
        )
        text_clips.append(txt)

    final = CompositeVideoClip([video] + text_clips)

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"subtitled_{uid}.mp4"
    final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = final.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "caption_count": len(captions),
    }


# ── Final export helper ──────────────────────────────────────────────────────

def export_final(
    video_path: str,
    output_path: Optional[str] = None,
    codec: str = "libx264",
    audio_codec: str = "aac",
    bitrate: str = "8000k",
) -> dict:
    """Re-encode a video with specific codec settings for distribution."""
    _require_moviepy()

    video = VideoFileClip(video_path)
    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"final_{uid}.mp4"
    video.write_videofile(
        str(out), codec=codec, audio_codec=audio_codec,
        bitrate=bitrate, logger=None,
    )

    result_duration = video.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "codec": codec,
    }


# ── Tool catalogue ────────────────────────────────────────────────────────────

ASSEMBLER_TOOL_CATALOG: dict[str, dict] = {
    "stitch_clips": {
        "name": "Stitch Clips",
        "description": "Concatenate video clips with cross-fade transitions",
        "fn": stitch_clips,
    },
    "overlay_audio": {
        "name": "Overlay Audio",
        "description": "Mix a music track onto a video",
        "fn": overlay_audio,
    },
    "add_title_card": {
        "name": "Add Title Card",
        "description": "Add a title card at the start or end of a video",
        "fn": add_title_card,
    },
    "add_subtitles": {
        "name": "Add Subtitles",
        "description": "Burn subtitle captions onto a video",
        "fn": add_subtitles,
    },
    "export_final": {
        "name": "Export Final",
        "description": "Re-encode video with specific codec settings",
        "fn": export_final,
    },
}

src/creative/director.py (new file, 378 lines)
@@ -0,0 +1,378 @@
"""Creative Director — multi-persona pipeline for 3+ minute creative works.

Orchestrates Pixel (images), Lyra (music), and Reel (video) to produce
complete music videos, cinematic shorts, and other creative works.

Pipeline stages:
    1. Script — Generate scene descriptions and lyrics
    2. Storyboard — Generate keyframe images (Pixel)
    3. Music — Generate soundtrack (Lyra)
    4. Video — Generate clips per scene (Reel)
    5. Assembly — Stitch clips + overlay audio (MoviePy)
"""

from __future__ import annotations

import json
import logging
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)


@dataclass
class CreativeProject:
    """Tracks all assets and state for a creative production."""

    id: str = field(default_factory=lambda: uuid.uuid4().hex[:12])
    title: str = ""
    description: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    status: str = "planning"  # planning|scripting|storyboard|music|video|assembly|complete|failed

    # Pipeline outputs
    scenes: list[dict] = field(default_factory=list)
    lyrics: str = ""
    storyboard_frames: list[dict] = field(default_factory=list)
    music_track: Optional[dict] = None
    video_clips: list[dict] = field(default_factory=list)
    final_video: Optional[dict] = None

    def to_dict(self) -> dict:
        return {
            "id": self.id,
            "title": self.title,
            "description": self.description,
            "created_at": self.created_at,
            "status": self.status,
            "scene_count": len(self.scenes),
            "has_storyboard": len(self.storyboard_frames) > 0,
            "has_music": self.music_track is not None,
            "clip_count": len(self.video_clips),
            "has_final": self.final_video is not None,
        }


# In-memory project store
_projects: dict[str, CreativeProject] = {}


def _project_dir(project_id: str) -> Path:
    from config import settings

    d = Path(getattr(settings, "creative_output_dir", "data/creative")) / project_id
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_project(project: CreativeProject) -> None:
    """Persist project metadata to disk."""
    path = _project_dir(project.id) / "project.json"
    path.write_text(json.dumps(project.to_dict(), indent=2))


# ── Project management ────────────────────────────────────────────────────────

def create_project(
    title: str,
    description: str,
    scenes: Optional[list[dict]] = None,
    lyrics: str = "",
) -> dict:
    """Create a new creative project.

    Args:
        title: Project title.
        description: High-level creative brief.
        scenes: Optional pre-written scene descriptions.
            Each scene is a dict with a ``description`` key.
        lyrics: Optional song lyrics for the soundtrack.

    Returns dict with project metadata.
    """
    project = CreativeProject(
        title=title,
        description=description,
        scenes=scenes or [],
        lyrics=lyrics,
    )
    _projects[project.id] = project
    _save_project(project)
    logger.info("Creative project created: %s (%s)", project.id, title)
    return {"success": True, "project": project.to_dict()}


def get_project(project_id: str) -> Optional[dict]:
    """Get project metadata."""
    project = _projects.get(project_id)
    if project:
        return project.to_dict()
    return None


def list_projects() -> list[dict]:
    """List all creative projects."""
    return [p.to_dict() for p in _projects.values()]


# ── Pipeline steps ────────────────────────────────────────────────────────────

def run_storyboard(project_id: str) -> dict:
    """Generate storyboard frames for all scenes in a project.

    Calls Pixel's generate_storyboard tool.
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}
    if not project.scenes:
        return {"success": False, "error": "No scenes defined"}

    project.status = "storyboard"

    from tools.image_tools import generate_storyboard

    scene_descriptions = [s["description"] for s in project.scenes]
    result = generate_storyboard(scene_descriptions)

    if result["success"]:
        project.storyboard_frames = result["frames"]
        _save_project(project)

    return result


def run_music(
    project_id: str,
    genre: str = "pop",
    duration: Optional[int] = None,
) -> dict:
    """Generate the soundtrack for a project.

    Calls Lyra's generate_song tool.
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}

    project.status = "music"

    from tools.music_tools import generate_song

    # Default duration: ~15s per scene, minimum 60s
    target_duration = duration or max(60, len(project.scenes) * 15)

    result = generate_song(
        lyrics=project.lyrics,
|
||||
genre=genre,
|
||||
duration=target_duration,
|
||||
title=project.title,
|
||||
)
|
||||
|
||||
if result["success"]:
|
||||
project.music_track = result
|
||||
_save_project(project)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def run_video_generation(project_id: str) -> dict:
|
||||
"""Generate video clips for each scene.
|
||||
|
||||
Uses storyboard frames (image-to-video) if available,
|
||||
otherwise falls back to text-to-video.
|
||||
"""
|
||||
project = _projects.get(project_id)
|
||||
if not project:
|
||||
return {"success": False, "error": "Project not found"}
|
||||
if not project.scenes:
|
||||
return {"success": False, "error": "No scenes defined"}
|
||||
|
||||
project.status = "video"
|
||||
|
||||
from tools.video_tools import generate_video_clip, image_to_video
|
||||
|
||||
clips = []
|
||||
for i, scene in enumerate(project.scenes):
|
||||
desc = scene["description"]
|
||||
|
||||
# Prefer image-to-video if storyboard frame exists
|
||||
if i < len(project.storyboard_frames):
|
||||
frame = project.storyboard_frames[i]
|
||||
result = image_to_video(
|
||||
image_path=frame["path"],
|
||||
prompt=desc,
|
||||
duration=scene.get("duration", 5),
|
||||
)
|
||||
else:
|
||||
result = generate_video_clip(
|
||||
prompt=desc,
|
||||
duration=scene.get("duration", 5),
|
||||
)
|
||||
|
||||
result["scene_index"] = i
|
||||
clips.append(result)
|
||||
|
||||
project.video_clips = clips
|
||||
_save_project(project)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"clip_count": len(clips),
|
||||
"clips": clips,
|
||||
}
|
||||
|
||||
|
||||
def run_assembly(project_id: str, transition_duration: float = 1.0) -> dict:
|
||||
"""Assemble all clips into the final video with music.
|
||||
|
||||
Pipeline:
|
||||
1. Stitch clips with transitions
|
||||
2. Overlay music track
|
||||
3. Add title card
|
||||
"""
|
||||
project = _projects.get(project_id)
|
||||
if not project:
|
||||
return {"success": False, "error": "Project not found"}
|
||||
if not project.video_clips:
|
||||
return {"success": False, "error": "No video clips generated"}
|
||||
|
||||
project.status = "assembly"
|
||||
|
||||
from creative.assembler import stitch_clips, overlay_audio, add_title_card
|
||||
|
||||
# 1. Stitch clips
|
||||
clip_paths = [c["path"] for c in project.video_clips if c.get("success")]
|
||||
if not clip_paths:
|
||||
return {"success": False, "error": "No successful clips to assemble"}
|
||||
|
||||
stitched = stitch_clips(clip_paths, transition_duration=transition_duration)
|
||||
if not stitched["success"]:
|
||||
return stitched
|
||||
|
||||
# 2. Overlay music (if available)
|
||||
current_video = stitched["path"]
|
||||
if project.music_track and project.music_track.get("path"):
|
||||
mixed = overlay_audio(current_video, project.music_track["path"])
|
||||
if mixed["success"]:
|
||||
current_video = mixed["path"]
|
||||
|
||||
# 3. Add title card
|
||||
titled = add_title_card(current_video, title=project.title)
|
||||
if titled["success"]:
|
||||
current_video = titled["path"]
|
||||
|
||||
project.final_video = {
|
||||
"path": current_video,
|
||||
"duration": titled.get("duration", stitched["total_duration"]),
|
||||
}
|
||||
project.status = "complete"
|
||||
_save_project(project)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"path": current_video,
|
||||
"duration": project.final_video["duration"],
|
||||
"project_id": project_id,
|
||||
}
|
||||
|
||||
|
||||
def run_full_pipeline(
|
||||
title: str,
|
||||
description: str,
|
||||
scenes: list[dict],
|
||||
lyrics: str = "",
|
||||
genre: str = "pop",
|
||||
) -> dict:
|
||||
"""Run the entire creative pipeline end-to-end.
|
||||
|
||||
This is the top-level orchestration function that:
|
||||
1. Creates the project
|
||||
2. Generates storyboard frames
|
||||
3. Generates music
|
||||
4. Generates video clips
|
||||
5. Assembles the final video
|
||||
|
||||
Args:
|
||||
title: Project title.
|
||||
description: Creative brief.
|
||||
scenes: List of scene dicts with ``description`` keys.
|
||||
lyrics: Song lyrics for the soundtrack.
|
||||
genre: Music genre.
|
||||
|
||||
Returns dict with final video path and project metadata.
|
||||
"""
|
||||
# Create project
|
||||
project_result = create_project(title, description, scenes, lyrics)
|
||||
if not project_result["success"]:
|
||||
return project_result
|
||||
project_id = project_result["project"]["id"]
|
||||
|
||||
# Run pipeline steps
|
||||
steps = [
|
||||
("storyboard", lambda: run_storyboard(project_id)),
|
||||
("music", lambda: run_music(project_id, genre=genre)),
|
||||
("video", lambda: run_video_generation(project_id)),
|
||||
("assembly", lambda: run_assembly(project_id)),
|
||||
]
|
||||
|
||||
for step_name, step_fn in steps:
|
||||
logger.info("Creative pipeline step: %s (project %s)", step_name, project_id)
|
||||
result = step_fn()
|
||||
if not result.get("success"):
|
||||
project = _projects.get(project_id)
|
||||
if project:
|
||||
project.status = "failed"
|
||||
_save_project(project)
|
||||
return {
|
||||
"success": False,
|
||||
"failed_step": step_name,
|
||||
"error": result.get("error", "Unknown error"),
|
||||
"project_id": project_id,
|
||||
}
|
||||
|
||||
project = _projects.get(project_id)
|
||||
return {
|
||||
"success": True,
|
||||
"project_id": project_id,
|
||||
"final_video": project.final_video if project else None,
|
||||
"project": project.to_dict() if project else None,
|
||||
}
|
||||
|
||||
|
||||
# ── Tool catalogue ────────────────────────────────────────────────────────────
|
||||
|
||||
DIRECTOR_TOOL_CATALOG: dict[str, dict] = {
|
||||
"create_project": {
|
||||
"name": "Create Creative Project",
|
||||
"description": "Create a new creative production project",
|
||||
"fn": create_project,
|
||||
},
|
||||
"run_storyboard": {
|
||||
"name": "Generate Storyboard",
|
||||
"description": "Generate keyframe images for all project scenes",
|
||||
"fn": run_storyboard,
|
||||
},
|
||||
"run_music": {
|
||||
"name": "Generate Music",
|
||||
"description": "Generate the project soundtrack with vocals and instrumentals",
|
||||
"fn": run_music,
|
||||
},
|
||||
"run_video_generation": {
|
||||
"name": "Generate Video Clips",
|
||||
"description": "Generate video clips for each project scene",
|
||||
"fn": run_video_generation,
|
||||
},
|
||||
"run_assembly": {
|
||||
"name": "Assemble Final Video",
|
||||
"description": "Stitch clips, overlay music, and add title cards",
|
||||
"fn": run_assembly,
|
||||
},
|
||||
"run_full_pipeline": {
|
||||
"name": "Run Full Pipeline",
|
||||
"description": "Execute entire creative pipeline end-to-end",
|
||||
"fn": run_full_pipeline,
|
||||
},
|
||||
}
|
||||
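`run_full_pipeline` reduces the whole production to an ordered list of `(name, fn)` pairs and stops at the first failure. A self-contained sketch of that fail-fast loop; the stub step functions and their error strings below are illustrative stand-ins, not the real `run_storyboard`/`run_music` calls:

```python
def run_steps(steps):
    """Run (name, fn) pairs in order; return on the first failing step."""
    for step_name, step_fn in steps:
        result = step_fn()
        if not result.get("success"):
            return {
                "success": False,
                "failed_step": step_name,
                "error": result.get("error", "Unknown error"),
            }
    return {"success": True}


# Hypothetical stand-ins: "music" fails, so "video" is never attempted.
outcome = run_steps([
    ("storyboard", lambda: {"success": True}),
    ("music", lambda: {"success": False, "error": "GPU unavailable"}),
    ("video", lambda: {"success": True}),
])
print(outcome["failed_step"])  # music
```

Wrapping each step in a lambda also means an expensive stage (video generation, assembly) is only invoked if every earlier stage succeeded.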
@@ -23,6 +23,8 @@ from dashboard.routes.briefing import router as briefing_router
from dashboard.routes.telegram import router as telegram_router
from dashboard.routes.swarm_internal import router as swarm_internal_router
from dashboard.routes.tools import router as tools_router
from dashboard.routes.spark import router as spark_router
from dashboard.routes.creative import router as creative_router

logging.basicConfig(
    level=logging.INFO,
@@ -97,6 +99,11 @@ async def lifespan(app: FastAPI):
    except Exception as exc:
        logger.error("Failed to spawn persona agents: %s", exc)

    # Initialise Spark Intelligence engine
    from spark.engine import spark_engine
    if spark_engine.enabled:
        logger.info("Spark Intelligence active — event capture enabled")

    # Auto-start Telegram bot if a token is configured
    from telegram_bot.bot import telegram_bot
    await telegram_bot.start()
@@ -136,6 +143,8 @@ app.include_router(briefing_router)
app.include_router(telegram_router)
app.include_router(swarm_internal_router)
app.include_router(tools_router)
app.include_router(spark_router)
app.include_router(creative_router)


@app.get("/", response_class=HTMLResponse)
87
src/dashboard/routes/creative.py
Normal file
@@ -0,0 +1,87 @@
"""Creative Studio dashboard route — /creative endpoints.

Provides a dashboard page for the creative pipeline: image generation,
music generation, video generation, and the full director pipeline.
"""

from pathlib import Path
from typing import Optional

from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

router = APIRouter(tags=["creative"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))


@router.get("/creative/ui", response_class=HTMLResponse)
async def creative_studio(request: Request):
    """Render the Creative Studio page."""
    # Collect existing outputs
    image_dir = Path("data/images")
    music_dir = Path("data/music")
    video_dir = Path("data/video")
    creative_dir = Path("data/creative")

    images = sorted(image_dir.glob("*.png"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if image_dir.exists() else []
    music_files = sorted(music_dir.glob("*.wav"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if music_dir.exists() else []
    videos = sorted(video_dir.glob("*.mp4"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if video_dir.exists() else []

    # Load projects
    projects = []
    if creative_dir.exists():
        for proj_dir in sorted(creative_dir.iterdir(), reverse=True):
            meta_path = proj_dir / "project.json"
            if meta_path.exists():
                import json
                projects.append(json.loads(meta_path.read_text()))

    return templates.TemplateResponse(
        request,
        "creative.html",
        {
            "page_title": "Creative Studio",
            "images": [{"name": p.name, "path": str(p)} for p in images],
            "music_files": [{"name": p.name, "path": str(p)} for p in music_files],
            "videos": [{"name": p.name, "path": str(p)} for p in videos],
            "projects": projects[:10],
            "image_count": len(images),
            "music_count": len(music_files),
            "video_count": len(videos),
            "project_count": len(projects),
        },
    )


@router.get("/creative/api/projects")
async def creative_projects_api():
    """Return creative projects as JSON."""
    try:
        from creative.director import list_projects
        return {"projects": list_projects()}
    except ImportError:
        return {"projects": []}


@router.get("/creative/api/genres")
async def creative_genres_api():
    """Return supported music genres."""
    try:
        from tools.music_tools import GENRES
        return {"genres": GENRES}
    except ImportError:
        return {"genres": []}


@router.get("/creative/api/video-styles")
async def creative_video_styles_api():
    """Return supported video styles and resolutions."""
    try:
        from tools.video_tools import VIDEO_STYLES, RESOLUTION_PRESETS
        return {
            "styles": VIDEO_STYLES,
            "resolutions": list(RESOLUTION_PRESETS.keys()),
        }
    except ImportError:
        return {"styles": [], "resolutions": []}
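The gallery queries in `creative_studio` all follow the same shape: glob a directory, sort by `st_mtime` descending, keep the newest 20, and fall back to an empty list when the directory is missing. A runnable sketch of that helper (the `latest` name is made up for illustration):

```python
import os
import tempfile
from pathlib import Path


def latest(dir_path: Path, pattern: str, limit: int = 20) -> list[Path]:
    """Newest-first listing of files matching `pattern`, or [] if absent."""
    if not dir_path.exists():
        return []
    return sorted(
        dir_path.glob(pattern),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )[:limit]


with tempfile.TemporaryDirectory() as tmp:
    d = Path(tmp)
    for name, mtime in [("old.png", 100), ("new.png", 200)]:
        (d / name).write_bytes(b"")
        os.utime(d / name, (mtime, mtime))  # pin mtimes so order is deterministic
    names = [p.name for p in latest(d, "*.png")]

print(names)  # ['new.png', 'old.png']
```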
147
src/dashboard/routes/spark.py
Normal file
@@ -0,0 +1,147 @@
"""Spark Intelligence dashboard routes.

GET /spark — JSON status (API)
GET /spark/ui — HTML Spark Intelligence dashboard
GET /spark/timeline — HTMX partial: recent event timeline
GET /spark/insights — HTMX partial: advisories and insights
GET /spark/predictions — HTMX partial: EIDOS predictions
"""

import json
import logging
from pathlib import Path

from fastapi import APIRouter, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates

from spark.engine import spark_engine

logger = logging.getLogger(__name__)

router = APIRouter(prefix="/spark", tags=["spark"])
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))


@router.get("/ui", response_class=HTMLResponse)
async def spark_ui(request: Request):
    """Render the Spark Intelligence dashboard page."""
    status = spark_engine.status()
    advisories = spark_engine.get_advisories()
    timeline = spark_engine.get_timeline(limit=20)
    predictions = spark_engine.get_predictions(limit=10)
    memories = spark_engine.get_memories(limit=10)

    # Parse event data JSON for template display
    timeline_enriched = []
    for ev in timeline:
        entry = {
            "id": ev.id,
            "event_type": ev.event_type,
            "agent_id": ev.agent_id,
            "task_id": ev.task_id,
            "description": ev.description,
            "importance": ev.importance,
            "created_at": ev.created_at,
        }
        try:
            entry["data"] = json.loads(ev.data)
        except (json.JSONDecodeError, TypeError):
            entry["data"] = {}
        timeline_enriched.append(entry)

    # Enrich predictions for display
    predictions_enriched = []
    for p in predictions:
        entry = {
            "id": p.id,
            "task_id": p.task_id,
            "prediction_type": p.prediction_type,
            "accuracy": p.accuracy,
            "created_at": p.created_at,
            "evaluated_at": p.evaluated_at,
        }
        try:
            entry["predicted"] = json.loads(p.predicted_value)
        except (json.JSONDecodeError, TypeError):
            entry["predicted"] = {}
        try:
            entry["actual"] = json.loads(p.actual_value) if p.actual_value else None
        except (json.JSONDecodeError, TypeError):
            entry["actual"] = None
        predictions_enriched.append(entry)

    return templates.TemplateResponse(
        request,
        "spark.html",
        {
            "status": status,
            "advisories": advisories,
            "timeline": timeline_enriched,
            "predictions": predictions_enriched,
            "memories": memories,
        },
    )


@router.get("", response_class=HTMLResponse)
async def spark_status_json():
    """Return Spark Intelligence status as JSON."""
    from fastapi.responses import JSONResponse
    status = spark_engine.status()
    advisories = spark_engine.get_advisories()
    return JSONResponse({
        "status": status,
        "advisories": [
            {
                "category": a.category,
                "priority": a.priority,
                "title": a.title,
                "detail": a.detail,
                "suggested_action": a.suggested_action,
                "subject": a.subject,
                "evidence_count": a.evidence_count,
            }
            for a in advisories
        ],
    })


@router.get("/timeline", response_class=HTMLResponse)
async def spark_timeline(request: Request):
    """HTMX partial: recent event timeline."""
    timeline = spark_engine.get_timeline(limit=20)
    timeline_enriched = []
    for ev in timeline:
        entry = {
            "id": ev.id,
            "event_type": ev.event_type,
            "agent_id": ev.agent_id,
            "task_id": ev.task_id,
            "description": ev.description,
            "importance": ev.importance,
            "created_at": ev.created_at,
        }
        try:
            entry["data"] = json.loads(ev.data)
        except (json.JSONDecodeError, TypeError):
            entry["data"] = {}
        timeline_enriched.append(entry)

    return templates.TemplateResponse(
        request,
        "partials/spark_timeline.html",
        {"timeline": timeline_enriched},
    )


@router.get("/insights", response_class=HTMLResponse)
async def spark_insights(request: Request):
    """HTMX partial: advisories and consolidated memories."""
    advisories = spark_engine.get_advisories()
    memories = spark_engine.get_memories(limit=10)
    return templates.TemplateResponse(
        request,
        "partials/spark_insights.html",
        {"advisories": advisories, "memories": memories},
    )
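Both enrichment loops in `spark.py` wrap `json.loads` in the same `(json.JSONDecodeError, TypeError)` guard, so one malformed or NULL `data` column cannot break the whole page render. The pattern in isolation (the `parse_or` name is illustrative):

```python
import json


def parse_or(raw, default):
    """Parse a JSON string; return `default` for None or invalid input."""
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return default


# None raises TypeError, "not json" raises JSONDecodeError; both fall back.
rows = ['{"gpu": true}', None, "not json"]
parsed = [parse_or(r, {}) for r in rows]
print(parsed)  # [{'gpu': True}, {}, {}]
```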
@@ -23,8 +23,10 @@
<div class="mc-header-right">
    <a href="/briefing" class="mc-test-link">BRIEFING</a>
    <a href="/swarm/live" class="mc-test-link">SWARM</a>
    <a href="/spark/ui" class="mc-test-link">SPARK</a>
    <a href="/marketplace/ui" class="mc-test-link">MARKET</a>
    <a href="/tools" class="mc-test-link">TOOLS</a>
    <a href="/creative/ui" class="mc-test-link">CREATIVE</a>
    <a href="/mobile" class="mc-test-link">MOBILE</a>
    <button id="enable-notifications" class="mc-test-link" style="background:none;border:none;cursor:pointer;" title="Enable notifications">🔔</button>
    <span class="mc-time" id="clock"></span>
198
src/dashboard/templates/creative.html
Normal file
@@ -0,0 +1,198 @@
{% extends "base.html" %}

{% block title %}Creative Studio — Mission Control{% endblock %}

{% block content %}
<div class="container-fluid py-4">
  <div class="row mb-4">
    <div class="col">
      <h1 class="display-6">Creative Studio</h1>
      <p class="text-secondary">Image, music, and video generation — powered by Pixel, Lyra, and Reel</p>
    </div>
    <div class="col-auto d-flex gap-3">
      <div class="card bg-dark border-secondary">
        <div class="card-body text-center py-2 px-3">
          <h4 class="mb-0">{{ image_count }}</h4>
          <small class="text-secondary">Images</small>
        </div>
      </div>
      <div class="card bg-dark border-secondary">
        <div class="card-body text-center py-2 px-3">
          <h4 class="mb-0">{{ music_count }}</h4>
          <small class="text-secondary">Tracks</small>
        </div>
      </div>
      <div class="card bg-dark border-secondary">
        <div class="card-body text-center py-2 px-3">
          <h4 class="mb-0">{{ video_count }}</h4>
          <small class="text-secondary">Clips</small>
        </div>
      </div>
      <div class="card bg-dark border-secondary">
        <div class="card-body text-center py-2 px-3">
          <h4 class="mb-0">{{ project_count }}</h4>
          <small class="text-secondary">Projects</small>
        </div>
      </div>
    </div>
  </div>

  <!-- Tab Navigation -->
  <ul class="nav nav-tabs mb-4" role="tablist">
    <li class="nav-item">
      <button class="nav-link active" data-bs-toggle="tab" data-bs-target="#tab-images" type="button">Images</button>
    </li>
    <li class="nav-item">
      <button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-music" type="button">Music</button>
    </li>
    <li class="nav-item">
      <button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-video" type="button">Video</button>
    </li>
    <li class="nav-item">
      <button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-director" type="button">Director</button>
    </li>
  </ul>

  <div class="tab-content">
    <!-- Images Tab -->
    <div class="tab-pane fade show active" id="tab-images" role="tabpanel">
      <div class="row mb-3">
        <div class="col-12">
          <div class="card bg-dark border-secondary">
            <div class="card-header">
              <strong>Pixel</strong> — Visual Architect (FLUX)
            </div>
            <div class="card-body">
              <p class="text-secondary small mb-2">Generate images by sending a task to the swarm: <code>"Generate an image of ..."</code></p>
              <p class="text-secondary small">Tools: <span class="badge bg-primary">generate_image</span> <span class="badge bg-primary">generate_storyboard</span> <span class="badge bg-primary">image_variations</span></p>
            </div>
          </div>
        </div>
      </div>
      {% if images %}
      <div class="row g-3">
        {% for img in images %}
        <div class="col-md-3">
          <div class="card bg-dark border-secondary h-100">
            <div class="card-body text-center">
              <small class="text-secondary">{{ img.name }}</small>
            </div>
          </div>
        </div>
        {% endfor %}
      </div>
      {% else %}
      <div class="alert alert-secondary">No images generated yet. Send an image generation task to the swarm to get started.</div>
      {% endif %}
    </div>

    <!-- Music Tab -->
    <div class="tab-pane fade" id="tab-music" role="tabpanel">
      <div class="row mb-3">
        <div class="col-12">
          <div class="card bg-dark border-secondary">
            <div class="card-header">
              <strong>Lyra</strong> — Sound Weaver (ACE-Step 1.5)
            </div>
            <div class="card-body">
              <p class="text-secondary small mb-2">Generate music by sending a task: <code>"Compose a pop song about ..."</code></p>
              <p class="text-secondary small">Tools: <span class="badge bg-success">generate_song</span> <span class="badge bg-success">generate_instrumental</span> <span class="badge bg-success">generate_vocals</span> <span class="badge bg-success">list_genres</span></p>
              <p class="text-secondary small mb-0">Genres: pop, rock, hip-hop, r&b, jazz, blues, country, electronic, classical, folk, reggae, metal, punk, soul, funk, latin, ambient, lo-fi, cinematic</p>
            </div>
          </div>
        </div>
      </div>
      {% if music_files %}
      <div class="list-group">
        {% for track in music_files %}
        <div class="list-group-item bg-dark border-secondary d-flex justify-content-between align-items-center">
          <span>{{ track.name }}</span>
          <audio controls preload="none"><source src="/static/{{ track.path }}" type="audio/wav"></audio>
        </div>
        {% endfor %}
      </div>
      {% else %}
      <div class="alert alert-secondary">No music tracks generated yet. Send a music generation task to the swarm.</div>
      {% endif %}
    </div>

    <!-- Video Tab -->
    <div class="tab-pane fade" id="tab-video" role="tabpanel">
      <div class="row mb-3">
        <div class="col-12">
          <div class="card bg-dark border-secondary">
            <div class="card-header">
              <strong>Reel</strong> — Motion Director (Wan 2.1)
            </div>
            <div class="card-body">
              <p class="text-secondary small mb-2">Generate video clips: <code>"Create a cinematic clip of ..."</code></p>
              <p class="text-secondary small">Tools: <span class="badge bg-warning text-dark">generate_video_clip</span> <span class="badge bg-warning text-dark">image_to_video</span> <span class="badge bg-warning text-dark">stitch_clips</span> <span class="badge bg-warning text-dark">overlay_audio</span></p>
              <p class="text-secondary small mb-0">Resolutions: 480p, 720p | Styles: cinematic, anime, documentary, abstract, timelapse, slow-motion, music-video, vlog</p>
            </div>
          </div>
        </div>
      </div>
      {% if videos %}
      <div class="row g-3">
        {% for vid in videos %}
        <div class="col-md-4">
          <div class="card bg-dark border-secondary">
            <div class="card-body text-center">
              <small class="text-secondary">{{ vid.name }}</small>
            </div>
          </div>
        </div>
        {% endfor %}
      </div>
      {% else %}
      <div class="alert alert-secondary">No video clips generated yet. Send a video generation task to the swarm.</div>
      {% endif %}
    </div>

    <!-- Director Tab -->
    <div class="tab-pane fade" id="tab-director" role="tabpanel">
      <div class="row mb-3">
        <div class="col-12">
          <div class="card bg-dark border-secondary">
            <div class="card-header">
              <strong>Creative Director</strong> — Full Pipeline
            </div>
            <div class="card-body">
              <p class="text-secondary small mb-2">Orchestrate all three creative personas to produce a 3+ minute music video or cinematic short.</p>
              <p class="text-secondary small">Pipeline: <span class="badge bg-info">Script</span> → <span class="badge bg-primary">Storyboard</span> → <span class="badge bg-success">Music</span> → <span class="badge bg-warning text-dark">Video</span> → <span class="badge bg-danger">Assembly</span></p>
              <p class="text-secondary small mb-0">Tools: <span class="badge bg-secondary">create_project</span> <span class="badge bg-secondary">run_storyboard</span> <span class="badge bg-secondary">run_music</span> <span class="badge bg-secondary">run_video_generation</span> <span class="badge bg-secondary">run_assembly</span> <span class="badge bg-secondary">run_full_pipeline</span></p>
            </div>
          </div>
        </div>
      </div>

      <h5 class="mb-3">Projects</h5>
      {% if projects %}
      <div class="row g-3">
        {% for proj in projects %}
        <div class="col-md-6">
          <div class="card bg-dark border-secondary">
            <div class="card-header d-flex justify-content-between">
              <strong>{{ proj.title or proj.id }}</strong>
              <span class="badge {% if proj.status == 'complete' %}bg-success{% elif proj.status == 'failed' %}bg-danger{% else %}bg-info{% endif %}">{{ proj.status }}</span>
            </div>
            <div class="card-body">
              <div class="d-flex gap-3 small text-secondary">
                <span>Scenes: {{ proj.scene_count }}</span>
                <span>Storyboard: {{ 'Yes' if proj.has_storyboard else 'No' }}</span>
                <span>Music: {{ 'Yes' if proj.has_music else 'No' }}</span>
                <span>Clips: {{ proj.clip_count }}</span>
                <span>Final: {{ 'Yes' if proj.has_final else 'No' }}</span>
              </div>
            </div>
          </div>
        </div>
        {% endfor %}
      </div>
      {% else %}
      <div class="alert alert-secondary">No creative projects yet. Use the swarm to create one: <code>"Create a music video about sunrise over mountains"</code></div>
      {% endif %}
    </div>
  </div>
</div>
{% endblock %}
32
src/dashboard/templates/partials/spark_insights.html
Normal file
@@ -0,0 +1,32 @@
{% if advisories %}
{% for adv in advisories %}
<div class="spark-advisory priority-{{ 'high' if adv.priority >= 0.7 else ('medium' if adv.priority >= 0.4 else 'low') }}">
  <div class="spark-advisory-header">
    <span class="spark-advisory-cat">{{ adv.category | replace("_", " ") | upper }}</span>
    <span class="spark-advisory-priority">{{ "%.0f"|format(adv.priority * 100) }}%</span>
  </div>
  <div class="spark-advisory-title">{{ adv.title }}</div>
  <div class="spark-advisory-detail">{{ adv.detail }}</div>
  <div class="spark-advisory-action">{{ adv.suggested_action }}</div>
</div>
{% endfor %}
{% else %}
<div class="text-center text-muted py-3">No advisories yet. Run more tasks to build intelligence.</div>
{% endif %}

{% if memories %}
<hr class="border-secondary my-3">
<div class="small text-muted mb-2" style="letter-spacing:.08em">CONSOLIDATED MEMORIES</div>
{% for mem in memories %}
<div class="spark-memory-card mem-{{ mem.memory_type }}">
  <div class="spark-mem-header">
    <span class="spark-mem-type">{{ mem.memory_type | upper }}</span>
    <span class="spark-mem-confidence">{{ "%.0f"|format(mem.confidence * 100) }}% conf</span>
  </div>
  <div class="spark-mem-content">{{ mem.content }}</div>
  <div class="spark-mem-meta">
    {{ mem.source_events }} events • {{ mem.created_at[:10] }}
  </div>
</div>
{% endfor %}
{% endif %}
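The `priority-…` class on each advisory buckets a 0-to-1 priority at thresholds 0.7 and 0.4. The same logic as plain Python, in case the bucketing ever moves server-side (the function name is illustrative):

```python
def priority_bucket(priority: float) -> str:
    """Map a 0-1 advisory priority to the CSS class suffix the partial uses."""
    if priority >= 0.7:
        return "high"
    if priority >= 0.4:
        return "medium"
    return "low"


print([priority_bucket(p) for p in (0.9, 0.5, 0.1)])  # ['high', 'medium', 'low']
```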
19
src/dashboard/templates/partials/spark_timeline.html
Normal file
@@ -0,0 +1,19 @@
{% if timeline %}
{% for ev in timeline %}
<div class="spark-event spark-type-{{ ev.event_type }}">
  <div class="spark-event-header">
    <span class="spark-event-type-badge">{{ ev.event_type | replace("_", " ") | upper }}</span>
    <span class="spark-event-importance" title="Importance: {{ ev.importance }}">
      {% if ev.importance >= 0.8 %}●●●{% elif ev.importance >= 0.5 %}●●{% else %}●{% endif %}
    </span>
  </div>
  <div class="spark-event-desc">{{ ev.description }}</div>
  {% if ev.task_id %}
  <div class="spark-event-meta">task: {{ ev.task_id[:8] }}{% if ev.agent_id %} • agent: {{ ev.agent_id[:8] }}{% endif %}</div>
  {% endif %}
  <div class="spark-event-time">{{ ev.created_at[:19] }}</div>
</div>
{% endfor %}
{% else %}
<div class="text-center text-muted py-3">No events captured yet.</div>
{% endif %}
556
src/dashboard/templates/spark.html
Normal file
@@ -0,0 +1,556 @@
{% extends "base.html" %}

{% block title %}Timmy Time — Spark Intelligence{% endblock %}

{% block content %}
<div class="container-fluid spark-container py-4">

  <!-- Header -->
  <div class="spark-header mb-4">
    <div class="spark-title">SPARK INTELLIGENCE</div>
    <div class="spark-subtitle">
      Self-evolving cognitive layer —
      <span class="spark-status-val">{{ status.events_captured }}</span> events captured,
      <span class="spark-status-val">{{ status.memories_stored }}</span> memories,
      <span class="spark-status-val">{{ status.predictions.evaluated }}</span> predictions evaluated
    </div>
  </div>

  <div class="row g-3">

    <!-- Left column: Status + Advisories -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">

      <!-- EIDOS Status -->
      <div class="card mc-panel">
        <div class="card-header mc-panel-header">// EIDOS LOOP</div>
        <div class="card-body p-3">
          <div class="spark-stat-grid">
            <div class="spark-stat">
              <span class="spark-stat-label">PREDICTIONS</span>
              <span class="spark-stat-value">{{ status.predictions.total_predictions }}</span>
            </div>
            <div class="spark-stat">
              <span class="spark-stat-label">EVALUATED</span>
              <span class="spark-stat-value">{{ status.predictions.evaluated }}</span>
            </div>
            <div class="spark-stat">
              <span class="spark-stat-label">PENDING</span>
              <span class="spark-stat-value">{{ status.predictions.pending }}</span>
            </div>
            <div class="spark-stat">
              <span class="spark-stat-label">ACCURACY</span>
              <span class="spark-stat-value {% if status.predictions.avg_accuracy >= 0.7 %}text-success{% elif status.predictions.avg_accuracy < 0.4 %}text-danger{% else %}text-warning{% endif %}">
                {{ "%.0f"|format(status.predictions.avg_accuracy * 100) }}%
              </span>
            </div>
          </div>
        </div>
      </div>

      <!-- Event Counts -->
      <div class="card mc-panel">
        <div class="card-header mc-panel-header">// EVENT PIPELINE</div>
        <div class="card-body p-3">
          {% for event_type, count in status.event_types.items() %}
          <div class="spark-event-row">
            <span class="spark-event-type-badge spark-type-{{ event_type }}">{{ event_type | replace("_", " ") | upper }}</span>
            <span class="spark-event-count">{{ count }}</span>
          </div>
          {% endfor %}
        </div>
      </div>

      <!-- Advisories -->
      <div class="card mc-panel"
           hx-get="/spark/insights"
           hx-trigger="load, every 30s"
           hx-target="#spark-insights-body"
           hx-swap="innerHTML">
        <div class="card-header mc-panel-header d-flex justify-content-between align-items-center">
          <span>// ADVISORIES</span>
          <span class="badge bg-info">{{ advisories | length }}</span>
        </div>
        <div class="card-body p-3" id="spark-insights-body">
          {% if advisories %}
          {% for adv in advisories %}
          <div class="spark-advisory priority-{{ 'high' if adv.priority >= 0.7 else ('medium' if adv.priority >= 0.4 else 'low') }}">
            <div class="spark-advisory-header">
              <span class="spark-advisory-cat">{{ adv.category | replace("_", " ") | upper }}</span>
              <span class="spark-advisory-priority">{{ "%.0f"|format(adv.priority * 100) }}%</span>
            </div>
            <div class="spark-advisory-title">{{ adv.title }}</div>
            <div class="spark-advisory-detail">{{ adv.detail }}</div>
            <div class="spark-advisory-action">{{ adv.suggested_action }}</div>
          </div>
          {% endfor %}
          {% else %}
          <div class="text-center text-muted py-3">No advisories yet. Run more tasks to build intelligence.</div>
          {% endif %}
        </div>
      </div>
    </div>

    <!-- Middle column: Predictions -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">

      <!-- EIDOS Predictions -->
      <div class="card mc-panel">
        <div class="card-header mc-panel-header">// EIDOS PREDICTIONS</div>
        <div class="card-body p-3">
          {% if predictions %}
          {% for pred in predictions %}
          <div class="spark-prediction {% if pred.evaluated_at %}evaluated{% else %}pending{% endif %}">
            <div class="spark-pred-header">
              <span class="spark-pred-task">{{ pred.task_id[:8] }}...</span>
              {% if pred.accuracy is not none %}
              <span class="spark-pred-accuracy {% if pred.accuracy >= 0.7 %}text-success{% elif pred.accuracy < 0.4 %}text-danger{% else %}text-warning{% endif %}">
                {{ "%.0f"|format(pred.accuracy * 100) }}%
              </span>
              {% else %}
              <span class="spark-pred-pending-badge">PENDING</span>
              {% endif %}
            </div>
            <div class="spark-pred-detail">
              {% if pred.predicted %}
              <div class="spark-pred-item">
                <span class="spark-pred-label">Winner:</span>
                {{ (pred.predicted.likely_winner or "?")[:8] }}
              </div>
              <div class="spark-pred-item">
                <span class="spark-pred-label">Success:</span>
                {{ "%.0f"|format((pred.predicted.success_probability or 0) * 100) }}%
              </div>
              <div class="spark-pred-item">
                <span class="spark-pred-label">Bid range:</span>
                {{ pred.predicted.estimated_bid_range | join("–") }} sats
              </div>
              {% endif %}
              {% if pred.actual %}
              <div class="spark-pred-actual">
                <span class="spark-pred-label">Actual:</span>
                {% if pred.actual.succeeded %}completed{% else %}failed{% endif %}
                by {{ (pred.actual.winner or "?")[:8] }}
                {% if pred.actual.winning_bid %} at {{ pred.actual.winning_bid }} sats{% endif %}
              </div>
              {% endif %}
            </div>
            <div class="spark-pred-time">{{ pred.created_at[:19] }}</div>
          </div>
          {% endfor %}
          {% else %}
          <div class="text-center text-muted py-3">No predictions yet. Post tasks to activate the EIDOS loop.</div>
          {% endif %}
        </div>
      </div>

      <!-- Consolidated Memories -->
      <div class="card mc-panel">
        <div class="card-header mc-panel-header">// MEMORIES</div>
        <div class="card-body p-3">
          {% if memories %}
          {% for mem in memories %}
          <div class="spark-memory-card mem-{{ mem.memory_type }}">
            <div class="spark-mem-header">
              <span class="spark-mem-type">{{ mem.memory_type | upper }}</span>
              <span class="spark-mem-confidence">{{ "%.0f"|format(mem.confidence * 100) }}% conf</span>
            </div>
            <div class="spark-mem-content">{{ mem.content }}</div>
            <div class="spark-mem-meta">
              {{ mem.source_events }} events • {{ mem.created_at[:10] }}
            </div>
          </div>
          {% endfor %}
          {% else %}
          <div class="text-center text-muted py-3">Memories will form as patterns emerge.</div>
          {% endif %}
        </div>
      </div>
    </div>

    <!-- Right column: Event Timeline -->
    <div class="col-12 col-lg-4 d-flex flex-column gap-3">

      <div class="card mc-panel"
           hx-get="/spark/timeline"
           hx-trigger="load, every 15s"
           hx-target="#spark-timeline-body"
           hx-swap="innerHTML">
        <div class="card-header mc-panel-header d-flex justify-content-between align-items-center">
          <span>// EVENT TIMELINE</span>
          <span class="badge bg-secondary">{{ status.events_captured }} total</span>
        </div>
        <div class="card-body p-3 spark-timeline-scroll" id="spark-timeline-body">
          {% if timeline %}
          {% for ev in timeline %}
          <div class="spark-event spark-type-{{ ev.event_type }}">
            <div class="spark-event-header">
              <span class="spark-event-type-badge">{{ ev.event_type | replace("_", " ") | upper }}</span>
              <span class="spark-event-importance" title="Importance: {{ ev.importance }}">
                {% if ev.importance >= 0.8 %}●●●{% elif ev.importance >= 0.5 %}●●{% else %}●{% endif %}
              </span>
            </div>
            <div class="spark-event-desc">{{ ev.description }}</div>
            {% if ev.task_id %}
            <div class="spark-event-meta">task: {{ ev.task_id[:8] }}{% if ev.agent_id %} • agent: {{ ev.agent_id[:8] }}{% endif %}</div>
            {% endif %}
            <div class="spark-event-time">{{ ev.created_at[:19] }}</div>
          </div>
          {% endfor %}
          {% else %}
          <div class="text-center text-muted py-3">No events captured yet.</div>
          {% endif %}
        </div>
      </div>
    </div>

  </div>
</div>

<style>
/* ------------------------------------------------------------------ */
/* Spark Intelligence — Mission Control theme                          */
/* ------------------------------------------------------------------ */

.spark-container {
  max-width: 1400px;
  margin: 0 auto;
}

.spark-header {
  border-left: 3px solid #00d4ff;
  padding-left: 1rem;
}

.spark-title {
  font-size: 1.6rem;
  font-weight: 700;
  color: #00d4ff;
  letter-spacing: 0.08em;
  font-family: 'JetBrains Mono', monospace;
}

.spark-subtitle {
  font-size: 0.75rem;
  color: #6c757d;
  margin-top: 0.25rem;
}

.spark-status-val {
  color: #00d4ff;
  font-weight: 600;
}

/* Stat grid */
.spark-stat-grid {
  display: grid;
  grid-template-columns: 1fr 1fr;
  gap: 0.75rem;
}

.spark-stat {
  display: flex;
  flex-direction: column;
  align-items: center;
  padding: 0.5rem;
  border: 1px solid #1a2a3a;
  border-radius: 4px;
  background: #0a1520;
}

.spark-stat-label {
  font-size: 0.65rem;
  color: #6c757d;
  letter-spacing: 0.1em;
  text-transform: uppercase;
}

.spark-stat-value {
  font-size: 1.3rem;
  font-weight: 700;
  color: #f8f9fa;
  font-family: 'JetBrains Mono', monospace;
}

/* Event pipeline rows */
.spark-event-row {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 0.4rem 0;
  border-bottom: 1px solid #1a2a3a;
}

.spark-event-row:last-child {
  border-bottom: none;
}

.spark-event-count {
  font-weight: 600;
  color: #adb5bd;
  font-family: 'JetBrains Mono', monospace;
}

/* Event type badges */
.spark-event-type-badge {
  font-size: 0.65rem;
  padding: 0.15em 0.5em;
  border-radius: 3px;
  letter-spacing: 0.05em;
  font-weight: 600;
}

.spark-type-task_posted .spark-event-type-badge,
.spark-event-type-badge.spark-type-task_posted { background: #1a3a5a; color: #5baaff; }
.spark-type-bid_submitted .spark-event-type-badge,
.spark-event-type-badge.spark-type-bid_submitted { background: #3a2a1a; color: #ffaa5b; }
.spark-type-task_assigned .spark-event-type-badge,
.spark-event-type-badge.spark-type-task_assigned { background: #1a3a2a; color: #5bffaa; }
.spark-type-task_completed .spark-event-type-badge,
.spark-event-type-badge.spark-type-task_completed { background: #1a3a1a; color: #5bff5b; }
.spark-type-task_failed .spark-event-type-badge,
.spark-event-type-badge.spark-type-task_failed { background: #3a1a1a; color: #ff5b5b; }
.spark-type-agent_joined .spark-event-type-badge,
.spark-event-type-badge.spark-type-agent_joined { background: #2a1a3a; color: #aa5bff; }
.spark-type-prediction_result .spark-event-type-badge,
.spark-event-type-badge.spark-type-prediction_result { background: #1a2a3a; color: #00d4ff; }

/* Advisories */
.spark-advisory {
  border: 1px solid #2a3a4a;
  border-radius: 6px;
  padding: 0.75rem;
  margin-bottom: 0.75rem;
  background: #0d1b2a;
}

.spark-advisory.priority-high {
  border-left: 3px solid #dc3545;
}

.spark-advisory.priority-medium {
  border-left: 3px solid #fd7e14;
}

.spark-advisory.priority-low {
  border-left: 3px solid #198754;
}

.spark-advisory-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 0.25rem;
}

.spark-advisory-cat {
  font-size: 0.6rem;
  color: #6c757d;
  letter-spacing: 0.08em;
}

.spark-advisory-priority {
  font-size: 0.65rem;
  color: #adb5bd;
  font-family: 'JetBrains Mono', monospace;
}

.spark-advisory-title {
  font-weight: 600;
  font-size: 0.9rem;
  color: #f8f9fa;
  margin-bottom: 0.25rem;
}

.spark-advisory-detail {
  font-size: 0.8rem;
  color: #adb5bd;
  margin-bottom: 0.4rem;
  line-height: 1.4;
}

.spark-advisory-action {
  font-size: 0.75rem;
  color: #00d4ff;
  font-style: italic;
  border-left: 2px solid #00d4ff;
  padding-left: 0.5rem;
}

/* Predictions */
.spark-prediction {
  border: 1px solid #1a2a3a;
  border-radius: 6px;
  padding: 0.6rem;
  margin-bottom: 0.6rem;
  background: #0a1520;
}

.spark-prediction.evaluated {
  border-left: 3px solid #198754;
}

.spark-prediction.pending {
  border-left: 3px solid #fd7e14;
}

.spark-pred-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 0.3rem;
}

.spark-pred-task {
  font-size: 0.75rem;
  color: #adb5bd;
  font-family: 'JetBrains Mono', monospace;
}

.spark-pred-accuracy {
  font-weight: 700;
  font-size: 0.85rem;
  font-family: 'JetBrains Mono', monospace;
}

.spark-pred-pending-badge {
  font-size: 0.6rem;
  background: #fd7e14;
  color: #fff;
  padding: 0.1em 0.4em;
  border-radius: 3px;
  font-weight: 600;
}

.spark-pred-detail {
  font-size: 0.75rem;
  color: #adb5bd;
}

.spark-pred-item {
  padding: 0.1rem 0;
}

.spark-pred-label {
  color: #6c757d;
  font-weight: 600;
}

.spark-pred-actual {
  margin-top: 0.3rem;
  padding-top: 0.3rem;
  border-top: 1px dashed #1a2a3a;
  color: #dee2e6;
}

.spark-pred-time {
  font-size: 0.6rem;
  color: #495057;
  margin-top: 0.3rem;
  font-family: 'JetBrains Mono', monospace;
}

/* Memories */
.spark-memory-card {
  border: 1px solid #1a2a3a;
  border-radius: 6px;
  padding: 0.6rem;
  margin-bottom: 0.6rem;
  background: #0a1520;
}

.spark-memory-card.mem-pattern {
  border-left: 3px solid #198754;
}

.spark-memory-card.mem-anomaly {
  border-left: 3px solid #dc3545;
}

.spark-memory-card.mem-insight {
  border-left: 3px solid #00d4ff;
}

.spark-mem-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 0.25rem;
}

.spark-mem-type {
  font-size: 0.6rem;
  letter-spacing: 0.08em;
  color: #6c757d;
  font-weight: 600;
}

.spark-mem-confidence {
  font-size: 0.65rem;
  color: #adb5bd;
  font-family: 'JetBrains Mono', monospace;
}

.spark-mem-content {
  font-size: 0.8rem;
  color: #dee2e6;
  line-height: 1.4;
}

.spark-mem-meta {
  font-size: 0.6rem;
  color: #495057;
  margin-top: 0.3rem;
}

/* Timeline */
.spark-timeline-scroll {
  max-height: 70vh;
  overflow-y: auto;
}

.spark-event {
  border: 1px solid #1a2a3a;
  border-radius: 4px;
  padding: 0.5rem;
  margin-bottom: 0.5rem;
  background: #0a1520;
}

.spark-event-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: 0.2rem;
}

.spark-event-importance {
  font-size: 0.5rem;
  color: #00d4ff;
}

.spark-event-desc {
  font-size: 0.8rem;
  color: #dee2e6;
}

.spark-event-meta {
  font-size: 0.65rem;
  color: #6c757d;
  font-family: 'JetBrains Mono', monospace;
  margin-top: 0.15rem;
}

.spark-event-time {
  font-size: 0.6rem;
  color: #495057;
  font-family: 'JetBrains Mono', monospace;
}

/* Responsive */
@media (max-width: 992px) {
  .spark-title { font-size: 1.2rem; }
  .spark-stat-value { font-size: 1.1rem; }
}
</style>
{% endblock %}
0
src/spark/__init__.py
Normal file
278
src/spark/advisor.py
Normal file
@@ -0,0 +1,278 @@
"""Spark advisor — generates ranked recommendations from accumulated intelligence.

The advisor examines Spark's event history, consolidated memories, and EIDOS
prediction accuracy to produce actionable recommendations for the swarm.

Categories
----------
- agent_performance — "Agent X excels at Y, consider routing more Y tasks"
- bid_optimization — "Bids on Z tasks are consistently high, room to save"
- failure_prevention — "Agent A has failed 3 recent tasks, investigate"
- system_health — "No events in 30 min, swarm may be idle"
"""

import json
import logging
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

from spark import memory as spark_memory
from spark import eidos as spark_eidos

logger = logging.getLogger(__name__)

# Minimum events before the advisor starts generating recommendations
_MIN_EVENTS = 3


@dataclass
class Advisory:
    """A single ranked recommendation."""
    category: str                  # agent_performance, bid_optimization, etc.
    priority: float                # 0.0–1.0 (higher = more urgent)
    title: str                     # Short headline
    detail: str                    # Longer explanation
    suggested_action: str          # What to do about it
    subject: Optional[str] = None  # agent_id or None for system-level
    evidence_count: int = 0        # Number of supporting events


def generate_advisories() -> list[Advisory]:
    """Analyse Spark data and produce ranked recommendations.

    Returns advisories sorted by priority (highest first).
    """
    advisories: list[Advisory] = []

    event_count = spark_memory.count_events()
    if event_count < _MIN_EVENTS:
        advisories.append(Advisory(
            category="system_health",
            priority=0.3,
            title="Insufficient data",
            detail=f"Only {event_count} events captured. "
                   f"Spark needs at least {_MIN_EVENTS} events to generate insights.",
            suggested_action="Run more swarm tasks to build intelligence.",
            evidence_count=event_count,
        ))
        return advisories

    advisories.extend(_check_failure_patterns())
    advisories.extend(_check_agent_performance())
    advisories.extend(_check_bid_patterns())
    advisories.extend(_check_prediction_accuracy())
    advisories.extend(_check_system_activity())

    advisories.sort(key=lambda a: a.priority, reverse=True)
    return advisories


def _check_failure_patterns() -> list[Advisory]:
    """Detect agents with recent failure streaks."""
    results: list[Advisory] = []
    failures = spark_memory.get_events(event_type="task_failed", limit=50)

    # Group failures by agent
    agent_failures: dict[str, int] = {}
    for ev in failures:
        aid = ev.agent_id
        if aid:
            agent_failures[aid] = agent_failures.get(aid, 0) + 1

    for aid, count in agent_failures.items():
        if count >= 2:
            results.append(Advisory(
                category="failure_prevention",
                priority=min(1.0, 0.5 + count * 0.15),
                title=f"Agent {aid[:8]} has {count} failures",
                detail=f"Agent {aid[:8]}... has failed {count} recent tasks. "
                       f"This pattern may indicate a capability mismatch or "
                       f"configuration issue.",
                suggested_action=f"Review task types assigned to {aid[:8]}... "
                                 f"and consider adjusting routing preferences.",
                subject=aid,
                evidence_count=count,
            ))

    return results


def _check_agent_performance() -> list[Advisory]:
    """Identify top-performing and underperforming agents."""
    results: list[Advisory] = []
    completions = spark_memory.get_events(event_type="task_completed", limit=100)
    failures = spark_memory.get_events(event_type="task_failed", limit=100)

    # Build success/failure counts per agent
    agent_success: dict[str, int] = {}
    agent_fail: dict[str, int] = {}

    for ev in completions:
        aid = ev.agent_id
        if aid:
            agent_success[aid] = agent_success.get(aid, 0) + 1

    for ev in failures:
        aid = ev.agent_id
        if aid:
            agent_fail[aid] = agent_fail.get(aid, 0) + 1

    all_agents = set(agent_success) | set(agent_fail)
    for aid in all_agents:
        wins = agent_success.get(aid, 0)
        fails = agent_fail.get(aid, 0)
        total = wins + fails
        if total < 2:
            continue

        rate = wins / total
        if rate >= 0.8 and total >= 3:
            results.append(Advisory(
                category="agent_performance",
                priority=0.6,
                title=f"Agent {aid[:8]} excels ({rate:.0%} success)",
                detail=f"Agent {aid[:8]}... has completed {wins}/{total} tasks "
                       f"successfully. Consider routing more tasks to this agent.",
                suggested_action="Increase task routing weight for this agent.",
                subject=aid,
                evidence_count=total,
            ))
        elif rate <= 0.3 and total >= 3:
            results.append(Advisory(
                category="agent_performance",
                priority=0.75,
                title=f"Agent {aid[:8]} struggling ({rate:.0%} success)",
                detail=f"Agent {aid[:8]}... has only succeeded on {wins}/{total} tasks. "
                       f"May need different task types or capability updates.",
                suggested_action="Review this agent's capabilities and assigned task types.",
                subject=aid,
                evidence_count=total,
            ))

    return results


def _check_bid_patterns() -> list[Advisory]:
    """Detect bid optimization opportunities."""
    results: list[Advisory] = []
    bids = spark_memory.get_events(event_type="bid_submitted", limit=100)

    if len(bids) < 5:
        return results

    # Extract bid amounts
    bid_amounts: list[int] = []
    for ev in bids:
        try:
            data = json.loads(ev.data)
            sats = data.get("bid_sats", 0)
            if sats > 0:
                bid_amounts.append(sats)
        except (json.JSONDecodeError, TypeError):
            continue

    if not bid_amounts:
        return results

    avg_bid = sum(bid_amounts) / len(bid_amounts)
    max_bid = max(bid_amounts)
    min_bid = min(bid_amounts)
    spread = max_bid - min_bid

    if spread > avg_bid * 1.5:
        results.append(Advisory(
            category="bid_optimization",
            priority=0.5,
            title=f"Wide bid spread ({min_bid}–{max_bid} sats)",
            detail=f"Bids range from {min_bid} to {max_bid} sats "
                   f"(avg {avg_bid:.0f}). Large spread may indicate "
                   f"inefficient auction dynamics.",
            suggested_action="Review agent bid strategies for consistency.",
            evidence_count=len(bid_amounts),
        ))

    if avg_bid > 70:
        results.append(Advisory(
            category="bid_optimization",
            priority=0.45,
            title=f"High average bid ({avg_bid:.0f} sats)",
            detail=f"The swarm average bid is {avg_bid:.0f} sats across "
                   f"{len(bid_amounts)} bids. This may be above optimal.",
            suggested_action="Consider adjusting base bid rates for persona agents.",
            evidence_count=len(bid_amounts),
        ))

    return results


def _check_prediction_accuracy() -> list[Advisory]:
    """Report on EIDOS prediction accuracy."""
    results: list[Advisory] = []
    stats = spark_eidos.get_accuracy_stats()

    if stats["evaluated"] < 3:
        return results

    avg = stats["avg_accuracy"]
    if avg < 0.4:
        results.append(Advisory(
            category="system_health",
            priority=0.65,
            title=f"Low prediction accuracy ({avg:.0%})",
            detail=f"EIDOS predictions have averaged {avg:.0%} accuracy "
                   f"over {stats['evaluated']} evaluations. The learning "
                   f"model needs more data or the swarm behaviour is changing.",
            suggested_action="Continue running tasks; accuracy should improve "
                             "as the model accumulates more training data.",
            evidence_count=stats["evaluated"],
        ))
    elif avg >= 0.75:
        results.append(Advisory(
            category="system_health",
            priority=0.3,
            title=f"Strong prediction accuracy ({avg:.0%})",
            detail=f"EIDOS predictions are performing well at {avg:.0%} "
                   f"average accuracy over {stats['evaluated']} evaluations.",
            suggested_action="No action needed. Spark intelligence is learning effectively.",
            evidence_count=stats["evaluated"],
        ))

    return results


def _check_system_activity() -> list[Advisory]:
    """Check for system idle patterns."""
    results: list[Advisory] = []
    recent = spark_memory.get_events(limit=5)

    if not recent:
        results.append(Advisory(
            category="system_health",
            priority=0.4,
            title="No swarm activity detected",
            detail="Spark has not captured any events. "
                   "The swarm may be idle or Spark event capture is not active.",
            suggested_action="Post a task to the swarm to activate the pipeline.",
        ))
        return results

    # Check event type distribution
    types = [e.event_type for e in spark_memory.get_events(limit=100)]
    type_counts = {}
    for t in types:
        type_counts[t] = type_counts.get(t, 0) + 1

    if "task_completed" not in type_counts and "task_failed" not in type_counts:
        if type_counts.get("task_posted", 0) > 3:
            results.append(Advisory(
                category="system_health",
                priority=0.6,
                title="Tasks posted but none completing",
                detail=f"{type_counts.get('task_posted', 0)} tasks posted "
                       f"but no completions or failures recorded.",
                suggested_action="Check agent availability and auction configuration.",
                evidence_count=type_counts.get("task_posted", 0),
            ))

    return results
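The advisor's output contract is small enough to sketch end to end: `generate_advisories()` returns `Advisory` objects sorted by descending `priority`, and the spark.html template buckets that same value at 0.7 and 0.4 into its `priority-high` / `priority-medium` / `priority-low` styling. A minimal, self-contained sketch of that ranking-and-bucketing behaviour (the trimmed `Advisory` shape and the sample values are illustrative, not taken from live data):

```python
from dataclasses import dataclass


@dataclass
class Advisory:
    """Trimmed-down mirror of the Advisory dataclass above (illustrative)."""
    category: str
    priority: float  # 0.0-1.0, higher = more urgent
    title: str


def bucket(priority: float) -> str:
    # Same thresholds spark.html uses for its priority-{high,medium,low} classes
    if priority >= 0.7:
        return "high"
    if priority >= 0.4:
        return "medium"
    return "low"


advisories = [
    Advisory("system_health", 0.30, "Strong prediction accuracy"),
    Advisory("agent_performance", 0.75, "Agent a1b2c3d4 struggling"),
    Advisory("bid_optimization", 0.50, "Wide bid spread"),
]
# Mirror generate_advisories(): most urgent first
advisories.sort(key=lambda a: a.priority, reverse=True)
ranked = [(a.title, bucket(a.priority)) for a in advisories]
```

Keeping the thresholds in one place (here duplicated between Python and the template) is a known trade-off; if they drift, the dashboard's colour coding will stop matching the advisor's intent.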
304
src/spark/eidos.py
Normal file
@@ -0,0 +1,304 @@
"""EIDOS cognitive loop — prediction, evaluation, and learning.

Implements the core Spark learning cycle:
1. PREDICT — Before a task is assigned, predict the outcome
2. OBSERVE — Watch what actually happens
3. EVALUATE — Compare prediction vs reality
4. LEARN — Update internal models based on accuracy

All predictions and evaluations are stored in SQLite for
transparency and audit. The loop runs passively, recording
predictions when tasks are posted and evaluating them when
tasks complete.
"""

import json
import logging
import sqlite3
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

DB_PATH = Path("data/spark.db")


@dataclass
class Prediction:
    """A prediction made by the EIDOS loop."""
    id: str
    task_id: str
    prediction_type: str         # outcome, best_agent, bid_range
    predicted_value: str         # JSON-encoded prediction
    actual_value: Optional[str]  # JSON-encoded actual (filled on evaluation)
    accuracy: Optional[float]    # 0.0–1.0 (filled on evaluation)
    created_at: str
    evaluated_at: Optional[str]


def _get_conn() -> sqlite3.Connection:
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(str(DB_PATH))
    conn.row_factory = sqlite3.Row
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS spark_predictions (
            id TEXT PRIMARY KEY,
            task_id TEXT NOT NULL,
            prediction_type TEXT NOT NULL,
            predicted_value TEXT NOT NULL,
            actual_value TEXT,
            accuracy REAL,
            created_at TEXT NOT NULL,
            evaluated_at TEXT
        )
        """
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_pred_task ON spark_predictions(task_id)"
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_pred_type ON spark_predictions(prediction_type)"
    )
    conn.commit()
    return conn


# ── Prediction phase ────────────────────────────────────────────────────────

def predict_task_outcome(
    task_id: str,
    task_description: str,
    candidate_agents: list[str],
    agent_history: Optional[dict] = None,
) -> dict:
    """Predict the outcome of a task before it's assigned.

    Returns a prediction dict with:
    - likely_winner: agent_id most likely to win the auction
    - success_probability: 0.0–1.0 chance the task succeeds
    - estimated_bid_range: (low, high) sats range
    """
    # Default prediction when no history exists
    prediction = {
        "likely_winner": candidate_agents[0] if candidate_agents else None,
        "success_probability": 0.7,
        "estimated_bid_range": [20, 80],
        "reasoning": "baseline prediction (no history)",
    }

    if agent_history:
        # Adjust based on historical success rates
        best_agent = None
        best_rate = 0.0
        for aid, metrics in agent_history.items():
            if aid not in candidate_agents:
                continue
            rate = metrics.get("success_rate", 0.0)
            if rate > best_rate:
                best_rate = rate
                best_agent = aid

        if best_agent:
            prediction["likely_winner"] = best_agent
            prediction["success_probability"] = round(
                min(1.0, 0.5 + best_rate * 0.4), 2
            )
            prediction["reasoning"] = (
                f"agent {best_agent[:8]} has {best_rate:.0%} success rate"
            )

        # Adjust bid range from history
        all_bids = []
        for metrics in agent_history.values():
            avg = metrics.get("avg_winning_bid", 0)
            if avg > 0:
                all_bids.append(avg)
        if all_bids:
            prediction["estimated_bid_range"] = [
                max(1, int(min(all_bids) * 0.8)),
                int(max(all_bids) * 1.2),
            ]

    # Store prediction
    pred_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).isoformat()
    conn = _get_conn()
    conn.execute(
        """
        INSERT INTO spark_predictions
        (id, task_id, prediction_type, predicted_value, created_at)
        VALUES (?, ?, ?, ?, ?)
        """,
        (pred_id, task_id, "outcome", json.dumps(prediction), now),
    )
    conn.commit()
    conn.close()

    prediction["prediction_id"] = pred_id
    return prediction


# ── Evaluation phase ────────────────────────────────────────────────────────

def evaluate_prediction(
    task_id: str,
    actual_winner: Optional[str],
    task_succeeded: bool,
    winning_bid: Optional[int] = None,
) -> Optional[dict]:
    """Evaluate a stored prediction against actual outcomes.

    Returns the evaluation result or None if no prediction exists.
    """
    conn = _get_conn()
    row = conn.execute(
        """
        SELECT * FROM spark_predictions
        WHERE task_id = ? AND prediction_type = 'outcome' AND evaluated_at IS NULL
        ORDER BY created_at DESC LIMIT 1
        """,
        (task_id,),
    ).fetchone()

    if not row:
        conn.close()
        return None

    predicted = json.loads(row["predicted_value"])
    actual = {
        "winner": actual_winner,
        "succeeded": task_succeeded,
        "winning_bid": winning_bid,
    }

    # Calculate accuracy
    accuracy = _compute_accuracy(predicted, actual)
    now = datetime.now(timezone.utc).isoformat()

    conn.execute(
        """
        UPDATE spark_predictions
        SET actual_value = ?, accuracy = ?, evaluated_at = ?
        WHERE id = ?
        """,
        (json.dumps(actual), accuracy, now, row["id"]),
    )
    conn.commit()
    conn.close()

    return {
        "prediction_id": row["id"],
        "predicted": predicted,
        "actual": actual,
        "accuracy": accuracy,
    }


def _compute_accuracy(predicted: dict, actual: dict) -> float:
    """Score prediction accuracy from 0.0–1.0.

    Components:
    - Winner prediction: 0.4 weight (correct = 1.0, wrong = 0.0)
    - Success prediction: 0.4 weight (how close)
    - Bid range: 0.2 weight (was actual bid in predicted range)
    """
    score = 0.0
    weights = 0.0

    # Winner accuracy
    pred_winner = predicted.get("likely_winner")
    actual_winner = actual.get("winner")
    if pred_winner and actual_winner:
        score += 0.4 * (1.0 if pred_winner == actual_winner else 0.0)
        weights += 0.4

    # Success probability accuracy
    pred_success = predicted.get("success_probability", 0.5)
    actual_success = 1.0 if actual.get("succeeded") else 0.0
    success_error = abs(pred_success - actual_success)
    score += 0.4 * (1.0 - success_error)
    weights += 0.4
|
||||
# Bid range accuracy
|
||||
bid_range = predicted.get("estimated_bid_range", [20, 80])
|
||||
actual_bid = actual.get("winning_bid")
|
||||
if actual_bid is not None and len(bid_range) == 2:
|
||||
low, high = bid_range
|
||||
if low <= actual_bid <= high:
|
||||
score += 0.2
|
||||
else:
|
||||
# Partial credit: how far outside the range
|
||||
distance = min(abs(actual_bid - low), abs(actual_bid - high))
|
||||
range_size = max(1, high - low)
|
||||
score += 0.2 * max(0, 1.0 - distance / range_size)
|
||||
weights += 0.2
|
||||
|
||||
return round(score / max(weights, 0.01), 2)
|
||||
|
||||
|
||||
# ── Query helpers ──────────────────────────────────────────────────────────
|
||||
|
||||
def get_predictions(
|
||||
task_id: Optional[str] = None,
|
||||
evaluated_only: bool = False,
|
||||
limit: int = 50,
|
||||
) -> list[Prediction]:
|
||||
"""Query stored predictions."""
|
||||
conn = _get_conn()
|
||||
query = "SELECT * FROM spark_predictions WHERE 1=1"
|
||||
params: list = []
|
||||
|
||||
if task_id:
|
||||
query += " AND task_id = ?"
|
||||
params.append(task_id)
|
||||
if evaluated_only:
|
||||
query += " AND evaluated_at IS NOT NULL"
|
||||
|
||||
query += " ORDER BY created_at DESC LIMIT ?"
|
||||
params.append(limit)
|
||||
|
||||
rows = conn.execute(query, params).fetchall()
|
||||
conn.close()
|
||||
return [
|
||||
Prediction(
|
||||
id=r["id"],
|
||||
task_id=r["task_id"],
|
||||
prediction_type=r["prediction_type"],
|
||||
predicted_value=r["predicted_value"],
|
||||
actual_value=r["actual_value"],
|
||||
accuracy=r["accuracy"],
|
||||
created_at=r["created_at"],
|
||||
evaluated_at=r["evaluated_at"],
|
||||
)
|
||||
for r in rows
|
||||
]
|
||||
|
||||
|
||||
def get_accuracy_stats() -> dict:
|
||||
"""Return aggregate accuracy statistics for the EIDOS loop."""
|
||||
conn = _get_conn()
|
||||
row = conn.execute(
|
||||
"""
|
||||
SELECT
|
||||
COUNT(*) AS total_predictions,
|
||||
COUNT(evaluated_at) AS evaluated,
|
||||
AVG(CASE WHEN accuracy IS NOT NULL THEN accuracy END) AS avg_accuracy,
|
||||
MIN(CASE WHEN accuracy IS NOT NULL THEN accuracy END) AS min_accuracy,
|
||||
MAX(CASE WHEN accuracy IS NOT NULL THEN accuracy END) AS max_accuracy
|
||||
FROM spark_predictions
|
||||
"""
|
||||
).fetchone()
|
||||
conn.close()
|
||||
|
||||
return {
|
||||
"total_predictions": row["total_predictions"] or 0,
|
||||
"evaluated": row["evaluated"] or 0,
|
||||
"pending": (row["total_predictions"] or 0) - (row["evaluated"] or 0),
|
||||
"avg_accuracy": round(row["avg_accuracy"] or 0.0, 2),
|
||||
"min_accuracy": round(row["min_accuracy"] or 0.0, 2),
|
||||
"max_accuracy": round(row["max_accuracy"] or 0.0, 2),
|
||||
}
|
||||
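The 0.4/0.4/0.2 weighting in `_compute_accuracy` above is easy to check by hand. The sketch below (a standalone, hypothetical `sketch_accuracy` mirroring those weights for illustration; it is not part of the project API) shows the scheme, including the skipped-component renormalisation when a field is missing:

```python
def sketch_accuracy(predicted: dict, actual: dict) -> float:
    """Standalone mirror of the 0.4 winner / 0.4 success / 0.2 bid-range weighting."""
    score = 0.0
    weights = 0.0
    # Winner component — only scored when both sides name a winner
    if predicted.get("likely_winner") and actual.get("winner"):
        score += 0.4 * (1.0 if predicted["likely_winner"] == actual["winner"] else 0.0)
        weights += 0.4
    # Success component — always scored, distance from the 0/1 outcome
    actual_success = 1.0 if actual.get("succeeded") else 0.0
    score += 0.4 * (1.0 - abs(predicted.get("success_probability", 0.5) - actual_success))
    weights += 0.4
    # Bid-range component — partial credit outside the range
    low, high = predicted.get("estimated_bid_range", [20, 80])
    bid = actual.get("winning_bid")
    if bid is not None:
        if low <= bid <= high:
            score += 0.2
        else:
            distance = min(abs(bid - low), abs(bid - high))
            score += 0.2 * max(0, 1.0 - distance / max(1, high - low))
        weights += 0.2
    # Renormalise by the weights that were actually scored
    return round(score / max(weights, 0.01), 2)

# Correct winner, 0.9 predicted success, bid inside range → 0.4 + 0.36 + 0.2 = 0.96
print(sketch_accuracy(
    {"likely_winner": "a1", "success_probability": 0.9, "estimated_bid_range": [40, 80]},
    {"winner": "a1", "succeeded": True, "winning_bid": 60},
))
```

Note that skipping unscored components keeps a prediction with no bid data comparable to a fully-scored one, rather than silently penalising it.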
355	src/spark/engine.py	Normal file
@@ -0,0 +1,355 @@
"""Spark Intelligence engine — the top-level API for Spark integration.

The engine is the single entry point used by the swarm coordinator and
dashboard routes. It wires together memory capture, EIDOS predictions,
memory consolidation, and the advisory system.

Usage
-----
    from spark.engine import spark_engine

    # Capture a swarm event
    spark_engine.on_task_posted(task_id, description)
    spark_engine.on_bid_submitted(task_id, agent_id, bid_sats)
    spark_engine.on_task_completed(task_id, agent_id, result)
    spark_engine.on_task_failed(task_id, agent_id, reason)

    # Query Spark intelligence
    spark_engine.status()
    spark_engine.get_advisories()
    spark_engine.get_timeline()
"""

import json
import logging
from typing import Optional

from spark import advisor as spark_advisor
from spark import eidos as spark_eidos
from spark import memory as spark_memory
from spark.advisor import Advisory
from spark.memory import SparkEvent, SparkMemory

logger = logging.getLogger(__name__)


class SparkEngine:
    """Top-level Spark Intelligence controller."""

    def __init__(self, enabled: bool = True) -> None:
        self._enabled = enabled
        if enabled:
            logger.info("Spark Intelligence engine initialised")

    @property
    def enabled(self) -> bool:
        return self._enabled

    # ── Event capture (called by coordinator) ────────────────────────────────

    def on_task_posted(
        self,
        task_id: str,
        description: str,
        candidate_agents: Optional[list[str]] = None,
    ) -> Optional[str]:
        """Capture a task-posted event and generate a prediction."""
        if not self._enabled:
            return None

        event_id = spark_memory.record_event(
            event_type="task_posted",
            description=description,
            task_id=task_id,
            data=json.dumps({"candidates": candidate_agents or []}),
        )

        # Generate EIDOS prediction
        if candidate_agents:
            spark_eidos.predict_task_outcome(
                task_id=task_id,
                task_description=description,
                candidate_agents=candidate_agents,
            )

        logger.debug("Spark: captured task_posted %s", task_id[:8])
        return event_id

    def on_bid_submitted(
        self, task_id: str, agent_id: str, bid_sats: int,
    ) -> Optional[str]:
        """Capture a bid event."""
        if not self._enabled:
            return None

        event_id = spark_memory.record_event(
            event_type="bid_submitted",
            description=f"Agent {agent_id[:8]} bid {bid_sats} sats",
            agent_id=agent_id,
            task_id=task_id,
            data=json.dumps({"bid_sats": bid_sats}),
        )

        logger.debug("Spark: captured bid %s→%s (%d sats)",
                     agent_id[:8], task_id[:8], bid_sats)
        return event_id

    def on_task_assigned(
        self, task_id: str, agent_id: str,
    ) -> Optional[str]:
        """Capture a task-assigned event."""
        if not self._enabled:
            return None

        event_id = spark_memory.record_event(
            event_type="task_assigned",
            description=f"Task assigned to {agent_id[:8]}",
            agent_id=agent_id,
            task_id=task_id,
        )

        logger.debug("Spark: captured assignment %s→%s",
                     task_id[:8], agent_id[:8])
        return event_id

    def on_task_completed(
        self,
        task_id: str,
        agent_id: str,
        result: str,
        winning_bid: Optional[int] = None,
    ) -> Optional[str]:
        """Capture a task-completed event and evaluate EIDOS prediction."""
        if not self._enabled:
            return None

        event_id = spark_memory.record_event(
            event_type="task_completed",
            description=f"Task completed by {agent_id[:8]}",
            agent_id=agent_id,
            task_id=task_id,
            data=json.dumps({
                "result_length": len(result),
                "winning_bid": winning_bid,
            }),
        )

        # Evaluate EIDOS prediction
        evaluation = spark_eidos.evaluate_prediction(
            task_id=task_id,
            actual_winner=agent_id,
            task_succeeded=True,
            winning_bid=winning_bid,
        )
        if evaluation:
            accuracy = evaluation["accuracy"]
            spark_memory.record_event(
                event_type="prediction_result",
                description=f"Prediction accuracy: {accuracy:.0%}",
                task_id=task_id,
                data=json.dumps(evaluation, default=str),
                importance=0.7,
            )

        # Consolidate memory if enough events for this agent
        self._maybe_consolidate(agent_id)

        logger.debug("Spark: captured completion %s by %s",
                     task_id[:8], agent_id[:8])
        return event_id

    def on_task_failed(
        self,
        task_id: str,
        agent_id: str,
        reason: str,
    ) -> Optional[str]:
        """Capture a task-failed event and evaluate EIDOS prediction."""
        if not self._enabled:
            return None

        event_id = spark_memory.record_event(
            event_type="task_failed",
            description=f"Task failed by {agent_id[:8]}: {reason[:80]}",
            agent_id=agent_id,
            task_id=task_id,
            data=json.dumps({"reason": reason}),
        )

        # Evaluate EIDOS prediction
        spark_eidos.evaluate_prediction(
            task_id=task_id,
            actual_winner=agent_id,
            task_succeeded=False,
        )

        # Failures always worth consolidating
        self._maybe_consolidate(agent_id)

        logger.debug("Spark: captured failure %s by %s",
                     task_id[:8], agent_id[:8])
        return event_id

    def on_agent_joined(self, agent_id: str, name: str) -> Optional[str]:
        """Capture an agent-joined event."""
        if not self._enabled:
            return None

        return spark_memory.record_event(
            event_type="agent_joined",
            description=f"Agent {name} ({agent_id[:8]}) joined the swarm",
            agent_id=agent_id,
        )

    # ── Tool-level event capture ─────────────────────────────────────────────

    def on_tool_executed(
        self,
        agent_id: str,
        tool_name: str,
        task_id: Optional[str] = None,
        success: bool = True,
        duration_ms: Optional[int] = None,
    ) -> Optional[str]:
        """Capture an individual tool invocation.

        Tracks which tools each agent uses, success rates, and latency
        so Spark can generate tool-specific advisories.
        """
        if not self._enabled:
            return None

        data = {"tool": tool_name, "success": success}
        if duration_ms is not None:
            data["duration_ms"] = duration_ms

        return spark_memory.record_event(
            event_type="tool_executed",
            description=f"Agent {agent_id[:8]} used {tool_name} ({'ok' if success else 'FAIL'})",
            agent_id=agent_id,
            task_id=task_id,
            data=json.dumps(data),
            importance=0.3 if success else 0.6,
        )

    # ── Creative pipeline event capture ──────────────────────────────────────

    def on_creative_step(
        self,
        project_id: str,
        step_name: str,
        agent_id: str,
        output_path: Optional[str] = None,
        success: bool = True,
    ) -> Optional[str]:
        """Capture a creative pipeline step (storyboard, music, video, assembly).

        Tracks pipeline progress and creative output quality metrics
        for Spark advisory generation.
        """
        if not self._enabled:
            return None

        data = {
            "project_id": project_id,
            "step": step_name,
            "success": success,
        }
        if output_path:
            data["output_path"] = output_path

        return spark_memory.record_event(
            event_type="creative_step",
            description=f"Creative pipeline: {step_name} by {agent_id[:8]} ({'ok' if success else 'FAIL'})",
            agent_id=agent_id,
            data=json.dumps(data),
            importance=0.5,
        )

    # ── Memory consolidation ────────────────────────────────────────────────

    def _maybe_consolidate(self, agent_id: str) -> None:
        """Consolidate events into memories when enough data exists."""
        agent_events = spark_memory.get_events(agent_id=agent_id, limit=50)
        if len(agent_events) < 5:
            return

        completions = [e for e in agent_events if e.event_type == "task_completed"]
        failures = [e for e in agent_events if e.event_type == "task_failed"]
        total = len(completions) + len(failures)

        if total < 3:
            return

        success_rate = len(completions) / total if total else 0

        if success_rate >= 0.8:
            spark_memory.store_memory(
                memory_type="pattern",
                subject=agent_id,
                content=f"Agent {agent_id[:8]} has a strong track record: "
                        f"{len(completions)}/{total} tasks completed successfully.",
                confidence=min(0.95, 0.6 + total * 0.05),
                source_events=total,
            )
        elif success_rate <= 0.3:
            spark_memory.store_memory(
                memory_type="anomaly",
                subject=agent_id,
                content=f"Agent {agent_id[:8]} is struggling: only "
                        f"{len(completions)}/{total} tasks completed.",
                confidence=min(0.95, 0.6 + total * 0.05),
                source_events=total,
            )

    # ── Query API ────────────────────────────────────────────────────────────

    def status(self) -> dict:
        """Return a summary of Spark Intelligence state."""
        eidos_stats = spark_eidos.get_accuracy_stats()
        return {
            "enabled": self._enabled,
            "events_captured": spark_memory.count_events(),
            "memories_stored": spark_memory.count_memories(),
            "predictions": eidos_stats,
            "event_types": {
                "task_posted": spark_memory.count_events("task_posted"),
                "bid_submitted": spark_memory.count_events("bid_submitted"),
                "task_assigned": spark_memory.count_events("task_assigned"),
                "task_completed": spark_memory.count_events("task_completed"),
                "task_failed": spark_memory.count_events("task_failed"),
                "agent_joined": spark_memory.count_events("agent_joined"),
                "tool_executed": spark_memory.count_events("tool_executed"),
                "creative_step": spark_memory.count_events("creative_step"),
            },
        }

    def get_advisories(self) -> list[Advisory]:
        """Generate current advisories based on accumulated intelligence."""
        if not self._enabled:
            return []
        return spark_advisor.generate_advisories()

    def get_timeline(self, limit: int = 50) -> list[SparkEvent]:
        """Return recent events as a timeline."""
        return spark_memory.get_events(limit=limit)

    def get_memories(self, limit: int = 50) -> list[SparkMemory]:
        """Return consolidated memories."""
        return spark_memory.get_memories(limit=limit)

    def get_predictions(self, limit: int = 20) -> list:
        """Return recent EIDOS predictions."""
        return spark_eidos.get_predictions(limit=limit)


# Module-level singleton — respects SPARK_ENABLED config
def _create_engine() -> SparkEngine:
    try:
        from config import settings
        return SparkEngine(enabled=settings.spark_enabled)
    except Exception:
        return SparkEngine(enabled=True)


spark_engine = _create_engine()
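The consolidation rules in `_maybe_consolidate` boil down to two thresholds on an agent's completion/failure ratio plus a confidence that grows with sample size. A minimal standalone sketch (hypothetical `classify_track_record`, written only to illustrate the thresholds above, not project code):

```python
from typing import Optional, Tuple

def classify_track_record(completions: int, failures: int) -> Optional[Tuple[str, float]]:
    """Return (memory_type, confidence) per the consolidation thresholds, or None.

    Mirrors _maybe_consolidate: needs at least 3 outcomes, stores a "pattern"
    memory at >= 80% success and an "anomaly" memory at <= 30% success, with
    confidence min(0.95, 0.6 + total * 0.05).
    """
    total = completions + failures
    if total < 3:
        return None  # not enough signal yet
    success_rate = completions / total
    confidence = min(0.95, 0.6 + total * 0.05)
    if success_rate >= 0.8:
        return ("pattern", confidence)
    if success_rate <= 0.3:
        return ("anomaly", confidence)
    return None  # middling agents produce no memory

print(classify_track_record(4, 0))   # strong agent → pattern memory
print(classify_track_record(1, 4))   # struggling agent → anomaly memory
print(classify_track_record(2, 2))   # 50% success → no memory
```

A consequence of the confidence cap is that after 7 outcomes (0.6 + 7 × 0.05 = 0.95) extra evidence no longer raises confidence.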
301	src/spark/memory.py	Normal file
@@ -0,0 +1,301 @@
"""Spark memory — SQLite-backed event capture and memory consolidation.

Captures swarm events (tasks posted, bids, assignments, completions,
failures) and distills them into higher-level memories with importance
scoring. This is the persistence layer for Spark Intelligence.

Tables
------
spark_events   — raw event log (every swarm event)
spark_memories — consolidated insights extracted from event patterns
"""

import sqlite3
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

DB_PATH = Path("data/spark.db")

# Importance thresholds
IMPORTANCE_LOW = 0.3
IMPORTANCE_MEDIUM = 0.6
IMPORTANCE_HIGH = 0.8


@dataclass
class SparkEvent:
    """A single captured swarm event."""
    id: str
    event_type: str  # task_posted, bid_submitted, task_assigned, task_completed, task_failed, ...
    agent_id: Optional[str]
    task_id: Optional[str]
    description: str
    data: str  # JSON payload
    importance: float  # 0.0–1.0
    created_at: str


@dataclass
class SparkMemory:
    """A consolidated memory distilled from event patterns."""
    id: str
    memory_type: str  # pattern, insight, anomaly
    subject: str  # agent_id or "system"
    content: str  # Human-readable insight
    confidence: float  # 0.0–1.0
    source_events: int  # How many events contributed
    created_at: str
    expires_at: Optional[str]


def _get_conn() -> sqlite3.Connection:
    DB_PATH.parent.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(str(DB_PATH))
    conn.row_factory = sqlite3.Row
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS spark_events (
            id TEXT PRIMARY KEY,
            event_type TEXT NOT NULL,
            agent_id TEXT,
            task_id TEXT,
            description TEXT NOT NULL DEFAULT '',
            data TEXT NOT NULL DEFAULT '{}',
            importance REAL NOT NULL DEFAULT 0.5,
            created_at TEXT NOT NULL
        )
        """
    )
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS spark_memories (
            id TEXT PRIMARY KEY,
            memory_type TEXT NOT NULL,
            subject TEXT NOT NULL DEFAULT 'system',
            content TEXT NOT NULL,
            confidence REAL NOT NULL DEFAULT 0.5,
            source_events INTEGER NOT NULL DEFAULT 0,
            created_at TEXT NOT NULL,
            expires_at TEXT
        )
        """
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_events_type ON spark_events(event_type)"
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_events_agent ON spark_events(agent_id)"
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_events_task ON spark_events(task_id)"
    )
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_memories_subject ON spark_memories(subject)"
    )
    conn.commit()
    return conn


# ── Importance scoring ──────────────────────────────────────────────────────

def score_importance(event_type: str, data: dict) -> float:
    """Compute importance score for an event (0.0–1.0).

    High-importance events: failures, large bids, first-time patterns.
    Low-importance events: routine bids, repeated successful completions.
    """
    base_scores = {
        "task_posted": 0.4,
        "bid_submitted": 0.2,
        "task_assigned": 0.5,
        "task_completed": 0.6,
        "task_failed": 0.9,
        "agent_joined": 0.5,
        "prediction_result": 0.7,
    }
    score = base_scores.get(event_type, 0.5)

    # Boost for failures (always important to learn from)
    if event_type == "task_failed":
        score = min(1.0, score + 0.1)

    # Boost for high-value bids
    bid_sats = data.get("bid_sats", 0)
    if bid_sats and bid_sats > 80:
        score = min(1.0, score + 0.15)

    return round(score, 2)


# ── Event recording ─────────────────────────────────────────────────────────

def record_event(
    event_type: str,
    description: str,
    agent_id: Optional[str] = None,
    task_id: Optional[str] = None,
    data: str = "{}",
    importance: Optional[float] = None,
) -> str:
    """Record a swarm event. Returns the event id."""
    import json
    event_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).isoformat()

    if importance is None:
        try:
            parsed = json.loads(data) if isinstance(data, str) else data
        except (json.JSONDecodeError, TypeError):
            parsed = {}
        importance = score_importance(event_type, parsed)

    conn = _get_conn()
    conn.execute(
        """
        INSERT INTO spark_events
        (id, event_type, agent_id, task_id, description, data, importance, created_at)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (event_id, event_type, agent_id, task_id, description, data, importance, now),
    )
    conn.commit()
    conn.close()
    return event_id


def get_events(
    event_type: Optional[str] = None,
    agent_id: Optional[str] = None,
    task_id: Optional[str] = None,
    limit: int = 100,
    min_importance: float = 0.0,
) -> list[SparkEvent]:
    """Query events with optional filters."""
    conn = _get_conn()
    query = "SELECT * FROM spark_events WHERE importance >= ?"
    params: list = [min_importance]

    if event_type:
        query += " AND event_type = ?"
        params.append(event_type)
    if agent_id:
        query += " AND agent_id = ?"
        params.append(agent_id)
    if task_id:
        query += " AND task_id = ?"
        params.append(task_id)

    query += " ORDER BY created_at DESC LIMIT ?"
    params.append(limit)

    rows = conn.execute(query, params).fetchall()
    conn.close()
    return [
        SparkEvent(
            id=r["id"],
            event_type=r["event_type"],
            agent_id=r["agent_id"],
            task_id=r["task_id"],
            description=r["description"],
            data=r["data"],
            importance=r["importance"],
            created_at=r["created_at"],
        )
        for r in rows
    ]


def count_events(event_type: Optional[str] = None) -> int:
    """Count events, optionally filtered by type."""
    conn = _get_conn()
    if event_type:
        row = conn.execute(
            "SELECT COUNT(*) FROM spark_events WHERE event_type = ?",
            (event_type,),
        ).fetchone()
    else:
        row = conn.execute("SELECT COUNT(*) FROM spark_events").fetchone()
    conn.close()
    return row[0]


# ── Memory consolidation ───────────────────────────────────────────────────

def store_memory(
    memory_type: str,
    subject: str,
    content: str,
    confidence: float = 0.5,
    source_events: int = 0,
    expires_at: Optional[str] = None,
) -> str:
    """Store a consolidated memory. Returns the memory id."""
    mem_id = str(uuid.uuid4())
    now = datetime.now(timezone.utc).isoformat()
    conn = _get_conn()
    conn.execute(
        """
        INSERT INTO spark_memories
        (id, memory_type, subject, content, confidence, source_events, created_at, expires_at)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?)
        """,
        (mem_id, memory_type, subject, content, confidence, source_events, now, expires_at),
    )
    conn.commit()
    conn.close()
    return mem_id


def get_memories(
    memory_type: Optional[str] = None,
    subject: Optional[str] = None,
    min_confidence: float = 0.0,
    limit: int = 50,
) -> list[SparkMemory]:
    """Query memories with optional filters."""
    conn = _get_conn()
    query = "SELECT * FROM spark_memories WHERE confidence >= ?"
    params: list = [min_confidence]

    if memory_type:
        query += " AND memory_type = ?"
        params.append(memory_type)
    if subject:
        query += " AND subject = ?"
        params.append(subject)

    query += " ORDER BY created_at DESC LIMIT ?"
    params.append(limit)

    rows = conn.execute(query, params).fetchall()
    conn.close()
    return [
        SparkMemory(
            id=r["id"],
            memory_type=r["memory_type"],
            subject=r["subject"],
            content=r["content"],
            confidence=r["confidence"],
            source_events=r["source_events"],
            created_at=r["created_at"],
            expires_at=r["expires_at"],
        )
        for r in rows
    ]


def count_memories(memory_type: Optional[str] = None) -> int:
    """Count memories, optionally filtered by type."""
    conn = _get_conn()
    if memory_type:
        row = conn.execute(
            "SELECT COUNT(*) FROM spark_memories WHERE memory_type = ?",
            (memory_type,),
        ).fetchone()
    else:
        row = conn.execute("SELECT COUNT(*) FROM spark_memories").fetchone()
    conn.close()
    return row[0]
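The `score_importance` logic above (a base score per event type, a +0.1 failure boost, and a +0.15 boost for bids over 80 sats, all capped at 1.0) can be sketched standalone; `sketch_importance` below is a hypothetical mirror for illustration, not the project function:

```python
def sketch_importance(event_type: str, data: dict) -> float:
    """Mirror of score_importance: base score per type plus capped boosts."""
    base = {
        "task_posted": 0.4,
        "bid_submitted": 0.2,
        "task_assigned": 0.5,
        "task_completed": 0.6,
        "task_failed": 0.9,
        "agent_joined": 0.5,
        "prediction_result": 0.7,
    }.get(event_type, 0.5)          # unknown types default to 0.5
    if event_type == "task_failed":
        base = min(1.0, base + 0.1)  # failures are always worth learning from
    if data.get("bid_sats", 0) > 80:
        base = min(1.0, base + 0.15)  # high-value bids stand out
    return round(base, 2)

print(sketch_importance("task_failed", {}))              # 0.9 + 0.1, capped → 1.0
print(sketch_importance("bid_submitted", {"bid_sats": 100}))  # 0.2 + 0.15 → 0.35
```

So a routine bid (0.2) sits below `IMPORTANCE_LOW`-adjacent noise, while any failure saturates the scale, which biases later consolidation toward mistakes.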
@@ -29,6 +29,15 @@ from swarm.tasks import (
     update_task,
 )

+# Spark Intelligence integration — lazy import to avoid circular deps
+def _get_spark():
+    """Lazily import the Spark engine singleton."""
+    try:
+        from spark.engine import spark_engine
+        return spark_engine
+    except Exception:
+        return None
+
 logger = logging.getLogger(__name__)

@@ -100,6 +109,10 @@ class SwarmCoordinator:
             )
             # Broadcast bid via WebSocket
             self._broadcast(self._broadcast_bid, task_id, aid, bid_sats)
+            # Spark: capture bid event
+            spark = _get_spark()
+            if spark:
+                spark.on_bid_submitted(task_id, aid, bid_sats)

         self.comms.subscribe("swarm:tasks", _bid_and_register)

@@ -109,15 +122,20 @@ class SwarmCoordinator:
             capabilities=meta["capabilities"],
             agent_id=aid,
         )

         # Register capability manifest with routing engine
         swarm_routing.routing_engine.register_persona(persona_id, aid)

         self._in_process_nodes.append(node)
         logger.info("Spawned persona %s (%s)", node.name, aid)

         # Broadcast agent join via WebSocket
         self._broadcast(self._broadcast_agent_joined, aid, node.name)

+        # Spark: capture agent join
+        spark = _get_spark()
+        if spark:
+            spark.on_agent_joined(aid, node.name)
+
         return {
             "agent_id": aid,
@@ -193,6 +211,11 @@ class SwarmCoordinator:
         logger.info("Task posted: %s (%s)", task.id, description[:50])
         # Broadcast task posted via WebSocket
         self._broadcast(self._broadcast_task_posted, task.id, description)
+        # Spark: capture task-posted event with candidate agents
+        spark = _get_spark()
+        if spark:
+            candidates = [a.id for a in registry.list_agents()]
+            spark.on_task_posted(task.id, description, candidates)
         return task

     async def run_auction_and_assign(self, task_id: str) -> Optional[Bid]:
@@ -259,6 +282,10 @@ class SwarmCoordinator:
             )
             # Broadcast task assigned via WebSocket
             self._broadcast(self._broadcast_task_assigned, task_id, winner.agent_id)
+            # Spark: capture assignment
+            spark = _get_spark()
+            if spark:
+                spark.on_task_assigned(task_id, winner.agent_id)
         else:
             update_task(task_id, status=TaskStatus.FAILED)
             logger.warning("Task %s: no bids received, marked as failed", task_id)
@@ -286,6 +313,10 @@ class SwarmCoordinator:
         self._broadcast(
             self._broadcast_task_completed,
             task_id, task.assigned_agent, result
         )
+        # Spark: capture completion
+        spark = _get_spark()
+        if spark:
+            spark.on_task_completed(task_id, task.assigned_agent, result)
         return updated

     def fail_task(self, task_id: str, reason: str = "") -> Optional[Task]:
@@ -304,6 +335,10 @@ class SwarmCoordinator:
         registry.update_status(task.assigned_agent, "idle")
         # Record failure in learner
         swarm_learner.record_task_result(task_id, task.assigned_agent, succeeded=False)
+        # Spark: capture failure
+        spark = _get_spark()
+        if spark:
+            spark.on_task_failed(task_id, task.assigned_agent, reason)
         return updated

     def get_task(self, task_id: str) -> Optional[Task]:
@@ -377,7 +412,7 @@ class SwarmCoordinator:
         """Return a summary of the swarm state."""
         agents = registry.list_agents()
         tasks = list_tasks()
-        return {
+        status = {
             "agents": len(agents),
             "agents_idle": sum(1 for a in agents if a.status == "idle"),
             "agents_busy": sum(1 for a in agents if a.status == "busy"),
@@ -388,6 +423,16 @@ class SwarmCoordinator:
             "active_auctions": len(self.auctions.active_auctions),
             "routing_manifests": len(swarm_routing.routing_engine._manifests),
         }
+        # Include Spark Intelligence summary if available
+        spark = _get_spark()
+        if spark and spark.enabled:
+            spark_status = spark.status()
+            status["spark"] = {
+                "events_captured": spark_status["events_captured"],
+                "memories_stored": spark_status["memories_stored"],
+                "prediction_accuracy": spark_status["predictions"]["avg_accuracy"],
+            }
+        return status

     def get_routing_decisions(self, task_id: Optional[str] = None, limit: int = 100) -> list:
         """Get routing decision history for audit.
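Every Spark hook in the coordinator hunks above goes through the same `_get_spark()` guard: import lazily, and degrade to a no-op when the module is missing or broken. The pattern can be shown generically; `get_optional_singleton` below is a hypothetical standalone version for illustration (the actual guard hard-codes the `spark.engine` import):

```python
import importlib

def get_optional_singleton(module_name: str, attr: str):
    """Return module.attr, or None when the import fails (mirrors _get_spark)."""
    try:
        return getattr(importlib.import_module(module_name), attr)
    except Exception:
        # Broad catch is deliberate: a broken optional subsystem must never
        # take down the coordinator's hot path.
        return None

# A missing module degrades to None, so call sites can guard with `if spark:`
print(get_optional_singleton("spark_engine_not_installed", "spark_engine"))  # None
# A present module resolves normally
print(get_optional_singleton("json", "dumps") is not None)  # True
```

The cost is one import-cache lookup per event; the benefit is that the Spark dependency stays strictly optional and circular imports at module load time are avoided.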
@@ -1,4 +1,4 @@
|
||||
"""Persona definitions for the six built-in swarm agents.
|
||||
"""Persona definitions for the nine built-in swarm agents.
|
||||
|
||||
Each persona entry describes a specialised SwarmNode that can be spawned
|
||||
into the coordinator. Personas have:
|
||||
@@ -76,6 +76,7 @@ PERSONAS: dict[str, PersonaMeta] = {
|
||||
"preferred_keywords": [
|
||||
"deploy", "infrastructure", "config", "docker", "kubernetes",
|
||||
"server", "automation", "pipeline", "ci", "cd",
|
||||
"git", "push", "pull", "clone", "devops",
|
||||
],
|
||||
},
|
||||
"seer": {
|
||||
@@ -109,6 +110,7 @@ PERSONAS: dict[str, PersonaMeta] = {
|
||||
"preferred_keywords": [
|
||||
"code", "function", "bug", "fix", "refactor", "test",
|
||||
"implement", "class", "api", "script",
|
||||
"commit", "branch", "merge", "git", "pull request",
|
||||
],
|
||||
},
|
||||
"quill": {
|
||||
@@ -127,6 +129,60 @@ PERSONAS: dict[str, PersonaMeta] = {
            "edit", "proofread", "content", "article",
        ],
    },
    # ── Creative & DevOps personas ────────────────────────────────────────────
    "pixel": {
        "id": "pixel",
        "name": "Pixel",
        "role": "Visual Architect",
        "description": (
            "Image generation, storyboard frames, and visual design "
            "using FLUX models."
        ),
        "capabilities": "image-generation,storyboard,design",
        "rate_sats": 80,
        "bid_base": 60,
        "bid_jitter": 20,
        "preferred_keywords": [
            "image", "picture", "photo", "draw", "illustration",
            "storyboard", "frame", "visual", "design", "generate image",
            "portrait", "landscape", "scene", "artwork",
        ],
    },
    "lyra": {
        "id": "lyra",
        "name": "Lyra",
        "role": "Sound Weaver",
        "description": (
            "Music and song generation with vocals, instrumentals, "
            "and lyrics using ACE-Step."
        ),
        "capabilities": "music-generation,vocals,composition",
        "rate_sats": 90,
        "bid_base": 70,
        "bid_jitter": 20,
        "preferred_keywords": [
            "music", "song", "sing", "vocal", "instrumental",
            "melody", "beat", "track", "compose", "lyrics",
            "audio", "sound", "album", "remix",
        ],
    },
    "reel": {
        "id": "reel",
        "name": "Reel",
        "role": "Motion Director",
        "description": (
            "Video generation from text and image prompts "
            "using Wan 2.1 models."
        ),
        "capabilities": "video-generation,animation,motion",
        "rate_sats": 100,
        "bid_base": 80,
        "bid_jitter": 20,
        "preferred_keywords": [
            "video", "clip", "animate", "motion", "film",
            "scene", "cinematic", "footage", "render", "timelapse",
        ],
    },
}
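The persona entries above carry `preferred_keywords`, `bid_base`, and `bid_jitter` fields that the coordinator presumably uses for routing and bidding. As a minimal sketch (the scorer and bid functions here are assumptions, not code from this PR), keyword-overlap scoring plus a jittered bid might look like:

```python
import random

# Hypothetical helpers, not part of the diff: score a persona by how many of
# its preferred keywords appear in the task text, and compute a jittered bid.
def score_persona(persona: dict, task_text: str) -> int:
    text = task_text.lower()
    return sum(1 for kw in persona["preferred_keywords"] if kw in text)

def make_bid(persona: dict, rng: random.Random) -> int:
    # bid_base plus up to bid_jitter sats, as the fields above suggest
    return persona["bid_base"] + rng.randint(0, persona["bid_jitter"])

pixel = {
    "preferred_keywords": ["image", "storyboard", "visual"],
    "bid_base": 60,
    "bid_jitter": 20,
}
print(score_persona(pixel, "Generate an image storyboard"))  # → 2
```

The jitter keeps two personas with the same base rate from always tying on bids.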
@@ -23,11 +23,14 @@ class ToolExecutor:

    Each persona gets a different set of tools based on their specialty:
    - Echo: web search, file reading
-   - Forge: shell, python, file read/write
+   - Forge: shell, python, file read/write, git
    - Seer: python, file reading
    - Quill: file read/write
    - Mace: shell, web search
-   - Helm: shell, file operations
+   - Helm: shell, file operations, git
    - Pixel: image generation, storyboards
    - Lyra: music/song generation
    - Reel: video generation, assembly

    The executor combines:
    1. MCP tools (file, shell, python, search)
@@ -214,6 +217,39 @@ Response:"""
            "run": "shell",
            "list": "list_files",
            "directory": "list_files",
            # Git operations
            "commit": "git_commit",
            "branch": "git_branch",
            "push": "git_push",
            "pull": "git_pull",
            "diff": "git_diff",
            "clone": "git_clone",
            "merge": "git_branch",
            "stash": "git_stash",
            "blame": "git_blame",
            "git status": "git_status",
            "git log": "git_log",
            # Image generation
            "image": "generate_image",
            "picture": "generate_image",
            "storyboard": "generate_storyboard",
            "illustration": "generate_image",
            # Music generation
            "music": "generate_song",
            "song": "generate_song",
            "vocal": "generate_vocals",
            "instrumental": "generate_instrumental",
            "lyrics": "generate_song",
            # Video generation
            "video": "generate_video_clip",
            "clip": "generate_video_clip",
            "animate": "image_to_video",
            "film": "generate_video_clip",
            # Assembly
            "stitch": "stitch_clips",
            "assemble": "run_assembly",
            "title card": "add_title_card",
            "subtitle": "add_subtitles",
        }

        for keyword, tool in keyword_tool_map.items():
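The hunk above extends `_infer_tools_needed()`'s keyword-to-tool map but cuts off before the matching loop. A minimal self-contained sketch of the substring-matching pattern it implies (the map below is a small excerpt, and `infer_tools` is an assumption about the loop body, not the PR's code):

```python
# Excerpt of the keyword → tool map from the hunk above.
KEYWORD_TOOL_MAP = {
    "commit": "git_commit",
    "image": "generate_image",
    "music": "generate_song",
}

def infer_tools(task_text: str) -> list[str]:
    """Collect each tool whose keyword appears in the task, deduplicated."""
    text = task_text.lower()
    tools: list[str] = []
    for keyword, tool in KEYWORD_TOOL_MAP.items():
        if keyword in text and tool not in tools:
            tools.append(tool)
    return tools

print(infer_tools("Commit the new image assets"))  # → ['git_commit', 'generate_image']
```

Note that substring matching is deliberately loose: "committed" would also trigger `git_commit`.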
@@ -5,12 +5,22 @@ Provides Timmy and swarm agents with capabilities for:
- File read/write (local filesystem)
- Shell command execution (sandboxed)
- Python code execution
- Git operations (clone, commit, push, pull, branch, diff, etc.)
- Image generation (FLUX text-to-image, storyboards)
- Music generation (ACE-Step vocals + instrumentals)
- Video generation (Wan 2.1 text-to-video, image-to-video)
- Creative pipeline (storyboard → music → video → assembly)

Tools are assigned to personas based on their specialties:
- Echo (Research): web search, file read
-- Forge (Code): shell, python execution, file write
+- Forge (Code): shell, python execution, file write, git
- Seer (Data): python execution, file read
- Quill (Writing): file read/write
- Helm (DevOps): shell, file operations, git
- Mace (Security): shell, web search, file read
- Pixel (Visual): image generation, storyboards
- Lyra (Music): song/vocal/instrumental generation
- Reel (Video): video clip generation, image-to-video
"""

from __future__ import annotations
@@ -280,9 +290,26 @@ PERSONA_TOOLKITS: dict[str, Callable[[], Toolkit]] = {
    "seer": create_data_tools,
    "forge": create_code_tools,
    "quill": create_writing_tools,
    "pixel": lambda base_dir=None: _create_stub_toolkit("pixel"),
    "lyra": lambda base_dir=None: _create_stub_toolkit("lyra"),
    "reel": lambda base_dir=None: _create_stub_toolkit("reel"),
}


def _create_stub_toolkit(name: str):
    """Create a minimal Agno toolkit for creative personas.

    Creative personas use their own dedicated tool modules (tools.image_tools,
    tools.music_tools, tools.video_tools) rather than Agno-wrapped functions.
    This stub ensures PERSONA_TOOLKITS has an entry so ToolExecutor doesn't
    fall back to the full toolkit.
    """
    if not _AGNO_TOOLS_AVAILABLE:
        return None
    toolkit = Toolkit(name=name)
    return toolkit


def get_tools_for_persona(persona_id: str, base_dir: str | Path | None = None) -> Toolkit | None:
    """Get the appropriate toolkit for a persona.
@@ -301,11 +328,11 @@ def get_tools_for_persona(persona_id: str, base_dir: str | Path | None = None) -


def get_all_available_tools() -> dict[str, dict]:
    """Get a catalog of all available tools and their descriptions.

    Returns:
        Dict mapping tool categories to their tools and descriptions.
    """
-    return {
+    catalog = {
        "web_search": {
            "name": "Web Search",
            "description": "Search the web using DuckDuckGo",
@@ -337,3 +364,77 @@ def get_all_available_tools() -> dict[str, dict]:
            "available_in": ["echo", "seer", "forge", "quill", "mace", "helm", "timmy"],
        },
    }

    # ── Git tools ─────────────────────────────────────────────────────────────
    try:
        from tools.git_tools import GIT_TOOL_CATALOG
        for tool_id, info in GIT_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["forge", "helm", "timmy"],
            }
    except ImportError:
        pass

    # ── Image tools (Pixel) ───────────────────────────────────────────────────
    try:
        from tools.image_tools import IMAGE_TOOL_CATALOG
        for tool_id, info in IMAGE_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["pixel", "timmy"],
            }
    except ImportError:
        pass

    # ── Music tools (Lyra) ────────────────────────────────────────────────────
    try:
        from tools.music_tools import MUSIC_TOOL_CATALOG
        for tool_id, info in MUSIC_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["lyra", "timmy"],
            }
    except ImportError:
        pass

    # ── Video tools (Reel) ────────────────────────────────────────────────────
    try:
        from tools.video_tools import VIDEO_TOOL_CATALOG
        for tool_id, info in VIDEO_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["reel", "timmy"],
            }
    except ImportError:
        pass

    # ── Creative pipeline (Director) ──────────────────────────────────────────
    try:
        from creative.director import DIRECTOR_TOOL_CATALOG
        for tool_id, info in DIRECTOR_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["timmy"],
            }
    except ImportError:
        pass

    # ── Assembler tools ───────────────────────────────────────────────────────
    try:
        from creative.assembler import ASSEMBLER_TOOL_CATALOG
        for tool_id, info in ASSEMBLER_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["reel", "timmy"],
            }
    except ImportError:
        pass

    return catalog
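The six near-identical try/except blocks above implement an optional-plugin pattern: each catalog merges in only if its module imports. A factored sketch of that pattern (the `merge_optional_catalog` helper is hypothetical, not something this PR adds):

```python
# Hypothetical refactor of the repeated try/except ImportError blocks above.
def merge_optional_catalog(catalog: dict, load, personas: list[str]) -> dict:
    """Merge an optional tool catalog; skip silently if its module is absent."""
    try:
        extra = load()
    except ImportError:
        return catalog
    for tool_id, info in extra.items():
        catalog[tool_id] = {
            "name": info["name"],
            "description": info["description"],
            "available_in": personas,
        }
    return catalog

def load_git_catalog():
    # Stand-in for `from tools.git_tools import GIT_TOOL_CATALOG`
    return {"git_status": {"name": "Git Status", "description": "Show status"}}

catalog = merge_optional_catalog({}, load_git_catalog, ["forge", "helm", "timmy"])
print(sorted(catalog))  # → ['git_status']
```

Catching only `ImportError` keeps real bugs inside a tool module from being swallowed.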
src/tools/__init__.py (new file, 1 line)
@@ -0,0 +1 @@
"""Creative and DevOps tool modules for Timmy's swarm agents."""
src/tools/git_tools.py (new file, 281 lines)
@@ -0,0 +1,281 @@
"""Git operations tools for Forge, Helm, and Timmy personas.

Provides a full set of git commands that agents can execute against
local or remote repositories. Uses GitPython under the hood.

All functions return plain dicts so they're easily serialisable for
tool-call results, Spark event capture, and WebSocket broadcast.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

_GIT_AVAILABLE = True
try:
    from git import Repo, InvalidGitRepositoryError, GitCommandNotFound
except ImportError:
    _GIT_AVAILABLE = False


def _require_git() -> None:
    if not _GIT_AVAILABLE:
        raise ImportError(
            "GitPython is not installed. Run: pip install GitPython"
        )


def _open_repo(repo_path: str | Path) -> "Repo":
    """Open an existing git repo at *repo_path*."""
    _require_git()
    return Repo(str(repo_path))


# ── Repository management ────────────────────────────────────────────────────

def git_clone(url: str, dest: str | Path) -> dict:
    """Clone a remote repository to a local path.

    Returns dict with ``path`` and ``default_branch``.
    """
    _require_git()
    repo = Repo.clone_from(url, str(dest))
    return {
        "success": True,
        "path": str(dest),
        "default_branch": repo.active_branch.name,
    }


def git_init(path: str | Path) -> dict:
    """Initialise a new git repository at *path*."""
    _require_git()
    Path(path).mkdir(parents=True, exist_ok=True)
    repo = Repo.init(str(path))
    return {"success": True, "path": str(path), "bare": repo.bare}
# ── Status / inspection ──────────────────────────────────────────────────────

def git_status(repo_path: str | Path) -> dict:
    """Return working-tree status: modified, staged, untracked files."""
    repo = _open_repo(repo_path)
    return {
        "success": True,
        "branch": repo.active_branch.name,
        "is_dirty": repo.is_dirty(untracked_files=True),
        "untracked": repo.untracked_files,
        "modified": [item.a_path for item in repo.index.diff(None)],
        "staged": [item.a_path for item in repo.index.diff("HEAD")],
    }


def git_diff(
    repo_path: str | Path,
    staged: bool = False,
    file_path: Optional[str] = None,
) -> dict:
    """Show diff of working tree or staged changes.

    If *file_path* is given, scope diff to that file only.
    """
    repo = _open_repo(repo_path)
    args: list[str] = []
    if staged:
        args.append("--cached")
    if file_path:
        args.extend(["--", file_path])
    diff_text = repo.git.diff(*args)
    return {"success": True, "diff": diff_text, "staged": staged}


def git_log(
    repo_path: str | Path,
    max_count: int = 20,
    branch: Optional[str] = None,
) -> dict:
    """Return recent commit history as a list of dicts."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    commits = []
    for commit in repo.iter_commits(ref, max_count=max_count):
        commits.append({
            "sha": commit.hexsha,
            "short_sha": commit.hexsha[:8],
            "message": commit.message.strip(),
            "author": str(commit.author),
            "date": commit.committed_datetime.isoformat(),
            "files_changed": len(commit.stats.files),
        })
    return {"success": True, "branch": ref, "commits": commits}


def git_blame(repo_path: str | Path, file_path: str) -> dict:
    """Show line-by-line authorship for a file."""
    repo = _open_repo(repo_path)
    blame_text = repo.git.blame(file_path)
    return {"success": True, "file": file_path, "blame": blame_text}
# ── Branching ─────────────────────────────────────────────────────────────────

def git_branch(
    repo_path: str | Path,
    create: Optional[str] = None,
    switch: Optional[str] = None,
) -> dict:
    """List branches, optionally create or switch to one."""
    repo = _open_repo(repo_path)

    if create:
        repo.create_head(create)
    if switch:
        repo.heads[switch].checkout()

    branches = [h.name for h in repo.heads]
    active = repo.active_branch.name
    return {
        "success": True,
        "branches": branches,
        "active": active,
        "created": create,
        "switched": switch,
    }


# ── Staging & committing ─────────────────────────────────────────────────────

def git_add(repo_path: str | Path, paths: list[str] | None = None) -> dict:
    """Stage files for commit. *paths* defaults to all modified files."""
    repo = _open_repo(repo_path)
    if paths:
        repo.index.add(paths)
    else:
        # Stage all changes
        repo.git.add(A=True)
    staged = [item.a_path for item in repo.index.diff("HEAD")]
    return {"success": True, "staged": staged}


def git_commit(repo_path: str | Path, message: str) -> dict:
    """Create a commit with the given message."""
    repo = _open_repo(repo_path)
    commit = repo.index.commit(message)
    return {
        "success": True,
        "sha": commit.hexsha,
        "short_sha": commit.hexsha[:8],
        "message": message,
    }
# ── Remote operations ─────────────────────────────────────────────────────────

def git_push(
    repo_path: str | Path,
    remote: str = "origin",
    branch: Optional[str] = None,
) -> dict:
    """Push the current (or specified) branch to the remote."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    info = repo.remotes[remote].push(ref)
    summaries = [str(i.summary) for i in info]
    return {"success": True, "remote": remote, "branch": ref, "summaries": summaries}


def git_pull(
    repo_path: str | Path,
    remote: str = "origin",
    branch: Optional[str] = None,
) -> dict:
    """Pull from the remote into the working tree."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    info = repo.remotes[remote].pull(ref)
    summaries = [str(i.summary) for i in info]
    return {"success": True, "remote": remote, "branch": ref, "summaries": summaries}


# ── Stashing ──────────────────────────────────────────────────────────────────

def git_stash(
    repo_path: str | Path,
    pop: bool = False,
    message: Optional[str] = None,
) -> dict:
    """Stash or pop working-tree changes."""
    repo = _open_repo(repo_path)
    if pop:
        repo.git.stash("pop")
        return {"success": True, "action": "pop"}
    args = ["push"]
    if message:
        args.extend(["-m", message])
    repo.git.stash(*args)
    return {"success": True, "action": "stash", "message": message}
# ── Tool catalogue ────────────────────────────────────────────────────────────

GIT_TOOL_CATALOG: dict[str, dict] = {
    "git_clone": {
        "name": "Git Clone",
        "description": "Clone a remote repository to a local path",
        "fn": git_clone,
    },
    "git_status": {
        "name": "Git Status",
        "description": "Show working tree status (modified, staged, untracked)",
        "fn": git_status,
    },
    "git_diff": {
        "name": "Git Diff",
        "description": "Show diff of working tree or staged changes",
        "fn": git_diff,
    },
    "git_log": {
        "name": "Git Log",
        "description": "Show recent commit history",
        "fn": git_log,
    },
    "git_blame": {
        "name": "Git Blame",
        "description": "Show line-by-line authorship for a file",
        "fn": git_blame,
    },
    "git_branch": {
        "name": "Git Branch",
        "description": "List, create, or switch branches",
        "fn": git_branch,
    },
    "git_add": {
        "name": "Git Add",
        "description": "Stage files for commit",
        "fn": git_add,
    },
    "git_commit": {
        "name": "Git Commit",
        "description": "Create a commit with a message",
        "fn": git_commit,
    },
    "git_push": {
        "name": "Git Push",
        "description": "Push branch to remote repository",
        "fn": git_push,
    },
    "git_pull": {
        "name": "Git Pull",
        "description": "Pull from remote repository",
        "fn": git_pull,
    },
    "git_stash": {
        "name": "Git Stash",
        "description": "Stash or pop working tree changes",
        "fn": git_stash,
    },
}
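Since every tool returns a plain dict with a `success` key, a caller can dispatch through `GIT_TOOL_CATALOG` and fold exceptions into the same shape. A sketch of such a dispatcher (this helper and the stub entry are assumptions for illustration; they are not part of the new module):

```python
# Hypothetical dispatcher, not from the PR: route a tool call through a
# catalog entry and normalise failures to {"success": False, "error": ...}.
def call_tool(catalog: dict, tool_id: str, **kwargs) -> dict:
    entry = catalog.get(tool_id)
    if entry is None:
        return {"success": False, "error": f"unknown tool: {tool_id}"}
    try:
        return entry["fn"](**kwargs)
    except Exception as exc:
        return {"success": False, "error": str(exc)}

# Stub entry standing in for the real GitPython-backed functions.
demo_catalog = {
    "git_status": {
        "name": "Git Status",
        "fn": lambda repo_path: {"success": True, "branch": "main"},
    },
}
print(call_tool(demo_catalog, "git_status", repo_path="/tmp/repo"))
```

This keeps error handling out of the individual tools and gives Spark a uniform event shape to capture.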
src/tools/image_tools.py (new file, 171 lines)
@@ -0,0 +1,171 @@
"""Image generation tools — Pixel persona.

Uses FLUX.2 Klein 4B (or configurable model) via HuggingFace diffusers
for text-to-image generation, storyboard frames, and variations.

All heavy imports are lazy so the module loads instantly even without
a GPU or the ``creative`` extra installed.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded pipeline singleton
_pipeline = None


def _get_pipeline():
    """Lazy-load the FLUX diffusers pipeline."""
    global _pipeline
    if _pipeline is not None:
        return _pipeline

    try:
        import torch
        from diffusers import FluxPipeline
    except ImportError:
        raise ImportError(
            "Creative dependencies not installed. "
            "Run: pip install 'timmy-time[creative]'"
        )

    from config import settings

    model_id = getattr(settings, "flux_model_id", "black-forest-labs/FLUX.1-schnell")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    logger.info("Loading image model %s on %s …", model_id, device)
    _pipeline = FluxPipeline.from_pretrained(
        model_id, torch_dtype=dtype,
    ).to(device)
    logger.info("Image model loaded.")
    return _pipeline


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "image_output_dir", "data/images"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(image_path: Path, meta: dict) -> Path:
    meta_path = image_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


# ── Public tools ──────────────────────────────────────────────────────────────
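The `_save_metadata` helper above establishes a sidecar convention: every generated asset gets a `.json` file with the same stem. A self-contained sketch of that convention (reimplemented here without the module's config dependency):

```python
import json
import tempfile
from pathlib import Path

# Same sidecar-metadata idea as _save_metadata above, minus the config import.
def save_metadata(asset_path: Path, meta: dict) -> Path:
    meta_path = asset_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path

with tempfile.TemporaryDirectory() as d:
    img = Path(d) / "abc123.png"
    side = save_metadata(img, {"prompt": "a lighthouse at dusk"})
    print(side.name)  # → abc123.json
```

Because `Path.with_suffix` replaces only the final suffix, `abc123.png` and `abc123.json` share a stem, so prompt provenance travels with the image file.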
def generate_image(
    prompt: str,
    negative_prompt: str = "",
    width: int = 1024,
    height: int = 1024,
    steps: int = 4,
    seed: Optional[int] = None,
) -> dict:
    """Generate an image from a text prompt.

    Returns dict with ``path``, ``width``, ``height``, and ``prompt``.
    """
    pipe = _get_pipeline()
    import torch

    generator = torch.Generator(device=pipe.device)
    if seed is not None:
        generator.manual_seed(seed)

    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt or None,
        width=width,
        height=height,
        num_inference_steps=steps,
        generator=generator,
    ).images[0]

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.png"
    image.save(out_path)

    meta = {
        "id": uid, "prompt": prompt, "negative_prompt": negative_prompt,
        "width": width, "height": height, "steps": steps, "seed": seed,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_storyboard(
    scenes: list[str],
    width: int = 1024,
    height: int = 576,
    steps: int = 4,
) -> dict:
    """Generate a storyboard: one keyframe image per scene description.

    Args:
        scenes: List of scene description strings.

    Returns dict with list of generated frame paths.
    """
    frames = []
    for i, scene in enumerate(scenes):
        result = generate_image(
            prompt=scene, width=width, height=height, steps=steps,
        )
        result["scene_index"] = i
        result["scene_description"] = scene
        frames.append(result)
    return {"success": True, "frame_count": len(frames), "frames": frames}


def image_variations(
    prompt: str,
    count: int = 4,
    width: int = 1024,
    height: int = 1024,
    steps: int = 4,
) -> dict:
    """Generate multiple variations of the same prompt with different seeds."""
    import random
    variations = []
    for _ in range(count):
        seed = random.randint(0, 2**32 - 1)
        result = generate_image(
            prompt=prompt, width=width, height=height,
            steps=steps, seed=seed,
        )
        variations.append(result)
    return {"success": True, "count": len(variations), "variations": variations}


# ── Tool catalogue ────────────────────────────────────────────────────────────

IMAGE_TOOL_CATALOG: dict[str, dict] = {
    "generate_image": {
        "name": "Generate Image",
        "description": "Generate an image from a text prompt using FLUX",
        "fn": generate_image,
    },
    "generate_storyboard": {
        "name": "Generate Storyboard",
        "description": "Generate keyframe images for a sequence of scenes",
        "fn": generate_storyboard,
    },
    "image_variations": {
        "name": "Image Variations",
        "description": "Generate multiple variations of the same prompt",
        "fn": image_variations,
    },
}
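Both this module and the music/video modules use the same module-level lazy-singleton pattern for their heavy model objects. A dependency-free sketch of that pattern (the `loader` indirection is an illustration device, not the module's actual signature):

```python
# Same lazy-singleton shape as _get_pipeline above, with the expensive load
# injected so the pattern can be shown without torch/diffusers installed.
_pipeline = None

def get_pipeline(loader):
    """Load once on first use, then reuse the cached object."""
    global _pipeline
    if _pipeline is None:
        _pipeline = loader()
    return _pipeline

calls = []
def expensive_loader():
    calls.append(1)          # stands in for multi-GB model loading
    return object()

a = get_pipeline(expensive_loader)
b = get_pipeline(expensive_loader)
print(len(calls), a is b)  # → 1 True
```

The payoff is that importing the tools module is instant; the GPU cost is paid only when a persona first calls a generation tool.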
src/tools/music_tools.py (new file, 210 lines)
@@ -0,0 +1,210 @@
"""Music generation tools — Lyra persona.

Uses ACE-Step 1.5 for full song generation with vocals, instrumentals,
and lyrics. Falls back gracefully when the ``creative`` extra is not
installed.

All heavy imports are lazy — the module loads instantly without GPU.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded model singleton
_model = None


def _get_model():
    """Lazy-load the ACE-Step music generation model."""
    global _model
    if _model is not None:
        return _model

    try:
        from ace_step import ACEStep
    except ImportError:
        raise ImportError(
            "ACE-Step not installed. Run: pip install 'timmy-time[creative]'"
        )

    from config import settings
    model_name = getattr(settings, "ace_step_model", "ace-step/ACE-Step-v1.5")

    logger.info("Loading music model %s …", model_name)
    _model = ACEStep(model_name)
    logger.info("Music model loaded.")
    return _model


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "music_output_dir", "data/music"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(audio_path: Path, meta: dict) -> Path:
    meta_path = audio_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


# ── Supported genres ──────────────────────────────────────────────────────────

GENRES = [
    "pop", "rock", "hip-hop", "r&b", "jazz", "blues", "country",
    "electronic", "classical", "folk", "reggae", "metal", "punk",
    "soul", "funk", "latin", "ambient", "lo-fi", "cinematic",
]


# ── Public tools ──────────────────────────────────────────────────────────────
def generate_song(
    lyrics: str,
    genre: str = "pop",
    duration: int = 120,
    language: str = "en",
    title: Optional[str] = None,
) -> dict:
    """Generate a full song with vocals and instrumentals from lyrics.

    Args:
        lyrics: Song lyrics text.
        genre: Musical genre / style tag.
        duration: Target duration in seconds (30–240).
        language: ISO language code (19 languages supported).
        title: Optional song title for metadata.

    Returns dict with ``path``, ``duration``, ``genre``, etc.
    """
    model = _get_model()
    duration = max(30, min(240, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    logger.info("Generating song: genre=%s duration=%ds …", genre, duration)
    audio = model.generate(
        lyrics=lyrics,
        genre=genre,
        duration=duration,
        language=language,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "title": title or f"Untitled ({genre})",
        "lyrics": lyrics, "genre": genre,
        "duration": duration, "language": language,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_instrumental(
    prompt: str,
    genre: str = "cinematic",
    duration: int = 60,
) -> dict:
    """Generate an instrumental track from a text prompt (no vocals).

    Args:
        prompt: Description of the desired music.
        genre: Musical genre / style tag.
        duration: Target duration in seconds (15–180).
    """
    model = _get_model()
    duration = max(15, min(180, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    logger.info("Generating instrumental: genre=%s …", genre)
    audio = model.generate(
        lyrics="",
        genre=genre,
        duration=duration,
        prompt=prompt,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "prompt": prompt, "genre": genre,
        "duration": duration, "instrumental": True,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_vocals(
    lyrics: str,
    style: str = "pop",
    duration: int = 60,
    language: str = "en",
) -> dict:
    """Generate a vocal-only track from lyrics.

    Useful for layering over custom instrumentals.
    """
    model = _get_model()
    duration = max(15, min(180, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    audio = model.generate(
        lyrics=lyrics,
        genre=f"{style} acapella vocals",
        duration=duration,
        language=language,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "lyrics": lyrics, "style": style,
        "duration": duration, "vocals_only": True,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def list_genres() -> dict:
    """Return the list of supported genre / style tags."""
    return {"success": True, "genres": GENRES}


# ── Tool catalogue ────────────────────────────────────────────────────────────

MUSIC_TOOL_CATALOG: dict[str, dict] = {
    "generate_song": {
        "name": "Generate Song",
        "description": "Generate a full song with vocals + instrumentals from lyrics",
        "fn": generate_song,
    },
    "generate_instrumental": {
        "name": "Generate Instrumental",
        "description": "Generate an instrumental track from a text prompt",
        "fn": generate_instrumental,
    },
    "generate_vocals": {
        "name": "Generate Vocals",
        "description": "Generate a vocal-only track from lyrics",
        "fn": generate_vocals,
    },
    "list_genres": {
        "name": "List Genres",
        "description": "List supported music genre / style tags",
        "fn": list_genres,
    },
}
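Each generation function above clamps its duration with `max(lo, min(hi, duration))` and trusts the genre string as-is. A small sketch combining both checks into one request-normalising helper (this helper and its fallback-to-"pop" behaviour are assumptions, not part of the module):

```python
GENRES = ["pop", "rock", "cinematic"]  # excerpt of the module's GENRES list

# Hypothetical input normaliser: clamp duration like generate_song does,
# and fall back to "pop" for genre tags outside the supported list.
def normalise_request(genre: str, duration: int,
                      lo: int = 30, hi: int = 240) -> tuple[str, int]:
    genre = genre.lower() if genre.lower() in GENRES else "pop"
    return genre, max(lo, min(hi, duration))

print(normalise_request("Cinematic", 600))  # → ('cinematic', 240)
```

The clamp bounds differ per tool (30–240 s for full songs, 15–180 s for instrumentals and vocals), so `lo`/`hi` are parameters rather than constants.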
src/tools/video_tools.py (new file, 206 lines)
@@ -0,0 +1,206 @@
"""Video generation tools — Reel persona.

Uses Wan 2.1 (via HuggingFace diffusers) for text-to-video and
image-to-video generation. Heavy imports are lazy.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded pipeline singletons
_t2v_pipeline = None
_i2v_pipeline = None


def _get_t2v_pipeline():
    """Lazy-load the text-to-video pipeline (Wan 2.1)."""
    global _t2v_pipeline
    if _t2v_pipeline is not None:
        return _t2v_pipeline

    try:
        import torch
        from diffusers import DiffusionPipeline
    except ImportError:
        raise ImportError(
            "Creative dependencies not installed. "
            "Run: pip install 'timmy-time[creative]'"
        )

    from config import settings
    model_id = getattr(settings, "wan_model_id", "Wan-AI/Wan2.1-T2V-1.3B")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    logger.info("Loading video model %s on %s …", model_id, device)
    _t2v_pipeline = DiffusionPipeline.from_pretrained(
        model_id, torch_dtype=dtype,
    ).to(device)
    logger.info("Video model loaded.")
    return _t2v_pipeline


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "video_output_dir", "data/video"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(video_path: Path, meta: dict) -> Path:
    meta_path = video_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


def _export_frames_to_mp4(frames, out_path: Path, fps: int = 24) -> None:
    """Export a list of PIL Image frames to an MP4 file using moviepy."""
    import numpy as np
    from moviepy import ImageSequenceClip

    frame_arrays = [np.array(f) for f in frames]
    clip = ImageSequenceClip(frame_arrays, fps=fps)
    clip.write_videofile(
        str(out_path), codec="libx264", audio=False, logger=None,
    )


# ── Resolution presets ────────────────────────────────────────────────────────

RESOLUTION_PRESETS = {
    "480p": (854, 480),
    "720p": (1280, 720),
}

VIDEO_STYLES = [
    "cinematic", "anime", "documentary", "abstract",
    "timelapse", "slow-motion", "music-video", "vlog",
]


# ── Public tools ──────────────────────────────────────────────────────────────
def generate_video_clip(
|
||||
prompt: str,
|
||||
duration: int = 5,
|
||||
resolution: str = "480p",
|
||||
fps: int = 24,
|
||||
seed: Optional[int] = None,
|
||||
) -> dict:
|
||||
"""Generate a short video clip from a text prompt.
|
||||
|
||||
Args:
|
||||
prompt: Text description of the desired video.
|
||||
duration: Target duration in seconds (2–10).
|
||||
resolution: "480p" or "720p".
|
||||
fps: Frames per second.
|
||||
seed: Optional seed for reproducibility.
|
||||
|
||||
Returns dict with ``path``, ``duration``, ``resolution``.
|
||||
"""
|
||||
pipe = _get_t2v_pipeline()
|
||||
import torch
|
||||
|
||||
duration = max(2, min(10, duration))
|
||||
w, h = RESOLUTION_PRESETS.get(resolution, RESOLUTION_PRESETS["480p"])
|
||||
num_frames = duration * fps
|
||||
|
||||
generator = torch.Generator(device=pipe.device)
|
||||
if seed is not None:
|
||||
generator.manual_seed(seed)
|
||||
|
||||
logger.info("Generating %ds video at %s …", duration, resolution)
|
||||
result = pipe(
|
||||
prompt=prompt,
|
||||
num_frames=num_frames,
|
||||
width=w,
|
||||
height=h,
|
||||
generator=generator,
|
||||
)
|
||||
frames = result.frames[0] if hasattr(result, "frames") else result.images
|
||||
|
||||
uid = uuid.uuid4().hex[:12]
|
||||
out_path = _output_dir() / f"{uid}.mp4"
|
||||
_export_frames_to_mp4(frames, out_path, fps=fps)
|
||||
|
||||
meta = {
|
||||
"id": uid, "prompt": prompt, "duration": duration,
|
||||
"resolution": resolution, "fps": fps, "seed": seed,
|
||||
}
|
||||
_save_metadata(out_path, meta)
|
||||
|
||||
return {"success": True, "path": str(out_path), **meta}
|
||||
|
||||
|
||||
def image_to_video(
|
||||
image_path: str,
|
||||
prompt: str = "",
|
||||
duration: int = 5,
|
||||
fps: int = 24,
|
||||
) -> dict:
|
||||
"""Animate a still image into a video clip.
|
||||
|
||||
Args:
|
||||
image_path: Path to the source image.
|
||||
prompt: Optional motion / style guidance.
|
||||
duration: Target duration in seconds (2–10).
|
||||
"""
|
||||
pipe = _get_t2v_pipeline()
|
||||
from PIL import Image
|
||||
|
||||
duration = max(2, min(10, duration))
|
||||
img = Image.open(image_path).convert("RGB")
|
||||
num_frames = duration * fps
|
||||
|
||||
logger.info("Animating image %s → %ds video …", image_path, duration)
|
||||
result = pipe(
|
||||
prompt=prompt or "animate this image with natural motion",
|
||||
image=img,
|
||||
num_frames=num_frames,
|
||||
)
|
||||
frames = result.frames[0] if hasattr(result, "frames") else result.images
|
||||
|
||||
uid = uuid.uuid4().hex[:12]
|
||||
out_path = _output_dir() / f"{uid}.mp4"
|
||||
_export_frames_to_mp4(frames, out_path, fps=fps)
|
||||
|
||||
meta = {
|
||||
"id": uid, "source_image": image_path,
|
||||
"prompt": prompt, "duration": duration, "fps": fps,
|
||||
}
|
||||
_save_metadata(out_path, meta)
|
||||
|
||||
return {"success": True, "path": str(out_path), **meta}
|
||||
|
||||
|
||||
def list_video_styles() -> dict:
|
||||
"""Return supported video style presets."""
|
||||
return {"success": True, "styles": VIDEO_STYLES, "resolutions": list(RESOLUTION_PRESETS.keys())}
|
||||
|
||||
|
||||
# ── Tool catalogue ────────────────────────────────────────────────────────────
|
||||
|
||||
VIDEO_TOOL_CATALOG: dict[str, dict] = {
|
||||
"generate_video_clip": {
|
||||
"name": "Generate Video Clip",
|
||||
"description": "Generate a short video clip from a text prompt using Wan 2.1",
|
||||
"fn": generate_video_clip,
|
||||
},
|
||||
"image_to_video": {
|
||||
"name": "Image to Video",
|
||||
"description": "Animate a still image into a video clip",
|
||||
"fn": image_to_video,
|
||||
},
|
||||
"list_video_styles": {
|
||||
"name": "List Video Styles",
|
||||
"description": "List supported video style presets and resolutions",
|
||||
"fn": list_video_styles,
|
||||
},
|
||||
}
|
||||
0
tests/fixtures/__init__.py
vendored
Normal file
178
tests/fixtures/media.py
vendored
Normal file
@@ -0,0 +1,178 @@
"""Real media file fixtures for integration tests.
|
||||
|
||||
Generates actual PNG images, WAV audio files, and MP4 video clips
|
||||
using numpy, Pillow, and MoviePy — no AI models required.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import wave
|
||||
from pathlib import Path
|
||||
|
||||
import numpy as np
|
||||
from PIL import Image, ImageDraw
|
||||
|
||||
|
||||
# ── Color palettes for visual variety ─────────────────────────────────────────
|
||||
|
||||
SCENE_COLORS = [
|
||||
(30, 60, 120), # dark blue — "night sky"
|
||||
(200, 100, 30), # warm orange — "sunrise"
|
||||
(50, 150, 50), # forest green — "mountain forest"
|
||||
(20, 120, 180), # teal blue — "river"
|
||||
(180, 60, 60), # crimson — "sunset"
|
||||
(40, 40, 80), # deep purple — "twilight"
|
||||
]
|
||||
|
||||
|
||||
def make_storyboard_frame(
|
||||
path: Path,
|
||||
label: str,
|
||||
color: tuple[int, int, int] = (60, 60, 60),
|
||||
width: int = 320,
|
||||
height: int = 180,
|
||||
) -> Path:
|
||||
"""Create a real PNG image with a colored background and text label.
|
||||
|
||||
Returns the path to the written file.
|
||||
"""
|
||||
img = Image.new("RGB", (width, height), color=color)
|
||||
draw = ImageDraw.Draw(img)
|
||||
|
||||
# Draw label text in white, centered
|
||||
bbox = draw.textbbox((0, 0), label)
|
||||
tw, th = bbox[2] - bbox[0], bbox[3] - bbox[1]
|
||||
x = (width - tw) // 2
|
||||
y = (height - th) // 2
|
||||
draw.text((x, y), label, fill=(255, 255, 255))
|
||||
|
||||
# Add a border
|
||||
draw.rectangle([2, 2, width - 3, height - 3], outline=(255, 255, 255), width=2)
|
||||
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
img.save(path)
|
||||
return path
|
||||
|
||||
|
||||
def make_storyboard(
|
||||
output_dir: Path,
|
||||
scene_labels: list[str],
|
||||
width: int = 320,
|
||||
height: int = 180,
|
||||
) -> list[Path]:
|
||||
"""Generate a full storyboard — one PNG per scene."""
|
||||
frames = []
|
||||
for i, label in enumerate(scene_labels):
|
||||
color = SCENE_COLORS[i % len(SCENE_COLORS)]
|
||||
path = output_dir / f"frame_{i:03d}.png"
|
||||
make_storyboard_frame(path, label, color=color, width=width, height=height)
|
||||
frames.append(path)
|
||||
return frames
|
||||
|
||||
|
||||
def make_audio_track(
|
||||
path: Path,
|
||||
duration_seconds: float = 10.0,
|
||||
sample_rate: int = 44100,
|
||||
frequency: float = 440.0,
|
||||
fade_in: float = 0.5,
|
||||
fade_out: float = 0.5,
|
||||
) -> Path:
|
||||
"""Create a real WAV audio file — a sine wave tone with fade in/out.
|
||||
|
||||
Good enough to verify audio overlay, mixing, and codec encoding.
|
||||
"""
|
||||
n_samples = int(sample_rate * duration_seconds)
|
||||
t = np.linspace(0, duration_seconds, n_samples, endpoint=False)
|
||||
|
||||
# Generate a sine wave with slight frequency variation for realism
|
||||
signal = np.sin(2 * np.pi * frequency * t)
|
||||
|
||||
# Add a second harmonic for richness
|
||||
signal += 0.3 * np.sin(2 * np.pi * frequency * 2 * t)
|
||||
|
||||
# Fade in/out
|
||||
fade_in_samples = int(sample_rate * fade_in)
|
||||
fade_out_samples = int(sample_rate * fade_out)
|
||||
if fade_in_samples > 0:
|
||||
signal[:fade_in_samples] *= np.linspace(0, 1, fade_in_samples)
|
||||
if fade_out_samples > 0:
|
||||
signal[-fade_out_samples:] *= np.linspace(1, 0, fade_out_samples)
|
||||
|
||||
# Normalize and convert to 16-bit PCM
|
||||
signal = (signal / np.max(np.abs(signal)) * 32767 * 0.8).astype(np.int16)
|
||||
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with wave.open(str(path), "w") as wf:
|
||||
wf.setnchannels(1)
|
||||
wf.setsampwidth(2)
|
||||
wf.setframerate(sample_rate)
|
||||
wf.writeframes(signal.tobytes())
|
||||
|
||||
return path
|
||||
|
||||
|
||||
def make_video_clip(
|
||||
path: Path,
|
||||
duration_seconds: float = 3.0,
|
||||
fps: int = 12,
|
||||
width: int = 320,
|
||||
height: int = 180,
|
||||
color_start: tuple[int, int, int] = (30, 30, 80),
|
||||
color_end: tuple[int, int, int] = (80, 30, 30),
|
||||
label: str = "",
|
||||
) -> Path:
|
||||
"""Create a real MP4 video clip with a color gradient animation.
|
||||
|
||||
Frames transition smoothly from color_start to color_end,
|
||||
producing a visible animation that's easy to visually verify.
|
||||
"""
|
||||
from moviepy import ImageSequenceClip
|
||||
|
||||
n_frames = int(duration_seconds * fps)
|
||||
frames = []
|
||||
|
||||
for i in range(n_frames):
|
||||
t = i / max(1, n_frames - 1)
|
||||
r = int(color_start[0] + (color_end[0] - color_start[0]) * t)
|
||||
g = int(color_start[1] + (color_end[1] - color_start[1]) * t)
|
||||
b = int(color_start[2] + (color_end[2] - color_start[2]) * t)
|
||||
|
||||
img = Image.new("RGB", (width, height), color=(r, g, b))
|
||||
|
||||
if label:
|
||||
draw = ImageDraw.Draw(img)
|
||||
draw.text((10, 10), label, fill=(255, 255, 255))
|
||||
# Frame counter
|
||||
draw.text((10, height - 20), f"f{i}/{n_frames}", fill=(200, 200, 200))
|
||||
|
||||
frames.append(np.array(img))
|
||||
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
clip = ImageSequenceClip(frames, fps=fps)
|
||||
clip.write_videofile(str(path), codec="libx264", audio=False, logger=None)
|
||||
|
||||
return path
|
||||
|
||||
|
||||
def make_scene_clips(
|
||||
output_dir: Path,
|
||||
scene_labels: list[str],
|
||||
duration_per_clip: float = 3.0,
|
||||
fps: int = 12,
|
||||
width: int = 320,
|
||||
height: int = 180,
|
||||
) -> list[Path]:
|
||||
"""Generate one video clip per scene, each with a distinct color animation."""
|
||||
clips = []
|
||||
for i, label in enumerate(scene_labels):
|
||||
c1 = SCENE_COLORS[i % len(SCENE_COLORS)]
|
||||
c2 = SCENE_COLORS[(i + 1) % len(SCENE_COLORS)]
|
||||
path = output_dir / f"clip_{i:03d}.mp4"
|
||||
make_video_clip(
|
||||
path, duration_seconds=duration_per_clip, fps=fps,
|
||||
width=width, height=height,
|
||||
color_start=c1, color_end=c2, label=label,
|
||||
)
|
||||
clips.append(path)
|
||||
return clips
|
||||
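`make_audio_track` boils down to writing mono 16-bit PCM through the stdlib `wave` module. A condensed, dependency-free sketch of that path (`write_tone` is an illustrative name, not a function from the fixtures module):

```python
# Condensed sketch of make_audio_track's WAV-writing path: a mono
# 16-bit PCM sine tone written with the stdlib wave module.
import math
import struct
import wave


def write_tone(path: str, seconds: float = 0.1, rate: int = 8000,
               freq: float = 440.0) -> int:
    """Write a sine tone; return the number of frames written."""
    n = int(rate * seconds)
    samples = [
        int(0.8 * 32767 * math.sin(2 * math.pi * freq * i / rate))
        for i in range(n)
    ]
    with wave.open(path, "w") as wf:
        wf.setnchannels(1)   # mono
        wf.setsampwidth(2)   # 2 bytes per sample = 16-bit
        wf.setframerate(rate)
        wf.writeframes(struct.pack(f"<{n}h", *samples))
    return n
```

The 0.8 factor leaves headroom below full scale, mirroring the fixture's `* 0.8` normalization step.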
69
tests/test_assembler.py
Normal file
@@ -0,0 +1,69 @@
"""Tests for creative.assembler — Video assembly engine.
|
||||
|
||||
MoviePy is mocked for CI; these tests verify the interface contracts.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
from creative.assembler import (
|
||||
ASSEMBLER_TOOL_CATALOG,
|
||||
stitch_clips,
|
||||
overlay_audio,
|
||||
add_title_card,
|
||||
add_subtitles,
|
||||
export_final,
|
||||
_MOVIEPY_AVAILABLE,
|
||||
)
|
||||
|
||||
|
||||
class TestAssemblerToolCatalog:
|
||||
def test_catalog_has_all_tools(self):
|
||||
expected = {
|
||||
"stitch_clips", "overlay_audio", "add_title_card",
|
||||
"add_subtitles", "export_final",
|
||||
}
|
||||
assert expected == set(ASSEMBLER_TOOL_CATALOG.keys())
|
||||
|
||||
def test_catalog_entries_callable(self):
|
||||
for tool_id, info in ASSEMBLER_TOOL_CATALOG.items():
|
||||
assert callable(info["fn"])
|
||||
assert "name" in info
|
||||
assert "description" in info
|
||||
|
||||
|
||||
class TestStitchClipsInterface:
|
||||
@pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
|
||||
def test_raises_on_empty_clips(self):
|
||||
"""Stitch with no clips should fail gracefully."""
|
||||
# MoviePy would fail on empty list
|
||||
with pytest.raises(Exception):
|
||||
stitch_clips([])
|
||||
|
||||
|
||||
class TestOverlayAudioInterface:
|
||||
@pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
|
||||
def test_overlay_requires_valid_paths(self):
|
||||
with pytest.raises(Exception):
|
||||
overlay_audio("/nonexistent/video.mp4", "/nonexistent/audio.wav")
|
||||
|
||||
|
||||
class TestAddTitleCardInterface:
|
||||
@pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
|
||||
def test_add_title_requires_valid_video(self):
|
||||
with pytest.raises(Exception):
|
||||
add_title_card("/nonexistent/video.mp4", "Title")
|
||||
|
||||
|
||||
class TestAddSubtitlesInterface:
|
||||
@pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
|
||||
def test_requires_valid_video(self):
|
||||
with pytest.raises(Exception):
|
||||
add_subtitles("/nonexistent.mp4", [{"text": "Hi", "start": 0, "end": 1}])
|
||||
|
||||
|
||||
class TestExportFinalInterface:
|
||||
@pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
|
||||
def test_requires_valid_video(self):
|
||||
with pytest.raises(Exception):
|
||||
export_final("/nonexistent/video.mp4")
|
||||
275
tests/test_assembler_integration.py
Normal file
@@ -0,0 +1,275 @@
"""Integration tests for creative.assembler — real files, no mocks.
|
||||
|
||||
Every test creates actual media files (PNG, WAV, MP4), runs them through
|
||||
the assembler functions, and inspects the output with MoviePy / Pillow.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from pathlib import Path
|
||||
|
||||
from moviepy import VideoFileClip, AudioFileClip
|
||||
|
||||
from creative.assembler import (
|
||||
stitch_clips,
|
||||
overlay_audio,
|
||||
add_title_card,
|
||||
add_subtitles,
|
||||
export_final,
|
||||
)
|
||||
from fixtures.media import (
|
||||
make_audio_track,
|
||||
make_video_clip,
|
||||
make_scene_clips,
|
||||
)
|
||||
|
||||
|
||||
# ── Fixtures ──────────────────────────────────────────────────────────────────
|
||||
|
||||
@pytest.fixture
|
||||
def media_dir(tmp_path):
|
||||
"""Isolated directory for generated media."""
|
||||
d = tmp_path / "media"
|
||||
d.mkdir()
|
||||
return d
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def two_clips(media_dir):
|
||||
"""Two real 3-second MP4 clips."""
|
||||
return make_scene_clips(
|
||||
media_dir, ["Scene A", "Scene B"],
|
||||
duration_per_clip=3.0, fps=12, width=320, height=180,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def five_clips(media_dir):
|
||||
"""Five real 2-second MP4 clips — enough for a short video."""
|
||||
return make_scene_clips(
|
||||
media_dir,
|
||||
["Dawn", "Sunrise", "Mountains", "River", "Sunset"],
|
||||
duration_per_clip=2.0, fps=12, width=320, height=180,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def audio_10s(media_dir):
|
||||
"""A real 10-second WAV audio track."""
|
||||
return make_audio_track(media_dir / "track.wav", duration_seconds=10.0)
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def audio_30s(media_dir):
|
||||
"""A real 30-second WAV audio track."""
|
||||
return make_audio_track(
|
||||
media_dir / "track_long.wav",
|
||||
duration_seconds=30.0,
|
||||
frequency=330.0,
|
||||
)
|
||||
|
||||
|
||||
# ── Stitch clips ─────────────────────────────────────────────────────────────
|
||||
|
||||
class TestStitchClipsReal:
|
||||
def test_stitch_two_clips_no_transition(self, two_clips, tmp_path):
|
||||
"""Stitching 2 x 3s clips → ~6s video."""
|
||||
out = tmp_path / "stitched.mp4"
|
||||
result = stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=0,
|
||||
output_path=str(out),
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
assert result["clip_count"] == 2
|
||||
assert out.exists()
|
||||
assert out.stat().st_size > 1000 # non-trivial file
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
assert video.duration == pytest.approx(6.0, abs=0.5)
|
||||
assert video.size == [320, 180]
|
||||
video.close()
|
||||
|
||||
def test_stitch_with_crossfade(self, two_clips, tmp_path):
|
||||
"""Cross-fade transition shortens total duration."""
|
||||
out = tmp_path / "crossfade.mp4"
|
||||
result = stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=1.0,
|
||||
output_path=str(out),
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
video = VideoFileClip(str(out))
|
||||
# 2 x 3s - 1s overlap = ~5s
|
||||
assert video.duration == pytest.approx(5.0, abs=1.0)
|
||||
video.close()
|
||||
|
||||
def test_stitch_five_clips(self, five_clips, tmp_path):
|
||||
"""Stitch 5 clips → continuous video with correct frame count."""
|
||||
out = tmp_path / "five.mp4"
|
||||
result = stitch_clips(
|
||||
[str(p) for p in five_clips],
|
||||
transition_duration=0.5,
|
||||
output_path=str(out),
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
assert result["clip_count"] == 5
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
# 5 x 2s - 4 * 0.5s overlap = 8s
|
||||
assert video.duration >= 7.0
|
||||
assert video.size == [320, 180]
|
||||
video.close()
|
||||
|
||||
|
||||
# ── Audio overlay ─────────────────────────────────────────────────────────────
|
||||
|
||||
class TestOverlayAudioReal:
|
||||
def test_overlay_adds_audio_stream(self, two_clips, audio_10s, tmp_path):
|
||||
"""Overlaying audio onto a silent video produces audible output."""
|
||||
# First stitch clips
|
||||
stitched = tmp_path / "silent.mp4"
|
||||
stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=0,
|
||||
output_path=str(stitched),
|
||||
)
|
||||
|
||||
out = tmp_path / "with_audio.mp4"
|
||||
result = overlay_audio(str(stitched), str(audio_10s), output_path=str(out))
|
||||
|
||||
assert result["success"]
|
||||
assert out.exists()
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
assert video.audio is not None # has audio stream
|
||||
assert video.duration == pytest.approx(6.0, abs=0.5)
|
||||
video.close()
|
||||
|
||||
def test_audio_trimmed_to_video_length(self, two_clips, audio_30s, tmp_path):
|
||||
"""30s audio track is trimmed to match ~6s video duration."""
|
||||
stitched = tmp_path / "short.mp4"
|
||||
stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=0,
|
||||
output_path=str(stitched),
|
||||
)
|
||||
|
||||
out = tmp_path / "trimmed.mp4"
|
||||
result = overlay_audio(str(stitched), str(audio_30s), output_path=str(out))
|
||||
|
||||
assert result["success"]
|
||||
video = VideoFileClip(str(out))
|
||||
# Audio should be trimmed to video length, not 30s
|
||||
assert video.duration < 10.0
|
||||
video.close()
|
||||
|
||||
|
||||
# ── Title cards ───────────────────────────────────────────────────────────────
|
||||
|
||||
class TestAddTitleCardReal:
|
||||
def test_prepend_title_card(self, two_clips, tmp_path):
|
||||
"""Title card at start adds to total duration."""
|
||||
stitched = tmp_path / "base.mp4"
|
||||
stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=0,
|
||||
output_path=str(stitched),
|
||||
)
|
||||
base_video = VideoFileClip(str(stitched))
|
||||
base_duration = base_video.duration
|
||||
base_video.close()
|
||||
|
||||
out = tmp_path / "titled.mp4"
|
||||
result = add_title_card(
|
||||
str(stitched),
|
||||
title="My Music Video",
|
||||
duration=3.0,
|
||||
position="start",
|
||||
output_path=str(out),
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
assert result["title"] == "My Music Video"
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
# Title card (3s) + base video (~6s) = ~9s
|
||||
assert video.duration == pytest.approx(base_duration + 3.0, abs=1.0)
|
||||
video.close()
|
||||
|
||||
def test_append_credits(self, two_clips, tmp_path):
|
||||
"""Credits card at end adds to total duration."""
|
||||
clip_path = str(two_clips[0]) # single 3s clip
|
||||
|
||||
out = tmp_path / "credits.mp4"
|
||||
result = add_title_card(
|
||||
clip_path,
|
||||
title="THE END",
|
||||
duration=2.0,
|
||||
position="end",
|
||||
output_path=str(out),
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
video = VideoFileClip(str(out))
|
||||
# 3s clip + 2s credits = ~5s
|
||||
assert video.duration == pytest.approx(5.0, abs=1.0)
|
||||
video.close()
|
||||
|
||||
|
||||
# ── Subtitles ─────────────────────────────────────────────────────────────────
|
||||
|
||||
class TestAddSubtitlesReal:
|
||||
def test_burn_captions(self, two_clips, tmp_path):
|
||||
"""Subtitles are burned onto the video (duration unchanged)."""
|
||||
stitched = tmp_path / "base.mp4"
|
||||
stitch_clips(
|
||||
[str(p) for p in two_clips],
|
||||
transition_duration=0,
|
||||
output_path=str(stitched),
|
||||
)
|
||||
|
||||
captions = [
|
||||
{"text": "Welcome to the show", "start": 0.0, "end": 2.0},
|
||||
{"text": "Here we go!", "start": 2.5, "end": 4.5},
|
||||
{"text": "Finale", "start": 5.0, "end": 6.0},
|
||||
]
|
||||
|
||||
out = tmp_path / "subtitled.mp4"
|
||||
result = add_subtitles(str(stitched), captions, output_path=str(out))
|
||||
|
||||
assert result["success"]
|
||||
assert result["caption_count"] == 3
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
# Duration should be unchanged
|
||||
assert video.duration == pytest.approx(6.0, abs=0.5)
|
||||
assert video.size == [320, 180]
|
||||
video.close()
|
||||
|
||||
|
||||
# ── Export final ──────────────────────────────────────────────────────────────
|
||||
|
||||
class TestExportFinalReal:
|
||||
def test_reencodes_video(self, two_clips, tmp_path):
|
||||
"""Final export produces a valid re-encoded file."""
|
||||
clip_path = str(two_clips[0])
|
||||
|
||||
out = tmp_path / "final.mp4"
|
||||
result = export_final(
|
||||
clip_path,
|
||||
output_path=str(out),
|
||||
codec="libx264",
|
||||
bitrate="2000k",
|
||||
)
|
||||
|
||||
assert result["success"]
|
||||
assert result["codec"] == "libx264"
|
||||
assert out.exists()
|
||||
assert out.stat().st_size > 500
|
||||
|
||||
video = VideoFileClip(str(out))
|
||||
assert video.duration == pytest.approx(3.0, abs=0.5)
|
||||
video.close()
|
||||
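The duration assertions in these tests all follow one formula: n clips of length d joined with pairwise crossfades of length t yield n*d - (n-1)*t seconds. As a standalone sketch (`stitched_duration` is a hypothetical helper, not part of the test suite):

```python
# The crossfade arithmetic behind the duration assertions above.
def stitched_duration(n_clips: int, clip_seconds: float,
                      transition_seconds: float) -> float:
    """Expected length when consecutive clips overlap by the transition."""
    if n_clips == 0:
        return 0.0
    # Each of the (n_clips - 1) joins consumes one transition's worth of time
    return n_clips * clip_seconds - (n_clips - 1) * transition_seconds
```

This reproduces the comments in the tests: 2 x 3s with a 1s crossfade gives 5s, and 5 x 2s with 0.5s crossfades gives 8s.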
190
tests/test_creative_director.py
Normal file
@@ -0,0 +1,190 @@
"""Tests for creative.director — Creative Director pipeline.
|
||||
|
||||
Tests project management, pipeline orchestration, and tool catalogue.
|
||||
All AI model calls are mocked.
|
||||
"""
|
||||
|
||||
import pytest
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
from creative.director import (
|
||||
create_project,
|
||||
get_project,
|
||||
list_projects,
|
||||
run_storyboard,
|
||||
run_music,
|
||||
run_video_generation,
|
||||
run_assembly,
|
||||
run_full_pipeline,
|
||||
CreativeProject,
|
||||
DIRECTOR_TOOL_CATALOG,
|
||||
_projects,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def clear_projects():
|
||||
"""Clear project store between tests."""
|
||||
_projects.clear()
|
||||
yield
|
||||
_projects.clear()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def sample_project(tmp_path):
|
||||
"""Create a sample project with scenes."""
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
result = create_project(
|
||||
title="Test Video",
|
||||
description="A test creative project",
|
||||
scenes=[
|
||||
{"description": "A sunrise over mountains"},
|
||||
{"description": "A river flowing through a valley"},
|
||||
{"description": "A sunset over the ocean"},
|
||||
],
|
||||
lyrics="La la la, the sun rises high",
|
||||
)
|
||||
return result["project"]["id"]
|
||||
|
||||
|
||||
class TestCreateProject:
|
||||
def test_creates_project(self, tmp_path):
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
result = create_project("My Video", "A cool video")
|
||||
assert result["success"]
|
||||
assert result["project"]["title"] == "My Video"
|
||||
assert result["project"]["status"] == "planning"
|
||||
|
||||
def test_project_has_id(self, tmp_path):
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
result = create_project("Test", "Test")
|
||||
assert len(result["project"]["id"]) == 12
|
||||
|
||||
def test_project_with_scenes(self, tmp_path):
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
result = create_project(
|
||||
"Scenes", "With scenes",
|
||||
scenes=[{"description": "Scene 1"}, {"description": "Scene 2"}],
|
||||
)
|
||||
assert result["project"]["scene_count"] == 2
|
||||
|
||||
|
||||
class TestGetProject:
|
||||
def test_get_existing(self, sample_project):
|
||||
result = get_project(sample_project)
|
||||
assert result is not None
|
||||
assert result["title"] == "Test Video"
|
||||
|
||||
def test_get_nonexistent(self):
|
||||
assert get_project("bogus") is None
|
||||
|
||||
|
||||
class TestListProjects:
|
||||
def test_empty(self):
|
||||
assert list_projects() == []
|
||||
|
||||
def test_with_projects(self, sample_project, tmp_path):
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
create_project("Second", "desc")
|
||||
assert len(list_projects()) == 2
|
||||
|
||||
|
||||
class TestRunStoryboard:
|
||||
def test_fails_without_project(self):
|
||||
result = run_storyboard("bogus")
|
||||
assert not result["success"]
|
||||
assert "not found" in result["error"]
|
||||
|
||||
def test_fails_without_scenes(self, tmp_path):
|
||||
with patch("creative.director._project_dir", return_value=tmp_path):
|
||||
result = create_project("Empty", "No scenes")
|
||||
pid = result["project"]["id"]
|
||||
result = run_storyboard(pid)
|
||||
assert not result["success"]
|
||||
assert "No scenes" in result["error"]
|
||||
|
||||
def test_generates_frames(self, sample_project, tmp_path):
|
||||
mock_result = {
|
||||
"success": True,
|
||||
"frame_count": 3,
|
||||
"frames": [
|
||||
{"path": "/fake/1.png", "scene_index": 0, "prompt": "sunrise"},
|
||||
{"path": "/fake/2.png", "scene_index": 1, "prompt": "river"},
|
||||
{"path": "/fake/3.png", "scene_index": 2, "prompt": "sunset"},
|
||||
],
|
||||
}
|
||||
with patch("tools.image_tools.generate_storyboard", return_value=mock_result):
|
||||
with patch("creative.director._save_project"):
|
||||
result = run_storyboard(sample_project)
|
||||
assert result["success"]
|
||||
assert result["frame_count"] == 3
|
||||
|
||||
|
||||
class TestRunMusic:
|
||||
def test_fails_without_project(self):
|
||||
result = run_music("bogus")
|
||||
assert not result["success"]
|
||||
|
||||
def test_generates_track(self, sample_project):
|
||||
mock_result = {
|
||||
"success": True, "path": "/fake/song.wav",
|
||||
"genre": "pop", "duration": 60,
|
||||
}
|
||||
with patch("tools.music_tools.generate_song", return_value=mock_result):
|
||||
with patch("creative.director._save_project"):
|
||||
result = run_music(sample_project, genre="pop")
|
||||
assert result["success"]
|
||||
assert result["path"] == "/fake/song.wav"
|
||||
|
||||
|
||||
class TestRunVideoGeneration:
|
||||
def test_fails_without_project(self):
|
||||
result = run_video_generation("bogus")
|
||||
assert not result["success"]
|
||||
|
||||
def test_generates_clips(self, sample_project):
|
||||
mock_clip = {
|
||||
"success": True, "path": "/fake/clip.mp4",
|
||||
"duration": 5,
|
||||
}
|
||||
with patch("tools.video_tools.generate_video_clip", return_value=mock_clip):
|
||||
with patch("tools.video_tools.image_to_video", return_value=mock_clip):
|
||||
with patch("creative.director._save_project"):
|
||||
result = run_video_generation(sample_project)
|
||||
assert result["success"]
|
||||
assert result["clip_count"] == 3
|
||||
|
||||
|
||||
class TestRunAssembly:
|
||||
def test_fails_without_project(self):
|
||||
result = run_assembly("bogus")
|
||||
assert not result["success"]
|
||||
|
||||
def test_fails_without_clips(self, sample_project):
|
||||
result = run_assembly(sample_project)
|
||||
assert not result["success"]
|
||||
assert "No video clips" in result["error"]
|
||||
|
||||
|
||||
class TestCreativeProject:
|
||||
def test_to_dict(self):
|
||||
p = CreativeProject(title="Test", description="Desc")
|
||||
d = p.to_dict()
|
||||
assert d["title"] == "Test"
|
||||
assert d["status"] == "planning"
|
||||
assert d["scene_count"] == 0
|
||||
assert d["has_storyboard"] is False
|
||||
assert d["has_music"] is False
|
||||
|
||||
|
||||
class TestDirectorToolCatalog:
|
||||
def test_catalog_has_all_tools(self):
|
||||
expected = {
|
||||
"create_project", "run_storyboard", "run_music",
|
||||
"run_video_generation", "run_assembly", "run_full_pipeline",
|
||||
}
|
||||
assert expected == set(DIRECTOR_TOOL_CATALOG.keys())
|
||||
|
||||
def test_catalog_entries_callable(self):
|
||||
for tool_id, info in DIRECTOR_TOOL_CATALOG.items():
|
||||
assert callable(info["fn"])
|
||||
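The director tests lean on one stubbing pattern throughout: `unittest.mock.patch` swaps a heavy generator for a canned result for the duration of a `with` block, and the real implementation returns afterwards. A self-contained sketch of that pattern (the `stage` / `run_stage` names are illustrative, not from the repo):

```python
# Minimal sketch of the patching pattern the director tests rely on.
import types
from unittest.mock import patch


def _heavy_generate(prompt: str) -> str:
    # Stands in for a model call that tests must never execute
    raise RuntimeError("would load a video model")


stage = types.SimpleNamespace(generate=_heavy_generate)


def run_stage(prompt: str) -> dict:
    """One pipeline step: delegate to the (patchable) generator."""
    return {"success": True, "clip": stage.generate(prompt)}


# Inside the with block, stage.generate returns the canned path
with patch.object(stage, "generate", return_value="/fake/clip.mp4"):
    patched_result = run_stage("sunrise over mountains")
```

Once the `with` block exits, `patch.object` restores the original attribute, so any later call to `run_stage` hits the real (here, deliberately failing) implementation again.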
61
tests/test_creative_route.py
Normal file
@@ -0,0 +1,61 @@
"""Tests for the Creative Studio dashboard route."""
|
||||
|
||||
import os
|
||||
import pytest
|
||||
|
||||
os.environ.setdefault("TIMMY_TEST_MODE", "1")
|
||||
|
||||
from fastapi.testclient import TestClient
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def client(tmp_path, monkeypatch):
|
||||
"""Test client with temp DB paths."""
|
||||
monkeypatch.setattr("swarm.tasks.DB_PATH", tmp_path / "swarm.db")
|
||||
monkeypatch.setattr("swarm.registry.DB_PATH", tmp_path / "swarm.db")
|
||||
monkeypatch.setattr("swarm.stats.DB_PATH", tmp_path / "swarm.db")
|
||||
monkeypatch.setattr("swarm.learner.DB_PATH", tmp_path / "swarm.db")
|
||||
|
||||
from dashboard.app import app
|
||||
return TestClient(app)
|
||||
|
||||
|
||||
class TestCreativeStudioPage:
|
||||
def test_creative_page_loads(self, client):
|
||||
resp = client.get("/creative/ui")
|
||||
assert resp.status_code == 200
|
||||
assert "Creative Studio" in resp.text
|
||||
|
||||
def test_creative_page_has_tabs(self, client):
|
||||
resp = client.get("/creative/ui")
|
||||
assert "tab-images" in resp.text
|
||||
assert "tab-music" in resp.text
|
||||
assert "tab-video" in resp.text
|
||||
assert "tab-director" in resp.text
|
||||
|
||||
def test_creative_page_shows_personas(self, client):
|
||||
resp = client.get("/creative/ui")
|
||||
assert "Pixel" in resp.text
|
||||
assert "Lyra" in resp.text
|
||||
assert "Reel" in resp.text
|
||||
|
||||
|
||||
class TestCreativeAPI:
|
||||
def test_projects_api_empty(self, client):
|
||||
resp = client.get("/creative/api/projects")
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert "projects" in data
|
||||
|
||||
def test_genres_api(self, client):
|
||||
resp = client.get("/creative/api/genres")
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert "genres" in data
|
||||
|
||||
def test_video_styles_api(self, client):
|
||||
resp = client.get("/creative/api/video-styles")
|
||||
assert resp.status_code == 200
|
||||
data = resp.json()
|
||||
assert "styles" in data
|
||||
assert "resolutions" in data
|
||||
@@ -100,8 +100,8 @@ def test_marketplace_has_timmy(client):
 def test_marketplace_has_planned_agents(client):
     response = client.get("/marketplace")
     data = response.json()
-    # Total should be 7 (1 Timmy + 6 personas)
-    assert data["total"] == 7
+    # Total should be 10 (1 Timmy + 9 personas)
+    assert data["total"] == 10
     # planned_count + active_count should equal total
     assert data["planned_count"] + data["active_count"] == data["total"]
     # Timmy should always be in the active list
183
tests/test_git_tools.py
Normal file
@@ -0,0 +1,183 @@
"""Tests for tools.git_tools — Git operations for Forge/Helm personas.

All tests use temporary git repositories to avoid touching the real
working tree.
"""

import pytest
from pathlib import Path

from tools.git_tools import (
    git_init,
    git_status,
    git_add,
    git_commit,
    git_log,
    git_diff,
    git_branch,
    git_stash,
    git_blame,
    git_clone,
    GIT_TOOL_CATALOG,
)


@pytest.fixture
def git_repo(tmp_path):
    """Create a temporary git repo with one commit."""
    result = git_init(tmp_path)
    assert result["success"]

    # Configure git identity for commits
    from git import Repo
    repo = Repo(str(tmp_path))
    repo.config_writer().set_value("user", "name", "Test").release()
    repo.config_writer().set_value("user", "email", "test@test.com").release()

    # Create initial commit
    readme = tmp_path / "README.md"
    readme.write_text("# Test Repo\n")
    repo.index.add(["README.md"])
    repo.index.commit("Initial commit")

    return tmp_path


class TestGitInit:
    def test_init_creates_repo(self, tmp_path):
        path = tmp_path / "new_repo"
        result = git_init(path)
        assert result["success"]
        assert (path / ".git").is_dir()

    def test_init_returns_path(self, tmp_path):
        path = tmp_path / "repo"
        result = git_init(path)
        assert result["path"] == str(path)


class TestGitStatus:
    def test_clean_repo(self, git_repo):
        result = git_status(git_repo)
        assert result["success"]
        assert result["is_dirty"] is False
        assert result["untracked"] == []

    def test_dirty_repo_untracked(self, git_repo):
        (git_repo / "new_file.txt").write_text("hello")
        result = git_status(git_repo)
        assert result["is_dirty"] is True
        assert "new_file.txt" in result["untracked"]

    def test_reports_branch(self, git_repo):
        result = git_status(git_repo)
        assert result["branch"] in ("main", "master")


class TestGitAddCommit:
    def test_add_and_commit(self, git_repo):
        (git_repo / "test.py").write_text("print('hi')\n")
        add_result = git_add(git_repo, ["test.py"])
        assert add_result["success"]

        commit_result = git_commit(git_repo, "Add test.py")
        assert commit_result["success"]
        assert len(commit_result["sha"]) == 40
        assert commit_result["message"] == "Add test.py"

    def test_add_all(self, git_repo):
        (git_repo / "a.txt").write_text("a")
        (git_repo / "b.txt").write_text("b")
        result = git_add(git_repo)
        assert result["success"]


class TestGitLog:
    def test_log_returns_commits(self, git_repo):
        result = git_log(git_repo)
        assert result["success"]
        assert len(result["commits"]) >= 1
        first = result["commits"][0]
        assert "sha" in first
        assert "message" in first
        assert "author" in first
        assert "date" in first

    def test_log_max_count(self, git_repo):
        result = git_log(git_repo, max_count=1)
        assert len(result["commits"]) == 1


class TestGitDiff:
    def test_no_diff_on_clean(self, git_repo):
        result = git_diff(git_repo)
        assert result["success"]
        assert result["diff"] == ""

    def test_diff_on_modified(self, git_repo):
        readme = git_repo / "README.md"
        readme.write_text("# Modified\n")
        result = git_diff(git_repo)
        assert result["success"]
        assert "Modified" in result["diff"]


class TestGitBranch:
    def test_list_branches(self, git_repo):
        result = git_branch(git_repo)
        assert result["success"]
        assert len(result["branches"]) >= 1

    def test_create_branch(self, git_repo):
        result = git_branch(git_repo, create="feature-x")
        assert result["success"]
        assert "feature-x" in result["branches"]
        assert result["created"] == "feature-x"

    def test_switch_branch(self, git_repo):
        git_branch(git_repo, create="dev")
        result = git_branch(git_repo, switch="dev")
        assert result["active"] == "dev"


class TestGitStash:
    def test_stash_and_pop(self, git_repo):
        readme = git_repo / "README.md"
        readme.write_text("# Changed\n")

        stash_result = git_stash(git_repo, message="wip")
        assert stash_result["success"]
        assert stash_result["action"] == "stash"

        # Working tree should be clean after stash
        status = git_status(git_repo)
        assert status["is_dirty"] is False

        # Pop restores changes
        pop_result = git_stash(git_repo, pop=True)
        assert pop_result["success"]
        assert pop_result["action"] == "pop"


class TestGitBlame:
    def test_blame_file(self, git_repo):
        result = git_blame(git_repo, "README.md")
        assert result["success"]
        assert "Test Repo" in result["blame"]


class TestGitToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "git_clone", "git_status", "git_diff", "git_log",
            "git_blame", "git_branch", "git_add", "git_commit",
            "git_push", "git_pull", "git_stash",
        }
        assert expected == set(GIT_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in GIT_TOOL_CATALOG.items():
            assert "name" in info, f"{tool_id} missing name"
            assert "description" in info, f"{tool_id} missing description"
            assert "fn" in info, f"{tool_id} missing fn"
            assert callable(info["fn"]), f"{tool_id} fn not callable"
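For reviewers: the two `TestGitToolCatalog` tests pin down a registry shape without showing it. A minimal sketch of what they imply follows — this is a hypothetical reconstruction, not the actual `src/tools/git_tools.py` (whose entries wrap GitPython calls); `_git_status_stub` is invented for illustration.

```python
def _git_status_stub(path):
    # Hypothetical stand-in for the GitPython-backed implementation;
    # real entries return richer dicts (branch, untracked, is_dirty, ...).
    return {"success": True, "path": str(path)}

# Each catalog entry maps a tool id to a display name, a description
# (used by the swarm's tool inference), and the callable itself.
GIT_TOOL_CATALOG = {
    "git_status": {
        "name": "Git Status",
        "description": "Show working tree status",
        "fn": _git_status_stub,
    },
}

# The catalog tests iterate entries exactly like this:
for tool_id, info in GIT_TOOL_CATALOG.items():
    assert {"name", "description", "fn"} <= info.keys()
    assert callable(info["fn"])
```

Keeping `fn` alongside metadata lets personas advertise tools by description while the executor dispatches through the same dict.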
120
tests/test_image_tools.py
Normal file
@@ -0,0 +1,120 @@
"""Tests for tools.image_tools — Image generation (Pixel persona).

Heavy AI model tests are skipped; only catalogue, metadata, and
interface tests run in CI.
"""

import pytest
from unittest.mock import patch, MagicMock
from pathlib import Path

from tools.image_tools import (
    IMAGE_TOOL_CATALOG,
    generate_image,
    generate_storyboard,
    image_variations,
    _save_metadata,
)


class TestImageToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {"generate_image", "generate_storyboard", "image_variations"}
        assert expected == set(IMAGE_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in IMAGE_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestSaveMetadata:
    def test_saves_json_sidecar(self, tmp_path):
        img_path = tmp_path / "test.png"
        img_path.write_bytes(b"fake image")
        meta = {"prompt": "a cat", "width": 512}
        result = _save_metadata(img_path, meta)
        assert result.suffix == ".json"
        assert result.exists()

        import json
        data = json.loads(result.read_text())
        assert data["prompt"] == "a cat"


class TestGenerateImageInterface:
    def test_raises_without_creative_deps(self):
        """generate_image raises ImportError when diffusers not available."""
        with patch("tools.image_tools._pipeline", None):
            with patch("tools.image_tools._get_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    generate_image("a cat")

    def test_generate_image_with_mocked_pipeline(self, tmp_path):
        """generate_image works end-to-end with a mocked pipeline."""
        import sys

        mock_image = MagicMock()
        mock_image.save = MagicMock()

        mock_pipe = MagicMock()
        mock_pipe.device = "cpu"
        mock_pipe.return_value.images = [mock_image]

        mock_torch = MagicMock()
        mock_torch.Generator.return_value = MagicMock()

        with patch.dict(sys.modules, {"torch": mock_torch}):
            with patch("tools.image_tools._get_pipeline", return_value=mock_pipe):
                with patch("tools.image_tools._output_dir", return_value=tmp_path):
                    result = generate_image("a cat", width=512, height=512, steps=1)

        assert result["success"]
        assert result["prompt"] == "a cat"
        assert result["width"] == 512
        assert "path" in result


class TestGenerateStoryboardInterface:
    def test_calls_generate_image_per_scene(self):
        """Storyboard calls generate_image once per scene."""
        call_count = 0

        def mock_gen_image(prompt, **kwargs):
            nonlocal call_count
            call_count += 1
            return {
                "success": True, "path": f"/fake/{call_count}.png",
                "id": str(call_count), "prompt": prompt,
            }

        with patch("tools.image_tools.generate_image", side_effect=mock_gen_image):
            result = generate_storyboard(
                ["sunrise", "mountain peak", "sunset"],
                steps=1,
            )

        assert result["success"]
        assert result["frame_count"] == 3
        assert len(result["frames"]) == 3
        assert call_count == 3


class TestImageVariationsInterface:
    def test_generates_multiple_variations(self):
        """image_variations generates the requested number of results."""
        def mock_gen_image(prompt, **kwargs):
            return {
                "success": True, "path": "/fake.png",
                "id": "x", "prompt": prompt,
                "seed": kwargs.get("seed"),
            }

        with patch("tools.image_tools.generate_image", side_effect=mock_gen_image):
            result = image_variations("a dog", count=3, steps=1)

        assert result["success"]
        assert result["count"] == 3
        assert len(result["variations"]) == 3
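The JSON-sidecar behaviour that `TestSaveMetadata` pins down can be sketched as below. This is an assumed reconstruction of what `tools.image_tools._save_metadata` does, inferred only from the assertions (same stem, `.json` suffix, round-trippable content); the real helper may record extra fields.

```python
import json
from pathlib import Path


def save_metadata(artifact_path: Path, meta: dict) -> Path:
    """Write a .json sidecar next to a generated artifact.

    The sidecar shares the artifact's stem, so test.png -> test.json,
    which is what the sidecar test asserts via result.suffix.
    """
    sidecar = artifact_path.with_suffix(".json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
```

Sidecars keep generation parameters (prompt, size, seed) reproducible without embedding them in image headers.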
124
tests/test_music_tools.py
Normal file
@@ -0,0 +1,124 @@
"""Tests for tools.music_tools — Music generation (Lyra persona).

Heavy AI model tests are skipped; only catalogue, interface, and
metadata tests run in CI.
"""

import pytest
from unittest.mock import patch, MagicMock

from tools.music_tools import (
    MUSIC_TOOL_CATALOG,
    GENRES,
    list_genres,
    generate_song,
    generate_instrumental,
    generate_vocals,
)


class TestMusicToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "generate_song", "generate_instrumental",
            "generate_vocals", "list_genres",
        }
        assert expected == set(MUSIC_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in MUSIC_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestListGenres:
    def test_returns_genre_list(self):
        result = list_genres()
        assert result["success"]
        assert len(result["genres"]) > 10
        assert "pop" in result["genres"]
        assert "cinematic" in result["genres"]


class TestGenres:
    def test_common_genres_present(self):
        for genre in ["pop", "rock", "hip-hop", "jazz", "electronic", "classical"]:
            assert genre in GENRES


class TestGenerateSongInterface:
    def test_raises_without_ace_step(self):
        with patch("tools.music_tools._model", None):
            with patch("tools.music_tools._get_model", side_effect=ImportError("no ace-step")):
                with pytest.raises(ImportError):
                    generate_song("la la la")

    def test_duration_clamped(self):
        """Duration is clamped to the 30–240 second range."""
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=MagicMock()):
                with patch("tools.music_tools._save_metadata"):
                    # Should clamp 5 to 30
                    generate_song("lyrics", duration=5)
                    call_kwargs = mock_model.generate.call_args[1]
                    assert call_kwargs["duration"] == 30

    def test_generate_song_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_song(
                    "hello world", genre="rock", duration=60, title="Test Song"
                )

        assert result["success"]
        assert result["genre"] == "rock"
        assert result["title"] == "Test Song"
        assert result["duration"] == 60


class TestGenerateInstrumentalInterface:
    def test_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_instrumental("epic orchestral", genre="cinematic")

        assert result["success"]
        assert result["genre"] == "cinematic"
        assert result["instrumental"] is True


class TestGenerateVocalsInterface:
    def test_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_vocals("do re mi", style="jazz")

        assert result["success"]
        assert result["vocals_only"] is True
        assert result["style"] == "jazz"
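`test_duration_clamped` asserts that a requested duration of 5 seconds reaches the model as 30 but never shows the clamp itself. A one-line sketch of the assumed behaviour (the bounds 30–240 come from the test docstring; the real function name inside `tools.music_tools` is not shown in this diff):

```python
def clamp_duration(seconds: int, lo: int = 30, hi: int = 240) -> int:
    """Clamp a requested song length into the model's supported range."""
    return max(lo, min(hi, seconds))
```

So `clamp_duration(5)` yields 30, matching the mocked `generate.call_args` check above.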
444
tests/test_music_video_integration.py
Normal file
@@ -0,0 +1,444 @@
"""Integration test: end-to-end music video pipeline with real media files.

Exercises the Creative Director pipeline and Assembler with genuine PNG,
WAV, and MP4 files. Only AI model inference is replaced with fixture
generators; all MoviePy / FFmpeg operations run for real.

The final output video is inspected for:
- Duration — correct within tolerance
- Resolution — 320x180 (fixture default)
- Audio stream — present
- File size — non-trivial (>10 kB)
"""

import pytest
from pathlib import Path
from unittest.mock import patch

from moviepy import VideoFileClip

from creative.director import (
    create_project,
    run_storyboard,
    run_music,
    run_video_generation,
    run_assembly,
    run_full_pipeline,
    _projects,
)
from creative.assembler import (
    stitch_clips,
    overlay_audio,
    add_title_card,
    add_subtitles,
    export_final,
)
from fixtures.media import (
    make_storyboard,
    make_audio_track,
    make_video_clip,
    make_scene_clips,
)


# ── Fixtures ──────────────────────────────────────────────────────────────────

SCENES = [
    {"description": "Dawn breaks over misty mountains", "duration": 4},
    {"description": "A river carves through green valleys", "duration": 4},
    {"description": "Wildflowers sway in warm sunlight", "duration": 4},
    {"description": "Clouds gather as evening approaches", "duration": 4},
    {"description": "Stars emerge over a quiet lake", "duration": 4},
]


@pytest.fixture(autouse=True)
def clear_projects():
    """Clear in-memory project store between tests."""
    _projects.clear()
    yield
    _projects.clear()


@pytest.fixture
def media_dir(tmp_path):
    d = tmp_path / "media"
    d.mkdir()
    return d


@pytest.fixture
def scene_defs():
    """Five-scene creative brief for a short music video."""
    return [dict(s) for s in SCENES]


@pytest.fixture
def storyboard_frames(media_dir):
    """Real PNG storyboard frames for all scenes."""
    return make_storyboard(
        media_dir / "frames",
        [s["description"][:20] for s in SCENES],
        width=320, height=180,
    )


@pytest.fixture
def audio_track(media_dir):
    """Real 25-second WAV audio track."""
    return make_audio_track(
        media_dir / "soundtrack.wav",
        duration_seconds=25.0,
        frequency=440.0,
    )


@pytest.fixture
def video_clips(media_dir):
    """Real 4-second MP4 clips, one per scene (~20s total)."""
    return make_scene_clips(
        media_dir / "clips",
        [s["description"][:20] for s in SCENES],
        duration_per_clip=4.0,
        fps=12,
        width=320,
        height=180,
    )


# ── Direct assembly (zero AI mocking) ───────────────────────────────────────

class TestMusicVideoAssembly:
    """Build a real music video from fixture clips + audio, inspect output."""

    def test_full_music_video(self, video_clips, audio_track, tmp_path):
        """Stitch 5 clips -> overlay audio -> title -> credits -> inspect."""
        # 1. Stitch with crossfade
        stitched = tmp_path / "stitched.mp4"
        stitch_result = stitch_clips(
            [str(p) for p in video_clips],
            transition_duration=0.5,
            output_path=str(stitched),
        )
        assert stitch_result["success"]
        assert stitch_result["clip_count"] == 5

        # 2. Overlay audio
        with_audio = tmp_path / "with_audio.mp4"
        audio_result = overlay_audio(
            str(stitched), str(audio_track),
            output_path=str(with_audio),
        )
        assert audio_result["success"]

        # 3. Title card at start
        titled = tmp_path / "titled.mp4"
        title_result = add_title_card(
            str(with_audio),
            title="Dawn to Dusk",
            duration=3.0,
            position="start",
            output_path=str(titled),
        )
        assert title_result["success"]

        # 4. Credits at end
        final_path = tmp_path / "final_music_video.mp4"
        credits_result = add_title_card(
            str(titled),
            title="THE END",
            duration=2.0,
            position="end",
            output_path=str(final_path),
        )
        assert credits_result["success"]

        # ── Inspect final video ──────────────────────────────────────────
        assert final_path.exists()
        assert final_path.stat().st_size > 10_000  # non-trivial file

        video = VideoFileClip(str(final_path))

        # Duration: 5x4s - 4x0.5s crossfade = 18s + 3s title + 2s credits = 23s
        expected_body = 5 * 4.0 - 4 * 0.5  # 18s
        expected_total = expected_body + 3.0 + 2.0  # 23s
        assert video.duration >= 15.0  # floor sanity check
        assert video.duration == pytest.approx(expected_total, abs=3.0)

        # Resolution
        assert video.size == [320, 180]

        # Audio present
        assert video.audio is not None

        video.close()

    def test_with_subtitles(self, video_clips, audio_track, tmp_path):
        """Full video with burned-in captions."""
        # Stitch without transitions for predictable duration
        stitched = tmp_path / "stitched.mp4"
        stitch_clips(
            [str(p) for p in video_clips],
            transition_duration=0,
            output_path=str(stitched),
        )

        # Overlay audio
        with_audio = tmp_path / "with_audio.mp4"
        overlay_audio(
            str(stitched), str(audio_track),
            output_path=str(with_audio),
        )

        # Burn subtitles — one caption per scene
        captions = [
            {"text": "Dawn breaks over misty mountains", "start": 0.0, "end": 3.5},
            {"text": "A river carves through green valleys", "start": 4.0, "end": 7.5},
            {"text": "Wildflowers sway in warm sunlight", "start": 8.0, "end": 11.5},
            {"text": "Clouds gather as evening approaches", "start": 12.0, "end": 15.5},
            {"text": "Stars emerge over a quiet lake", "start": 16.0, "end": 19.5},
        ]

        final = tmp_path / "subtitled_video.mp4"
        result = add_subtitles(str(with_audio), captions, output_path=str(final))

        assert result["success"]
        assert result["caption_count"] == 5

        video = VideoFileClip(str(final))
        # 5x4s = 20s total (no crossfade)
        assert video.duration == pytest.approx(20.0, abs=1.0)
        assert video.size == [320, 180]
        assert video.audio is not None
        video.close()

    def test_export_final_quality(self, video_clips, tmp_path):
        """Export with specific codec/bitrate and verify."""
        stitched = tmp_path / "raw.mp4"
        stitch_clips(
            [str(p) for p in video_clips[:2]],
            transition_duration=0,
            output_path=str(stitched),
        )

        final = tmp_path / "hq.mp4"
        result = export_final(
            str(stitched),
            output_path=str(final),
            codec="libx264",
            bitrate="5000k",
        )

        assert result["success"]
        assert result["codec"] == "libx264"
        assert final.stat().st_size > 5000

        video = VideoFileClip(str(final))
        # Two 4s clips = 8s
        assert video.duration == pytest.approx(8.0, abs=1.0)
        video.close()


# ── Creative Director pipeline (AI calls replaced with fixtures) ────────────

class TestCreativeDirectorPipeline:
    """Run the full director pipeline; only AI model inference is stubbed
    with real-file fixture generators. All assembly runs for real."""

    def _make_storyboard_stub(self, frames_dir):
        """Return a callable that produces real PNGs in tool-result format."""
        def stub(descriptions):
            frames = make_storyboard(
                frames_dir, descriptions, width=320, height=180,
            )
            return {
                "success": True,
                "frame_count": len(frames),
                "frames": [
                    {"path": str(f), "scene_index": i, "prompt": descriptions[i]}
                    for i, f in enumerate(frames)
                ],
            }
        return stub

    def _make_song_stub(self, audio_dir):
        """Return a callable that produces a real WAV in tool-result format."""
        def stub(lyrics="", genre="pop", duration=60, title=""):
            path = make_audio_track(
                audio_dir / "song.wav",
                duration_seconds=min(duration, 25),
            )
            return {
                "success": True,
                "path": str(path),
                "genre": genre,
                "duration": min(duration, 25),
            }
        return stub

    def _make_video_stub(self, clips_dir):
        """Return a callable that produces real MP4s in tool-result format."""
        counter = [0]

        def stub(image_path=None, prompt="scene", duration=4, **kwargs):
            path = make_video_clip(
                clips_dir / f"gen_{counter[0]:03d}.mp4",
                duration_seconds=duration,
                fps=12, width=320, height=180,
                label=prompt[:20],
            )
            counter[0] += 1
            return {
                "success": True,
                "path": str(path),
                "duration": duration,
            }
        return stub

    def test_full_pipeline_end_to_end(self, scene_defs, tmp_path):
        """run_full_pipeline with real fixtures at every stage."""
        frames_dir = tmp_path / "frames"
        frames_dir.mkdir()
        audio_dir = tmp_path / "audio"
        audio_dir.mkdir()
        clips_dir = tmp_path / "clips"
        clips_dir.mkdir()
        assembly_dir = tmp_path / "assembly"
        assembly_dir.mkdir()

        with (
            patch("tools.image_tools.generate_storyboard",
                  side_effect=self._make_storyboard_stub(frames_dir)),
            patch("tools.music_tools.generate_song",
                  side_effect=self._make_song_stub(audio_dir)),
            patch("tools.video_tools.image_to_video",
                  side_effect=self._make_video_stub(clips_dir)),
            patch("tools.video_tools.generate_video_clip",
                  side_effect=self._make_video_stub(clips_dir)),
            patch("creative.director._project_dir",
                  return_value=tmp_path / "project"),
            patch("creative.director._save_project"),
            patch("creative.assembler._output_dir",
                  return_value=assembly_dir),
        ):
            result = run_full_pipeline(
                title="Integration Test Video",
                description="End-to-end pipeline test",
                scenes=scene_defs,
                lyrics="Test lyrics for the song",
                genre="rock",
            )

        assert result["success"], f"Pipeline failed: {result}"
        assert result["project_id"]
        assert result["final_video"] is not None
        assert result["project"]["status"] == "complete"
        assert result["project"]["has_final"] is True
        assert result["project"]["clip_count"] == 5

        # Inspect the final video
        final_path = Path(result["final_video"]["path"])
        assert final_path.exists()
        assert final_path.stat().st_size > 5000

        video = VideoFileClip(str(final_path))
        # 5x4s clips - 4x1s crossfade = 16s body + 4s title card ~= 20s
        assert video.duration >= 10.0
        assert video.size == [320, 180]
        assert video.audio is not None
        video.close()

    def test_step_by_step_pipeline(self, scene_defs, tmp_path):
        """Run each pipeline step individually — mirrors manual usage."""
        frames_dir = tmp_path / "frames"
        frames_dir.mkdir()
        audio_dir = tmp_path / "audio"
        audio_dir.mkdir()
        clips_dir = tmp_path / "clips"
        clips_dir.mkdir()
        assembly_dir = tmp_path / "assembly"
        assembly_dir.mkdir()

        # 1. Create project
        with (
            patch("creative.director._project_dir",
                  return_value=tmp_path / "proj"),
            patch("creative.director._save_project"),
        ):
            proj = create_project(
                "Step-by-Step Video",
                "Manual pipeline test",
                scenes=scene_defs,
                lyrics="Step by step, we build it all",
            )
        pid = proj["project"]["id"]
        assert proj["success"]

        # 2. Storyboard
        with (
            patch("tools.image_tools.generate_storyboard",
                  side_effect=self._make_storyboard_stub(frames_dir)),
            patch("creative.director._save_project"),
        ):
            sb = run_storyboard(pid)
        assert sb["success"]
        assert sb["frame_count"] == 5

        # 3. Music
        with (
            patch("tools.music_tools.generate_song",
                  side_effect=self._make_song_stub(audio_dir)),
            patch("creative.director._save_project"),
        ):
            mus = run_music(pid, genre="electronic")
        assert mus["success"]
        assert mus["genre"] == "electronic"

        # Verify the audio file exists and is valid
        audio_path = Path(mus["path"])
        assert audio_path.exists()
        assert audio_path.stat().st_size > 1000

        # 4. Video generation (uses storyboard frames → image_to_video)
        with (
            patch("tools.video_tools.image_to_video",
                  side_effect=self._make_video_stub(clips_dir)),
            patch("creative.director._save_project"),
        ):
            vid = run_video_generation(pid)
        assert vid["success"]
        assert vid["clip_count"] == 5

        # Verify each clip exists
        for clip_info in vid["clips"]:
            clip_path = Path(clip_info["path"])
            assert clip_path.exists()
            assert clip_path.stat().st_size > 1000

        # 5. Assembly (all real MoviePy operations)
        with (
            patch("creative.director._save_project"),
            patch("creative.assembler._output_dir",
                  return_value=assembly_dir),
        ):
            asm = run_assembly(pid, transition_duration=0.5)
        assert asm["success"]

        # Inspect final output
        final_path = Path(asm["path"])
        assert final_path.exists()
        assert final_path.stat().st_size > 5000

        video = VideoFileClip(str(final_path))
        # 5x4s - 4x0.5s = 18s body, + title card ~= 22s
        assert video.duration >= 10.0
        assert video.size == [320, 180]
        assert video.audio is not None
        video.close()

        # Verify project reached completion
        project = _projects[pid]
        assert project.status == "complete"
        assert project.final_video is not None
        assert len(project.video_clips) == 5
        assert len(project.storyboard_frames) == 5
        assert project.music_track is not None
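The duration arithmetic these assembly tests repeat in comments (5×4s clips minus 4×0.5s crossfades = 18s, plus title and credit cards) generalises to a small helper. A sketch under the tests' own assumptions — N clips stitched with N−1 overlapping crossfades; `stitched_duration` is an illustrative name, not a function in the repo:

```python
def stitched_duration(clip_lengths, crossfade=0.5):
    """Expected length of clips stitched with overlapping crossfades.

    Each of the (N - 1) transitions overlaps adjacent clips, so it
    removes one crossfade's worth of total runtime.
    """
    if not clip_lengths:
        return 0.0
    return sum(clip_lengths) - crossfade * (len(clip_lengths) - 1)
```

For the full-video test: `stitched_duration([4.0] * 5, 0.5)` gives the 18-second body, and adding the 3s title plus 2s credits yields the 23-second expectation checked with `pytest.approx`.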
431
tests/test_spark.py
Normal file
@@ -0,0 +1,431 @@
"""Tests for the Spark Intelligence integration.
|
||||
|
||||
Covers:
|
||||
- spark.memory: event capture, memory consolidation, importance scoring
|
||||
- spark.eidos: predictions, evaluations, accuracy stats
|
||||
- spark.advisor: advisory generation from patterns
|
||||
- spark.engine: top-level engine wiring all subsystems
|
||||
- dashboard.routes.spark: HTTP endpoints
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
import pytest
|
||||
|
||||
|
||||
# ── Fixtures ────────────────────────────────────────────────────────────────
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def tmp_spark_db(tmp_path, monkeypatch):
|
||||
"""Redirect all Spark SQLite writes to a temp directory."""
|
||||
db_path = tmp_path / "spark.db"
|
||||
monkeypatch.setattr("spark.memory.DB_PATH", db_path)
|
||||
monkeypatch.setattr("spark.eidos.DB_PATH", db_path)
|
||||
yield db_path
|
||||
|
||||
|
||||
# ── spark.memory ────────────────────────────────────────────────────────────
|
||||
|
||||
|
||||
class TestImportanceScoring:
|
||||
def test_failure_scores_high(self):
|
||||
from spark.memory import score_importance
|
||||
score = score_importance("task_failed", {})
|
||||
assert score >= 0.9
|
||||
|
||||
def test_bid_scores_low(self):
|
||||
from spark.memory import score_importance
|
||||
score = score_importance("bid_submitted", {})
|
||||
assert score <= 0.3
|
||||
|
||||
def test_high_bid_boosts_score(self):
|
||||
from spark.memory import score_importance
|
||||
low = score_importance("bid_submitted", {"bid_sats": 10})
|
||||
high = score_importance("bid_submitted", {"bid_sats": 100})
|
||||
assert high > low
|
||||
|
||||
def test_unknown_event_default(self):
|
||||
from spark.memory import score_importance
|
||||
score = score_importance("unknown_type", {})
|
||||
assert score == 0.5
|
||||
|
||||
|
||||
class TestEventRecording:
    def test_record_and_query(self):
        from spark.memory import record_event, get_events
        eid = record_event("task_posted", "Test task", task_id="t1")
        assert eid
        events = get_events(task_id="t1")
        assert len(events) == 1
        assert events[0].event_type == "task_posted"
        assert events[0].description == "Test task"

    def test_record_with_agent(self):
        from spark.memory import record_event, get_events
        record_event("bid_submitted", "Agent bid", agent_id="a1", task_id="t2",
                     data='{"bid_sats": 50}')
        events = get_events(agent_id="a1")
        assert len(events) == 1
        assert events[0].agent_id == "a1"

    def test_filter_by_event_type(self):
        from spark.memory import record_event, get_events
        record_event("task_posted", "posted", task_id="t3")
        record_event("task_completed", "completed", task_id="t3")
        posted = get_events(event_type="task_posted")
        assert len(posted) == 1

    def test_filter_by_min_importance(self):
        from spark.memory import record_event, get_events
        record_event("bid_submitted", "low", importance=0.1)
        record_event("task_failed", "high", importance=0.9)
        high_events = get_events(min_importance=0.5)
        assert len(high_events) == 1
        assert high_events[0].event_type == "task_failed"

    def test_count_events(self):
        from spark.memory import record_event, count_events
        record_event("task_posted", "a")
        record_event("task_posted", "b")
        record_event("task_completed", "c")
        assert count_events() == 3
        assert count_events("task_posted") == 2

    def test_limit_results(self):
        from spark.memory import record_event, get_events
        for i in range(10):
            record_event("bid_submitted", f"bid {i}")
        events = get_events(limit=3)
        assert len(events) == 3

class TestMemoryConsolidation:
    def test_store_and_query_memory(self):
        from spark.memory import store_memory, get_memories
        mid = store_memory("pattern", "agent-x", "Strong performer", confidence=0.8)
        assert mid
        memories = get_memories(subject="agent-x")
        assert len(memories) == 1
        assert memories[0].content == "Strong performer"

    def test_filter_by_type(self):
        from spark.memory import store_memory, get_memories
        store_memory("pattern", "system", "Good pattern")
        store_memory("anomaly", "system", "Bad anomaly")
        patterns = get_memories(memory_type="pattern")
        assert len(patterns) == 1
        assert patterns[0].memory_type == "pattern"

    def test_filter_by_confidence(self):
        from spark.memory import store_memory, get_memories
        store_memory("pattern", "a", "Low conf", confidence=0.2)
        store_memory("pattern", "b", "High conf", confidence=0.9)
        high = get_memories(min_confidence=0.5)
        assert len(high) == 1
        assert high[0].content == "High conf"

    def test_count_memories(self):
        from spark.memory import store_memory, count_memories
        store_memory("pattern", "a", "X")
        store_memory("anomaly", "b", "Y")
        assert count_memories() == 2
        assert count_memories("pattern") == 1

# ── spark.eidos ─────────────────────────────────────────────────────────────


class TestPredictions:
    def test_predict_stores_prediction(self):
        from spark.eidos import predict_task_outcome, get_predictions
        result = predict_task_outcome("t1", "Fix the bug", ["agent-a", "agent-b"])
        assert "prediction_id" in result
        assert result["likely_winner"] == "agent-a"
        preds = get_predictions(task_id="t1")
        assert len(preds) == 1

    def test_predict_with_history(self):
        from spark.eidos import predict_task_outcome
        history = {
            "agent-a": {"success_rate": 0.3, "avg_winning_bid": 40},
            "agent-b": {"success_rate": 0.9, "avg_winning_bid": 30},
        }
        result = predict_task_outcome(
            "t2", "Research topic", ["agent-a", "agent-b"],
            agent_history=history,
        )
        assert result["likely_winner"] == "agent-b"
        assert result["success_probability"] > 0.5

    def test_predict_empty_candidates(self):
        from spark.eidos import predict_task_outcome
        result = predict_task_outcome("t3", "No agents", [])
        assert result["likely_winner"] is None

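The winner-selection behaviour these three tests pin down (first candidate by default, best historical success rate when history is supplied, `None` for an empty list) can be sketched as follows. The helper name `pick_likely_winner` is invented for illustration; `predict_task_outcome` presumably contains equivalent logic:

```python
def pick_likely_winner(candidates, agent_history=None):
    """Pick the candidate with the best historical success rate.

    Falls back to the first candidate when no history is available,
    and to None when there are no candidates at all.
    """
    if not candidates:
        return None
    if not agent_history:
        return candidates[0]
    return max(candidates,
               key=lambda a: agent_history.get(a, {}).get("success_rate", 0.0))
```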
class TestEvaluation:
    def test_evaluate_correct_prediction(self):
        from spark.eidos import predict_task_outcome, evaluate_prediction
        predict_task_outcome("t4", "Task", ["agent-a"])
        result = evaluate_prediction("t4", "agent-a", task_succeeded=True, winning_bid=30)
        assert result is not None
        assert result["accuracy"] > 0.0

    def test_evaluate_wrong_prediction(self):
        from spark.eidos import predict_task_outcome, evaluate_prediction
        predict_task_outcome("t5", "Task", ["agent-a"])
        result = evaluate_prediction("t5", "agent-b", task_succeeded=False)
        assert result is not None
        # Wrong winner + failed = lower accuracy
        assert result["accuracy"] < 1.0

    def test_evaluate_no_prediction_returns_none(self):
        from spark.eidos import evaluate_prediction
        result = evaluate_prediction("no-task", "agent-a", task_succeeded=True)
        assert result is None

    def test_double_evaluation_returns_none(self):
        from spark.eidos import predict_task_outcome, evaluate_prediction
        predict_task_outcome("t6", "Task", ["agent-a"])
        evaluate_prediction("t6", "agent-a", task_succeeded=True)
        # Second evaluation should return None (already evaluated)
        result = evaluate_prediction("t6", "agent-a", task_succeeded=True)
        assert result is None

class TestAccuracyStats:
    def test_empty_stats(self):
        from spark.eidos import get_accuracy_stats
        stats = get_accuracy_stats()
        assert stats["total_predictions"] == 0
        assert stats["evaluated"] == 0
        assert stats["avg_accuracy"] == 0.0

    def test_stats_after_evaluations(self):
        from spark.eidos import predict_task_outcome, evaluate_prediction, get_accuracy_stats
        for i in range(3):
            predict_task_outcome(f"task-{i}", "Description", ["agent-a"])
            evaluate_prediction(f"task-{i}", "agent-a", task_succeeded=True, winning_bid=30)
        stats = get_accuracy_stats()
        assert stats["total_predictions"] == 3
        assert stats["evaluated"] == 3
        assert stats["pending"] == 0
        assert stats["avg_accuracy"] > 0.0

class TestComputeAccuracy:
    def test_perfect_prediction(self):
        from spark.eidos import _compute_accuracy
        predicted = {
            "likely_winner": "agent-a",
            "success_probability": 1.0,
            "estimated_bid_range": [20, 40],
        }
        actual = {"winner": "agent-a", "succeeded": True, "winning_bid": 30}
        acc = _compute_accuracy(predicted, actual)
        assert acc == pytest.approx(1.0, abs=0.01)

    def test_all_wrong(self):
        from spark.eidos import _compute_accuracy
        predicted = {
            "likely_winner": "agent-a",
            "success_probability": 1.0,
            "estimated_bid_range": [10, 20],
        }
        actual = {"winner": "agent-b", "succeeded": False, "winning_bid": 100}
        acc = _compute_accuracy(predicted, actual)
        assert acc < 0.5

    def test_partial_credit(self):
        from spark.eidos import _compute_accuracy
        predicted = {
            "likely_winner": "agent-a",
            "success_probability": 0.5,
            "estimated_bid_range": [20, 40],
        }
        actual = {"winner": "agent-b", "succeeded": True, "winning_bid": 30}
        acc = _compute_accuracy(predicted, actual)
        # Wrong winner but right success and in bid range → partial
        assert 0.2 < acc < 0.8

# ── spark.advisor ───────────────────────────────────────────────────────────


class TestAdvisor:
    def test_insufficient_data(self):
        from spark.advisor import generate_advisories
        advisories = generate_advisories()
        assert len(advisories) >= 1
        assert advisories[0].category == "system_health"
        assert "Insufficient" in advisories[0].title

    def test_failure_detection(self):
        from spark.memory import record_event
        from spark.advisor import generate_advisories
        # Record enough events to pass the minimum threshold
        for i in range(5):
            record_event("task_failed", f"Failed task {i}",
                         agent_id="agent-bad", task_id=f"t-{i}")
        advisories = generate_advisories()
        failure_advisories = [a for a in advisories if a.category == "failure_prevention"]
        assert len(failure_advisories) >= 1
        assert "agent-ba" in failure_advisories[0].title

    def test_advisories_sorted_by_priority(self):
        from spark.memory import record_event
        from spark.advisor import generate_advisories
        for i in range(4):
            record_event("task_posted", f"posted {i}", task_id=f"p-{i}")
            record_event("task_completed", f"done {i}",
                         agent_id="agent-good", task_id=f"p-{i}")
        advisories = generate_advisories()
        if len(advisories) >= 2:
            assert advisories[0].priority >= advisories[-1].priority

    def test_no_activity_advisory(self):
        from spark.advisor import _check_system_activity
        advisories = _check_system_activity()
        assert len(advisories) >= 1
        assert "No swarm activity" in advisories[0].title

# ── spark.engine ────────────────────────────────────────────────────────────


class TestSparkEngine:
    def test_engine_enabled(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=True)
        assert engine.enabled

    def test_engine_disabled(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=False)
        result = engine.on_task_posted("t1", "Ignored task")
        assert result is None

    def test_on_task_posted(self):
        from spark.engine import SparkEngine
        from spark.memory import get_events
        engine = SparkEngine(enabled=True)
        eid = engine.on_task_posted("t1", "Test task", ["agent-a"])
        assert eid is not None
        events = get_events(task_id="t1")
        assert len(events) == 1

    def test_on_bid_submitted(self):
        from spark.engine import SparkEngine
        from spark.memory import get_events
        engine = SparkEngine(enabled=True)
        eid = engine.on_bid_submitted("t1", "agent-a", 50)
        assert eid is not None
        events = get_events(event_type="bid_submitted")
        assert len(events) == 1

    def test_on_task_assigned(self):
        from spark.engine import SparkEngine
        from spark.memory import get_events
        engine = SparkEngine(enabled=True)
        eid = engine.on_task_assigned("t1", "agent-a")
        assert eid is not None
        events = get_events(event_type="task_assigned")
        assert len(events) == 1

    def test_on_task_completed_evaluates_prediction(self):
        from spark.engine import SparkEngine
        from spark.eidos import get_predictions
        engine = SparkEngine(enabled=True)
        engine.on_task_posted("t1", "Fix bug", ["agent-a"])
        eid = engine.on_task_completed("t1", "agent-a", "Fixed it")
        assert eid is not None
        preds = get_predictions(task_id="t1")
        # Should have prediction(s) evaluated
        assert len(preds) >= 1

    def test_on_task_failed(self):
        from spark.engine import SparkEngine
        from spark.memory import get_events
        engine = SparkEngine(enabled=True)
        engine.on_task_posted("t1", "Deploy server", ["agent-a"])
        eid = engine.on_task_failed("t1", "agent-a", "Connection timeout")
        assert eid is not None
        events = get_events(event_type="task_failed")
        assert len(events) == 1

    def test_on_agent_joined(self):
        from spark.engine import SparkEngine
        from spark.memory import get_events
        engine = SparkEngine(enabled=True)
        eid = engine.on_agent_joined("agent-a", "Echo")
        assert eid is not None
        events = get_events(event_type="agent_joined")
        assert len(events) == 1

    def test_status(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=True)
        engine.on_task_posted("t1", "Test", ["agent-a"])
        engine.on_bid_submitted("t1", "agent-a", 30)
        status = engine.status()
        assert status["enabled"] is True
        assert status["events_captured"] >= 2
        assert "predictions" in status
        assert "event_types" in status

    def test_get_advisories(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=True)
        advisories = engine.get_advisories()
        assert isinstance(advisories, list)

    def test_get_advisories_disabled(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=False)
        advisories = engine.get_advisories()
        assert advisories == []

    def test_get_timeline(self):
        from spark.engine import SparkEngine
        engine = SparkEngine(enabled=True)
        engine.on_task_posted("t1", "Task 1")
        engine.on_task_posted("t2", "Task 2")
        timeline = engine.get_timeline(limit=10)
        assert len(timeline) == 2

    def test_memory_consolidation(self):
        from spark.engine import SparkEngine
        from spark.memory import get_memories
        engine = SparkEngine(enabled=True)
        # Generate enough completions to trigger consolidation (>=5 events, >=3 outcomes)
        for i in range(6):
            engine.on_task_completed(f"t-{i}", "agent-star", f"Result {i}")
        memories = get_memories(subject="agent-star")
        # Should have at least one consolidated memory about strong performance
        assert len(memories) >= 1

# ── Dashboard routes ────────────────────────────────────────────────────────


class TestSparkRoutes:
    def test_spark_json(self, client):
        resp = client.get("/spark")
        assert resp.status_code == 200
        data = resp.json()
        assert "status" in data
        assert "advisories" in data

    def test_spark_ui(self, client):
        resp = client.get("/spark/ui")
        assert resp.status_code == 200
        assert "SPARK INTELLIGENCE" in resp.text

    def test_spark_timeline(self, client):
        resp = client.get("/spark/timeline")
        assert resp.status_code == 200

    def test_spark_insights(self, client):
        resp = client.get("/spark/insights")
        assert resp.status_code == 200
110
tests/test_spark_tools_creative.py
Normal file
@@ -0,0 +1,110 @@
"""Tests for Spark engine tool-level and creative pipeline event capture.

Covers the new on_tool_executed() and on_creative_step() methods added
in Phase 6.
"""

import pytest

from spark.engine import SparkEngine
from spark.memory import get_events, count_events


@pytest.fixture(autouse=True)
def tmp_spark_db(tmp_path, monkeypatch):
    db_path = tmp_path / "spark.db"
    monkeypatch.setattr("spark.memory.DB_PATH", db_path)
    monkeypatch.setattr("spark.eidos.DB_PATH", db_path)
    yield db_path


class TestOnToolExecuted:
    def test_captures_tool_event(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_tool_executed("agent-a", "git_commit", task_id="t1")
        assert eid is not None
        events = get_events(event_type="tool_executed")
        assert len(events) == 1
        assert "git_commit" in events[0].description

    def test_captures_tool_failure(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_tool_executed("agent-a", "generate_image", success=False)
        assert eid is not None
        events = get_events(event_type="tool_executed")
        assert len(events) == 1
        assert "FAIL" in events[0].description

    def test_captures_duration(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("agent-a", "generate_song", duration_ms=5000)
        events = get_events(event_type="tool_executed")
        assert len(events) == 1

    def test_disabled_returns_none(self):
        engine = SparkEngine(enabled=False)
        result = engine.on_tool_executed("agent-a", "git_push")
        assert result is None

    def test_multiple_tool_events(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("agent-a", "git_add")
        engine.on_tool_executed("agent-a", "git_commit")
        engine.on_tool_executed("agent-a", "git_push")
        assert count_events("tool_executed") == 3

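The tests above only constrain the shape of the recorded description (tool name present, a "FAIL" marker on failure, optional duration). A hypothetical formatter matching those constraints — the helper name and exact format are invented, not taken from `spark/engine.py`:

```python
from typing import Optional


def format_tool_event(tool_name: str, success: bool = True,
                      duration_ms: Optional[int] = None) -> str:
    """Build an event description like 'Tool git_commit -> OK (120ms)'."""
    status = "OK" if success else "FAIL"
    desc = f"Tool {tool_name} -> {status}"
    if duration_ms is not None:
        desc += f" ({duration_ms}ms)"
    return desc
```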
class TestOnCreativeStep:
    def test_captures_creative_step(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_creative_step(
            project_id="proj-1",
            step_name="storyboard",
            agent_id="pixel-001",
            output_path="/data/images/frame.png",
        )
        assert eid is not None
        events = get_events(event_type="creative_step")
        assert len(events) == 1
        assert "storyboard" in events[0].description

    def test_captures_failed_step(self):
        engine = SparkEngine(enabled=True)
        engine.on_creative_step(
            project_id="proj-1",
            step_name="music",
            agent_id="lyra-001",
            success=False,
        )
        events = get_events(event_type="creative_step")
        assert len(events) == 1
        assert "FAIL" in events[0].description

    def test_disabled_returns_none(self):
        engine = SparkEngine(enabled=False)
        result = engine.on_creative_step("p1", "storyboard", "pixel-001")
        assert result is None

    def test_full_pipeline_events(self):
        engine = SparkEngine(enabled=True)
        steps = ["storyboard", "music", "video", "assembly"]
        agents = ["pixel-001", "lyra-001", "reel-001", "reel-001"]
        for step, agent in zip(steps, agents):
            engine.on_creative_step("proj-1", step, agent)
        assert count_events("creative_step") == 4

class TestSparkStatusIncludesNewTypes:
    def test_status_includes_tool_executed(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("a", "git_commit")
        status = engine.status()
        assert "tool_executed" in status["event_types"]
        assert status["event_types"]["tool_executed"] == 1

    def test_status_includes_creative_step(self):
        engine = SparkEngine(enabled=True)
        engine.on_creative_step("p1", "storyboard", "pixel-001")
        status = engine.status()
        assert "creative_step" in status["event_types"]
        assert status["event_types"]["creative_step"] == 1
@@ -18,9 +18,9 @@ def tmp_swarm_db(tmp_path, monkeypatch):


 # ── personas.py ───────────────────────────────────────────────────────────────

-def test_all_six_personas_defined():
+def test_all_nine_personas_defined():
     from swarm.personas import PERSONAS
-    expected = {"echo", "mace", "helm", "seer", "forge", "quill"}
+    expected = {"echo", "mace", "helm", "seer", "forge", "quill", "pixel", "lyra", "reel"}
     assert expected == set(PERSONAS.keys())


@@ -46,10 +46,10 @@ def test_get_persona_returns_none_for_unknown():
     assert get_persona("bogus") is None


-def test_list_personas_returns_all_six():
+def test_list_personas_returns_all_nine():
     from swarm.personas import list_personas
     personas = list_personas()
-    assert len(personas) == 6
+    assert len(personas) == 9


 def test_persona_capabilities_are_comma_strings():
@@ -179,7 +179,7 @@ def test_coordinator_spawn_all_personas():
     from swarm import registry
     coord = SwarmCoordinator()
     names = []
-    for pid in ["echo", "mace", "helm", "seer", "forge", "quill"]:
+    for pid in ["echo", "mace", "helm", "seer", "forge", "quill", "pixel", "lyra", "reel"]:
         result = coord.spawn_persona(pid)
         names.append(result["name"])
     agents = registry.list_agents()

93
tests/test_video_tools.py
Normal file
@@ -0,0 +1,93 @@
"""Tests for tools.video_tools — Video generation (Reel persona).

Heavy AI model tests are skipped; only catalogue, interface, and
resolution preset tests run in CI.
"""

import pytest
from unittest.mock import patch, MagicMock

from tools.video_tools import (
    VIDEO_TOOL_CATALOG,
    RESOLUTION_PRESETS,
    VIDEO_STYLES,
    list_video_styles,
    generate_video_clip,
    image_to_video,
)


class TestVideoToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {"generate_video_clip", "image_to_video", "list_video_styles"}
        assert expected == set(VIDEO_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in VIDEO_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestResolutionPresets:
    def test_480p_preset(self):
        assert RESOLUTION_PRESETS["480p"] == (854, 480)

    def test_720p_preset(self):
        assert RESOLUTION_PRESETS["720p"] == (1280, 720)


class TestVideoStyles:
    def test_common_styles_present(self):
        for style in ["cinematic", "anime", "documentary"]:
            assert style in VIDEO_STYLES


class TestListVideoStyles:
    def test_returns_styles_and_resolutions(self):
        result = list_video_styles()
        assert result["success"]
        assert "cinematic" in result["styles"]
        assert "480p" in result["resolutions"]
        assert "720p" in result["resolutions"]


class TestGenerateVideoClipInterface:
    def test_raises_without_creative_deps(self):
        with patch("tools.video_tools._t2v_pipeline", None):
            with patch("tools.video_tools._get_t2v_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    generate_video_clip("a sunset")

    def test_duration_clamped(self):
        """Duration is clamped to the 2–10 second range."""
        import sys

        mock_pipe = MagicMock()
        mock_pipe.device = "cpu"
        mock_result = MagicMock()
        mock_result.frames = [[MagicMock() for _ in range(48)]]
        mock_pipe.return_value = mock_result

        mock_torch = MagicMock()
        mock_torch.Generator.return_value = MagicMock()

        out_dir = MagicMock()
        out_dir.__truediv__ = MagicMock(return_value=MagicMock(__str__=lambda s: "/fake/clip.mp4"))

        with patch.dict(sys.modules, {"torch": mock_torch}):
            with patch("tools.video_tools._get_t2v_pipeline", return_value=mock_pipe):
                with patch("tools.video_tools._export_frames_to_mp4"):
                    with patch("tools.video_tools._output_dir", return_value=out_dir):
                        with patch("tools.video_tools._save_metadata"):
                            result = generate_video_clip("test", duration=50)
                            assert result["duration"] == 10  # clamped


class TestImageToVideoInterface:
    def test_raises_without_creative_deps(self):
        with patch("tools.video_tools._t2v_pipeline", None):
            with patch("tools.video_tools._get_t2v_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    image_to_video("/fake/image.png", "animate")
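The duration-clamping behaviour asserted in `test_duration_clamped` reduces to a one-line helper. A minimal sketch, assuming the 2–10 second bounds implied by the test (the real clamp lives inside `generate_video_clip`):

```python
def clamp_duration(duration: float, lo: float = 2.0, hi: float = 10.0) -> float:
    """Clamp a requested clip duration to the supported range."""
    return max(lo, min(duration, hi))
```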
Block a user