forked from Rockachopa/Timmy-time-dashboard
feat: add full creative studio + DevOps tools (Pixel, Lyra, Reel personas)
Adds 3 new personas (Pixel, Lyra, Reel) and 5 new tool modules:

- Git/DevOps tools (GitPython): clone, status, diff, log, blame, branch, add,
  commit, push, pull, stash — wired to Forge and Helm personas
- Image generation (FLUX via diffusers): text-to-image, storyboards,
  variations — Pixel persona
- Music generation (ACE-Step 1.5): full songs with vocals + instrumentals,
  instrumental tracks, vocal-only tracks — Lyra persona
- Video generation (Wan 2.1 via diffusers): text-to-video, image-to-video
  clips — Reel persona
- Creative Director pipeline: multi-step orchestration that chains
  storyboard → music → video → assembly into 3+ minute final videos
- Video assembler (MoviePy + FFmpeg): stitch clips, overlay audio, title
  cards, subtitles, final export

Also includes:

- Spark Intelligence tool-level + creative pipeline event capture
- Creative Studio dashboard page (/creative/ui) with 4 tabs
- Config settings for all new models and output directories
- pyproject.toml creative optional extra for GPU dependencies
- 107 new tests covering all modules (624 total, all passing)

https://claude.ai/code/session_01KJm6jQkNi3aA3yoQJn636c
478  PLAN.md  Normal file
@@ -0,0 +1,478 @@
# Plan: Full Creative & DevOps Capabilities for Timmy

## Overview

Add five major capability domains to Timmy's agent system, turning it into a
sovereign creative studio and full-stack DevOps operator. All tools are
open-source, self-hosted, and GPU-accelerated where needed.

---

## Phase 1: Git & DevOps Tools (Forge + Helm personas)

**Goal:** Timmy can observe local/remote repos, read code, create branches,
stage changes, commit, diff, log, and manage PRs — all through the swarm
task system with Spark event capture.

### New module: `src/tools/git_tools.py`

Tools to add (using **GitPython** — BSD-3, `pip install GitPython`):

| Tool | Function | Persona Access |
|---|---|---|
| `git_clone` | Clone a remote repo to local path | Forge, Helm |
| `git_status` | Show working tree status | Forge, Helm, Timmy |
| `git_diff` | Show staged/unstaged diffs | Forge, Helm, Timmy |
| `git_log` | Show recent commit history | Forge, Helm, Echo, Timmy |
| `git_branch` | List/create/switch branches | Forge, Helm |
| `git_add` | Stage files for commit | Forge, Helm |
| `git_commit` | Create a commit with message | Forge, Helm |
| `git_push` | Push to remote | Forge, Helm |
| `git_pull` | Pull from remote | Forge, Helm |
| `git_blame` | Show line-by-line authorship | Forge, Echo |
| `git_stash` | Stash/pop changes | Forge, Helm |

### Changes to existing files

- **`src/timmy/tools.py`** — Add `create_git_tools()` factory, wire into
  `PERSONA_TOOLKITS` for Forge and Helm
- **`src/swarm/tool_executor.py`** — Enhance `_infer_tools_needed()` with
  git keywords (commit, branch, push, pull, diff, clone, merge)
- **`src/config.py`** — Add `git_default_repo_dir: str = "~/repos"` setting
- **`src/spark/engine.py`** — Add `on_tool_executed()` method to capture
  individual tool invocations (not just task-level events)
- **`src/swarm/personas.py`** — Add git-related keywords to Forge and Helm
  `preferred_keywords`
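The keyword check planned for `_infer_tools_needed()` might look like the following minimal sketch (the keyword set comes from the plan; the standalone function name and body here are assumptions, not the real `tool_executor` code):

```python
# Hypothetical shape of the git-keyword check; the real _infer_tools_needed()
# in src/swarm/tool_executor.py may differ.
GIT_KEYWORDS = {"commit", "branch", "push", "pull", "diff", "clone", "merge"}

def infer_git_tools_needed(task_text: str) -> bool:
    """Return True when a task description mentions any git-related keyword."""
    words = set(task_text.lower().replace(",", " ").split())
    return bool(words & GIT_KEYWORDS)
```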
### New dependency

```toml
# pyproject.toml
dependencies = [
    ...,
    "GitPython>=3.1.40",
]
```

### Dashboard

- **`/tools`** page updated to show git tools in the catalog
- Git tool usage stats visible per agent

### Tests

- `tests/test_git_tools.py` — test all git tool functions against tmp repos
- Mock GitPython's `Repo` class for unit tests
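A minimal sketch of one of these wrappers over GitPython's `Repo` API (the return schema is an assumption; taking the repo object as a parameter is what makes the stub-based unit testing mentioned above practical):

```python
from typing import Any

def git_status(repo: Any) -> dict:
    """Summarize working-tree state.

    `repo` is expected to be a GitPython Repo, or any stub exposing the same
    attributes — which is how the unit tests can exercise it without git.
    """
    return {
        "branch": repo.active_branch.name,
        "dirty": repo.is_dirty(untracked_files=True),
        "untracked": list(repo.untracked_files),
    }
```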
---

## Phase 2: Image Generation (new "Pixel" persona)

**Goal:** Generate storyboard frames and standalone images from text prompts
using FLUX.2 Klein 4B locally.

### New persona: Pixel — Visual Architect

```python
"pixel": {
    "id": "pixel",
    "name": "Pixel",
    "role": "Visual Architect",
    "description": "Image generation, storyboard frames, and visual design.",
    "capabilities": "image-generation,storyboard,design",
    "rate_sats": 80,
    "bid_base": 60,
    "bid_jitter": 20,
    "preferred_keywords": [
        "image", "picture", "photo", "draw", "illustration",
        "storyboard", "frame", "visual", "design", "generate",
        "portrait", "landscape", "scene", "artwork",
    ],
}
```

### New module: `src/tools/image_tools.py`

Tools (using **diffusers** + **FLUX.2 Klein 4B** — Apache 2.0):

| Tool | Function |
|---|---|
| `generate_image` | Text-to-image generation (returns file path) |
| `generate_storyboard` | Generate N frames from scene descriptions |
| `image_variations` | Generate variations of an existing image |

### Architecture

```
generate_image(prompt, width=1024, height=1024, steps=4)
  → loads FLUX.2 Klein via diffusers FluxPipeline
  → saves to data/images/{uuid}.png
  → returns path + metadata
```

- Model loaded lazily on first use, kept in memory for subsequent calls
- Falls back to CPU generation (slower) if no GPU
- Output saved to `data/images/` with metadata JSON sidecar
### New dependency (optional extra)

```toml
[project.optional-dependencies]
creative = [
    "diffusers>=0.30.0",
    "transformers>=4.40.0",
    "accelerate>=0.30.0",
    "torch>=2.2.0",
    "safetensors>=0.4.0",
]
```

### Config

```python
# config.py additions
flux_model_id: str = "black-forest-labs/FLUX.2-klein-4b"
image_output_dir: str = "data/images"
image_default_steps: int = 4
```

### Dashboard

- `/creative/ui` — new Creative Studio page (image gallery + generation form)
- HTMX-powered: submit prompt, poll for result, display inline
- Gallery view of all generated images with metadata

### Tests

- `tests/test_image_tools.py` — mock diffusers pipeline, test prompt handling,
  file output, storyboard generation
---

## Phase 3: Music Generation (new "Lyra" persona)

**Goal:** Generate full songs with vocals, instrumentals, and lyrics using
ACE-Step 1.5 locally.

### New persona: Lyra — Sound Weaver

```python
"lyra": {
    "id": "lyra",
    "name": "Lyra",
    "role": "Sound Weaver",
    "description": "Music and song generation with vocals, instrumentals, and lyrics.",
    "capabilities": "music-generation,vocals,composition",
    "rate_sats": 90,
    "bid_base": 70,
    "bid_jitter": 20,
    "preferred_keywords": [
        "music", "song", "sing", "vocal", "instrumental",
        "melody", "beat", "track", "compose", "lyrics",
        "audio", "sound", "album", "remix",
    ],
}
```

### New module: `src/tools/music_tools.py`

Tools (using **ACE-Step 1.5** — Apache 2.0, `pip install ace-step`):

| Tool | Function |
|---|---|
| `generate_song` | Text/lyrics → full song (vocals + instrumentals) |
| `generate_instrumental` | Text prompt → instrumental track |
| `generate_vocals` | Lyrics + style → vocal track |
| `list_genres` | Return supported genre/style tags |

### Architecture

```
generate_song(lyrics, genre="pop", duration=120, language="en")
  → loads ACE-Step model (lazy, cached)
  → generates audio
  → saves to data/music/{uuid}.wav
  → returns path + metadata (duration, genre, etc.)
```

- Model loaded lazily, ~4GB VRAM minimum
- Output saved to `data/music/` with metadata sidecar
- Supports 19 languages, genre tags, tempo control

### New dependency (optional extra, extends `creative`)

```toml
[project.optional-dependencies]
creative = [
    ...,
    "ace-step>=1.5.0",
]
```

### Config

```python
music_output_dir: str = "data/music"
ace_step_model: str = "ace-step/ACE-Step-v1.5"
```

### Dashboard

- `/creative/ui` expanded with Music tab
- Audio player widget (HTML5 `<audio>` element)
- Lyrics input form with genre/style selector

### Tests

- `tests/test_music_tools.py` — mock ACE-Step model, test generation params
---

## Phase 4: Video Generation (new "Reel" persona)

**Goal:** Generate video clips from text/image prompts using Wan 2.1 locally.

### New persona: Reel — Motion Director

```python
"reel": {
    "id": "reel",
    "name": "Reel",
    "role": "Motion Director",
    "description": "Video generation from text and image prompts.",
    "capabilities": "video-generation,animation,motion",
    "rate_sats": 100,
    "bid_base": 80,
    "bid_jitter": 20,
    "preferred_keywords": [
        "video", "clip", "animate", "motion", "film",
        "scene", "cinematic", "footage", "render", "timelapse",
    ],
}
```

### New module: `src/tools/video_tools.py`

Tools (using **Wan 2.1** via diffusers — Apache 2.0):

| Tool | Function |
|---|---|
| `generate_video_clip` | Text → short video clip (3–6 seconds) |
| `image_to_video` | Image + prompt → animated video from still |
| `list_video_styles` | Return supported style presets |

### Architecture

```
generate_video_clip(prompt, duration=5, resolution="480p", fps=24)
  → loads Wan 2.1 via diffusers pipeline (lazy, cached)
  → generates frames
  → encodes to MP4 via FFmpeg
  → saves to data/video/{uuid}.mp4
  → returns path + metadata
```

- Wan 2.1 1.3B model: ~16GB VRAM
- Output saved to `data/video/`
- Resolution options: 480p (16GB), 720p (24GB+)
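The FFmpeg encode step might be wrapped like this (a hypothetical helper, not the real module; `-pix_fmt yuv420p` is the usual flag for broadly playable H.264, but the actual implementation may choose differently):

```python
import subprocess
from pathlib import Path

def build_ffmpeg_cmd(frames_pattern: str, out_path: str, fps: int = 24) -> list[str]:
    """Build the ffmpeg argv that turns numbered PNG frames into an MP4."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frames_pattern,       # e.g. .../frames/frame_%04d.png
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",      # widest player compatibility
        out_path,
    ]

def encode_frames(frames_dir: str, out_path: str, fps: int = 24) -> str:
    """Encode frame_0001.png, frame_0002.png, ... into an H.264 MP4."""
    pattern = str(Path(frames_dir) / "frame_%04d.png")
    subprocess.run(build_ffmpeg_cmd(pattern, out_path, fps), check=True)
    return out_path
```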
### New dependency (extends `creative` extra)

```toml
creative = [
    ...,
    # Wan 2.1 uses diffusers (already listed) + model weights downloaded on first use
]
```

### Config

```python
video_output_dir: str = "data/video"
wan_model_id: str = "Wan-AI/Wan2.1-T2V-1.3B"
video_default_resolution: str = "480p"
```

### Tests

- `tests/test_video_tools.py` — mock diffusers pipeline, test clip generation
---

## Phase 5: Creative Director — Storyboard & Assembly Pipeline

**Goal:** Orchestrate multi-persona workflows to produce 3+ minute creative
videos with music, narration, and stitched scenes.

### New module: `src/creative/director.py`

The Creative Director is a **multi-step pipeline** that coordinates Pixel,
Lyra, and Reel to produce complete creative works:

```
User: "Create a 3-minute music video about a sunrise over mountains"
                          │
                  Creative Director
              ┌───────────┼───────────┐
              │           │           │
       1. STORYBOARD   2. MUSIC   3. GENERATE
          (Pixel)       (Lyra)      (Reel)
              │           │           │
          N scene      Full song  N video clips
        descriptions     with     from storyboard
        + keyframes     vocals       frames
              │           │           │
              └───────────┼───────────┘
                          │
                     4. ASSEMBLE
                  (MoviePy + FFmpeg)
                          │
                   Final video with
                  music, transitions,
                        titles
```

### Pipeline steps

1. **Script** — Timmy (or Quill) writes scene descriptions and lyrics
2. **Storyboard** — Pixel generates keyframe images for each scene
3. **Music** — Lyra generates the soundtrack (vocals + instrumentals)
4. **Video clips** — Reel generates video for each scene (image-to-video
   from storyboard frames, or text-to-video from descriptions)
5. **Assembly** — MoviePy stitches clips together with cross-fades,
   overlays the music track, adds title cards
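Stripped of persona dispatch and persistence, the five steps reduce to a fold over project state; a self-contained simplification (the names here are illustrative, not the director.py API):

```python
from typing import Callable

Step = tuple[str, Callable[[dict], dict]]

def run_pipeline(project: dict, steps: list[Step]) -> dict:
    """Run named pipeline steps in order.

    Each step takes and returns the project dict; the status field tracks
    which stage the pipeline is currently in.
    """
    for name, fn in steps:
        project["status"] = name
        project = fn(project)
    project["status"] = "complete"
    return project
```

Each stage reads its inputs from the project dict produced by the previous stage, which is how storyboard frames feed image-to-video generation.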
### New module: `src/creative/assembler.py`

Video assembly engine (using **MoviePy** — MIT, `pip install moviepy`):

| Function | Purpose |
|---|---|
| `stitch_clips` | Concatenate video clips with transitions |
| `overlay_audio` | Mix music track onto video |
| `add_title_card` | Prepend/append title/credits |
| `add_subtitles` | Burn lyrics/captions onto video |
| `export_final` | Encode final video (H.264 + AAC) |

### New dependency

```toml
dependencies = [
    ...,
    "moviepy>=2.0.0",
]
```

### Config

```python
creative_output_dir: str = "data/creative"
video_transition_duration: float = 1.0  # seconds
default_video_codec: str = "libx264"
```

### Dashboard

- `/creative/ui` — Full Creative Studio with tabs:
  - **Images** — gallery + generation form
  - **Music** — player + generation form
  - **Video** — player + generation form
  - **Director** — multi-step pipeline builder with storyboard view
- `/creative/projects` — saved projects with all assets
- `/creative/projects/{id}` — project detail with timeline view

### Tests

- `tests/test_assembler.py` — test stitching, audio overlay, title cards
- `tests/test_director.py` — test pipeline orchestration with mocks
---

## Phase 6: Spark Integration for All New Tools

**Goal:** Every tool invocation and creative pipeline step gets captured by
Spark Intelligence for learning and advisory.

### Changes to `src/spark/engine.py`

```python
def on_tool_executed(
    self, agent_id: str, tool_name: str,
    task_id: Optional[str], success: bool,
    duration_ms: Optional[int] = None,
) -> Optional[str]:
    """Capture individual tool invocations."""

def on_creative_step(
    self, project_id: str, step_name: str,
    agent_id: str, output_path: Optional[str],
) -> Optional[str]:
    """Capture creative pipeline progress."""
```
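One plausible wiring for per-tool capture is a decorator applied to each tool function, assuming `on_tool_executed()` keeps the signature above (the decorator itself is hypothetical, not part of the commit):

```python
import time
from functools import wraps

def capture_tool(spark, agent_id: str):
    """Wrap a tool function so every invocation reports to Spark."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            success = False
            try:
                result = fn(*args, **kwargs)
                success = True
                return result
            finally:
                # Runs on both success and exception paths.
                spark.on_tool_executed(
                    agent_id=agent_id,
                    tool_name=fn.__name__,
                    task_id=kwargs.get("task_id"),
                    success=success,
                    duration_ms=int((time.monotonic() - start) * 1000),
                )
        return wrapper
    return decorator
```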
### New advisor patterns

- "Pixel generates storyboards 40% faster than individual image calls"
- "Lyra's pop genre tracks have 85% higher completion rate than jazz"
- "Video generation on 480p uses 60% less GPU time than 720p for similar quality"
- "Git commits from Forge average 3 files per commit"

---

## Implementation Order

| Phase | What | New Files | Est. Tests |
|---|---|---|---|
| 1 | Git/DevOps tools | 2 source + 1 test | ~25 |
| 2 | Image generation | 2 source + 1 test + 1 template | ~15 |
| 3 | Music generation | 1 source + 1 test | ~12 |
| 4 | Video generation | 1 source + 1 test | ~12 |
| 5 | Creative Director pipeline | 2 source + 2 tests + 1 template | ~20 |
| 6 | Spark tool-level capture | 1 modified + 1 test update | ~8 |

**Total: ~10 new source files, ~6 new test files, ~92 new tests**

---

## New Dependencies Summary

**Required (always installed):**

```
GitPython>=3.1.40
moviepy>=2.0.0
```

**Optional `creative` extra (GPU features):**

```
diffusers>=0.30.0
transformers>=4.40.0
accelerate>=0.30.0
torch>=2.2.0
safetensors>=0.4.0
ace-step>=1.5.0
```

**Install:** `pip install ".[creative]"` for the full creative stack

---

## New Persona Summary

| ID | Name | Role | Tools |
|---|---|---|---|
| pixel | Pixel | Visual Architect | generate_image, generate_storyboard, image_variations |
| lyra | Lyra | Sound Weaver | generate_song, generate_instrumental, generate_vocals |
| reel | Reel | Motion Director | generate_video_clip, image_to_video |

These join the existing 6 personas (Echo, Mace, Helm, Seer, Forge, Quill)
for a total of **9 specialized agents** in the swarm.

---

## Hardware Requirements

- **CPU only:** Git tools, MoviePy assembly, all tests (mocked)
- **8GB VRAM:** FLUX.2 Klein 4B (images)
- **4GB VRAM:** ACE-Step 1.5 (music)
- **16GB VRAM:** Wan 2.1 1.3B (video at 480p)
- **Recommended:** RTX 4090 24GB runs the entire stack comfortably
pyproject.toml
@@ -23,6 +23,8 @@ dependencies = [
    "rich>=13.0.0",
    "pydantic-settings>=2.0.0",
    "websockets>=12.0",
    "GitPython>=3.1.40",
    "moviepy>=2.0.0",
]

[project.optional-dependencies]
@@ -52,6 +54,16 @@ voice = [
telegram = [
    "python-telegram-bot>=21.0",
]
# Creative: GPU-accelerated image, music, and video generation.
# pip install ".[creative]"
creative = [
    "diffusers>=0.30.0",
    "transformers>=4.40.0",
    "accelerate>=0.30.0",
    "torch>=2.2.0",
    "safetensors>=0.4.0",
    "ace-step>=1.5.0",
]

[project.scripts]
timmy = "timmy.cli:main"
@@ -73,6 +85,8 @@ include = [
    "src/shortcuts",
    "src/telegram_bot",
    "src/spark",
    "src/tools",
    "src/creative",
]

[tool.pytest.ini_options]
src/config.py
@@ -34,6 +34,28 @@ class Settings(BaseSettings):
    # consolidates memories, and generates advisory recommendations.
    spark_enabled: bool = True

    # ── Git / DevOps ──────────────────────────────────────────────────────
    git_default_repo_dir: str = "~/repos"

    # ── Creative — Image Generation (Pixel) ───────────────────────────────
    flux_model_id: str = "black-forest-labs/FLUX.1-schnell"
    image_output_dir: str = "data/images"
    image_default_steps: int = 4

    # ── Creative — Music Generation (Lyra) ────────────────────────────────
    music_output_dir: str = "data/music"
    ace_step_model: str = "ace-step/ACE-Step-v1.5"

    # ── Creative — Video Generation (Reel) ────────────────────────────────
    video_output_dir: str = "data/video"
    wan_model_id: str = "Wan-AI/Wan2.1-T2V-1.3B"
    video_default_resolution: str = "480p"

    # ── Creative — Pipeline / Assembly ────────────────────────────────────
    creative_output_dir: str = "data/creative"
    video_transition_duration: float = 1.0
    default_video_codec: str = "libx264"

    model_config = SettingsConfigDict(
        env_file=".env",
        env_file_encoding="utf-8",
1  src/creative/__init__.py  Normal file
@@ -0,0 +1 @@
"""Creative pipeline — orchestrates image, music, and video generation."""

300  src/creative/assembler.py  Normal file
@@ -0,0 +1,300 @@
"""Video assembly engine — stitch clips, overlay audio, add titles.
|
||||
|
||||
Uses MoviePy + FFmpeg to combine generated video clips, music tracks,
|
||||
and title cards into 3+ minute final videos.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import uuid
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
_MOVIEPY_AVAILABLE = True
|
||||
try:
|
||||
from moviepy import (
|
||||
VideoFileClip,
|
||||
AudioFileClip,
|
||||
TextClip,
|
||||
CompositeVideoClip,
|
||||
ImageClip,
|
||||
concatenate_videoclips,
|
||||
)
|
||||
except ImportError:
|
||||
_MOVIEPY_AVAILABLE = False
|
||||
|
||||
|
||||
def _require_moviepy() -> None:
|
||||
if not _MOVIEPY_AVAILABLE:
|
||||
raise ImportError(
|
||||
"MoviePy is not installed. Run: pip install moviepy"
|
||||
)
|
||||
|
||||
|
||||
def _output_dir() -> Path:
|
||||
from config import settings
|
||||
d = Path(getattr(settings, "creative_output_dir", "data/creative"))
|
||||
d.mkdir(parents=True, exist_ok=True)
|
||||
return d
|
||||
|
||||
|
||||
# ── Stitching ─────────────────────────────────────────────────────────────────
|
||||
|
||||
def stitch_clips(
|
||||
clip_paths: list[str],
|
||||
transition_duration: float = 1.0,
|
||||
output_path: Optional[str] = None,
|
||||
) -> dict:
|
||||
"""Concatenate video clips with cross-fade transitions.
|
||||
|
||||
Args:
|
||||
clip_paths: Ordered list of MP4 file paths.
|
||||
transition_duration: Cross-fade duration in seconds.
|
||||
output_path: Optional output path. Auto-generated if omitted.
|
||||
|
||||
Returns dict with ``path`` and ``total_duration``.
|
||||
"""
|
||||
_require_moviepy()
|
||||
|
||||
clips = [VideoFileClip(p) for p in clip_paths]
|
||||
|
||||
# Apply cross-fade between consecutive clips
|
||||
if transition_duration > 0 and len(clips) > 1:
|
||||
processed = [clips[0]]
|
||||
for clip in clips[1:]:
|
||||
clip = clip.with_start(
|
||||
processed[-1].end - transition_duration
|
||||
).crossfadein(transition_duration)
|
||||
processed.append(clip)
|
||||
final = CompositeVideoClip(processed)
|
||||
else:
|
||||
final = concatenate_videoclips(clips, method="compose")
|
||||
|
||||
uid = uuid.uuid4().hex[:12]
|
||||
out = Path(output_path) if output_path else _output_dir() / f"stitched_{uid}.mp4"
|
||||
final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)
|
||||
|
||||
total_duration = final.duration
|
||||
# Clean up
|
||||
for c in clips:
|
||||
c.close()
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"path": str(out),
|
||||
"total_duration": total_duration,
|
||||
"clip_count": len(clip_paths),
|
||||
}
|
||||
|
||||
|
||||
# ── Audio overlay ─────────────────────────────────────────────────────────────

def overlay_audio(
    video_path: str,
    audio_path: str,
    output_path: Optional[str] = None,
    volume: float = 1.0,
) -> dict:
    """Mix an audio track onto a video file.

    The audio is trimmed or looped to match the video duration.
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    audio = AudioFileClip(audio_path)

    # Trim audio to video length
    if audio.duration > video.duration:
        audio = audio.subclipped(0, video.duration)

    if volume != 1.0:
        audio = audio.with_volume_scaled(volume)

    video = video.with_audio(audio)

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"mixed_{uid}.mp4"
    video.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = video.duration
    video.close()
    audio.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
    }


# ── Title cards ───────────────────────────────────────────────────────────────

def add_title_card(
    video_path: str,
    title: str,
    subtitle: str = "",
    duration: float = 4.0,
    position: str = "start",
    output_path: Optional[str] = None,
) -> dict:
    """Add a title card at the start or end of a video.

    Args:
        video_path: Source video path.
        title: Title text.
        subtitle: Optional subtitle text.
        duration: Title card display duration in seconds.
        position: "start" or "end".
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    w, h = video.size

    # Build title card as a text clip on black background
    txt = TextClip(
        text=title,
        font_size=60,
        color="white",
        size=(w, h),
        method="caption",
        font="Arial",
    ).with_duration(duration)

    clips = [txt, video] if position == "start" else [video, txt]
    final = concatenate_videoclips(clips, method="compose")

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"titled_{uid}.mp4"
    final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = final.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "title": title,
    }


# ── Subtitles / captions ─────────────────────────────────────────────────────

def add_subtitles(
    video_path: str,
    captions: list[dict],
    output_path: Optional[str] = None,
) -> dict:
    """Burn subtitle captions onto a video.

    Args:
        captions: List of dicts with ``text``, ``start``, ``end`` keys
            (times in seconds).
    """
    _require_moviepy()

    video = VideoFileClip(video_path)
    w, h = video.size

    text_clips = []
    for cap in captions:
        txt = (
            TextClip(
                text=cap["text"],
                font_size=36,
                color="white",
                stroke_color="black",
                stroke_width=2,
                size=(w - 40, None),
                method="caption",
                font="Arial",
            )
            .with_start(cap["start"])
            .with_end(cap["end"])
            .with_position(("center", h - 100))
        )
        text_clips.append(txt)

    final = CompositeVideoClip([video] + text_clips)

    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"subtitled_{uid}.mp4"
    final.write_videofile(str(out), codec="libx264", audio_codec="aac", logger=None)

    result_duration = final.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "caption_count": len(captions),
    }


# ── Final export helper ──────────────────────────────────────────────────────

def export_final(
    video_path: str,
    output_path: Optional[str] = None,
    codec: str = "libx264",
    audio_codec: str = "aac",
    bitrate: str = "8000k",
) -> dict:
    """Re-encode a video with specific codec settings for distribution."""
    _require_moviepy()

    video = VideoFileClip(video_path)
    uid = uuid.uuid4().hex[:12]
    out = Path(output_path) if output_path else _output_dir() / f"final_{uid}.mp4"
    video.write_videofile(
        str(out), codec=codec, audio_codec=audio_codec,
        bitrate=bitrate, logger=None,
    )

    result_duration = video.duration
    video.close()

    return {
        "success": True,
        "path": str(out),
        "duration": result_duration,
        "codec": codec,
    }


# ── Tool catalogue ────────────────────────────────────────────────────────────

ASSEMBLER_TOOL_CATALOG: dict[str, dict] = {
    "stitch_clips": {
        "name": "Stitch Clips",
        "description": "Concatenate video clips with cross-fade transitions",
        "fn": stitch_clips,
    },
    "overlay_audio": {
        "name": "Overlay Audio",
        "description": "Mix a music track onto a video",
        "fn": overlay_audio,
    },
    "add_title_card": {
        "name": "Add Title Card",
        "description": "Add a title card at the start or end of a video",
        "fn": add_title_card,
    },
    "add_subtitles": {
        "name": "Add Subtitles",
        "description": "Burn subtitle captions onto a video",
        "fn": add_subtitles,
    },
    "export_final": {
        "name": "Export Final",
        "description": "Re-encode video with specific codec settings",
        "fn": export_final,
    },
}
378  src/creative/director.py  Normal file
@@ -0,0 +1,378 @@
"""Creative Director — multi-persona pipeline for 3+ minute creative works.
|
||||
|
||||
Orchestrates Pixel (images), Lyra (music), and Reel (video) to produce
|
||||
complete music videos, cinematic shorts, and other creative works.
|
||||
|
||||
Pipeline stages:
|
||||
1. Script — Generate scene descriptions and lyrics
|
||||
2. Storyboard — Generate keyframe images (Pixel)
|
||||
3. Music — Generate soundtrack (Lyra)
|
||||
4. Video — Generate clips per scene (Reel)
|
||||
5. Assembly — Stitch clips + overlay audio (MoviePy)
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import logging
|
||||
import uuid
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class CreativeProject:
|
||||
"""Tracks all assets and state for a creative production."""
|
||||
id: str = field(default_factory=lambda: uuid.uuid4().hex[:12])
|
||||
title: str = ""
|
||||
description: str = ""
|
||||
created_at: str = field(
|
||||
default_factory=lambda: datetime.now(timezone.utc).isoformat()
|
||||
)
|
||||
status: str = "planning" # planning|scripting|storyboard|music|video|assembly|complete|failed
|
||||
|
||||
# Pipeline outputs
|
||||
scenes: list[dict] = field(default_factory=list)
|
||||
lyrics: str = ""
|
||||
storyboard_frames: list[dict] = field(default_factory=list)
|
||||
music_track: Optional[dict] = None
|
||||
video_clips: list[dict] = field(default_factory=list)
|
||||
final_video: Optional[dict] = None
|
||||
|
||||
def to_dict(self) -> dict:
|
||||
return {
|
||||
"id": self.id, "title": self.title,
|
||||
"description": self.description,
|
||||
"created_at": self.created_at, "status": self.status,
|
||||
"scene_count": len(self.scenes),
|
||||
"has_storyboard": len(self.storyboard_frames) > 0,
|
||||
"has_music": self.music_track is not None,
|
||||
"clip_count": len(self.video_clips),
|
||||
"has_final": self.final_video is not None,
|
||||
}
|
||||
|
||||
|
||||
# In-memory project store
|
||||
_projects: dict[str, CreativeProject] = {}
|
||||
|
||||
|
||||
def _project_dir(project_id: str) -> Path:
|
||||
from config import settings
|
||||
d = Path(getattr(settings, "creative_output_dir", "data/creative")) / project_id
|
||||
d.mkdir(parents=True, exist_ok=True)
|
||||
return d
|
||||
|
||||
|
||||
def _save_project(project: CreativeProject) -> None:
|
||||
"""Persist project metadata to disk."""
|
||||
path = _project_dir(project.id) / "project.json"
|
||||
path.write_text(json.dumps(project.to_dict(), indent=2))
|
||||
|
||||
|
||||
# ── Project management ────────────────────────────────────────────────────────
|
||||
|
||||
def create_project(
    title: str,
    description: str,
    scenes: Optional[list[dict]] = None,
    lyrics: str = "",
) -> dict:
    """Create a new creative project.

    Args:
        title: Project title.
        description: High-level creative brief.
        scenes: Optional pre-written scene descriptions.
            Each scene is a dict with a ``description`` key.
        lyrics: Optional song lyrics for the soundtrack.

    Returns dict with project metadata.
    """
    project = CreativeProject(
        title=title,
        description=description,
        scenes=scenes or [],
        lyrics=lyrics,
    )
    _projects[project.id] = project
    _save_project(project)
    logger.info("Creative project created: %s (%s)", project.id, title)
    return {"success": True, "project": project.to_dict()}


def get_project(project_id: str) -> Optional[dict]:
    """Get project metadata."""
    project = _projects.get(project_id)
    if project:
        return project.to_dict()
    return None


def list_projects() -> list[dict]:
    """List all creative projects."""
    return [p.to_dict() for p in _projects.values()]


# ── Pipeline steps ────────────────────────────────────────────────────────────
def run_storyboard(project_id: str) -> dict:
    """Generate storyboard frames for all scenes in a project.

    Calls Pixel's generate_storyboard tool.
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}
    if not project.scenes:
        return {"success": False, "error": "No scenes defined"}

    project.status = "storyboard"

    from tools.image_tools import generate_storyboard

    scene_descriptions = [s["description"] for s in project.scenes]
    result = generate_storyboard(scene_descriptions)

    if result["success"]:
        project.storyboard_frames = result["frames"]
        _save_project(project)

    return result


def run_music(
    project_id: str,
    genre: str = "pop",
    duration: Optional[int] = None,
) -> dict:
    """Generate the soundtrack for a project.

    Calls Lyra's generate_song tool.
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}

    project.status = "music"

    from tools.music_tools import generate_song

    # Default duration: ~15 s per scene, minimum 60 s
    target_duration = duration or max(60, len(project.scenes) * 15)

    result = generate_song(
        lyrics=project.lyrics,
        genre=genre,
        duration=target_duration,
        title=project.title,
    )

    if result["success"]:
        project.music_track = result
        _save_project(project)

    return result
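The default-duration heuristic in `run_music` can be checked in isolation. The `default_duration` helper below is a hypothetical stand-in that mirrors the `max(60, scenes * 15)` expression:

```python
# ~15 s of soundtrack per scene, never shorter than 60 s.
def default_duration(scene_count: int) -> int:
    return max(60, scene_count * 15)

print(default_duration(2))   # → 60 (the 60 s floor applies)
print(default_duration(10))  # → 150
```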
def run_video_generation(project_id: str) -> dict:
    """Generate video clips for each scene.

    Uses storyboard frames (image-to-video) if available,
    otherwise falls back to text-to-video.
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}
    if not project.scenes:
        return {"success": False, "error": "No scenes defined"}

    project.status = "video"

    from tools.video_tools import generate_video_clip, image_to_video

    clips = []
    for i, scene in enumerate(project.scenes):
        desc = scene["description"]

        # Prefer image-to-video when a storyboard frame exists
        if i < len(project.storyboard_frames):
            frame = project.storyboard_frames[i]
            result = image_to_video(
                image_path=frame["path"],
                prompt=desc,
                duration=scene.get("duration", 5),
            )
        else:
            result = generate_video_clip(
                prompt=desc,
                duration=scene.get("duration", 5),
            )

        result["scene_index"] = i
        clips.append(result)

    project.video_clips = clips
    _save_project(project)

    return {
        "success": True,
        "clip_count": len(clips),
        "clips": clips,
    }
def run_assembly(project_id: str, transition_duration: float = 1.0) -> dict:
    """Assemble all clips into the final video with music.

    Pipeline:
    1. Stitch clips with transitions
    2. Overlay the music track
    3. Add a title card
    """
    project = _projects.get(project_id)
    if not project:
        return {"success": False, "error": "Project not found"}
    if not project.video_clips:
        return {"success": False, "error": "No video clips generated"}

    project.status = "assembly"

    from creative.assembler import stitch_clips, overlay_audio, add_title_card

    # 1. Stitch clips
    clip_paths = [c["path"] for c in project.video_clips if c.get("success")]
    if not clip_paths:
        return {"success": False, "error": "No successful clips to assemble"}

    stitched = stitch_clips(clip_paths, transition_duration=transition_duration)
    if not stitched["success"]:
        return stitched

    # 2. Overlay music (if available)
    current_video = stitched["path"]
    if project.music_track and project.music_track.get("path"):
        mixed = overlay_audio(current_video, project.music_track["path"])
        if mixed["success"]:
            current_video = mixed["path"]

    # 3. Add title card
    titled = add_title_card(current_video, title=project.title)
    if titled["success"]:
        current_video = titled["path"]

    project.final_video = {
        "path": current_video,
        "duration": titled.get("duration", stitched["total_duration"]),
    }
    project.status = "complete"
    _save_project(project)

    return {
        "success": True,
        "path": current_video,
        "duration": project.final_video["duration"],
        "project_id": project_id,
    }
def run_full_pipeline(
    title: str,
    description: str,
    scenes: list[dict],
    lyrics: str = "",
    genre: str = "pop",
) -> dict:
    """Run the entire creative pipeline end-to-end.

    This is the top-level orchestration function that:
    1. Creates the project
    2. Generates storyboard frames
    3. Generates music
    4. Generates video clips
    5. Assembles the final video

    Args:
        title: Project title.
        description: Creative brief.
        scenes: List of scene dicts with ``description`` keys.
        lyrics: Song lyrics for the soundtrack.
        genre: Music genre.

    Returns dict with final video path and project metadata.
    """
    # Create project
    project_result = create_project(title, description, scenes, lyrics)
    if not project_result["success"]:
        return project_result
    project_id = project_result["project"]["id"]

    # Run pipeline steps
    steps = [
        ("storyboard", lambda: run_storyboard(project_id)),
        ("music", lambda: run_music(project_id, genre=genre)),
        ("video", lambda: run_video_generation(project_id)),
        ("assembly", lambda: run_assembly(project_id)),
    ]

    for step_name, step_fn in steps:
        logger.info("Creative pipeline step: %s (project %s)", step_name, project_id)
        result = step_fn()
        if not result.get("success"):
            project = _projects.get(project_id)
            if project:
                project.status = "failed"
                _save_project(project)
            return {
                "success": False,
                "failed_step": step_name,
                "error": result.get("error", "Unknown error"),
                "project_id": project_id,
            }

    project = _projects.get(project_id)
    return {
        "success": True,
        "project_id": project_id,
        "final_video": project.final_video if project else None,
        "project": project.to_dict() if project else None,
    }


# ── Tool catalogue ────────────────────────────────────────────────────────────
DIRECTOR_TOOL_CATALOG: dict[str, dict] = {
    "create_project": {
        "name": "Create Creative Project",
        "description": "Create a new creative production project",
        "fn": create_project,
    },
    "run_storyboard": {
        "name": "Generate Storyboard",
        "description": "Generate keyframe images for all project scenes",
        "fn": run_storyboard,
    },
    "run_music": {
        "name": "Generate Music",
        "description": "Generate the project soundtrack with vocals and instrumentals",
        "fn": run_music,
    },
    "run_video_generation": {
        "name": "Generate Video Clips",
        "description": "Generate video clips for each project scene",
        "fn": run_video_generation,
    },
    "run_assembly": {
        "name": "Assemble Final Video",
        "description": "Stitch clips, overlay music, and add title cards",
        "fn": run_assembly,
    },
    "run_full_pipeline": {
        "name": "Run Full Pipeline",
        "description": "Execute entire creative pipeline end-to-end",
        "fn": run_full_pipeline,
    },
}
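The fail-fast step loop used by `run_full_pipeline` can be sketched on its own, with stub step functions standing in for the real storyboard/music/video/assembly tools (the `run_steps` helper and the stub results are hypothetical):

```python
def run_steps(steps):
    # Execute steps in order; stop at the first failure, as run_full_pipeline does.
    for name, fn in steps:
        result = fn()
        if not result.get("success"):
            return {"success": False, "failed_step": name,
                    "error": result.get("error", "Unknown error")}
    return {"success": True}

steps = [
    ("storyboard", lambda: {"success": True}),
    ("music", lambda: {"success": True}),
    ("video", lambda: {"success": False, "error": "out of VRAM"}),
    ("assembly", lambda: {"success": True}),  # never reached
]
print(run_steps(steps))  # → {'success': False, 'failed_step': 'video', 'error': 'out of VRAM'}
```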
@@ -24,6 +24,7 @@ from dashboard.routes.telegram import router as telegram_router
 from dashboard.routes.swarm_internal import router as swarm_internal_router
 from dashboard.routes.tools import router as tools_router
 from dashboard.routes.spark import router as spark_router
+from dashboard.routes.creative import router as creative_router

 logging.basicConfig(
     level=logging.INFO,
@@ -143,6 +144,7 @@ app.include_router(telegram_router)
 app.include_router(swarm_internal_router)
 app.include_router(tools_router)
 app.include_router(spark_router)
+app.include_router(creative_router)


 @app.get("/", response_class=HTMLResponse)
src/dashboard/routes/creative.py (new file, 87 lines)
@@ -0,0 +1,87 @@
"""Creative Studio dashboard route — /creative endpoints.
|
||||
|
||||
Provides a dashboard page for the creative pipeline: image generation,
|
||||
music generation, video generation, and the full director pipeline.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
from fastapi import APIRouter, Request
|
||||
from fastapi.responses import HTMLResponse
|
||||
from fastapi.templating import Jinja2Templates
|
||||
|
||||
router = APIRouter(tags=["creative"])
|
||||
templates = Jinja2Templates(directory=str(Path(__file__).parent.parent / "templates"))
|
||||
|
||||
|
||||
@router.get("/creative/ui", response_class=HTMLResponse)
|
||||
async def creative_studio(request: Request):
|
||||
"""Render the Creative Studio page."""
|
||||
# Collect existing outputs
|
||||
image_dir = Path("data/images")
|
||||
music_dir = Path("data/music")
|
||||
video_dir = Path("data/video")
|
||||
creative_dir = Path("data/creative")
|
||||
|
||||
images = sorted(image_dir.glob("*.png"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if image_dir.exists() else []
|
||||
music_files = sorted(music_dir.glob("*.wav"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if music_dir.exists() else []
|
||||
videos = sorted(video_dir.glob("*.mp4"), key=lambda p: p.stat().st_mtime, reverse=True)[:20] if video_dir.exists() else []
|
||||
|
||||
# Load projects
|
||||
projects = []
|
||||
if creative_dir.exists():
|
||||
for proj_dir in sorted(creative_dir.iterdir(), reverse=True):
|
||||
meta_path = proj_dir / "project.json"
|
||||
if meta_path.exists():
|
||||
import json
|
||||
projects.append(json.loads(meta_path.read_text()))
|
||||
|
||||
return templates.TemplateResponse(
|
||||
request,
|
||||
"creative.html",
|
||||
{
|
||||
"page_title": "Creative Studio",
|
||||
"images": [{"name": p.name, "path": str(p)} for p in images],
|
||||
"music_files": [{"name": p.name, "path": str(p)} for p in music_files],
|
||||
"videos": [{"name": p.name, "path": str(p)} for p in videos],
|
||||
"projects": projects[:10],
|
||||
"image_count": len(images),
|
||||
"music_count": len(music_files),
|
||||
"video_count": len(videos),
|
||||
"project_count": len(projects),
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
@router.get("/creative/api/projects")
|
||||
async def creative_projects_api():
|
||||
"""Return creative projects as JSON."""
|
||||
try:
|
||||
from creative.director import list_projects
|
||||
return {"projects": list_projects()}
|
||||
except ImportError:
|
||||
return {"projects": []}
|
||||
|
||||
|
||||
@router.get("/creative/api/genres")
|
||||
async def creative_genres_api():
|
||||
"""Return supported music genres."""
|
||||
try:
|
||||
from tools.music_tools import GENRES
|
||||
return {"genres": GENRES}
|
||||
except ImportError:
|
||||
return {"genres": []}
|
||||
|
||||
|
||||
@router.get("/creative/api/video-styles")
|
||||
async def creative_video_styles_api():
|
||||
"""Return supported video styles and resolutions."""
|
||||
try:
|
||||
from tools.video_tools import VIDEO_STYLES, RESOLUTION_PRESETS
|
||||
return {
|
||||
"styles": VIDEO_STYLES,
|
||||
"resolutions": list(RESOLUTION_PRESETS.keys()),
|
||||
}
|
||||
except ImportError:
|
||||
return {"styles": [], "resolutions": []}
|
||||
@@ -26,6 +26,7 @@
             <a href="/spark/ui" class="mc-test-link">SPARK</a>
             <a href="/marketplace/ui" class="mc-test-link">MARKET</a>
             <a href="/tools" class="mc-test-link">TOOLS</a>
+            <a href="/creative/ui" class="mc-test-link">CREATIVE</a>
             <a href="/mobile" class="mc-test-link">MOBILE</a>
             <button id="enable-notifications" class="mc-test-link" style="background:none;border:none;cursor:pointer;" title="Enable notifications">🔔</button>
             <span class="mc-time" id="clock"></span>
src/dashboard/templates/creative.html (new file, 198 lines)
@@ -0,0 +1,198 @@
{% extends "base.html" %}
|
||||
|
||||
{% block title %}Creative Studio — Mission Control{% endblock %}
|
||||
|
||||
{% block content %}
|
||||
<div class="container-fluid py-4">
|
||||
<div class="row mb-4">
|
||||
<div class="col">
|
||||
<h1 class="display-6">Creative Studio</h1>
|
||||
<p class="text-secondary">Image, music, and video generation — powered by Pixel, Lyra, and Reel</p>
|
||||
</div>
|
||||
<div class="col-auto d-flex gap-3">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-body text-center py-2 px-3">
|
||||
<h4 class="mb-0">{{ image_count }}</h4>
|
||||
<small class="text-secondary">Images</small>
|
||||
</div>
|
||||
</div>
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-body text-center py-2 px-3">
|
||||
<h4 class="mb-0">{{ music_count }}</h4>
|
||||
<small class="text-secondary">Tracks</small>
|
||||
</div>
|
||||
</div>
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-body text-center py-2 px-3">
|
||||
<h4 class="mb-0">{{ video_count }}</h4>
|
||||
<small class="text-secondary">Clips</small>
|
||||
</div>
|
||||
</div>
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-body text-center py-2 px-3">
|
||||
<h4 class="mb-0">{{ project_count }}</h4>
|
||||
<small class="text-secondary">Projects</small>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Tab Navigation -->
|
||||
<ul class="nav nav-tabs mb-4" role="tablist">
|
||||
<li class="nav-item">
|
||||
<button class="nav-link active" data-bs-toggle="tab" data-bs-target="#tab-images" type="button">Images</button>
|
||||
</li>
|
||||
<li class="nav-item">
|
||||
<button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-music" type="button">Music</button>
|
||||
</li>
|
||||
<li class="nav-item">
|
||||
<button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-video" type="button">Video</button>
|
||||
</li>
|
||||
<li class="nav-item">
|
||||
<button class="nav-link" data-bs-toggle="tab" data-bs-target="#tab-director" type="button">Director</button>
|
||||
</li>
|
||||
</ul>
|
||||
|
||||
<div class="tab-content">
|
||||
<!-- Images Tab -->
|
||||
<div class="tab-pane fade show active" id="tab-images" role="tabpanel">
|
||||
<div class="row mb-3">
|
||||
<div class="col-12">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-header">
|
||||
<strong>Pixel</strong> — Visual Architect (FLUX)
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<p class="text-secondary small mb-2">Generate images by sending a task to the swarm: <code>"Generate an image of ..."</code></p>
|
||||
<p class="text-secondary small">Tools: <span class="badge bg-primary">generate_image</span> <span class="badge bg-primary">generate_storyboard</span> <span class="badge bg-primary">image_variations</span></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% if images %}
|
||||
<div class="row g-3">
|
||||
{% for img in images %}
|
||||
<div class="col-md-3">
|
||||
<div class="card bg-dark border-secondary h-100">
|
||||
<div class="card-body text-center">
|
||||
<small class="text-secondary">{{ img.name }}</small>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
{% else %}
|
||||
<div class="alert alert-secondary">No images generated yet. Send an image generation task to the swarm to get started.</div>
|
||||
{% endif %}
|
||||
</div>
|
||||
|
||||
<!-- Music Tab -->
|
||||
<div class="tab-pane fade" id="tab-music" role="tabpanel">
|
||||
<div class="row mb-3">
|
||||
<div class="col-12">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-header">
|
||||
<strong>Lyra</strong> — Sound Weaver (ACE-Step 1.5)
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<p class="text-secondary small mb-2">Generate music by sending a task: <code>"Compose a pop song about ..."</code></p>
|
||||
<p class="text-secondary small">Tools: <span class="badge bg-success">generate_song</span> <span class="badge bg-success">generate_instrumental</span> <span class="badge bg-success">generate_vocals</span> <span class="badge bg-success">list_genres</span></p>
|
||||
<p class="text-secondary small mb-0">Genres: pop, rock, hip-hop, r&b, jazz, blues, country, electronic, classical, folk, reggae, metal, punk, soul, funk, latin, ambient, lo-fi, cinematic</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% if music_files %}
|
||||
<div class="list-group">
|
||||
{% for track in music_files %}
|
||||
<div class="list-group-item bg-dark border-secondary d-flex justify-content-between align-items-center">
|
||||
<span>{{ track.name }}</span>
|
||||
<audio controls preload="none"><source src="/static/{{ track.path }}" type="audio/wav"></audio>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
{% else %}
|
||||
<div class="alert alert-secondary">No music tracks generated yet. Send a music generation task to the swarm.</div>
|
||||
{% endif %}
|
||||
</div>
|
||||
|
||||
<!-- Video Tab -->
|
||||
<div class="tab-pane fade" id="tab-video" role="tabpanel">
|
||||
<div class="row mb-3">
|
||||
<div class="col-12">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-header">
|
||||
<strong>Reel</strong> — Motion Director (Wan 2.1)
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<p class="text-secondary small mb-2">Generate video clips: <code>"Create a cinematic clip of ..."</code></p>
|
||||
<p class="text-secondary small">Tools: <span class="badge bg-warning text-dark">generate_video_clip</span> <span class="badge bg-warning text-dark">image_to_video</span> <span class="badge bg-warning text-dark">stitch_clips</span> <span class="badge bg-warning text-dark">overlay_audio</span></p>
|
||||
<p class="text-secondary small mb-0">Resolutions: 480p, 720p | Styles: cinematic, anime, documentary, abstract, timelapse, slow-motion, music-video, vlog</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% if videos %}
|
||||
<div class="row g-3">
|
||||
{% for vid in videos %}
|
||||
<div class="col-md-4">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-body text-center">
|
||||
<small class="text-secondary">{{ vid.name }}</small>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
{% else %}
|
||||
<div class="alert alert-secondary">No video clips generated yet. Send a video generation task to the swarm.</div>
|
||||
{% endif %}
|
||||
</div>
|
||||
|
||||
<!-- Director Tab -->
|
||||
<div class="tab-pane fade" id="tab-director" role="tabpanel">
|
||||
<div class="row mb-3">
|
||||
<div class="col-12">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-header">
|
||||
<strong>Creative Director</strong> — Full Pipeline
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<p class="text-secondary small mb-2">Orchestrate all three creative personas to produce a 3+ minute music video or cinematic short.</p>
|
||||
<p class="text-secondary small">Pipeline: <span class="badge bg-info">Script</span> → <span class="badge bg-primary">Storyboard</span> → <span class="badge bg-success">Music</span> → <span class="badge bg-warning text-dark">Video</span> → <span class="badge bg-danger">Assembly</span></p>
|
||||
<p class="text-secondary small mb-0">Tools: <span class="badge bg-secondary">create_project</span> <span class="badge bg-secondary">run_storyboard</span> <span class="badge bg-secondary">run_music</span> <span class="badge bg-secondary">run_video_generation</span> <span class="badge bg-secondary">run_assembly</span> <span class="badge bg-secondary">run_full_pipeline</span></p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<h5 class="mb-3">Projects</h5>
|
||||
{% if projects %}
|
||||
<div class="row g-3">
|
||||
{% for proj in projects %}
|
||||
<div class="col-md-6">
|
||||
<div class="card bg-dark border-secondary">
|
||||
<div class="card-header d-flex justify-content-between">
|
||||
<strong>{{ proj.title or proj.id }}</strong>
|
||||
<span class="badge {% if proj.status == 'complete' %}bg-success{% elif proj.status == 'failed' %}bg-danger{% else %}bg-info{% endif %}">{{ proj.status }}</span>
|
||||
</div>
|
||||
<div class="card-body">
|
||||
<div class="d-flex gap-3 small text-secondary">
|
||||
<span>Scenes: {{ proj.scene_count }}</span>
|
||||
<span>Storyboard: {{ 'Yes' if proj.has_storyboard else 'No' }}</span>
|
||||
<span>Music: {{ 'Yes' if proj.has_music else 'No' }}</span>
|
||||
<span>Clips: {{ proj.clip_count }}</span>
|
||||
<span>Final: {{ 'Yes' if proj.has_final else 'No' }}</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% endfor %}
|
||||
</div>
|
||||
{% else %}
|
||||
<div class="alert alert-secondary">No creative projects yet. Use the swarm to create one: <code>"Create a music video about sunrise over mountains"</code></div>
|
||||
{% endif %}
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
{% endblock %}
|
||||
@@ -201,6 +201,71 @@ class SparkEngine:
             agent_id=agent_id,
         )

+    # ── Tool-level event capture ─────────────────────────────────────────────
+
+    def on_tool_executed(
+        self,
+        agent_id: str,
+        tool_name: str,
+        task_id: Optional[str] = None,
+        success: bool = True,
+        duration_ms: Optional[int] = None,
+    ) -> Optional[str]:
+        """Capture an individual tool invocation.
+
+        Tracks which tools each agent uses, success rates, and latency
+        so Spark can generate tool-specific advisories.
+        """
+        if not self._enabled:
+            return None
+
+        data = {"tool": tool_name, "success": success}
+        if duration_ms is not None:
+            data["duration_ms"] = duration_ms
+
+        return spark_memory.record_event(
+            event_type="tool_executed",
+            description=f"Agent {agent_id[:8]} used {tool_name} ({'ok' if success else 'FAIL'})",
+            agent_id=agent_id,
+            task_id=task_id,
+            data=json.dumps(data),
+            importance=0.3 if success else 0.6,
+        )
+
+    # ── Creative pipeline event capture ──────────────────────────────────────
+
+    def on_creative_step(
+        self,
+        project_id: str,
+        step_name: str,
+        agent_id: str,
+        output_path: Optional[str] = None,
+        success: bool = True,
+    ) -> Optional[str]:
+        """Capture a creative pipeline step (storyboard, music, video, assembly).
+
+        Tracks pipeline progress and creative output quality metrics
+        for Spark advisory generation.
+        """
+        if not self._enabled:
+            return None
+
+        data = {
+            "project_id": project_id,
+            "step": step_name,
+            "success": success,
+        }
+        if output_path:
+            data["output_path"] = output_path
+
+        return spark_memory.record_event(
+            event_type="creative_step",
+            description=f"Creative pipeline: {step_name} by {agent_id[:8]} ({'ok' if success else 'FAIL'})",
+            agent_id=agent_id,
+            data=json.dumps(data),
+            importance=0.5,
+        )
+
     # ── Memory consolidation ────────────────────────────────────────────────

     def _maybe_consolidate(self, agent_id: str) -> None:
@@ -254,6 +319,8 @@ class SparkEngine:
                 "task_completed": spark_memory.count_events("task_completed"),
                 "task_failed": spark_memory.count_events("task_failed"),
                 "agent_joined": spark_memory.count_events("agent_joined"),
+                "tool_executed": spark_memory.count_events("tool_executed"),
+                "creative_step": spark_memory.count_events("creative_step"),
             },
         }
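A minimal sketch of the payload `on_tool_executed` builds, using a hypothetical `tool_event` helper. The importance weighting mirrors the code above: 0.3 for routine successes, 0.6 for failures so they surface sooner in advisories:

```python
import json

def tool_event(tool_name: str, success: bool, duration_ms=None) -> dict:
    # Serialize tool name, outcome, and optional latency into the event record.
    data = {"tool": tool_name, "success": success}
    if duration_ms is not None:
        data["duration_ms"] = duration_ms
    return {
        "event_type": "tool_executed",
        "data": json.dumps(data),
        "importance": 0.3 if success else 0.6,  # failures weigh more
    }

evt = tool_event("git_commit", False, 120)
print(evt["importance"])  # → 0.6
```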
@@ -1,4 +1,4 @@
-"""Persona definitions for the six built-in swarm agents.
+"""Persona definitions for the nine built-in swarm agents.

 Each persona entry describes a specialised SwarmNode that can be spawned
 into the coordinator. Personas have:
@@ -76,6 +76,7 @@ PERSONAS: dict[str, PersonaMeta] = {
         "preferred_keywords": [
             "deploy", "infrastructure", "config", "docker", "kubernetes",
             "server", "automation", "pipeline", "ci", "cd",
+            "git", "push", "pull", "clone", "devops",
         ],
     },
     "seer": {
@@ -109,6 +110,7 @@ PERSONAS: dict[str, PersonaMeta] = {
         "preferred_keywords": [
             "code", "function", "bug", "fix", "refactor", "test",
             "implement", "class", "api", "script",
+            "commit", "branch", "merge", "git", "pull request",
         ],
     },
     "quill": {
@@ -127,6 +129,60 @@ PERSONAS: dict[str, PersonaMeta] = {
             "edit", "proofread", "content", "article",
         ],
     },
+    # ── Creative & DevOps personas ────────────────────────────────────────────
+    "pixel": {
+        "id": "pixel",
+        "name": "Pixel",
+        "role": "Visual Architect",
+        "description": (
+            "Image generation, storyboard frames, and visual design "
+            "using FLUX models."
+        ),
+        "capabilities": "image-generation,storyboard,design",
+        "rate_sats": 80,
+        "bid_base": 60,
+        "bid_jitter": 20,
+        "preferred_keywords": [
+            "image", "picture", "photo", "draw", "illustration",
+            "storyboard", "frame", "visual", "design", "generate image",
+            "portrait", "landscape", "scene", "artwork",
+        ],
+    },
+    "lyra": {
+        "id": "lyra",
+        "name": "Lyra",
+        "role": "Sound Weaver",
+        "description": (
+            "Music and song generation with vocals, instrumentals, "
+            "and lyrics using ACE-Step."
+        ),
+        "capabilities": "music-generation,vocals,composition",
+        "rate_sats": 90,
+        "bid_base": 70,
+        "bid_jitter": 20,
+        "preferred_keywords": [
+            "music", "song", "sing", "vocal", "instrumental",
+            "melody", "beat", "track", "compose", "lyrics",
+            "audio", "sound", "album", "remix",
+        ],
+    },
+    "reel": {
+        "id": "reel",
+        "name": "Reel",
+        "role": "Motion Director",
+        "description": (
+            "Video generation from text and image prompts "
+            "using Wan 2.1 models."
+        ),
+        "capabilities": "video-generation,animation,motion",
+        "rate_sats": 100,
+        "bid_base": 80,
+        "bid_jitter": 20,
+        "preferred_keywords": [
+            "video", "clip", "animate", "motion", "film",
+            "scene", "cinematic", "footage", "render", "timelapse",
+        ],
+    },
 }
@@ -23,11 +23,14 @@ class ToolExecutor:

     Each persona gets a different set of tools based on their specialty:
     - Echo: web search, file reading
-    - Forge: shell, python, file read/write
+    - Forge: shell, python, file read/write, git
     - Seer: python, file reading
     - Quill: file read/write
     - Mace: shell, web search
-    - Helm: shell, file operations
+    - Helm: shell, file operations, git
+    - Pixel: image generation, storyboards
+    - Lyra: music/song generation
+    - Reel: video generation, assembly

     The executor combines:
     1. MCP tools (file, shell, python, search)
@@ -214,6 +217,39 @@ Response:"""
         "run": "shell",
         "list": "list_files",
         "directory": "list_files",
+        # Git operations
+        "commit": "git_commit",
+        "branch": "git_branch",
+        "push": "git_push",
+        "pull": "git_pull",
+        "diff": "git_diff",
+        "clone": "git_clone",
+        "merge": "git_branch",
+        "stash": "git_stash",
+        "blame": "git_blame",
+        "git status": "git_status",
+        "git log": "git_log",
+        # Image generation
+        "image": "generate_image",
+        "picture": "generate_image",
+        "storyboard": "generate_storyboard",
+        "illustration": "generate_image",
+        # Music generation
+        "music": "generate_song",
+        "song": "generate_song",
+        "vocal": "generate_vocals",
+        "instrumental": "generate_instrumental",
+        "lyrics": "generate_song",
+        # Video generation
+        "video": "generate_video_clip",
+        "clip": "generate_video_clip",
+        "animate": "image_to_video",
+        "film": "generate_video_clip",
+        # Assembly
+        "stitch": "stitch_clips",
+        "assemble": "run_assembly",
+        "title card": "add_title_card",
+        "subtitle": "add_subtitles",
     }

     for keyword, tool in keyword_tool_map.items():
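The keyword-to-tool routing added to the executor can be sketched as a first-match lookup over lowered task text. The `route` helper is hypothetical, and only a few of the mappings above are shown:

```python
keyword_tool_map = {
    "storyboard": "generate_storyboard",
    "image": "generate_image",
    "song": "generate_song",
    "clip": "generate_video_clip",
}

def route(task_text: str):
    # First keyword found in the task text decides the tool.
    text = task_text.lower()
    for keyword, tool in keyword_tool_map.items():
        if keyword in text:
            return tool
    return None

print(route("Generate a storyboard for the intro"))  # → generate_storyboard
```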
@@ -5,12 +5,22 @@ Provides Timmy and swarm agents with capabilities for:
- File read/write (local filesystem)
- Shell command execution (sandboxed)
- Python code execution
- Git operations (clone, commit, push, pull, branch, diff, etc.)
- Image generation (FLUX text-to-image, storyboards)
- Music generation (ACE-Step vocals + instrumentals)
- Video generation (Wan 2.1 text-to-video, image-to-video)
- Creative pipeline (storyboard → music → video → assembly)

Tools are assigned to personas based on their specialties:
- Echo (Research): web search, file read
- Forge (Code): shell, python execution, file write
- Forge (Code): shell, python execution, file write, git
- Seer (Data): python execution, file read
- Quill (Writing): file read/write
- Helm (DevOps): shell, file operations, git
- Mace (Security): shell, web search, file read
- Pixel (Visual): image generation, storyboards
- Lyra (Music): song/vocal/instrumental generation
- Reel (Video): video clip generation, image-to-video
"""

from __future__ import annotations

@@ -280,9 +290,26 @@ PERSONA_TOOLKITS: dict[str, Callable[[], Toolkit]] = {
    "seer": create_data_tools,
    "forge": create_code_tools,
    "quill": create_writing_tools,
    "pixel": lambda base_dir=None: _create_stub_toolkit("pixel"),
    "lyra": lambda base_dir=None: _create_stub_toolkit("lyra"),
    "reel": lambda base_dir=None: _create_stub_toolkit("reel"),
}


def _create_stub_toolkit(name: str):
    """Create a minimal Agno toolkit for creative personas.

    Creative personas use their own dedicated tool modules (tools.image_tools,
    tools.music_tools, tools.video_tools) rather than Agno-wrapped functions.
    This stub ensures PERSONA_TOOLKITS has an entry so ToolExecutor doesn't
    fall back to the full toolkit.
    """
    if not _AGNO_TOOLS_AVAILABLE:
        return None
    toolkit = Toolkit(name=name)
    return toolkit


def get_tools_for_persona(persona_id: str, base_dir: str | Path | None = None) -> Toolkit | None:
    """Get the appropriate toolkit for a persona.

@@ -301,11 +328,11 @@ def get_tools_for_persona(persona_id: str, base_dir: str | Path | None = None) -

def get_all_available_tools() -> dict[str, dict]:
    """Get a catalog of all available tools and their descriptions.

    Returns:
        Dict mapping tool categories to their tools and descriptions.
    """
    return {
    catalog = {
        "web_search": {
            "name": "Web Search",
            "description": "Search the web using DuckDuckGo",
@@ -337,3 +364,77 @@ def get_all_available_tools() -> dict[str, dict]:
            "available_in": ["echo", "seer", "forge", "quill", "mace", "helm", "timmy"],
        },
    }

    # ── Git tools ─────────────────────────────────────────────────────────────
    try:
        from tools.git_tools import GIT_TOOL_CATALOG
        for tool_id, info in GIT_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["forge", "helm", "timmy"],
            }
    except ImportError:
        pass

    # ── Image tools (Pixel) ───────────────────────────────────────────────────
    try:
        from tools.image_tools import IMAGE_TOOL_CATALOG
        for tool_id, info in IMAGE_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["pixel", "timmy"],
            }
    except ImportError:
        pass

    # ── Music tools (Lyra) ────────────────────────────────────────────────────
    try:
        from tools.music_tools import MUSIC_TOOL_CATALOG
        for tool_id, info in MUSIC_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["lyra", "timmy"],
            }
    except ImportError:
        pass

    # ── Video tools (Reel) ────────────────────────────────────────────────────
    try:
        from tools.video_tools import VIDEO_TOOL_CATALOG
        for tool_id, info in VIDEO_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["reel", "timmy"],
            }
    except ImportError:
        pass

    # ── Creative pipeline (Director) ──────────────────────────────────────────
    try:
        from creative.director import DIRECTOR_TOOL_CATALOG
        for tool_id, info in DIRECTOR_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["timmy"],
            }
    except ImportError:
        pass

    # ── Assembler tools ───────────────────────────────────────────────────────
    try:
        from creative.assembler import ASSEMBLER_TOOL_CATALOG
        for tool_id, info in ASSEMBLER_TOOL_CATALOG.items():
            catalog[tool_id] = {
                "name": info["name"],
                "description": info["description"],
                "available_in": ["reel", "timmy"],
            }
    except ImportError:
        pass

    return catalog
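The catalog assembly above repeats one pattern five times: import a tool family, merge its catalog, and silently skip it when the optional dependency is missing. A generic stdlib-only sketch of that pattern (`build_catalog` and the `TOOL_CATALOG` attribute name are illustrative):

```python
import importlib

def build_catalog(module_names: list[str]) -> dict:
    """Merge TOOL_CATALOG dicts from whichever modules import cleanly."""
    catalog: dict[str, dict] = {}
    for name in module_names:
        try:
            module = importlib.import_module(name)
        except ImportError:
            continue  # optional dependency absent; skip this tool family
        catalog.update(getattr(module, "TOOL_CATALOG", {}))
    return catalog
```

The upside of this design is that a host without GPU extras or GitPython still gets a working, smaller catalog instead of an import error at startup.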
1
src/tools/__init__.py
Normal file
@@ -0,0 +1 @@
"""Creative and DevOps tool modules for Timmy's swarm agents."""
281
src/tools/git_tools.py
Normal file
@@ -0,0 +1,281 @@
"""Git operations tools for Forge, Helm, and Timmy personas.

Provides a full set of git commands that agents can execute against
local or remote repositories. Uses GitPython under the hood.

All functions return plain dicts so they're easily serialisable for
tool-call results, Spark event capture, and WebSocket broadcast.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

_GIT_AVAILABLE = True
try:
    from git import Repo, InvalidGitRepositoryError, GitCommandNotFound
except ImportError:
    _GIT_AVAILABLE = False


def _require_git() -> None:
    if not _GIT_AVAILABLE:
        raise ImportError(
            "GitPython is not installed. Run: pip install GitPython"
        )


def _open_repo(repo_path: str | Path) -> "Repo":
    """Open an existing git repo at *repo_path*."""
    _require_git()
    return Repo(str(repo_path))


# ── Repository management ────────────────────────────────────────────────────

def git_clone(url: str, dest: str | Path) -> dict:
    """Clone a remote repository to a local path.

    Returns dict with ``path`` and ``default_branch``.
    """
    _require_git()
    repo = Repo.clone_from(url, str(dest))
    return {
        "success": True,
        "path": str(dest),
        "default_branch": repo.active_branch.name,
    }


def git_init(path: str | Path) -> dict:
    """Initialise a new git repository at *path*."""
    _require_git()
    Path(path).mkdir(parents=True, exist_ok=True)
    repo = Repo.init(str(path))
    return {"success": True, "path": str(path), "bare": repo.bare}


# ── Status / inspection ──────────────────────────────────────────────────────

def git_status(repo_path: str | Path) -> dict:
    """Return working-tree status: modified, staged, untracked files."""
    repo = _open_repo(repo_path)
    return {
        "success": True,
        "branch": repo.active_branch.name,
        "is_dirty": repo.is_dirty(untracked_files=True),
        "untracked": repo.untracked_files,
        "modified": [item.a_path for item in repo.index.diff(None)],
        "staged": [item.a_path for item in repo.index.diff("HEAD")],
    }


def git_diff(
    repo_path: str | Path,
    staged: bool = False,
    file_path: Optional[str] = None,
) -> dict:
    """Show diff of working tree or staged changes.

    If *file_path* is given, scope diff to that file only.
    """
    repo = _open_repo(repo_path)
    args: list[str] = []
    if staged:
        args.append("--cached")
    if file_path:
        args.extend(["--", file_path])
    diff_text = repo.git.diff(*args)
    return {"success": True, "diff": diff_text, "staged": staged}


def git_log(
    repo_path: str | Path,
    max_count: int = 20,
    branch: Optional[str] = None,
) -> dict:
    """Return recent commit history as a list of dicts."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    commits = []
    for commit in repo.iter_commits(ref, max_count=max_count):
        commits.append({
            "sha": commit.hexsha,
            "short_sha": commit.hexsha[:8],
            "message": commit.message.strip(),
            "author": str(commit.author),
            "date": commit.committed_datetime.isoformat(),
            "files_changed": len(commit.stats.files),
        })
    return {"success": True, "branch": ref, "commits": commits}


def git_blame(repo_path: str | Path, file_path: str) -> dict:
    """Show line-by-line authorship for a file."""
    repo = _open_repo(repo_path)
    blame_text = repo.git.blame(file_path)
    return {"success": True, "file": file_path, "blame": blame_text}


# ── Branching ─────────────────────────────────────────────────────────────────

def git_branch(
    repo_path: str | Path,
    create: Optional[str] = None,
    switch: Optional[str] = None,
) -> dict:
    """List branches, optionally create or switch to one."""
    repo = _open_repo(repo_path)

    if create:
        repo.create_head(create)
    if switch:
        repo.heads[switch].checkout()

    branches = [h.name for h in repo.heads]
    active = repo.active_branch.name
    return {
        "success": True,
        "branches": branches,
        "active": active,
        "created": create,
        "switched": switch,
    }


# ── Staging & committing ─────────────────────────────────────────────────────

def git_add(repo_path: str | Path, paths: list[str] | None = None) -> dict:
    """Stage files for commit. *paths* defaults to all modified files."""
    repo = _open_repo(repo_path)
    if paths:
        repo.index.add(paths)
    else:
        # Stage all changes
        repo.git.add(A=True)
    staged = [item.a_path for item in repo.index.diff("HEAD")]
    return {"success": True, "staged": staged}


def git_commit(repo_path: str | Path, message: str) -> dict:
    """Create a commit with the given message."""
    repo = _open_repo(repo_path)
    commit = repo.index.commit(message)
    return {
        "success": True,
        "sha": commit.hexsha,
        "short_sha": commit.hexsha[:8],
        "message": message,
    }


# ── Remote operations ─────────────────────────────────────────────────────────

def git_push(
    repo_path: str | Path,
    remote: str = "origin",
    branch: Optional[str] = None,
) -> dict:
    """Push the current (or specified) branch to the remote."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    info = repo.remotes[remote].push(ref)
    summaries = [str(i.summary) for i in info]
    return {"success": True, "remote": remote, "branch": ref, "summaries": summaries}


def git_pull(
    repo_path: str | Path,
    remote: str = "origin",
    branch: Optional[str] = None,
) -> dict:
    """Pull from the remote into the working tree."""
    repo = _open_repo(repo_path)
    ref = branch or repo.active_branch.name
    info = repo.remotes[remote].pull(ref)
    summaries = [str(i.summary) for i in info]
    return {"success": True, "remote": remote, "branch": ref, "summaries": summaries}


# ── Stashing ──────────────────────────────────────────────────────────────────

def git_stash(
    repo_path: str | Path,
    pop: bool = False,
    message: Optional[str] = None,
) -> dict:
    """Stash or pop working-tree changes."""
    repo = _open_repo(repo_path)
    if pop:
        repo.git.stash("pop")
        return {"success": True, "action": "pop"}
    args = ["push"]
    if message:
        args.extend(["-m", message])
    repo.git.stash(*args)
    return {"success": True, "action": "stash", "message": message}


# ── Tool catalogue ────────────────────────────────────────────────────────────

GIT_TOOL_CATALOG: dict[str, dict] = {
    "git_clone": {
        "name": "Git Clone",
        "description": "Clone a remote repository to a local path",
        "fn": git_clone,
    },
    "git_status": {
        "name": "Git Status",
        "description": "Show working tree status (modified, staged, untracked)",
        "fn": git_status,
    },
    "git_diff": {
        "name": "Git Diff",
        "description": "Show diff of working tree or staged changes",
        "fn": git_diff,
    },
    "git_log": {
        "name": "Git Log",
        "description": "Show recent commit history",
        "fn": git_log,
    },
    "git_blame": {
        "name": "Git Blame",
        "description": "Show line-by-line authorship for a file",
        "fn": git_blame,
    },
    "git_branch": {
        "name": "Git Branch",
        "description": "List, create, or switch branches",
        "fn": git_branch,
    },
    "git_add": {
        "name": "Git Add",
        "description": "Stage files for commit",
        "fn": git_add,
    },
    "git_commit": {
        "name": "Git Commit",
        "description": "Create a commit with a message",
        "fn": git_commit,
    },
    "git_push": {
        "name": "Git Push",
        "description": "Push branch to remote repository",
        "fn": git_push,
    },
    "git_pull": {
        "name": "Git Pull",
        "description": "Pull from remote repository",
        "fn": git_pull,
    },
    "git_stash": {
        "name": "Git Stash",
        "description": "Stash or pop working tree changes",
        "fn": git_stash,
    },
}
171
src/tools/image_tools.py
Normal file
@@ -0,0 +1,171 @@
"""Image generation tools — Pixel persona.

Uses FLUX.2 Klein 4B (or configurable model) via HuggingFace diffusers
for text-to-image generation, storyboard frames, and variations.

All heavy imports are lazy so the module loads instantly even without
a GPU or the ``creative`` extra installed.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded pipeline singleton
_pipeline = None


def _get_pipeline():
    """Lazy-load the FLUX diffusers pipeline."""
    global _pipeline
    if _pipeline is not None:
        return _pipeline

    try:
        import torch
        from diffusers import FluxPipeline
    except ImportError:
        raise ImportError(
            "Creative dependencies not installed. "
            "Run: pip install 'timmy-time[creative]'"
        )

    from config import settings

    model_id = getattr(settings, "flux_model_id", "black-forest-labs/FLUX.1-schnell")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    logger.info("Loading image model %s on %s …", model_id, device)
    _pipeline = FluxPipeline.from_pretrained(
        model_id, torch_dtype=dtype,
    ).to(device)
    logger.info("Image model loaded.")
    return _pipeline


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "image_output_dir", "data/images"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(image_path: Path, meta: dict) -> Path:
    meta_path = image_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


# ── Public tools ──────────────────────────────────────────────────────────────

def generate_image(
    prompt: str,
    negative_prompt: str = "",
    width: int = 1024,
    height: int = 1024,
    steps: int = 4,
    seed: Optional[int] = None,
) -> dict:
    """Generate an image from a text prompt.

    Returns dict with ``path``, ``width``, ``height``, and ``prompt``.
    """
    pipe = _get_pipeline()
    import torch

    generator = torch.Generator(device=pipe.device)
    if seed is not None:
        generator.manual_seed(seed)

    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt or None,
        width=width,
        height=height,
        num_inference_steps=steps,
        generator=generator,
    ).images[0]

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.png"
    image.save(out_path)

    meta = {
        "id": uid, "prompt": prompt, "negative_prompt": negative_prompt,
        "width": width, "height": height, "steps": steps, "seed": seed,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_storyboard(
    scenes: list[str],
    width: int = 1024,
    height: int = 576,
    steps: int = 4,
) -> dict:
    """Generate a storyboard: one keyframe image per scene description.

    Args:
        scenes: List of scene description strings.

    Returns dict with list of generated frame paths.
    """
    frames = []
    for i, scene in enumerate(scenes):
        result = generate_image(
            prompt=scene, width=width, height=height, steps=steps,
        )
        result["scene_index"] = i
        result["scene_description"] = scene
        frames.append(result)
    return {"success": True, "frame_count": len(frames), "frames": frames}


def image_variations(
    prompt: str,
    count: int = 4,
    width: int = 1024,
    height: int = 1024,
    steps: int = 4,
) -> dict:
    """Generate multiple variations of the same prompt with different seeds."""
    import random
    variations = []
    for _ in range(count):
        seed = random.randint(0, 2**32 - 1)
        result = generate_image(
            prompt=prompt, width=width, height=height,
            steps=steps, seed=seed,
        )
        variations.append(result)
    return {"success": True, "count": len(variations), "variations": variations}


# ── Tool catalogue ────────────────────────────────────────────────────────────

IMAGE_TOOL_CATALOG: dict[str, dict] = {
    "generate_image": {
        "name": "Generate Image",
        "description": "Generate an image from a text prompt using FLUX",
        "fn": generate_image,
    },
    "generate_storyboard": {
        "name": "Generate Storyboard",
        "description": "Generate keyframe images for a sequence of scenes",
        "fn": generate_storyboard,
    },
    "image_variations": {
        "name": "Image Variations",
        "description": "Generate multiple variations of the same prompt",
        "fn": image_variations,
    },
}
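Every generated image gets a JSON sidecar via `_save_metadata`, keeping outputs self-describing without a database. A runnable sketch of the convention (`save_metadata` here mirrors the module's helper but stands alone):

```python
import json
import tempfile
from pathlib import Path

def save_metadata(artifact_path: Path, meta: dict) -> Path:
    """Write meta as foo.json next to artifact foo.png (or .wav, .mp4)."""
    meta_path = artifact_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path

with tempfile.TemporaryDirectory() as d:
    img = Path(d) / "abc123.png"
    img.write_bytes(b"")  # stand-in for a generated image
    side = save_metadata(img, {"prompt": "a red fox", "steps": 4})
    loaded = json.loads(side.read_text())
```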
210
src/tools/music_tools.py
Normal file
@@ -0,0 +1,210 @@
"""Music generation tools — Lyra persona.

Uses ACE-Step 1.5 for full song generation with vocals, instrumentals,
and lyrics. Falls back gracefully when the ``creative`` extra is not
installed.

All heavy imports are lazy — the module loads instantly without GPU.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded model singleton
_model = None


def _get_model():
    """Lazy-load the ACE-Step music generation model."""
    global _model
    if _model is not None:
        return _model

    try:
        from ace_step import ACEStep
    except ImportError:
        raise ImportError(
            "ACE-Step not installed. Run: pip install 'timmy-time[creative]'"
        )

    from config import settings
    model_name = getattr(settings, "ace_step_model", "ace-step/ACE-Step-v1.5")

    logger.info("Loading music model %s …", model_name)
    _model = ACEStep(model_name)
    logger.info("Music model loaded.")
    return _model


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "music_output_dir", "data/music"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(audio_path: Path, meta: dict) -> Path:
    meta_path = audio_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


# ── Supported genres ──────────────────────────────────────────────────────────

GENRES = [
    "pop", "rock", "hip-hop", "r&b", "jazz", "blues", "country",
    "electronic", "classical", "folk", "reggae", "metal", "punk",
    "soul", "funk", "latin", "ambient", "lo-fi", "cinematic",
]


# ── Public tools ──────────────────────────────────────────────────────────────

def generate_song(
    lyrics: str,
    genre: str = "pop",
    duration: int = 120,
    language: str = "en",
    title: Optional[str] = None,
) -> dict:
    """Generate a full song with vocals and instrumentals from lyrics.

    Args:
        lyrics: Song lyrics text.
        genre: Musical genre / style tag.
        duration: Target duration in seconds (30–240).
        language: ISO language code (19 languages supported).
        title: Optional song title for metadata.

    Returns dict with ``path``, ``duration``, ``genre``, etc.
    """
    model = _get_model()
    duration = max(30, min(240, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    logger.info("Generating song: genre=%s duration=%ds …", genre, duration)
    audio = model.generate(
        lyrics=lyrics,
        genre=genre,
        duration=duration,
        language=language,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "title": title or f"Untitled ({genre})",
        "lyrics": lyrics, "genre": genre,
        "duration": duration, "language": language,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_instrumental(
    prompt: str,
    genre: str = "cinematic",
    duration: int = 60,
) -> dict:
    """Generate an instrumental track from a text prompt (no vocals).

    Args:
        prompt: Description of the desired music.
        genre: Musical genre / style tag.
        duration: Target duration in seconds (15–180).
    """
    model = _get_model()
    duration = max(15, min(180, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    logger.info("Generating instrumental: genre=%s …", genre)
    audio = model.generate(
        lyrics="",
        genre=genre,
        duration=duration,
        prompt=prompt,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "prompt": prompt, "genre": genre,
        "duration": duration, "instrumental": True,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def generate_vocals(
    lyrics: str,
    style: str = "pop",
    duration: int = 60,
    language: str = "en",
) -> dict:
    """Generate a vocal-only track from lyrics.

    Useful for layering over custom instrumentals.
    """
    model = _get_model()
    duration = max(15, min(180, duration))

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.wav"

    audio = model.generate(
        lyrics=lyrics,
        genre=f"{style} acapella vocals",
        duration=duration,
        language=language,
    )
    audio.save(str(out_path))

    meta = {
        "id": uid, "lyrics": lyrics, "style": style,
        "duration": duration, "vocals_only": True,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def list_genres() -> dict:
    """Return the list of supported genre / style tags."""
    return {"success": True, "genres": GENRES}


# ── Tool catalogue ────────────────────────────────────────────────────────────

MUSIC_TOOL_CATALOG: dict[str, dict] = {
    "generate_song": {
        "name": "Generate Song",
        "description": "Generate a full song with vocals + instrumentals from lyrics",
        "fn": generate_song,
    },
    "generate_instrumental": {
        "name": "Generate Instrumental",
        "description": "Generate an instrumental track from a text prompt",
        "fn": generate_instrumental,
    },
    "generate_vocals": {
        "name": "Generate Vocals",
        "description": "Generate a vocal-only track from lyrics",
        "fn": generate_vocals,
    },
    "list_genres": {
        "name": "List Genres",
        "description": "List supported music genre / style tags",
        "fn": list_genres,
    },
}
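`generate_song` clamps the requested duration into the model's supported window with `max(30, min(240, duration))`; the instrumental and vocal tools use the same idiom with a 15–180 s window. Extracted as a standalone sketch:

```python
def clamp_duration(requested: int, low: int = 30, high: int = 240) -> int:
    """Clip a requested track length into the model's supported window."""
    return max(low, min(high, requested))
```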
206
src/tools/video_tools.py
Normal file
@@ -0,0 +1,206 @@
"""Video generation tools — Reel persona.

Uses Wan 2.1 (via HuggingFace diffusers) for text-to-video and
image-to-video generation. Heavy imports are lazy.
"""

from __future__ import annotations

import json
import logging
import uuid
from pathlib import Path
from typing import Optional

logger = logging.getLogger(__name__)

# Lazy-loaded pipeline singletons
_t2v_pipeline = None
_i2v_pipeline = None


def _get_t2v_pipeline():
    """Lazy-load the text-to-video pipeline (Wan 2.1)."""
    global _t2v_pipeline
    if _t2v_pipeline is not None:
        return _t2v_pipeline

    try:
        import torch
        from diffusers import DiffusionPipeline
    except ImportError:
        raise ImportError(
            "Creative dependencies not installed. "
            "Run: pip install 'timmy-time[creative]'"
        )

    from config import settings
    model_id = getattr(settings, "wan_model_id", "Wan-AI/Wan2.1-T2V-1.3B")
    device = "cuda" if torch.cuda.is_available() else "cpu"
    dtype = torch.float16 if device == "cuda" else torch.float32

    logger.info("Loading video model %s on %s …", model_id, device)
    _t2v_pipeline = DiffusionPipeline.from_pretrained(
        model_id, torch_dtype=dtype,
    ).to(device)
    logger.info("Video model loaded.")
    return _t2v_pipeline


def _output_dir() -> Path:
    from config import settings
    d = Path(getattr(settings, "video_output_dir", "data/video"))
    d.mkdir(parents=True, exist_ok=True)
    return d


def _save_metadata(video_path: Path, meta: dict) -> Path:
    meta_path = video_path.with_suffix(".json")
    meta_path.write_text(json.dumps(meta, indent=2))
    return meta_path


def _export_frames_to_mp4(frames, out_path: Path, fps: int = 24) -> None:
    """Export a list of PIL Image frames to an MP4 file using moviepy."""
    import numpy as np
    from moviepy import ImageSequenceClip

    frame_arrays = [np.array(f) for f in frames]
    clip = ImageSequenceClip(frame_arrays, fps=fps)
    clip.write_videofile(
        str(out_path), codec="libx264", audio=False, logger=None,
    )


# ── Resolution presets ────────────────────────────────────────────────────────

RESOLUTION_PRESETS = {
    "480p": (854, 480),
    "720p": (1280, 720),
}

VIDEO_STYLES = [
    "cinematic", "anime", "documentary", "abstract",
    "timelapse", "slow-motion", "music-video", "vlog",
]


# ── Public tools ──────────────────────────────────────────────────────────────

def generate_video_clip(
    prompt: str,
    duration: int = 5,
    resolution: str = "480p",
    fps: int = 24,
    seed: Optional[int] = None,
) -> dict:
    """Generate a short video clip from a text prompt.

    Args:
        prompt: Text description of the desired video.
        duration: Target duration in seconds (2–10).
        resolution: "480p" or "720p".
        fps: Frames per second.
        seed: Optional seed for reproducibility.

    Returns dict with ``path``, ``duration``, ``resolution``.
    """
    pipe = _get_t2v_pipeline()
    import torch

    duration = max(2, min(10, duration))
    w, h = RESOLUTION_PRESETS.get(resolution, RESOLUTION_PRESETS["480p"])
    num_frames = duration * fps

    generator = torch.Generator(device=pipe.device)
    if seed is not None:
        generator.manual_seed(seed)

    logger.info("Generating %ds video at %s …", duration, resolution)
    result = pipe(
        prompt=prompt,
        num_frames=num_frames,
        width=w,
        height=h,
        generator=generator,
    )
    frames = result.frames[0] if hasattr(result, "frames") else result.images

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.mp4"
    _export_frames_to_mp4(frames, out_path, fps=fps)

    meta = {
        "id": uid, "prompt": prompt, "duration": duration,
        "resolution": resolution, "fps": fps, "seed": seed,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def image_to_video(
    image_path: str,
    prompt: str = "",
    duration: int = 5,
    fps: int = 24,
) -> dict:
    """Animate a still image into a video clip.

    Args:
        image_path: Path to the source image.
        prompt: Optional motion / style guidance.
        duration: Target duration in seconds (2–10).
    """
    pipe = _get_t2v_pipeline()
    from PIL import Image

    duration = max(2, min(10, duration))
    img = Image.open(image_path).convert("RGB")
    num_frames = duration * fps

    logger.info("Animating image %s → %ds video …", image_path, duration)
    result = pipe(
        prompt=prompt or "animate this image with natural motion",
        image=img,
        num_frames=num_frames,
    )
    frames = result.frames[0] if hasattr(result, "frames") else result.images

    uid = uuid.uuid4().hex[:12]
    out_path = _output_dir() / f"{uid}.mp4"
    _export_frames_to_mp4(frames, out_path, fps=fps)

    meta = {
        "id": uid, "source_image": image_path,
        "prompt": prompt, "duration": duration, "fps": fps,
    }
    _save_metadata(out_path, meta)

    return {"success": True, "path": str(out_path), **meta}


def list_video_styles() -> dict:
    """Return supported video style presets."""
    return {"success": True, "styles": VIDEO_STYLES, "resolutions": list(RESOLUTION_PRESETS.keys())}


# ── Tool catalogue ────────────────────────────────────────────────────────────

VIDEO_TOOL_CATALOG: dict[str, dict] = {
    "generate_video_clip": {
        "name": "Generate Video Clip",
        "description": "Generate a short video clip from a text prompt using Wan 2.1",
        "fn": generate_video_clip,
    },
    "image_to_video": {
        "name": "Image to Video",
        "description": "Animate a still image into a video clip",
        "fn": image_to_video,
    },
    "list_video_styles": {
        "name": "List Video Styles",
        "description": "List supported video style presets and resolutions",
        "fn": list_video_styles,
    },
}
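`video_tools` resolves resolution names through `RESOLUTION_PRESETS` with a silent fallback to 480p for unknown names, keeping agent tool calls forgiving. A minimal sketch of that lookup:

```python
# Mirrors the preset table from the diff; unknown names fall back to 480p
# rather than raising, so a malformed agent argument still yields a video.
RESOLUTION_PRESETS = {
    "480p": (854, 480),
    "720p": (1280, 720),
}

def resolve(resolution: str) -> tuple[int, int]:
    """Return (width, height) for a preset name, defaulting to 480p."""
    return RESOLUTION_PRESETS.get(resolution, RESOLUTION_PRESETS["480p"])
```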
69
tests/test_assembler.py
Normal file
@@ -0,0 +1,69 @@
"""Tests for creative.assembler — Video assembly engine.

MoviePy is mocked for CI; these tests verify the interface contracts.
"""

import pytest
from unittest.mock import patch, MagicMock

from creative.assembler import (
    ASSEMBLER_TOOL_CATALOG,
    stitch_clips,
    overlay_audio,
    add_title_card,
    add_subtitles,
    export_final,
    _MOVIEPY_AVAILABLE,
)


class TestAssemblerToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "stitch_clips", "overlay_audio", "add_title_card",
            "add_subtitles", "export_final",
        }
        assert expected == set(ASSEMBLER_TOOL_CATALOG.keys())

    def test_catalog_entries_callable(self):
        for tool_id, info in ASSEMBLER_TOOL_CATALOG.items():
            assert callable(info["fn"])
            assert "name" in info
            assert "description" in info


class TestStitchClipsInterface:
    @pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
    def test_raises_on_empty_clips(self):
        """Stitch with no clips should fail gracefully."""
        # MoviePy would fail on empty list
        with pytest.raises(Exception):
            stitch_clips([])


class TestOverlayAudioInterface:
    @pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
    def test_overlay_requires_valid_paths(self):
        with pytest.raises(Exception):
            overlay_audio("/nonexistent/video.mp4", "/nonexistent/audio.wav")


class TestAddTitleCardInterface:
    @pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
    def test_add_title_requires_valid_video(self):
        with pytest.raises(Exception):
            add_title_card("/nonexistent/video.mp4", "Title")


class TestAddSubtitlesInterface:
    @pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
    def test_requires_valid_video(self):
        with pytest.raises(Exception):
            add_subtitles("/nonexistent.mp4", [{"text": "Hi", "start": 0, "end": 1}])


class TestExportFinalInterface:
    @pytest.mark.skipif(not _MOVIEPY_AVAILABLE, reason="MoviePy not installed")
    def test_requires_valid_video(self):
        with pytest.raises(Exception):
            export_final("/nonexistent/video.mp4")
190  tests/test_creative_director.py  Normal file
@@ -0,0 +1,190 @@
"""Tests for creative.director — Creative Director pipeline.

Tests project management, pipeline orchestration, and tool catalogue.
All AI model calls are mocked.
"""

import pytest
from unittest.mock import patch, MagicMock

from creative.director import (
    create_project,
    get_project,
    list_projects,
    run_storyboard,
    run_music,
    run_video_generation,
    run_assembly,
    run_full_pipeline,
    CreativeProject,
    DIRECTOR_TOOL_CATALOG,
    _projects,
)


@pytest.fixture(autouse=True)
def clear_projects():
    """Clear project store between tests."""
    _projects.clear()
    yield
    _projects.clear()


@pytest.fixture
def sample_project(tmp_path):
    """Create a sample project with scenes."""
    with patch("creative.director._project_dir", return_value=tmp_path):
        result = create_project(
            title="Test Video",
            description="A test creative project",
            scenes=[
                {"description": "A sunrise over mountains"},
                {"description": "A river flowing through a valley"},
                {"description": "A sunset over the ocean"},
            ],
            lyrics="La la la, the sun rises high",
        )
    return result["project"]["id"]


class TestCreateProject:
    def test_creates_project(self, tmp_path):
        with patch("creative.director._project_dir", return_value=tmp_path):
            result = create_project("My Video", "A cool video")
        assert result["success"]
        assert result["project"]["title"] == "My Video"
        assert result["project"]["status"] == "planning"

    def test_project_has_id(self, tmp_path):
        with patch("creative.director._project_dir", return_value=tmp_path):
            result = create_project("Test", "Test")
        assert len(result["project"]["id"]) == 12

    def test_project_with_scenes(self, tmp_path):
        with patch("creative.director._project_dir", return_value=tmp_path):
            result = create_project(
                "Scenes", "With scenes",
                scenes=[{"description": "Scene 1"}, {"description": "Scene 2"}],
            )
        assert result["project"]["scene_count"] == 2


class TestGetProject:
    def test_get_existing(self, sample_project):
        result = get_project(sample_project)
        assert result is not None
        assert result["title"] == "Test Video"

    def test_get_nonexistent(self):
        assert get_project("bogus") is None


class TestListProjects:
    def test_empty(self):
        assert list_projects() == []

    def test_with_projects(self, sample_project, tmp_path):
        with patch("creative.director._project_dir", return_value=tmp_path):
            create_project("Second", "desc")
        assert len(list_projects()) == 2


class TestRunStoryboard:
    def test_fails_without_project(self):
        result = run_storyboard("bogus")
        assert not result["success"]
        assert "not found" in result["error"]

    def test_fails_without_scenes(self, tmp_path):
        with patch("creative.director._project_dir", return_value=tmp_path):
            result = create_project("Empty", "No scenes")
        pid = result["project"]["id"]
        result = run_storyboard(pid)
        assert not result["success"]
        assert "No scenes" in result["error"]

    def test_generates_frames(self, sample_project, tmp_path):
        mock_result = {
            "success": True,
            "frame_count": 3,
            "frames": [
                {"path": "/fake/1.png", "scene_index": 0, "prompt": "sunrise"},
                {"path": "/fake/2.png", "scene_index": 1, "prompt": "river"},
                {"path": "/fake/3.png", "scene_index": 2, "prompt": "sunset"},
            ],
        }
        with patch("tools.image_tools.generate_storyboard", return_value=mock_result):
            with patch("creative.director._save_project"):
                result = run_storyboard(sample_project)
        assert result["success"]
        assert result["frame_count"] == 3


class TestRunMusic:
    def test_fails_without_project(self):
        result = run_music("bogus")
        assert not result["success"]

    def test_generates_track(self, sample_project):
        mock_result = {
            "success": True, "path": "/fake/song.wav",
            "genre": "pop", "duration": 60,
        }
        with patch("tools.music_tools.generate_song", return_value=mock_result):
            with patch("creative.director._save_project"):
                result = run_music(sample_project, genre="pop")
        assert result["success"]
        assert result["path"] == "/fake/song.wav"


class TestRunVideoGeneration:
    def test_fails_without_project(self):
        result = run_video_generation("bogus")
        assert not result["success"]

    def test_generates_clips(self, sample_project):
        mock_clip = {
            "success": True, "path": "/fake/clip.mp4",
            "duration": 5,
        }
        with patch("tools.video_tools.generate_video_clip", return_value=mock_clip):
            with patch("tools.video_tools.image_to_video", return_value=mock_clip):
                with patch("creative.director._save_project"):
                    result = run_video_generation(sample_project)
        assert result["success"]
        assert result["clip_count"] == 3


class TestRunAssembly:
    def test_fails_without_project(self):
        result = run_assembly("bogus")
        assert not result["success"]

    def test_fails_without_clips(self, sample_project):
        result = run_assembly(sample_project)
        assert not result["success"]
        assert "No video clips" in result["error"]


class TestCreativeProject:
    def test_to_dict(self):
        p = CreativeProject(title="Test", description="Desc")
        d = p.to_dict()
        assert d["title"] == "Test"
        assert d["status"] == "planning"
        assert d["scene_count"] == 0
        assert d["has_storyboard"] is False
        assert d["has_music"] is False


class TestDirectorToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "create_project", "run_storyboard", "run_music",
            "run_video_generation", "run_assembly", "run_full_pipeline",
        }
        assert expected == set(DIRECTOR_TOOL_CATALOG.keys())

    def test_catalog_entries_callable(self):
        for tool_id, info in DIRECTOR_TOOL_CATALOG.items():
            assert callable(info["fn"])
61  tests/test_creative_route.py  Normal file
@@ -0,0 +1,61 @@
"""Tests for the Creative Studio dashboard route."""

import os
import pytest

os.environ.setdefault("TIMMY_TEST_MODE", "1")

from fastapi.testclient import TestClient


@pytest.fixture
def client(tmp_path, monkeypatch):
    """Test client with temp DB paths."""
    monkeypatch.setattr("swarm.tasks.DB_PATH", tmp_path / "swarm.db")
    monkeypatch.setattr("swarm.registry.DB_PATH", tmp_path / "swarm.db")
    monkeypatch.setattr("swarm.stats.DB_PATH", tmp_path / "swarm.db")
    monkeypatch.setattr("swarm.learner.DB_PATH", tmp_path / "swarm.db")

    from dashboard.app import app
    return TestClient(app)


class TestCreativeStudioPage:
    def test_creative_page_loads(self, client):
        resp = client.get("/creative/ui")
        assert resp.status_code == 200
        assert "Creative Studio" in resp.text

    def test_creative_page_has_tabs(self, client):
        resp = client.get("/creative/ui")
        assert "tab-images" in resp.text
        assert "tab-music" in resp.text
        assert "tab-video" in resp.text
        assert "tab-director" in resp.text

    def test_creative_page_shows_personas(self, client):
        resp = client.get("/creative/ui")
        assert "Pixel" in resp.text
        assert "Lyra" in resp.text
        assert "Reel" in resp.text


class TestCreativeAPI:
    def test_projects_api_empty(self, client):
        resp = client.get("/creative/api/projects")
        assert resp.status_code == 200
        data = resp.json()
        assert "projects" in data

    def test_genres_api(self, client):
        resp = client.get("/creative/api/genres")
        assert resp.status_code == 200
        data = resp.json()
        assert "genres" in data

    def test_video_styles_api(self, client):
        resp = client.get("/creative/api/video-styles")
        assert resp.status_code == 200
        data = resp.json()
        assert "styles" in data
        assert "resolutions" in data
@@ -100,8 +100,8 @@ def test_marketplace_has_timmy(client):
 def test_marketplace_has_planned_agents(client):
     response = client.get("/marketplace")
     data = response.json()
-    # Total should be 7 (1 Timmy + 6 personas)
-    assert data["total"] == 7
+    # Total should be 10 (1 Timmy + 9 personas)
+    assert data["total"] == 10
     # planned_count + active_count should equal total
     assert data["planned_count"] + data["active_count"] == data["total"]
     # Timmy should always be in the active list
183  tests/test_git_tools.py  Normal file
@@ -0,0 +1,183 @@
"""Tests for tools.git_tools — Git operations for Forge/Helm personas.

All tests use temporary git repositories to avoid touching the real
working tree.
"""

import pytest
from pathlib import Path

from tools.git_tools import (
    git_init,
    git_status,
    git_add,
    git_commit,
    git_log,
    git_diff,
    git_branch,
    git_stash,
    git_blame,
    git_clone,
    GIT_TOOL_CATALOG,
)


@pytest.fixture
def git_repo(tmp_path):
    """Create a temporary git repo with one commit."""
    result = git_init(tmp_path)
    assert result["success"]

    # Configure git identity for commits
    from git import Repo
    repo = Repo(str(tmp_path))
    repo.config_writer().set_value("user", "name", "Test").release()
    repo.config_writer().set_value("user", "email", "test@test.com").release()

    # Create initial commit
    readme = tmp_path / "README.md"
    readme.write_text("# Test Repo\n")
    repo.index.add(["README.md"])
    repo.index.commit("Initial commit")

    return tmp_path


class TestGitInit:
    def test_init_creates_repo(self, tmp_path):
        path = tmp_path / "new_repo"
        result = git_init(path)
        assert result["success"]
        assert (path / ".git").is_dir()

    def test_init_returns_path(self, tmp_path):
        path = tmp_path / "repo"
        result = git_init(path)
        assert result["path"] == str(path)


class TestGitStatus:
    def test_clean_repo(self, git_repo):
        result = git_status(git_repo)
        assert result["success"]
        assert result["is_dirty"] is False
        assert result["untracked"] == []

    def test_dirty_repo_untracked(self, git_repo):
        (git_repo / "new_file.txt").write_text("hello")
        result = git_status(git_repo)
        assert result["is_dirty"] is True
        assert "new_file.txt" in result["untracked"]

    def test_reports_branch(self, git_repo):
        result = git_status(git_repo)
        assert result["branch"] in ("main", "master")


class TestGitAddCommit:
    def test_add_and_commit(self, git_repo):
        (git_repo / "test.py").write_text("print('hi')\n")
        add_result = git_add(git_repo, ["test.py"])
        assert add_result["success"]

        commit_result = git_commit(git_repo, "Add test.py")
        assert commit_result["success"]
        assert len(commit_result["sha"]) == 40
        assert commit_result["message"] == "Add test.py"

    def test_add_all(self, git_repo):
        (git_repo / "a.txt").write_text("a")
        (git_repo / "b.txt").write_text("b")
        result = git_add(git_repo)
        assert result["success"]


class TestGitLog:
    def test_log_returns_commits(self, git_repo):
        result = git_log(git_repo)
        assert result["success"]
        assert len(result["commits"]) >= 1
        first = result["commits"][0]
        assert "sha" in first
        assert "message" in first
        assert "author" in first
        assert "date" in first

    def test_log_max_count(self, git_repo):
        result = git_log(git_repo, max_count=1)
        assert len(result["commits"]) == 1


class TestGitDiff:
    def test_no_diff_on_clean(self, git_repo):
        result = git_diff(git_repo)
        assert result["success"]
        assert result["diff"] == ""

    def test_diff_on_modified(self, git_repo):
        readme = git_repo / "README.md"
        readme.write_text("# Modified\n")
        result = git_diff(git_repo)
        assert result["success"]
        assert "Modified" in result["diff"]


class TestGitBranch:
    def test_list_branches(self, git_repo):
        result = git_branch(git_repo)
        assert result["success"]
        assert len(result["branches"]) >= 1

    def test_create_branch(self, git_repo):
        result = git_branch(git_repo, create="feature-x")
        assert result["success"]
        assert "feature-x" in result["branches"]
        assert result["created"] == "feature-x"

    def test_switch_branch(self, git_repo):
        git_branch(git_repo, create="dev")
        result = git_branch(git_repo, switch="dev")
        assert result["active"] == "dev"


class TestGitStash:
    def test_stash_and_pop(self, git_repo):
        readme = git_repo / "README.md"
        readme.write_text("# Changed\n")

        stash_result = git_stash(git_repo, message="wip")
        assert stash_result["success"]
        assert stash_result["action"] == "stash"

        # Working tree should be clean after stash
        status = git_status(git_repo)
        assert status["is_dirty"] is False

        # Pop restores changes
        pop_result = git_stash(git_repo, pop=True)
        assert pop_result["success"]
        assert pop_result["action"] == "pop"


class TestGitBlame:
    def test_blame_file(self, git_repo):
        result = git_blame(git_repo, "README.md")
        assert result["success"]
        assert "Test Repo" in result["blame"]


class TestGitToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "git_clone", "git_status", "git_diff", "git_log",
            "git_blame", "git_branch", "git_add", "git_commit",
            "git_push", "git_pull", "git_stash",
        }
        assert expected == set(GIT_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in GIT_TOOL_CATALOG.items():
            assert "name" in info, f"{tool_id} missing name"
            assert "description" in info, f"{tool_id} missing description"
            assert "fn" in info, f"{tool_id} missing fn"
            assert callable(info["fn"]), f"{tool_id} fn not callable"
120  tests/test_image_tools.py  Normal file
@@ -0,0 +1,120 @@
"""Tests for tools.image_tools — Image generation (Pixel persona).

Heavy AI model tests are skipped; only catalogue, metadata, and
interface tests run in CI.
"""

import json
import pytest
from unittest.mock import patch, MagicMock
from pathlib import Path

from tools.image_tools import (
    IMAGE_TOOL_CATALOG,
    generate_image,
    generate_storyboard,
    image_variations,
    _save_metadata,
)


class TestImageToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {"generate_image", "generate_storyboard", "image_variations"}
        assert expected == set(IMAGE_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in IMAGE_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestSaveMetadata:
    def test_saves_json_sidecar(self, tmp_path):
        img_path = tmp_path / "test.png"
        img_path.write_bytes(b"fake image")
        meta = {"prompt": "a cat", "width": 512}
        result = _save_metadata(img_path, meta)
        assert result.suffix == ".json"
        assert result.exists()

        data = json.loads(result.read_text())
        assert data["prompt"] == "a cat"


class TestGenerateImageInterface:
    def test_raises_without_creative_deps(self):
        """generate_image raises ImportError when diffusers is not available."""
        with patch("tools.image_tools._pipeline", None):
            with patch("tools.image_tools._get_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    generate_image("a cat")

    def test_generate_image_with_mocked_pipeline(self, tmp_path):
        """generate_image works end-to-end with a mocked pipeline."""
        import sys

        mock_image = MagicMock()
        mock_image.save = MagicMock()

        mock_pipe = MagicMock()
        mock_pipe.device = "cpu"
        mock_pipe.return_value.images = [mock_image]

        mock_torch = MagicMock()
        mock_torch.Generator.return_value = MagicMock()

        with patch.dict(sys.modules, {"torch": mock_torch}):
            with patch("tools.image_tools._get_pipeline", return_value=mock_pipe):
                with patch("tools.image_tools._output_dir", return_value=tmp_path):
                    result = generate_image("a cat", width=512, height=512, steps=1)

        assert result["success"]
        assert result["prompt"] == "a cat"
        assert result["width"] == 512
        assert "path" in result


class TestGenerateStoryboardInterface:
    def test_calls_generate_image_per_scene(self):
        """Storyboard calls generate_image once per scene."""
        call_count = 0

        def mock_gen_image(prompt, **kwargs):
            nonlocal call_count
            call_count += 1
            return {
                "success": True, "path": f"/fake/{call_count}.png",
                "id": str(call_count), "prompt": prompt,
            }

        with patch("tools.image_tools.generate_image", side_effect=mock_gen_image):
            result = generate_storyboard(
                ["sunrise", "mountain peak", "sunset"],
                steps=1,
            )

        assert result["success"]
        assert result["frame_count"] == 3
        assert len(result["frames"]) == 3
        assert call_count == 3


class TestImageVariationsInterface:
    def test_generates_multiple_variations(self):
        """image_variations generates the requested number of results."""
        def mock_gen_image(prompt, **kwargs):
            return {
                "success": True, "path": "/fake.png",
                "id": "x", "prompt": prompt,
                "seed": kwargs.get("seed"),
            }

        with patch("tools.image_tools.generate_image", side_effect=mock_gen_image):
            result = image_variations("a dog", count=3, steps=1)

        assert result["success"]
        assert result["count"] == 3
        assert len(result["variations"]) == 3
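`TestSaveMetadata` pins a simple contract: metadata lands in a `.json` sidecar next to the generated asset, and the helper returns the sidecar path. A minimal implementation satisfying that contract is shown below as a sketch (`save_metadata_sketch` is a stand-in name; the committed `_save_metadata` may differ in details):

```python
import json
from pathlib import Path


def save_metadata_sketch(asset_path: Path, meta: dict) -> Path:
    """Write `meta` to a JSON sidecar next to the asset and return its path."""
    # test.png -> test.json, song.wav -> song.json, etc.
    sidecar = asset_path.with_suffix(".json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
```

Sidecar files keep prompts, seeds, and dimensions discoverable next to each output without a separate database.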
124  tests/test_music_tools.py  Normal file
@@ -0,0 +1,124 @@
"""Tests for tools.music_tools — Music generation (Lyra persona).

Heavy AI model tests are skipped; only catalogue, interface, and
metadata tests run in CI.
"""

import pytest
from unittest.mock import patch, MagicMock

from tools.music_tools import (
    MUSIC_TOOL_CATALOG,
    GENRES,
    list_genres,
    generate_song,
    generate_instrumental,
    generate_vocals,
)


class TestMusicToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {
            "generate_song", "generate_instrumental",
            "generate_vocals", "list_genres",
        }
        assert expected == set(MUSIC_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in MUSIC_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestListGenres:
    def test_returns_genre_list(self):
        result = list_genres()
        assert result["success"]
        assert len(result["genres"]) > 10
        assert "pop" in result["genres"]
        assert "cinematic" in result["genres"]


class TestGenres:
    def test_common_genres_present(self):
        for genre in ["pop", "rock", "hip-hop", "jazz", "electronic", "classical"]:
            assert genre in GENRES


class TestGenerateSongInterface:
    def test_raises_without_ace_step(self):
        with patch("tools.music_tools._model", None):
            with patch("tools.music_tools._get_model", side_effect=ImportError("no ace-step")):
                with pytest.raises(ImportError):
                    generate_song("la la la")

    def test_duration_clamped(self):
        """Duration is clamped to the 30–240 range."""
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=MagicMock()):
                with patch("tools.music_tools._save_metadata"):
                    # Should clamp 5 to 30
                    generate_song("lyrics", duration=5)
        call_kwargs = mock_model.generate.call_args[1]
        assert call_kwargs["duration"] == 30

    def test_generate_song_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_song(
                    "hello world", genre="rock", duration=60, title="Test Song"
                )

        assert result["success"]
        assert result["genre"] == "rock"
        assert result["title"] == "Test Song"
        assert result["duration"] == 60


class TestGenerateInstrumentalInterface:
    def test_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_instrumental("epic orchestral", genre="cinematic")

        assert result["success"]
        assert result["genre"] == "cinematic"
        assert result["instrumental"] is True


class TestGenerateVocalsInterface:
    def test_with_mocked_model(self, tmp_path):
        mock_audio = MagicMock()
        mock_audio.save = MagicMock()

        mock_model = MagicMock()
        mock_model.generate.return_value = mock_audio

        with patch("tools.music_tools._get_model", return_value=mock_model):
            with patch("tools.music_tools._output_dir", return_value=tmp_path):
                result = generate_vocals("do re mi", style="jazz")

        assert result["success"]
        assert result["vocals_only"] is True
        assert result["style"] == "jazz"
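`test_duration_clamped` above asserts that a requested duration of 5 seconds becomes 30, i.e. requests are clamped into a 30–240 second range. The canonical one-liner for that behaviour, mirroring the `max(2, min(10, duration))` pattern in the video tools, is (a sketch, not necessarily the committed code):

```python
def clamp_duration(seconds: int, lo: int = 30, hi: int = 240) -> int:
    # Clamp a requested song length into the model's supported range.
    return max(lo, min(hi, seconds))
```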
110  tests/test_spark_tools_creative.py  Normal file
@@ -0,0 +1,110 @@
"""Tests for Spark engine tool-level and creative pipeline event capture.

Covers the new on_tool_executed() and on_creative_step() methods added
in Phase 6.
"""

import pytest

from spark.engine import SparkEngine
from spark.memory import get_events, count_events


@pytest.fixture(autouse=True)
def tmp_spark_db(tmp_path, monkeypatch):
    db_path = tmp_path / "spark.db"
    monkeypatch.setattr("spark.memory.DB_PATH", db_path)
    monkeypatch.setattr("spark.eidos.DB_PATH", db_path)
    yield db_path


class TestOnToolExecuted:
    def test_captures_tool_event(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_tool_executed("agent-a", "git_commit", task_id="t1")
        assert eid is not None
        events = get_events(event_type="tool_executed")
        assert len(events) == 1
        assert "git_commit" in events[0].description

    def test_captures_tool_failure(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_tool_executed("agent-a", "generate_image", success=False)
        assert eid is not None
        events = get_events(event_type="tool_executed")
        assert len(events) == 1
        assert "FAIL" in events[0].description

    def test_captures_duration(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("agent-a", "generate_song", duration_ms=5000)
        events = get_events(event_type="tool_executed")
        assert len(events) == 1

    def test_disabled_returns_none(self):
        engine = SparkEngine(enabled=False)
        result = engine.on_tool_executed("agent-a", "git_push")
        assert result is None

    def test_multiple_tool_events(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("agent-a", "git_add")
        engine.on_tool_executed("agent-a", "git_commit")
        engine.on_tool_executed("agent-a", "git_push")
        assert count_events("tool_executed") == 3


class TestOnCreativeStep:
    def test_captures_creative_step(self):
        engine = SparkEngine(enabled=True)
        eid = engine.on_creative_step(
            project_id="proj-1",
            step_name="storyboard",
            agent_id="pixel-001",
            output_path="/data/images/frame.png",
        )
        assert eid is not None
        events = get_events(event_type="creative_step")
        assert len(events) == 1
        assert "storyboard" in events[0].description

    def test_captures_failed_step(self):
        engine = SparkEngine(enabled=True)
        engine.on_creative_step(
            project_id="proj-1",
            step_name="music",
            agent_id="lyra-001",
            success=False,
        )
        events = get_events(event_type="creative_step")
        assert len(events) == 1
        assert "FAIL" in events[0].description

    def test_disabled_returns_none(self):
        engine = SparkEngine(enabled=False)
        result = engine.on_creative_step("p1", "storyboard", "pixel-001")
        assert result is None

    def test_full_pipeline_events(self):
        engine = SparkEngine(enabled=True)
        steps = ["storyboard", "music", "video", "assembly"]
        agents = ["pixel-001", "lyra-001", "reel-001", "reel-001"]
        for step, agent in zip(steps, agents):
            engine.on_creative_step("proj-1", step, agent)
        assert count_events("creative_step") == 4


class TestSparkStatusIncludesNewTypes:
    def test_status_includes_tool_executed(self):
        engine = SparkEngine(enabled=True)
        engine.on_tool_executed("a", "git_commit")
        status = engine.status()
        assert "tool_executed" in status["event_types"]
        assert status["event_types"]["tool_executed"] == 1

    def test_status_includes_creative_step(self):
        engine = SparkEngine(enabled=True)
        engine.on_creative_step("p1", "storyboard", "pixel-001")
        status = engine.status()
        assert "creative_step" in status["event_types"]
        assert status["event_types"]["creative_step"] == 1
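The Spark tests only constrain the event description loosely: the tool name must appear in it, and failed executions must contain "FAIL". A hedged sketch of a description formatter that satisfies those assertions follows; the actual wording inside `SparkEngine` is not shown in this diff, so the template and the "ok" status label here are assumptions.

```python
def tool_event_description(agent_id: str, tool_name: str, success: bool = True) -> str:
    """Render a one-line tool-execution description; failures are tagged FAIL."""
    status = "ok" if success else "FAIL"
    return f"{agent_id} executed {tool_name} [{status}]"
```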
@@ -18,9 +18,9 @@ def tmp_swarm_db(tmp_path, monkeypatch):


 # ── personas.py ───────────────────────────────────────────────────────────────


-def test_all_six_personas_defined():
+def test_all_nine_personas_defined():
     from swarm.personas import PERSONAS
-    expected = {"echo", "mace", "helm", "seer", "forge", "quill"}
+    expected = {"echo", "mace", "helm", "seer", "forge", "quill", "pixel", "lyra", "reel"}
     assert expected == set(PERSONAS.keys())


@@ -46,10 +46,10 @@ def test_get_persona_returns_none_for_unknown():
     assert get_persona("bogus") is None


-def test_list_personas_returns_all_six():
+def test_list_personas_returns_all_nine():
     from swarm.personas import list_personas
     personas = list_personas()
-    assert len(personas) == 6
+    assert len(personas) == 9


 def test_persona_capabilities_are_comma_strings():
@@ -179,7 +179,7 @@ def test_coordinator_spawn_all_personas():
     from swarm import registry
     coord = SwarmCoordinator()
     names = []
-    for pid in ["echo", "mace", "helm", "seer", "forge", "quill"]:
+    for pid in ["echo", "mace", "helm", "seer", "forge", "quill", "pixel", "lyra", "reel"]:
         result = coord.spawn_persona(pid)
         names.append(result["name"])
     agents = registry.list_agents()
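The persona tests above imply a `PERSONAS` mapping keyed by persona id, with `get_persona` returning `None` for unknown ids and `list_personas` returning all entries. A minimal sketch of that implied shape (the per-persona field names here are assumptions; the real definitions live in `swarm/personas.py`):

```python
# Hypothetical shape implied by the assertions; real entries carry
# richer persona metadata (role, capabilities, prompts, etc.).
PERSONAS = {
    pid: {"name": pid.capitalize(), "capabilities": ""}
    for pid in ["echo", "mace", "helm", "seer", "forge",
                "quill", "pixel", "lyra", "reel"]
}


def get_persona(pid):
    # Unknown ids yield None rather than raising.
    return PERSONAS.get(pid)


def list_personas():
    return list(PERSONAS.values())


assert get_persona("bogus") is None
assert len(list_personas()) == 9
```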
93  tests/test_video_tools.py  Normal file
@@ -0,0 +1,93 @@
"""Tests for tools.video_tools — Video generation (Reel persona).

Heavy AI model tests are skipped; only catalogue, interface, and
resolution preset tests run in CI.
"""

import pytest
from unittest.mock import patch, MagicMock

from tools.video_tools import (
    VIDEO_TOOL_CATALOG,
    RESOLUTION_PRESETS,
    VIDEO_STYLES,
    list_video_styles,
    generate_video_clip,
    image_to_video,
)


class TestVideoToolCatalog:
    def test_catalog_has_all_tools(self):
        expected = {"generate_video_clip", "image_to_video", "list_video_styles"}
        assert expected == set(VIDEO_TOOL_CATALOG.keys())

    def test_catalog_entries_have_required_keys(self):
        for tool_id, info in VIDEO_TOOL_CATALOG.items():
            assert "name" in info
            assert "description" in info
            assert "fn" in info
            assert callable(info["fn"])


class TestResolutionPresets:
    def test_480p_preset(self):
        assert RESOLUTION_PRESETS["480p"] == (854, 480)

    def test_720p_preset(self):
        assert RESOLUTION_PRESETS["720p"] == (1280, 720)


class TestVideoStyles:
    def test_common_styles_present(self):
        for style in ["cinematic", "anime", "documentary"]:
            assert style in VIDEO_STYLES


class TestListVideoStyles:
    def test_returns_styles_and_resolutions(self):
        result = list_video_styles()
        assert result["success"]
        assert "cinematic" in result["styles"]
        assert "480p" in result["resolutions"]
        assert "720p" in result["resolutions"]


class TestGenerateVideoClipInterface:
    def test_raises_without_creative_deps(self):
        with patch("tools.video_tools._t2v_pipeline", None):
            with patch("tools.video_tools._get_t2v_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    generate_video_clip("a sunset")

    def test_duration_clamped(self):
        """Duration is clamped to the 2–10 range."""
        import sys

        mock_pipe = MagicMock()
        mock_pipe.device = "cpu"
        mock_result = MagicMock()
        mock_result.frames = [[MagicMock() for _ in range(48)]]
        mock_pipe.return_value = mock_result

        mock_torch = MagicMock()
        mock_torch.Generator.return_value = MagicMock()

        out_dir = MagicMock()
        out_dir.__truediv__ = MagicMock(return_value=MagicMock(__str__=lambda s: "/fake/clip.mp4"))

        with patch.dict(sys.modules, {"torch": mock_torch}):
            with patch("tools.video_tools._get_t2v_pipeline", return_value=mock_pipe):
                with patch("tools.video_tools._export_frames_to_mp4"):
                    with patch("tools.video_tools._output_dir", return_value=out_dir):
                        with patch("tools.video_tools._save_metadata"):
                            result = generate_video_clip("test", duration=50)
                            assert result["duration"] == 10  # clamped


class TestImageToVideoInterface:
    def test_raises_without_creative_deps(self):
        with patch("tools.video_tools._t2v_pipeline", None):
            with patch("tools.video_tools._get_t2v_pipeline", side_effect=ImportError("no diffusers")):
                with pytest.raises(ImportError):
                    image_to_video("/fake/image.png", "animate")
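The clamping behaviour that `test_duration_clamped` exercises (a request for 50 seconds coming back as 10) reduces to a one-line min/max pattern. A hedged sketch — `clamp_duration` is a hypothetical helper name; the actual logic sits inside `generate_video_clip`:

```python
def clamp_duration(duration, lo=2, hi=10):
    """Clamp a requested clip duration into the supported second range."""
    return max(lo, min(hi, duration))


assert clamp_duration(50) == 10  # above range -> clamped to the maximum
assert clamp_duration(1) == 2    # below range -> clamped to the minimum
assert clamp_duration(6) == 6    # in range -> unchanged
```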