Refactor ascii-video skill: creative-first SKILL.md, consolidate reference files
# ASCII Video Production Pipeline

Full production pipeline for rendering any content as colored ASCII character video.

## Creative Standard

This is visual art. ASCII characters are the medium; cinema is the standard.

**Before writing a single line of code**, articulate the creative concept. What is the mood? What visual story does this tell? What makes THIS project different from every other ASCII video? The user's prompt is a starting point — interpret it with creative ambition, not literal transcription.

**First-render excellence is non-negotiable.** The output must be visually striking without requiring revision rounds. If something looks generic, flat, or like "AI-generated ASCII art," it is wrong — rethink the creative concept before shipping.

**Go beyond the reference vocabulary.** The effect catalogs, shader presets, and palette libraries in the references are a starting vocabulary. For every project, combine, modify, and invent new patterns. The catalog is a palette of paints — you write the painting.

**Be proactively creative.** Extend the skill's vocabulary when the project calls for it. If the references don't have what the vision demands, build it. Include at least one visual moment the user didn't ask for but will appreciate — a transition, an effect, a color choice that elevates the whole piece.

**Cohesive aesthetic over technical correctness.** All scenes in a video must feel connected by a unifying visual language — shared color temperature, related character palettes, consistent motion vocabulary. A technically correct video where every scene uses a random different effect is an aesthetic failure.

**Dense, layered, considered.** Every frame should reward viewing. Never flat black backgrounds. Always multi-grid composition. Always per-scene variation. Always intentional color.

## Modes

| Mode | Input | Output | Reference |
|------|-------|--------|-----------|
| **Video-to-ASCII** | Video file | ASCII recreation of source footage | `references/inputs.md` § Video Sampling |
| **Audio-reactive** | Audio file | Generative visuals driven by audio features | `references/inputs.md` § Audio Analysis |
| **Generative** | None (or seed params) | Procedural ASCII animation | `references/effects.md` |

## Stack

Single self-contained Python script per project. No GPU required.

| Layer | Tool | Purpose |
|-------|------|---------|
| Core | Python 3.10+, NumPy | Math, array ops, vectorized effects |
| Signal | SciPy | FFT, peak detection (audio modes) |
| Imaging | Pillow (PIL) | Font rasterization, frame decoding, image I/O |
| Video I/O | ffmpeg (CLI) | Decode input, encode output, mux audio |
| Parallel | concurrent.futures | N workers for batch/clip rendering |
| TTS | ElevenLabs API (optional) | Generate narration clips |
| Optional | OpenCV | Video frame sampling, edge detection |

## Pipeline Architecture

Every mode follows the same 6-stage pipeline:

```
INPUT → ANALYZE → SCENE_FN → TONEMAP → SHADE → ENCODE
```

1. **INPUT** — Load/decode source material (video frames, audio samples, images, or nothing)
2. **ANALYZE** — Extract per-frame features (audio bands, video luminance/edges, motion vectors)
3. **SCENE_FN** — Scene function renders to pixel canvas (`uint8 H,W,3`). Composes multiple character grids via `_render_vf()` + pixel blend modes. See `references/composition.md`
4. **TONEMAP** — Percentile-based adaptive brightness normalization. See `references/composition.md` § Adaptive Tonemap
5. **SHADE** — Post-processing via `ShaderChain` + `FeedbackBuffer`. See `references/shaders.md`
6. **ENCODE** — Pipe raw RGB frames to ffmpeg for H.264/GIF encoding
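
The six stages reduce to a small per-frame driver. A minimal sketch, with `scene_fn`, `tonemap`, the shader list, and `write_frame` as stand-ins for the components the references define:

```python
def render_frames(n_frames, fps, scene_fn, tonemap, shaders, write_frame):
    """Minimal driver for stages 3-6: scene -> tonemap -> shade -> encode."""
    for i in range(n_frames):
        t = i / fps
        canvas = scene_fn(t)           # 3. SCENE_FN: uint8 (H, W, 3) pixel canvas
        canvas = tonemap(canvas)       # 4. TONEMAP: adaptive brightness normalization
        for shader in shaders:         # 5. SHADE: post-processing chain
            canvas = shader(canvas)
        write_frame(canvas.tobytes())  # 6. ENCODE: raw RGB bytes to ffmpeg's stdin
```

Stages 1 and 2 run once up front; this loop is what each worker executes per clip.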
## Creative Direction

**Every project should look and feel different.** The references provide a vocabulary of building blocks — don't copy them verbatim. Combine, modify, and invent.

### Aesthetic Dimensions

| Dimension | Options | Reference |
|-----------|---------|-----------|
| **Character palette** | Density ramps, block elements, symbols, scripts (katakana, Greek, runes, braille), project-specific | `architecture.md` § Palettes |
| **Color strategy** | HSV, OKLAB/OKLCH, discrete RGB palettes, auto-generated harmony, monochrome, temperature | `architecture.md` § Color System |
| **Background texture** | Sine fields, fBM noise, domain warp, voronoi, reaction-diffusion, cellular automata, video | `effects.md` |
| **Primary effects** | Rings, spirals, tunnel, vortex, waves, interference, aurora, fire, SDFs, strange attractors | `effects.md` |
| **Particles** | Sparks, snow, rain, bubbles, runes, orbits, flocking boids, flow-field followers, trails | `effects.md` § Particles |
| **Shader mood** | Retro CRT, clean modern, glitch art, cinematic, dreamy, industrial, psychedelic | `shaders.md` |
| **Grid density** | xs(8px) through xxl(40px), mixed per layer | `architecture.md` § Grid System |
| **Font** | Menlo, Monaco, Courier, SF Mono, JetBrains Mono, Fira Code, IBM Plex | `architecture.md` § Font Selection |
| **Coordinate space** | Cartesian, polar, tiled, rotated, fisheye, Möbius, domain-warped | `effects.md` § Transforms |
| **Feedback** | Zoom tunnel, rainbow trails, ghostly echo, rotating mandala, color evolution | `composition.md` § Feedback |
| **Masking** | Circle, ring, gradient, text stencil, animated iris/wipe/dissolve | `composition.md` § Masking |
| **Transitions** | Crossfade, wipe, dissolve, glitch cut, iris, mask-based reveal | `shaders.md` § Transitions |

### Per-Section Variation

Never use the same config for the entire video. For each section/scene:

- **Different background effect** (or compose 2-3)
- **Different character palette** (match the mood)
- **Different color strategy** (or at minimum a different hue)
- **Vary shader intensity** (more bloom during peaks, more grain during quiet)
- **Different particle types** if particles are active

### Project-Specific Invention

For every project, invent at least one of:

- A custom character palette matching the theme
- A custom background effect (combine/modify existing building blocks)
- A custom color palette (discrete RGB set matching the brand/mood)
- A custom particle character set
- A novel scene transition or visual moment

Don't just pick from the catalog. The catalog is vocabulary — you write the poem.

## Workflow

### Step 1: Creative Vision

Before any code, articulate the creative concept:

- **Mood/atmosphere**: What should the viewer feel? Energetic, meditative, chaotic, elegant, ominous?
- **Visual story**: What happens over the duration? Build tension? Transform? Dissolve?
- **Color world**: Warm/cool? Monochrome? Neon? Earth tones? What's the dominant hue?
- **Character texture**: Dense data? Sparse stars? Organic dots? Geometric blocks?
- **What makes THIS different**: What's the one thing that makes this project unique?
- **Emotional arc**: How do scenes progress? Open with energy, build to climax, resolve?

Map the user's prompt to aesthetic choices. A "chill lo-fi visualizer" demands different everything from a "glitch cyberpunk data stream."

### Step 2: Technical Design

Establish with user:

- **Input source** — file path, format, duration
- **Mode** — which of the 6 modes above
- **Resolution** — landscape 1920x1080 (default), portrait 1080x1920, square 1080x1080 @ 24fps
- **Hardware detection** — auto-detect cores/RAM, set quality profile. See `references/optimization.md`
- **Sections** — map timestamps to scene functions, each with its own effect/palette/color/shader config
- **Style direction** — dense/sparse, bright/dark, chaotic/minimal, color palette
- **Text/branding** — easter eggs, overlays, credits, themed character sets
- **Output format** — MP4 (default), GIF (640x360 @ 15fps), PNG sequence

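
A minimal sketch of the hardware-detection pair referenced above (a simplified `detect_hardware()` / `quality_profile()` in the spirit of `references/optimization.md`; the preset values and profile fields here are illustrative):

```python
import os

def detect_hardware():
    """Detect CPU count and (roughly) physical RAM; never hardcode these."""
    cpu = os.cpu_count() or 2
    try:
        mem_gb = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 2**30
    except (ValueError, OSError, AttributeError):
        mem_gb = 8.0  # fallback when sysconf is unavailable (e.g. Windows)
    return {"cpu_count": cpu, "mem_gb": mem_gb, "workers": max(1, cpu - 1)}

def quality_profile(hw, quality="production"):
    """Map hardware + quality preference to render settings."""
    presets = {
        "draft":      {"vw": 640,  "vh": 360,  "fps": 12},
        "preview":    {"vw": 1280, "vh": 720,  "fps": 24},
        "production": {"vw": 1920, "vh": 1080, "fps": 24},
    }
    p = dict(presets[quality])
    p["workers"] = hw["workers"]  # scale parallelism to detected cores
    return p
```

Never hardcode worker counts, resolution, or CRF — always detect and adapt.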
### Step 3: Build the Script

Single Python file. Components (with references):

1. **Hardware detection + quality profile** — `references/optimization.md`
2. **Input loader** — mode-dependent; `references/inputs.md`
3. **Feature analyzer** — audio FFT, video luminance, or synthetic
4. **Grid + renderer** — multi-density grids with bitmap cache; `references/architecture.md`
5. **Character palettes** — multiple per project; `references/architecture.md` § Palettes
6. **Color system** — HSV + discrete RGB + harmony generation; `references/architecture.md` § Color
7. **Scene functions** — each returns `canvas (uint8 H,W,3)`; `references/scenes.md`
8. **Tonemap** — adaptive brightness normalization; `references/composition.md`
9. **Shader pipeline** — `ShaderChain` + `FeedbackBuffer`; `references/shaders.md`
10. **Scene table + dispatcher** — time → scene function + config; `references/scenes.md`
11. **Parallel encoder** — N-worker clip rendering with ffmpeg pipes
12. **Main** — orchestrate full pipeline

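
Component 10 can be sketched as a time-range table plus a lookup; the real `SCENES` structure in `references/scenes.md` carries richer per-scene config, so treat the fields and scene names here as illustrative:

```python
# (start, end, scene_fn_name, config) — times in seconds; scenes receive local time
SCENES = [
    (0.0,  5.0, "fx_emergence", {"gamma": 0.75}),
    (5.0, 12.0, "fx_descent",   {"gamma": 0.55}),
]

def dispatch(t, scenes):
    """Return (scene_fn_name, local_time, config) for global time t."""
    for start, end, fn, cfg in scenes:
        if start <= t < end:
            return fn, t - start, cfg
    # Past the last range: clamp to the final scene
    start, _end, fn, cfg = scenes[-1]
    return fn, t - start, cfg
```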
### Step 4: Quality Verification

- **Test frames first**: render single frames at key timestamps before full render
- **Brightness check**: `canvas.mean() > 8` for all ASCII content. If dark, lower gamma
- **Visual coherence**: do all scenes feel like they belong to the same video?
- **Creative vision check**: does the output match the concept from Step 1? If it looks generic, go back

## Critical Implementation Notes

### Brightness — Use `tonemap()`, Not Linear Multipliers

This is the #1 visual issue. ASCII on black is inherently dark. **Never use `canvas * N` multipliers** — they clip highlights. Use adaptive tonemap:

```python
def tonemap(canvas, gamma=0.75):
    """Percentile-based adaptive normalization + gamma. Replaces all brightness multipliers."""
    f = canvas.astype(np.float32)
    lo, hi = np.percentile(f[::4, ::4], [1, 99.5])  # black/white points from a subsampled grid
    if hi - lo < 10: hi = lo + 10
    f = np.clip((f - lo) / (hi - lo), 0, 1) ** gamma  # gamma < 1 = brighter mids
    return (f * 255).astype(np.uint8)
```

Pipeline: `scene_fn() → tonemap() → FeedbackBuffer → ShaderChain → ffmpeg`

Per-scene gamma: default 0.75, solarize 0.55, posterize 0.50, bright scenes 0.85. Use `screen` blend (not `overlay`) for dark layers.

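
The screen-vs-overlay point is easy to verify numerically. A sketch using the standard blend formulas (not the skill's full `blend_canvas()` from `references/composition.md`):

```python
import numpy as np

def blend_screen(a, b):
    """screen: 1 - (1-a)(1-b); dark inputs stay visible."""
    return 1 - (1 - a) * (1 - b)

def blend_overlay(a, b):
    """overlay: 2ab below 0.5, so it squares (crushes) dark values."""
    return np.where(a < 0.5, 2 * a * b, 1 - 2 * (1 - a) * (1 - b))

a = b = np.float32(0.12)  # two dim ASCII layers, normalized 0-1
# overlay: 2 * 0.12 * 0.12 = 0.0288 (crushed); screen: 0.2256 (both layers survive)
```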
### Font Cell Height

macOS Pillow: `textbbox()` returns wrong height. Use `font.getmetrics()`: `cell_height = ascent + descent`. See `references/troubleshooting.md`.

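
A sketch of the fix; it only assumes an object exposing Pillow's `getmetrics()` interface (e.g. a `FreeTypeFont`):

```python
def cell_height_for(font):
    """Correct character cell height for grid layout: ascent + descent.
    Pillow's textbbox() height is wrong on macOS; do not use it for cells."""
    ascent, descent = font.getmetrics()  # available on PIL.ImageFont.FreeTypeFont
    return ascent + descent
```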
### ffmpeg Pipe Deadlock

Never `stderr=subprocess.PIPE` with long-running ffmpeg — buffer fills at 64KB and deadlocks. Redirect to file. See `references/troubleshooting.md`.

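
A sketch of the safe pattern; the ffmpeg flags are illustrative, the point is that `stderr` goes to a file, never a pipe:

```python
import subprocess

def encoder_cmd(out_path, w, h, fps):
    """Illustrative ffmpeg invocation reading raw RGB frames on stdin."""
    return ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", f"{w}x{h}", "-r", str(fps), "-i", "-",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]

def open_encoder(out_path, err_path, w, h, fps):
    """stderr to a file: a PIPE left undrained fills at 64KB and deadlocks."""
    stderr_fh = open(err_path, "w")
    return subprocess.Popen(encoder_cmd(out_path, w, h, fps),
                            stdin=subprocess.PIPE,
                            stdout=subprocess.DEVNULL, stderr=stderr_fh)
```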
### Font Compatibility

Not all Unicode chars render in all fonts. Validate palettes at init — render each char, check for blank output. See `references/troubleshooting.md`.

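
A minimal validator along those lines (assumes Pillow; the 20x20 probe size is arbitrary):

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def validate_palette(palette, font):
    """Drop characters the font cannot render (they would rasterize to blank cells)."""
    kept = []
    for c in palette:
        img = Image.new("L", (20, 20), 0)
        ImageDraw.Draw(img).text((0, 0), c, fill=255, font=font)
        if np.asarray(img).max() == 0:
            print(f"WARNING: char {c!r} (U+{ord(c):04X}) not in font, dropping")
        else:
            kept.append(c)
    return kept
```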
### Per-Clip Architecture

For segmented videos (quotes, scenes, chapters), render each as a separate clip file for parallel rendering and selective re-rendering. See `references/scenes.md`.

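
A minimal sketch of the per-clip layout (segment fields and the `render_clip` signature are illustrative):

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

# Each discrete segment becomes its own clip file, so any one can be re-rendered alone.
SEGMENTS = [
    {"id": "intro", "start": 0.0, "end": 5.0},
    {"id": "q00", "start": 5.0, "end": 12.0},
    {"id": "t00", "start": 12.0, "end": 13.5},
]

def clip_path(seg, out_dir="clips"):
    return f"{out_dir}/{seg['id']}.mp4"

def render_all(segments, render_clip, workers, only=None):
    """Render clips in parallel; `only` re-renders a subset (e.g. {'q00'})."""
    todo = [s for s in segments if only is None or s["id"] in only]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(render_clip, s, clip_path(s)): s["id"] for s in todo}
        for fut in as_completed(futures):
            fut.result()  # surface worker exceptions immediately
```

The rendered clips are then concatenated and muxed with audio by ffmpeg.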
## Performance Targets

| Component | Budget |
|-----------|--------|
| Shader pipeline | 5-25ms |
| **Total** | ~100-200ms/frame |

**Fast iteration**: render single test frames to check brightness/layout before full render:

```python
canvas = render_single_frame(frame_index, features, renderer)
Image.fromarray(canvas).save("test.png")
```

**Brightness verification**: sample 5-10 frames across video, check `mean > 8` for ASCII content.

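
A sketch of that check; the `render_frame` callable is a stand-in for whatever renders a single frame:

```python
def verify_brightness(frame_indices, render_frame, floor=8.0):
    """Sample frames across the video; flag any whose mean falls below the floor."""
    dark = []
    for i in frame_indices:
        mean = float(render_frame(i).mean())
        if mean < floor:
            dark.append((i, mean))
    return dark  # empty list means all sampled frames pass
```

If any frames come back dark, lower the scene's tonemap gamma rather than multiplying the canvas.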
## References

| File | Contents |
|------|----------|
| `references/architecture.md` | Grid system, resolution presets, font selection, character palettes (20+), color system (HSV + OKLAB + discrete RGB + harmony generation), `_render_vf()` helper, GridLayer class |
| `references/composition.md` | Pixel blend modes (20 modes), `blend_canvas()`, multi-grid composition, adaptive `tonemap()`, `FeedbackBuffer`, `PixelBlendStack`, masking/stencil system |
| `references/effects.md` | Effect building blocks: value field generators, hue fields, noise/fBM/domain warp, voronoi, reaction-diffusion, cellular automata, SDFs, strange attractors, particle systems, coordinate transforms, temporal coherence |
| `references/shaders.md` | `ShaderChain`, `_apply_shader_step()` dispatch, 38 shader catalog, audio-reactive scaling, transitions, tint presets, output format encoding, terminal rendering |
| `references/scenes.md` | Scene protocol, `Renderer` class, `SCENES` table, `render_clip()`, beat-synced cutting, parallel rendering, design patterns (layer hierarchy, directional arcs, visual metaphors, compositional techniques), complete scene examples at every complexity level, scene design checklist |
| `references/inputs.md` | Audio analysis (FFT, bands, beats), video sampling, image conversion, text/lyrics, TTS integration (ElevenLabs, voice assignment, audio mixing) |
| `references/optimization.md` | Hardware detection, quality profiles, vectorized patterns, parallel rendering, memory management, performance budgets |
| `references/troubleshooting.md` | NumPy broadcasting traps, blend mode pitfalls, multiprocessing/pickling, brightness diagnostics, ffmpeg issues, font problems, common mistakes |

# Architecture Reference

> **See also:** composition.md · effects.md · scenes.md · shaders.md · inputs.md · optimization.md · troubleshooting.md

## Grid System
The composable system is the core of visual complexity. It operates at three levels: pixel-level blend modes, multi-grid composition, and adaptive brightness management. This document covers all three, plus the masking/stencil system for spatial control.

> **See also:** architecture.md · effects.md · scenes.md · shaders.md · troubleshooting.md

## Pixel-Level Blend Modes
# Scene Design Patterns

**Cross-references:**
- Scene protocol, SCENES table: `scenes.md`
- Blend modes, multi-grid composition, tonemap: `composition.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- Shader pipeline, feedback buffer: `shaders.md`
- Complete scene examples: `examples.md`

Higher-order patterns for composing scenes that feel intentional rather than random. These patterns use the existing building blocks (value fields, blend modes, shaders, feedback) but organize them with compositional intent.

## Layer Hierarchy

Every scene should have clear visual layers with distinct roles:

| Layer | Grid | Brightness | Purpose |
|-------|------|-----------|---------|
| **Background** | xs or sm (dense) | 0.1–0.25 | Atmosphere, texture. Never competes with content. |
| **Content** | md (balanced) | 0.4–0.8 | The main visual idea. Carries the scene's concept. |
| **Accent** | lg or sm (sparse) | 0.5–1.0 (sparse coverage) | Highlights, punctuation, sparse bright points. |

The background sets mood. The content layer is what the scene *is about*. The accent adds visual interest without overwhelming.

```python
def fx_example(r, f, t, S):
    local = t
    progress = min(local / 5.0, 1.0)

    g_bg = r.get_grid("sm")
    g_main = r.get_grid("md")
    g_accent = r.get_grid("lg")

    # --- Background: dim atmosphere ---
    bg_val = vf_smooth_noise(g_bg, f, t * 0.3, S, octaves=2, bri=0.15)
    # ... render bg to canvas

    # --- Content: the main visual idea ---
    content_val = vf_spiral(g_main, f, t, S, n_arms=n_arms, tightness=tightness)
    # ... render content on top of canvas

    # --- Accent: sparse highlights ---
    accent_val = vf_noise_static(g_accent, f, t, S, density=0.05)
    # ... render accent on top

    return canvas
```
## Directional Parameter Arcs

Parameters should *go somewhere* over the scene's duration — not oscillate aimlessly with `sin(t * N)`.

**Bad:** `twist = 3.0 + 2.0 * math.sin(t * 0.6)` — wobbles back and forth, feels aimless.

**Good:** `twist = 2.0 + progress * 5.0` — starts gentle, ends intense. The scene *builds*.

Use `progress = min(local / duration, 1.0)` (0→1 over the scene) to drive directional change:

| Pattern | Formula | Feel |
|---------|---------|------|
| Linear ramp | `progress * range` | Steady buildup |
| Ease-out | `1 - (1 - progress) ** 2` | Fast start, gentle finish |
| Ease-in | `progress ** 2` | Slow start, accelerating |
| Step reveal | `np.clip((progress - 0.5) / 0.25, 0, 1)` | Nothing until 50%, then fades in |
| Build + plateau | `min(1.0, progress * 1.5)` | Reaches full at 67%, holds |

Oscillation is fine for *secondary* parameters (saturation shimmer, hue drift). But the *defining* parameter of the scene should have a direction.
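The arc formulas in the table above can be wrapped as tiny helpers. A sketch with illustrative names (not part of the skill's API):

```python
def arc_linear(progress):
    """Steady buildup: 0 -> 1."""
    return progress

def arc_ease_out(progress):
    """Fast start, gentle finish."""
    return 1 - (1 - progress) ** 2

def arc_ease_in(progress):
    """Slow start, accelerating."""
    return progress ** 2

def arc_step_reveal(progress, at=0.5, ramp=0.25):
    """Nothing until `at`, then fades in over `ramp` of the scene."""
    return min(1.0, max(0.0, (progress - at) / ramp))

def arc_build_plateau(progress, speed=1.5):
    """Reaches full early, then holds."""
    return min(1.0, progress * speed)

# Example: a twist parameter that builds over a 5-second scene
local, duration = 3.0, 5.0
progress = min(local / duration, 1.0)
twist = 2.0 + arc_ease_out(progress) * 5.0
```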

### Examples of Directional Arcs

| Scene concept | Parameter | Arc |
|--------------|-----------|-----|
| Emergence | Ring radius | 0 → max (ease-out) |
| Shatter | Voronoi cell count | 8 → 38 (linear) |
| Descent | Tunnel speed | 2.0 → 10.0 (linear) |
| Mandala | Shape complexity | ring → +polygon → +star → +rosette (step reveals) |
| Crescendo | Layer count | 1 → 7 (staggered entry) |
| Entropy | Geometry visibility | 1.0 → 0.0 (consumed) |

## Scene Concepts

Each scene should be built around a *visual idea*, not an effect name.

**Bad:** "fx_plasma_cascade" — named after the effect. No concept.
**Good:** "fx_emergence" — a point of light expands into a field. The name tells you *what happens*.

Good scene concepts have:
1. A **visual metaphor** (emergence, descent, collision, entropy)
2. A **directional arc** (things change from A to B, not oscillate)
3. **Motivated layer choices** (each layer serves the concept)
4. **Motivated feedback** (transform direction matches the metaphor)

| Concept | Metaphor | Feedback transform | Why |
|---------|----------|-------------------|-----|
| Emergence | Birth, expansion | zoom-out | Past frames expand outward |
| Descent | Falling, acceleration | zoom-in | Past frames rush toward center |
| Inferno | Rising fire | shift-up | Past frames rise with the flames |
| Entropy | Decay, dissolution | none | Clean, no persistence — things disappear |
| Crescendo | Accumulation | zoom + hue_shift | Everything compounds and shifts |

## Compositional Techniques

### Counter-Rotating Dual Systems

Two instances of the same effect rotating in opposite directions create visual interference:

```python
# Primary spiral (clockwise)
s1_val = vf_spiral(g_main, f, t * 1.5, S, n_arms=n_arms_1, tightness=tightness_1)

# Counter-rotating spiral (counter-clockwise via negative time)
s2_val = vf_spiral(g_accent, f, -t * 1.2, S, n_arms=n_arms_2, tightness=tightness_2)

# ... render s1_val and s2_val to canvases c1 and c2 ...

# Screen blend creates bright interference at crossing points
canvas = blend_canvas(c1, c2, "screen", 0.7)
```
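The screen blend that creates those bright crossings is easy to verify in isolation. A sketch assuming normalized values in [0, 1]; the skill's `blend_canvas` may operate on uint8 pixel canvases instead:

```python
import numpy as np

def screen_blend(a, b, opacity=1.0):
    """Screen blend: 1 - (1-a)(1-b), mixed back toward `a` by opacity."""
    out = 1.0 - (1.0 - a) * (1.0 - b)
    return a * (1.0 - opacity) + out * opacity

a = np.array([0.0, 0.5, 1.0])
b = np.array([0.5, 0.5, 0.5])
print(screen_blend(a, b))  # brightens every element, never darkens
```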

Works with spirals, vortexes, rings. The counter-rotation creates constantly shifting interference patterns.

### Wave Collision

Two wave fronts converging from opposite sides, meeting at a collision point:

```python
collision_phase = abs(progress - 0.5) * 2  # 1→0→1 (0 at collision)

# Wave A approaches from left
offset_a = (1 - progress) * g.cols * 0.4
wave_a = np.sin((g.cc + offset_a) * 0.08 + t * 2) * 0.5 + 0.5

# Wave B approaches from right
offset_b = -(1 - progress) * g.cols * 0.4
wave_b = np.sin((g.cc + offset_b) * 0.08 - t * 2) * 0.5 + 0.5

# Interference peaks at collision
combined = wave_a * 0.5 + wave_b * 0.5 + np.abs(wave_a - wave_b) * (1 - collision_phase) * 0.5
```

### Progressive Fragmentation

Voronoi with cell count increasing over time — visual shattering:

```python
n_pts = int(8 + progress * 30)  # 8 cells → 38 cells
# Pre-generate enough points, slice to n_pts
px = base_x[:n_pts] + np.sin(t * 0.3 + np.arange(n_pts) * 0.7) * (3 + progress * 3)
```
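A runnable sketch of the slicing idea, with hypothetical `base_x`/`base_y` arrays pre-generated once per scene:

```python
import numpy as np

rng = np.random.default_rng(42)
base_x = rng.uniform(0, 80, 40)   # pre-generate more points than ever needed
base_y = rng.uniform(0, 40, 40)

def fragment_points(progress, t):
    """Return the active Voronoi seed points for a given scene progress."""
    n_pts = int(8 + progress * 30)  # 8 cells -> 38 cells
    jitter = np.sin(t * 0.3 + np.arange(n_pts) * 0.7) * (3 + progress * 3)
    return base_x[:n_pts] + jitter, base_y[:n_pts]

px, py = fragment_points(1.0, 0.0)
assert len(px) == 38 and len(py) == 38
```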

The edge glow width can also increase with progress to emphasize the cracks.

### Entropy / Consumption

A clean geometric pattern being overtaken by an organic process:

```python
# Geometry fades out
geo_val = clean_pattern * max(0.05, 1.0 - progress * 0.9)

# Organic process grows in
rd_val = vf_reaction_diffusion(g, f, t, S) * min(1.0, progress * 1.5)

# Render geometry first, organic on top — organic consumes geometry
```

### Staggered Layer Entry (Crescendo)

Layers enter one at a time, building to overwhelming density:

```python
def layer_strength(enter_t, ramp=1.5):
    """0.0 until enter_t, ramps to 1.0 over ramp seconds."""
    return max(0.0, min(1.0, (local - enter_t) / ramp))  # local = scene-local time

# Layer 1: always present
s1 = layer_strength(0.0)
# Layer 2: enters at 2s
s2 = layer_strength(2.0)
# Layer 3: enters at 4s
s3 = layer_strength(4.0)
# ... etc

# Each layer uses a different effect, grid, palette, and blend mode
# Screen blend between layers so they accumulate light
```
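A self-contained version of `layer_strength` for reference; here `local` (scene-local time) is passed explicitly instead of closed over:

```python
def layer_strength(local, enter_t, ramp=1.5):
    """0.0 until enter_t, then ramps linearly to 1.0 over `ramp` seconds."""
    return max(0.0, min(1.0, (local - enter_t) / ramp))

# At local = 3.0s with layers entering every 2 seconds:
strengths = [layer_strength(3.0, enter) for enter in (0.0, 2.0, 4.0)]
# layer 1 fully in, layer 2 still ramping, layer 3 not yet entered
```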

For a 15-second crescendo, 7 layers entering every 2 seconds works well. Use different blend modes (screen for most, add for energy, colordodge for the final wash).

## Scene Ordering

For a multi-scene reel or video:
- **Vary mood between adjacent scenes** — don't put two calm scenes next to each other
- **Randomize order** rather than grouping by type — prevents "effect demo" feel
- **End on the strongest scene** — crescendo or something with a clear payoff
- **Open with energy** — grab attention in the first 2 seconds
@@ -2,13 +2,7 @@
Effect building blocks that produce visual patterns. In v2, these are used **inside scene functions** that return a pixel canvas directly. The building blocks below operate on grid coordinate arrays and produce `(chars, colors)` or value/hue fields that the scene function renders to canvas via `_render_vf()`.

**Cross-references:**
- Grid system, palettes, color: `architecture.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer: `shaders.md`
- Complete scene examples using these effects: `examples.md`
- Common bugs (broadcasting, clipping): `troubleshooting.md`
> **See also:** architecture.md · composition.md · scenes.md · shaders.md · troubleshooting.md

## Design Philosophy

@@ -109,142 +103,7 @@ def bg_cellular(g, f, t, n_centers=12, hue=0.5, bri=0.6, pal=PAL_BLOCKS):

---

## Radial Effects

### Concentric Rings
Bass/sub-driven pulsing rings from center. Scale ring count and thickness with bass energy.
```python
def eff_rings(g, f, t, hue=0.5, n_base=6, pal=PAL_DEFAULT):
    n_rings = int(n_base + f["sub_r"] * 25 + f["bass"] * 10)
    spacing = 2 + f["bass_r"] * 7 + f["rms"] * 3
    ring_cv = np.zeros((g.rows, g.cols), dtype=np.float32)
    for ri in range(n_rings):
        rad = (ri+1) * spacing + f["bdecay"] * 15
        wobble = f["mid_r"]*5*np.sin(g.angle*3 + t*4) + f["hi_r"]*3*np.sin(g.angle*7 - t*6)
        rd = np.abs(g.dist - rad - wobble)
        th = 1 + f["sub"] * 3
        ring_cv = np.maximum(ring_cv, np.clip((1 - rd/th) * (0.4 + f["bass"]*0.8), 0, 1))
    # Color by angle + distance for rainbow rings
    h = g.angle/(2*np.pi) + g.dist*0.005 + f["sub_r"]*0.2
    return ring_cv, h
```

### Radial Rays
```python
def eff_rays(g, f, t, n_base=8, hue=0.5):
    n_rays = int(n_base + f["hi_r"] * 25)
    ray = np.clip(np.cos(g.angle*n_rays + t*3) * f["bdecay"]*0.6 * (1-g.dist_n), 0, 0.7)
    return ray
```

### Spiral Arms (Logarithmic)
```python
def eff_spiral(g, f, t, n_arms=3, tightness=2.5, hue=0.5):
    arm_cv = np.zeros((g.rows, g.cols), dtype=np.float32)
    for ai in range(n_arms):
        offset = ai * 2*np.pi / n_arms
        log_r = np.log(g.dist + 1) * tightness
        arm_phase = g.angle + offset - log_r + t * 0.8
        # One arm per iteration; multiplying the phase by n_arms here would
        # collapse the per-arm offsets (offset * n_arms is a multiple of 2*pi)
        arm_val = np.clip(np.cos(arm_phase) * 0.6 + 0.2, 0, 1)
        arm_val *= (0.4 + f["rms"]*0.6) * np.clip(1 - g.dist_n*0.5, 0.2, 1)
        arm_cv = np.maximum(arm_cv, arm_val)
    return arm_cv
```

### Center Glow / Pulse
```python
def eff_glow(g, f, t, intensity=0.6, spread=2.0):
    return np.clip(intensity * np.exp(-g.dist_n * spread) * (0.5 + f["rms"]*2 + np.sin(t*1.2)*0.2), 0, 0.9)
```

### Tunnel / Depth
```python
def eff_tunnel(g, f, t, speed=3.0, complexity=6):
    tunnel_d = 1.0 / (g.dist_n + 0.1)
    v1 = np.sin(tunnel_d*2 - t*speed) * 0.45 + 0.55
    v2 = np.sin(g.angle*complexity + tunnel_d*1.5 - t*2) * 0.35 + 0.55
    return v1 * 0.5 + v2 * 0.5
```

### Vortex (Rotating Distortion)
```python
def eff_vortex(g, f, t, twist=3.0, pulse=True):
    """Twisting radial pattern -- distance modulates angle."""
    twisted = g.angle + g.dist_n * twist * np.sin(t * 0.5)
    val = np.sin(twisted * 4 - t * 2) * 0.5 + 0.5
    if pulse:
        val *= 0.5 + f.get("bass", 0.3) * 0.8
    return np.clip(val, 0, 1)
```

---

## Wave Effects

### Multi-Band Frequency Waves
Each frequency band draws its own wave at different spatial/temporal frequencies:
```python
def eff_freq_waves(g, f, t, bands=None):
    if bands is None:
        bands = [("sub",0.06,1.2,0.0), ("bass",0.10,2.0,0.08), ("lomid",0.15,3.0,0.16),
                 ("mid",0.22,4.5,0.25), ("himid",0.32,6.5,0.4), ("hi",0.45,8.5,0.55)]
    mid = g.rows / 2.0
    composite = np.zeros((g.rows, g.cols), dtype=np.float32)
    for band_key, sf, tf, hue_base in bands:
        amp = f.get(band_key, 0.3) * g.rows * 0.4
        y_wave = mid - np.sin(g.cc*sf + t*tf) * amp
        y_wave += np.sin(g.cc*sf*2.3 + t*tf*1.7) * amp * 0.2  # harmonic
        dist = np.abs(g.rr - y_wave)
        thickness = 2 + f.get(band_key, 0.3) * 5
        intensity = np.clip((1 - dist/thickness) * f.get(band_key, 0.3) * 1.5, 0, 1)
        composite = np.maximum(composite, intensity)
    return composite
```

### Interference Pattern
6-8 overlapping sine waves creating moiré-like patterns:
```python
def eff_interference(g, f, t, n_waves=5):
    """Parametric interference -- vary n_waves for complexity."""
    # Each wave has different orientation, frequency, and feature driver
    drivers = ["mid_r", "himid_r", "bass_r", "lomid_r", "hi_r"]
    vals = np.zeros((g.rows, g.cols), dtype=np.float32)
    for i in range(min(n_waves, len(drivers))):
        angle = i * np.pi / n_waves  # spread orientations
        freq = 0.06 + i * 0.03
        sp = 0.5 + i * 0.3
        proj = g.cc * np.cos(angle) + g.rr * np.sin(angle)
        vals += np.sin(proj * freq + t * sp) * f.get(drivers[i], 0.3) * 2.5
    return np.clip(vals * 0.12 + 0.45, 0.1, 1)
```

### Aurora / Horizontal Bands
```python
def eff_aurora(g, f, t, hue=0.4, n_bands=3):
    val = np.zeros((g.rows, g.cols), dtype=np.float32)
    for i in range(n_bands):
        freq_r = 0.08 + i * 0.04
        freq_c = 0.012 + i * 0.008
        sp_r = 0.7 + i * 0.3
        sp_c = 0.18 + i * 0.12
        val += np.sin(g.rr*freq_r + t*sp_r) * np.sin(g.cc*freq_c + t*sp_c) * (0.6 / n_bands)
    return np.clip(val * (f.get("lomid_r", 0.3)*3 + 0.2), 0, 0.7)
```

### Ripple (Point-Source Waves)
```python
def eff_ripple(g, f, t, sources=None, freq=0.3, damping=0.02):
    """Concentric ripples from point sources. Sources = [(row_frac, col_frac), ...]"""
    if sources is None:
        sources = [(0.5, 0.5)]  # center
    val = np.zeros((g.rows, g.cols), dtype=np.float32)
    for ry, rx in sources:
        dy = g.rr - g.rows * ry
        dx = g.cc - g.cols * rx
        d = np.sqrt(dy**2 + dx**2)
        val += np.sin(d * freq - t * 4) * np.exp(-d * damping) * 0.5
    return np.clip(val + 0.5, 0, 1)
```
> **Note:** The v1 `eff_rings`, `eff_rays`, `eff_spiral`, `eff_glow`, `eff_tunnel`, `eff_vortex`, `eff_freq_waves`, `eff_interference`, `eff_aurora`, and `eff_ripple` functions are superseded by the `vf_*` value field generators below (used via `_render_vf()`). The `vf_*` versions integrate with the multi-grid composition pipeline and are preferred for all new scenes.

---

@@ -1967,3 +1826,40 @@ def scene_complex(r, f, t, S):
```

Vary the **value field combo**, **hue field**, **palette**, **blend modes**, **feedback config**, and **shader chain** per section for maximum visual variety. With 12 value fields × 8 hue fields × 14 palettes × 20 blend modes × 7 feedback transforms × 38 shaders, the combinations are effectively infinite.
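A quick sanity check on that count (one pick from each category, ignoring multi-layer stacking):

```python
value_fields, hue_fields, palettes = 12, 8, 14
blend_modes, feedback_transforms, shaders = 20, 7, 38
combos = value_fields * hue_fields * palettes * blend_modes * feedback_transforms * shaders
print(combos)  # 7150080 single-layer combinations, before stacking layers
```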

---

## Combining Effects — Creative Guide

The catalog above is vocabulary. Here's how to compose it into something that looks intentional.

### Layering for Depth
Every scene should have at least two layers at different grid densities:
- **Background** (sm or xs): dense, dim texture that prevents flat black. fBM, smooth noise, or domain warp at low brightness (bri=0.15–0.25).
- **Content** (md): the main visual — rings, voronoi, spirals, tunnel. Full brightness.
- **Accent** (lg or xl): sparse highlights — particles, text stencil, glow pulse. Screen-blended on top.

### Interesting Effect Pairs
| Pair | Blend | Why it works |
|------|-------|-------------|
| fBM + voronoi edges | `screen` | Organic fills the cells, edges add structure |
| Domain warp + plasma | `difference` | Psychedelic organic interference |
| Tunnel + vortex | `screen` | Depth perspective + rotational energy |
| Spiral + interference | `exclusion` | Moiré patterns from different spatial frequencies |
| Reaction-diffusion + fire | `add` | Living organic base + dynamic foreground |
| SDF geometry + domain warp | `screen` | Clean shapes floating in organic texture |

### Effects as Masks
Any value field can be used as a mask for another effect via `mask_from_vf()`:
- Voronoi cells masking fire (fire visible only inside cells)
- fBM masking a solid color layer (organic color clouds)
- SDF shapes masking a reaction-diffusion field
- Animated iris/wipe revealing one effect over another

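A minimal NumPy sketch of the masking idea. The real pipeline goes through `mask_from_vf()` and canvas helpers; the fields and the 0.5 threshold here are stand-ins:

```python
import numpy as np

field = np.random.default_rng(0).random((30, 60))   # any value field in [0, 1]
fire = np.random.default_rng(1).random((30, 60))    # the effect to be masked

mask = (field > 0.5).astype(np.float32)  # binary mask derived from the field
masked_fire = fire * mask                # fire visible only where mask is 1
```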
### Inventing New Effects
For every project, create at least one effect that isn't in the catalog:
- **Combine two vf_* functions** with math: `np.clip(vf_fbm(...) * vf_rings(...), 0, 1)`
- **Apply coordinate transforms** before evaluation: `vf_plasma(twisted_grid, ...)`
- **Use one field to modulate another's parameters**: `vf_spiral(..., tightness=2 + vf_fbm(...) * 5)`
- **Stack time offsets**: render the same field at `t` and `t - 0.5`, difference-blend for motion trails
- **Mirror a value field** through an SDF boundary for kaleidoscopic geometry

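A runnable sketch of the first bullet. `toy_noise` and `toy_rings` are stand-ins for `vf_fbm` and `vf_rings`, whose real signatures take the skill's grid and feature objects:

```python
import numpy as np

def toy_noise(rr, cc, t):
    """Stand-in for an fBM-style field, values in [0, 1]."""
    return (np.sin(rr * 0.31 + t) * np.sin(cc * 0.17 - t * 0.5)) * 0.5 + 0.5

def toy_rings(rr, cc, t, spacing=6.0):
    """Stand-in for a concentric-rings field, values in [0, 1]."""
    d = np.sqrt((rr - rr.mean()) ** 2 + (cc - cc.mean()) ** 2)
    return np.cos(d / spacing * 2 * np.pi - t) * 0.5 + 0.5

rr, cc = np.meshgrid(np.arange(40), np.arange(80), indexing="ij")
combined = np.clip(toy_noise(rr, cc, 1.0) * toy_rings(rr, cc, 1.0), 0, 1)
```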
@@ -1,416 +0,0 @@
# Scene Examples

**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs, particles): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer, ShaderChain: `shaders.md`
- Input sources (audio features, video features): `inputs.md`
- Performance tuning: `optimization.md`
- Common bugs: `troubleshooting.md`

Copy-paste-ready scene functions at increasing complexity. Each is a complete, working v2 scene function that returns a pixel canvas. See `scenes.md` for the scene protocol and `composition.md` for blend modes and tonemap.

---

## Minimal — Single Grid, Single Effect

### Breathing Plasma

One grid, one value field, one hue field. The simplest possible scene.

```python
def fx_breathing_plasma(r, f, t, S):
    """Plasma field with time-cycling hue. Audio modulates brightness."""
    canvas = _render_vf(r, "md",
        lambda g, f, t, S: vf_plasma(g, f, t, S) * 1.3,
        hf_time_cycle(0.08), PAL_DENSE, f, t, S, sat=0.8)
    return canvas
```

### Reaction-Diffusion Coral

Single grid, simulation-based field. Evolves organically over time.

```python
def fx_coral(r, f, t, S):
    """Gray-Scott reaction-diffusion — coral branching pattern.
    Slow-evolving, organic. Best for ambient/chill sections."""
    canvas = _render_vf(r, "sm",
        lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
            feed=0.037, kill=0.060, steps_per_frame=6, init_mode="center"),
        hf_distance(0.55, 0.015), PAL_DOTS, f, t, S, sat=0.7)
    return canvas
```

### SDF Geometry

Geometric shapes from SDFs. Clean, precise, graphic.

```python
def fx_sdf_rings(r, f, t, S):
    """Concentric SDF rings with smooth pulsing."""
    def val_fn(g, f, t, S):
        d1 = sdf_ring(g, radius=0.15 + f.get("bass", 0.3) * 0.05, thickness=0.015)
        d2 = sdf_ring(g, radius=0.25 + f.get("mid", 0.3) * 0.05, thickness=0.012)
        d3 = sdf_ring(g, radius=0.35 + f.get("hi", 0.3) * 0.04, thickness=0.010)
        combined = sdf_smooth_union(sdf_smooth_union(d1, d2, 0.05), d3, 0.05)
        return sdf_glow(combined, falloff=0.08) * (0.5 + f.get("rms", 0.3) * 0.8)
    canvas = _render_vf(r, "md", val_fn, hf_angle(0.0), PAL_STARS, f, t, S, sat=0.85)
    return canvas
```

---

## Standard — Two Grids + Blend

### Tunnel Through Noise

Two grids at different densities, screen blended. The fine noise texture shows through the coarser tunnel characters.

```python
def fx_tunnel_noise(r, f, t, S):
    """Tunnel depth on md grid + fBM noise on sm grid, screen blended."""
    canvas_a = _render_vf(r, "md",
        lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=4.0, complexity=8) * 1.2,
        hf_distance(0.5, 0.02), PAL_BLOCKS, f, t, S, sat=0.7)

    canvas_b = _render_vf(r, "sm",
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=4, freq=0.05, speed=0.15) * 1.3,
        hf_time_cycle(0.06), PAL_RUNE, f, t, S, sat=0.6)

    return blend_canvas(canvas_a, canvas_b, "screen", 0.7)
```

### Voronoi Cells + Spiral Overlay

Voronoi cell edges with a spiral arm pattern overlaid.

```python
def fx_voronoi_spiral(r, f, t, S):
    """Voronoi edge detection on md + logarithmic spiral on lg."""
    canvas_a = _render_vf(r, "md",
        lambda g, f, t, S: vf_voronoi(g, f, t, S,
            n_cells=15, mode="edge", edge_width=2.0, speed=0.4),
        hf_angle(0.2), PAL_CIRCUIT, f, t, S, sat=0.75)

    canvas_b = _render_vf(r, "lg",
        lambda g, f, t, S: vf_spiral(g, f, t, S, n_arms=4, tightness=3.0) * 1.2,
        hf_distance(0.1, 0.03), PAL_BLOCKS, f, t, S, sat=0.9)

    return blend_canvas(canvas_a, canvas_b, "exclusion", 0.6)
```

### Domain-Warped fBM

Two layers of the same fBM, one domain-warped, difference-blended for psychedelic organic texture.

```python
def fx_organic_warp(r, f, t, S):
    """Clean fBM vs domain-warped fBM, difference blended."""
    canvas_a = _render_vf(r, "sm",
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04, speed=0.1),
        hf_plasma(0.2), PAL_DENSE, f, t, S, sat=0.6)

    canvas_b = _render_vf(r, "md",
        lambda g, f, t, S: vf_domain_warp(g, f, t, S,
            warp_strength=20.0, freq=0.05, speed=0.15),
        hf_time_cycle(0.05), PAL_BRAILLE, f, t, S, sat=0.7)

    return blend_canvas(canvas_a, canvas_b, "difference", 0.7)
```

---

## Complex — Three Grids + Conditional + Feedback

### Psychedelic Cathedral

Three-grid composition with beat-triggered kaleidoscope and feedback zoom tunnel. The most visually complex pattern.

```python
def fx_cathedral(r, f, t, S):
    """Three-layer cathedral: interference + rings + noise, kaleidoscope on beat,
    feedback zoom tunnel."""
    # Layer 1: interference pattern on sm grid
    canvas_a = _render_vf(r, "sm",
        lambda g, f, t, S: vf_interference(g, f, t, S, n_waves=7) * 1.3,
        hf_angle(0.0), PAL_MATH, f, t, S, sat=0.8)

    # Layer 2: pulsing rings on md grid
    canvas_b = _render_vf(r, "md",
        lambda g, f, t, S: vf_rings(g, f, t, S, n_base=10, spacing_base=3) * 1.4,
        hf_distance(0.3, 0.02), PAL_STARS, f, t, S, sat=0.9)

    # Layer 3: temporal noise on lg grid (slow morph)
    canvas_c = _render_vf(r, "lg",
        lambda g, f, t, S: vf_temporal_noise(g, f, t, S,
            freq=0.04, t_freq=0.2, octaves=3),
        hf_time_cycle(0.12), PAL_BLOCKS, f, t, S, sat=0.7)

    # Blend: A screen B, then difference with C
    result = blend_canvas(canvas_a, canvas_b, "screen", 0.8)
    result = blend_canvas(result, canvas_c, "difference", 0.5)

    # Beat-triggered kaleidoscope
    if f.get("bdecay", 0) > 0.3:
        folds = 6 if f.get("sub_r", 0.3) > 0.4 else 8
        result = sh_kaleidoscope(result.copy(), folds=folds)

    return result


# Scene table entry with feedback:
# {"start": 30.0, "end": 50.0, "name": "cathedral", "fx": fx_cathedral,
#  "gamma": 0.65, "shaders": [("bloom", {"thr": 110}), ("chromatic", {"amt": 4}),
#                             ("vignette", {"s": 0.2}), ("grain", {"amt": 8})],
#  "feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
#               "transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}}
```

### Masked Reaction-Diffusion with Attractor Overlay

Reaction-diffusion visible only through an animated iris mask, with a strange attractor density field underneath.

```python
def fx_masked_life(r, f, t, S):
    """Attractor base + reaction-diffusion visible through iris mask + particles."""
    g_sm = r.get_grid("sm")
    g_md = r.get_grid("md")

    # Layer 1: strange attractor density field (background)
    canvas_bg = _render_vf(r, "sm",
        lambda g, f, t, S: vf_strange_attractor(g, f, t, S,
            attractor="clifford", n_points=30000),
        hf_time_cycle(0.04), PAL_DOTS, f, t, S, sat=0.5)

    # Layer 2: reaction-diffusion (foreground, will be masked)
    canvas_rd = _render_vf(r, "md",
        lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
            feed=0.046, kill=0.063, steps_per_frame=4, init_mode="ring"),
        hf_angle(0.15), PAL_HALFFILL, f, t, S, sat=0.85)

    # Animated iris mask — opens over first 5 seconds of scene
    if "_scene_start" not in S:
        S["_scene_start"] = t
    scene_start = S["_scene_start"]
    mask = mask_iris(g_md, t, scene_start, scene_start + 5.0,
                     max_radius=0.6)
    canvas_rd = apply_mask_canvas(canvas_rd, mask, bg_canvas=canvas_bg)

    # Layer 3: flow-field particles following the R-D gradient
    rd_field = vf_reaction_diffusion(g_sm, f, t, S,
        feed=0.046, kill=0.063, steps_per_frame=0)  # read without stepping
    ch_p, co_p = update_flow_particles(S, g_sm, f, rd_field,
        n=300, speed=0.8, char_set=list("·•◦∘°"))
    canvas_p = g_sm.render(ch_p, co_p)

    result = blend_canvas(canvas_rd, canvas_p, "add", 0.7)
    return result
```

### Morphing Field Sequence with Eased Keyframes

Demonstrates temporal coherence: smooth morphing between effects with keyframed parameters.

```python
def fx_morphing_journey(r, f, t, S):
    """Morphs through 4 value fields over 20 seconds with eased transitions.
    Parameters (twist, arm count) also keyframed."""
    # Keyframed twist parameter
    twist = keyframe(t, [(0, 1.0), (5, 5.0), (10, 2.0), (15, 8.0), (20, 1.0)],
                     ease_fn=ease_in_out_cubic, loop=True)

    # Sequence of value fields with 2s crossfade
    fields = [
        lambda g, f, t, S: vf_plasma(g, f, t, S),
        lambda g, f, t, S: vf_vortex(g, f, t, S, twist=twist),
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04),
        lambda g, f, t, S: vf_domain_warp(g, f, t, S, warp_strength=15),
    ]
    durations = [5.0, 5.0, 5.0, 5.0]

    val_fn = lambda g, f, t, S: vf_sequence(g, f, t, S, fields, durations,
                                            crossfade=2.0)

    # Render with slowly rotating hue
    canvas = _render_vf(r, "md", val_fn, hf_time_cycle(0.06),
                        PAL_DENSE, f, t, S, sat=0.8)

    # Second layer: tiled version of same sequence at smaller grid
    tiled_fn = lambda g, f, t, S: vf_sequence(
        make_tgrid(g, *uv_tile(g, 3, 3, mirror=True)),
        f, t, S, fields, durations, crossfade=2.0)
    canvas_b = _render_vf(r, "sm", tiled_fn, hf_angle(0.1),
                          PAL_RUNE, f, t, S, sat=0.6)

    return blend_canvas(canvas, canvas_b, "screen", 0.5)
```

---

## Specialized — Unique State Patterns

### Game of Life with Ghost Trails

Cellular automaton with analog fade trails. Beat injects random cells.

```python
def fx_life(r, f, t, S):
    """Conway's Game of Life with fading ghost trails.
    Beat events inject random live cells for disruption."""
    canvas = _render_vf(r, "sm",
        lambda g, f, t, S: vf_game_of_life(g, f, t, S,
            rule="life", steps_per_frame=1, fade=0.92, density=0.25),
        hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.8)

    # Overlay: coral automaton on lg grid for chunky texture
    canvas_b = _render_vf(r, "lg",
        lambda g, f, t, S: vf_game_of_life(g, f, t, S,
            rule="coral", steps_per_frame=1, fade=0.85, density=0.15, seed=99),
        hf_time_cycle(0.1), PAL_HATCH, f, t, S, sat=0.6)

    return blend_canvas(canvas, canvas_b, "screen", 0.5)
```

### Boids Flock Over Voronoi

Emergent swarm movement over a cellular background.

```python
def fx_boid_swarm(r, f, t, S):
    """Flocking boids over animated voronoi cells."""
    # Background: voronoi cells
    canvas_bg = _render_vf(r, "md",
        lambda g, f, t, S: vf_voronoi(g, f, t, S,
            n_cells=20, mode="distance", speed=0.2),
        hf_distance(0.4, 0.02), PAL_CIRCUIT, f, t, S, sat=0.5)

    # Foreground: boids
    g = r.get_grid("md")
    ch_b, co_b = update_boids(S, g, f, n_boids=150, perception=6.0,
                              max_speed=1.5, char_set=list("▸▹►▻→⟶"))
    canvas_boids = g.render(ch_b, co_b)

    # Trails for the boids
    # (boid positions are stored in S["boid_x"], S["boid_y"])
    S["px"] = list(S.get("boid_x", []))
    S["py"] = list(S.get("boid_y", []))
    ch_t, co_t = draw_particle_trails(S, g, max_trail=6, fade=0.6)
    canvas_trails = g.render(ch_t, co_t)

    result = blend_canvas(canvas_bg, canvas_trails, "add", 0.3)
    result = blend_canvas(result, canvas_boids, "add", 0.9)
    return result
```

### Fire Rising Through SDF Text Stencil

Fire effect visible only through text letterforms.

```python
def fx_fire_text(r, f, t, S):
    """Fire columns visible through text stencil. Text acts as window."""
    g = r.get_grid("lg")

    # Full-screen fire (will be masked)
    canvas_fire = _render_vf(r, "sm",
        lambda g, f, t, S: np.clip(
            vf_fbm(g, f, t, S, octaves=4, freq=0.08, speed=0.8) *
            (1.0 - g.rr / g.rows) *  # fade toward top
            (0.6 + f.get("bass", 0.3) * 0.8), 0, 1),
        hf_fixed(0.05), PAL_BLOCKS, f, t, S, sat=0.9)  # fire hue

    # Background: dark domain warp
    canvas_bg = _render_vf(r, "md",
        lambda g, f, t, S: vf_domain_warp(g, f, t, S,
            warp_strength=8, freq=0.03, speed=0.05) * 0.3,
        hf_fixed(0.6), PAL_DENSE, f, t, S, sat=0.4)

    # Text stencil mask
    mask = mask_text(g, "FIRE", row_frac=0.45)
    # Expand vertically for multi-row coverage
    for offset in range(-2, 3):
        shifted = mask_text(g, "FIRE", row_frac=0.45 + offset / g.rows)
        mask = mask_union(mask, shifted)

    canvas_masked = apply_mask_canvas(canvas_fire, mask, bg_canvas=canvas_bg)
    return canvas_masked
```

### Portrait Mode: Vertical Rain + Quote

Optimized for 9:16. Uses vertical space for long rain trails and stacked text.

```python
def fx_portrait_rain_quote(r, f, t, S):
    """Portrait-optimized: matrix rain (long vertical trails) with stacked quote.
    Designed for 1080x1920 (9:16)."""
    g = r.get_grid("md")  # ~112x100 in portrait

    # Matrix rain — long trails benefit from portrait's extra rows
    ch, co, S = eff_matrix_rain(g, f, t, S,
        hue=0.33, bri=0.6, pal=PAL_KATA, speed_base=0.4, speed_beat=2.5)
    canvas_rain = g.render(ch, co)

    # Tunnel depth underneath for texture
    canvas_tunnel = _render_vf(r, "sm",
        lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=3.0, complexity=6) * 0.8,
        hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.5)

    result = blend_canvas(canvas_tunnel, canvas_rain, "screen", 0.8)

    # Quote text — portrait layout: short lines, many of them
    g_text = r.get_grid("lg")  # ~90x80 in portrait
    quote_lines = layout_text_portrait(
        "The code is the art and the art is the code",
        max_chars_per_line=20)
    # Center vertically
    block_start = (g_text.rows - len(quote_lines)) // 2
    ch_t = np.full((g_text.rows, g_text.cols), " ", dtype="U1")
    co_t = np.zeros((g_text.rows, g_text.cols, 3), dtype=np.uint8)
    total_chars = sum(len(l) for l in quote_lines)
    if "_scene_start" not in S:
        S["_scene_start"] = t
    progress = min(1.0, (t - S["_scene_start"]) / 3.0)
    render_typewriter(ch_t, co_t, quote_lines, block_start, g_text.cols,
                      progress, total_chars, (200, 255, 220), t)
    canvas_text = g_text.render(ch_t, co_t)

    result = blend_canvas(result, canvas_text, "add", 0.9)
    return result
```

---

## Scene Table Template

Wire scenes into a complete video:

```python
SCENES = [
    {"start": 0.0, "end": 5.0, "name": "coral",
     "fx": fx_coral, "grid": "sm", "gamma": 0.70,
     "shaders": [("bloom", {"thr": 110}), ("vignette", {"s": 0.2})],
     "feedback": {"decay": 0.8, "blend": "screen", "opacity": 0.3,
                  "transform": "zoom", "transform_amt": 0.01}},

    {"start": 5.0, "end": 15.0, "name": "tunnel_noise",
     "fx": fx_tunnel_noise, "grid": "md", "gamma": 0.75,
     "shaders": [("chromatic", {"amt": 3}), ("bloom", {"thr": 120}),
                 ("scanlines", {"intensity": 0.06}), ("grain", {"amt": 8})],
     "feedback": None},

    {"start": 15.0, "end": 35.0, "name": "cathedral",
     "fx": fx_cathedral, "grid": "sm", "gamma": 0.65,
     "shaders": [("bloom", {"thr": 100}), ("chromatic", {"amt": 5}),
                 ("color_wobble", {"amt": 0.2}), ("vignette", {"s": 0.18})],
     "feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
                  "transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}},

    {"start": 35.0, "end": 50.0, "name": "morphing",
     "fx": fx_morphing_journey, "grid": "md", "gamma": 0.70,
     "shaders": [("bloom", {"thr": 110}), ("grain", {"amt": 6})],
     "feedback": {"decay": 0.7, "blend": "screen", "opacity": 0.25,
                  "transform": "rotate_cw", "transform_amt": 0.003}},
]
```
|
||||
@@ -1,13 +1,6 @@
# Input Sources

**Cross-references:**
- Grid system, resolution presets: `architecture.md`
- Effect building blocks (audio-reactive modulation): `effects.md`
- Scene protocol, SCENES table (feature routing): `scenes.md`
- Shader pipeline, output encoding: `shaders.md`
- Performance tuning (audio chunking, WAV caching): `optimization.md`
- Common bugs (sample rate, dtype, silence handling): `troubleshooting.md`
- Complete scene examples with feature usage: `examples.md`

> **See also:** architecture.md · effects.md · scenes.md · shaders.md · optimization.md · troubleshooting.md

## Audio Analysis

@@ -1,14 +1,6 @@
# Optimization Reference

**Cross-references:**
- Grid system, resolution presets, portrait GridLayer: `architecture.md`
- Effect building blocks (pre-computation strategies): `effects.md`
- `_render_vf()`, tonemap (subsampled percentile): `composition.md`
- Scene protocol, render_clip: `scenes.md`
- Shader pipeline, encoding (ffmpeg flags): `shaders.md`
- Input sources (audio chunking, WAV extraction): `inputs.md`
- Common bugs (memory, OOM, frame drops): `troubleshooting.md`
- Complete scene examples: `examples.md`

> **See also:** architecture.md · composition.md · scenes.md · shaders.md · inputs.md · troubleshooting.md

## Hardware Detection

@@ -1,18 +1,214 @@
# Scene System Reference
# Scene System & Creative Composition

**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs, particles): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Shader pipeline, feedback buffer, ShaderChain: `shaders.md`
- Complete scene examples at every complexity level: `examples.md`
- Input sources (audio features, video features): `inputs.md`
- Performance tuning, portrait CLI: `optimization.md`
- Common bugs (state leaks, frame drops): `troubleshooting.md`

> **See also:** architecture.md · composition.md · effects.md · shaders.md

## Scene Design Philosophy

Scenes are storytelling units, not effect demos. Every scene needs:
- A **concept** — what is happening visually? Not "plasma + rings" but "emergence from void" or "crystallization"
- An **arc** — how does it change over its duration? Build, decay, transform, reveal?
- A **role** — how does it serve the larger video narrative? Opening tension, peak energy, resolution?

The design patterns below provide compositional techniques. The scene examples show them in practice at increasing complexity. The protocol section covers the technical contract.

Good scene design starts with the concept, then selects effects and parameters that serve it. The design patterns section shows *how* to compose layers intentionally. The examples section shows complete working scenes at every complexity level. The protocol section covers the technical contract that all scenes must follow.

---

## Scene Design Patterns

Higher-order patterns for composing scenes that feel intentional rather than random. These patterns use the existing building blocks (value fields, blend modes, shaders, feedback) but organize them with compositional intent.

## Layer Hierarchy

Every scene should have clear visual layers with distinct roles:

| Layer | Grid | Brightness | Purpose |
|-------|------|------------|---------|
| **Background** | xs or sm (dense) | 0.1–0.25 | Atmosphere, texture. Never competes with content. |
| **Content** | md (balanced) | 0.4–0.8 | The main visual idea. Carries the scene's concept. |
| **Accent** | lg or sm (sparse) | 0.5–1.0 (sparse coverage) | Highlights, punctuation, sparse bright points. |

The background sets mood. The content layer is what the scene *is about*. The accent adds visual interest without overwhelming.

```python
def fx_example(r, f, t, S):
    local = t
    progress = min(local / 5.0, 1.0)

    g_bg = r.get_grid("sm")
    g_main = r.get_grid("md")
    g_accent = r.get_grid("lg")

    # --- Background: dim atmosphere ---
    bg_val = vf_smooth_noise(g_bg, f, t * 0.3, S, octaves=2, bri=0.15)
    # ... render bg to canvas

    # --- Content: the main visual idea ---
    content_val = vf_spiral(g_main, f, t, S, n_arms=3, tightness=2.0 + progress * 3.0)
    # ... render content on top of canvas

    # --- Accent: sparse highlights ---
    accent_val = vf_noise_static(g_accent, f, t, S, density=0.05)
    # ... render accent on top

    return canvas
```

## Directional Parameter Arcs

Parameters should *go somewhere* over the scene's duration — not oscillate aimlessly with `sin(t * N)`.

**Bad:** `twist = 3.0 + 2.0 * math.sin(t * 0.6)` — wobbles back and forth, feels aimless.

**Good:** `twist = 2.0 + progress * 5.0` — starts gentle, ends intense. The scene *builds*.

Use `progress = min(local / duration, 1.0)` (0→1 over the scene) to drive directional change:

| Pattern | Formula | Feel |
|---------|---------|------|
| Linear ramp | `progress * range` | Steady buildup |
| Ease-out | `1 - (1 - progress) ** 2` | Fast start, gentle finish |
| Ease-in | `progress ** 2` | Slow start, accelerating |
| Step reveal | `np.clip((progress - 0.5) / 0.25, 0, 1)` | Nothing until 50%, then fades in |
| Build + plateau | `min(1.0, progress * 1.5)` | Reaches full at 67%, holds |

Oscillation is fine for *secondary* parameters (saturation shimmer, hue drift). But the *defining* parameter of the scene should have a direction.
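
The arcs in the table are plain arithmetic; a minimal sketch (the helper names are illustrative, not pipeline API):

```python
# Easing arcs from the table above, as plain helpers.
def linear(p):        return p
def ease_out(p):      return 1 - (1 - p) ** 2
def ease_in(p):       return p ** 2
def step_reveal(p):   return min(max((p - 0.5) / 0.25, 0.0), 1.0)
def build_plateau(p): return min(1.0, p * 1.5)

# Driving a defining parameter with a directional arc:
duration = 8.0
local = 4.0                              # seconds into the scene
progress = min(local / duration, 1.0)    # 0 -> 1 over the scene
twist = 2.0 + ease_out(progress) * 5.0   # builds from 2.0 toward 7.0
```

Ease-out front-loads the change, so the scene reads as settling into its final state rather than ramping forever.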

### Examples of Directional Arcs

| Scene concept | Parameter | Arc |
|---------------|-----------|-----|
| Emergence | Ring radius | 0 → max (ease-out) |
| Shatter | Voronoi cell count | 8 → 38 (linear) |
| Descent | Tunnel speed | 2.0 → 10.0 (linear) |
| Mandala | Shape complexity | ring → +polygon → +star → +rosette (step reveals) |
| Crescendo | Layer count | 1 → 7 (staggered entry) |
| Entropy | Geometry visibility | 1.0 → 0.0 (consumed) |

## Scene Concepts

Each scene should be built around a *visual idea*, not an effect name.

**Bad:** "fx_plasma_cascade" — named after the effect. No concept.
**Good:** "fx_emergence" — a point of light expands into a field. The name tells you *what happens*.

Good scene concepts have:
1. A **visual metaphor** (emergence, descent, collision, entropy)
2. A **directional arc** (things change from A to B, not oscillate)
3. **Motivated layer choices** (each layer serves the concept)
4. **Motivated feedback** (transform direction matches the metaphor)

| Concept | Metaphor | Feedback transform | Why |
|---------|----------|--------------------|-----|
| Emergence | Birth, expansion | zoom-out | Past frames expand outward |
| Descent | Falling, acceleration | zoom-in | Past frames rush toward center |
| Inferno | Rising fire | shift-up | Past frames rise with the flames |
| Entropy | Decay, dissolution | none | Clean, no persistence — things disappear |
| Crescendo | Accumulation | zoom + hue_shift | Everything compounds and shifts |
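
The mapping can live as data so a scene author picks feedback by concept. A hedged sketch: the field names mirror the `"feedback"` entries in the SCENES table, but the `"zoom-out"`/`"shift-up"` transform strings come from this table and may not match the pipeline's actual transform names:

```python
# Feedback presets keyed by concept (illustrative values; transform
# names beyond "zoom" are assumptions, not confirmed pipeline strings).
FEEDBACK_BY_CONCEPT = {
    "emergence": {"decay": 0.75, "blend": "screen", "opacity": 0.30,
                  "transform": "zoom-out", "transform_amt": 0.010},
    "descent":   {"decay": 0.80, "blend": "screen", "opacity": 0.35,
                  "transform": "zoom-in", "transform_amt": 0.012},
    "inferno":   {"decay": 0.70, "blend": "screen", "opacity": 0.30,
                  "transform": "shift-up", "transform_amt": 0.010},
    "entropy":   None,  # no persistence: things disappear cleanly
    "crescendo": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
                  "transform": "zoom-in", "transform_amt": 0.012,
                  "hue_shift": 0.015},
}
```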

## Compositional Techniques

### Counter-Rotating Dual Systems

Two instances of the same effect rotating in opposite directions create visual interference:

```python
# Primary spiral (clockwise)
s1_val = vf_spiral(g_main, f, t * 1.5, S, n_arms=3, tightness=2.5)

# Counter-rotating spiral (counter-clockwise via negative time)
s2_val = vf_spiral(g_accent, f, -t * 1.2, S, n_arms=5, tightness=1.8)

# Screen blend creates bright interference at crossing points
canvas = blend_canvas(canvas_with_s1, canvas_with_s2, "screen", 0.7)
```

Works with spirals, vortexes, rings. The counter-rotation creates constantly shifting interference patterns.

### Wave Collision

Two wave fronts converging from opposite sides, meeting at a collision point:

```python
collision_phase = abs(progress - 0.5) * 2  # 1→0→1 (0 at collision)

# Wave A approaches from left
offset_a = (1 - progress) * g.cols * 0.4
wave_a = np.sin((g.cc + offset_a) * 0.08 + t * 2) * 0.5 + 0.5

# Wave B approaches from right
offset_b = -(1 - progress) * g.cols * 0.4
wave_b = np.sin((g.cc + offset_b) * 0.08 - t * 2) * 0.5 + 0.5

# Interference peaks at collision
combined = wave_a * 0.5 + wave_b * 0.5 + np.abs(wave_a - wave_b) * (1 - collision_phase) * 0.5
```

### Progressive Fragmentation

Voronoi with cell count increasing over time — visual shattering:

```python
n_pts = int(8 + progress * 30)  # 8 cells → 38 cells
# Pre-generate enough points, slice to n_pts
px = base_x[:n_pts] + np.sin(t * 0.3 + np.arange(n_pts) * 0.7) * (3 + progress * 3)
```

The edge glow width can also increase with progress to emphasize the cracks.

### Entropy / Consumption

A clean geometric pattern being overtaken by an organic process:

```python
# Geometry fades out
geo_val = clean_pattern * max(0.05, 1.0 - progress * 0.9)

# Organic process grows in
rd_val = vf_reaction_diffusion(g, f, t, S) * min(1.0, progress * 1.5)

# Render geometry first, organic on top — organic consumes geometry
```

### Staggered Layer Entry (Crescendo)

Layers enter one at a time, building to overwhelming density:

```python
local = t - S.setdefault("_scene_start", t)  # seconds since the scene began

def layer_strength(enter_t, ramp=1.5):
    """0.0 until enter_t, ramps to 1.0 over ramp seconds."""
    return max(0.0, min(1.0, (local - enter_t) / ramp))

# Layer 1: always present
s1 = layer_strength(0.0)
# Layer 2: enters at 2s
s2 = layer_strength(2.0)
# Layer 3: enters at 4s
s3 = layer_strength(4.0)
# ... etc

# Each layer uses a different effect, grid, palette, and blend mode
# Screen blend between layers so they accumulate light
```

For a 15-second crescendo, 7 layers entering every 2 seconds works well. Use different blend modes (screen for most, add for energy, colordodge for the final wash).

## Scene Ordering

For a multi-scene reel or video:
- **Vary mood between adjacent scenes** — don't put two calm scenes next to each other
- **Randomize order** rather than grouping by type — prevents "effect demo" feel
- **End on the strongest scene** — crescendo or something with a clear payoff
- **Open with energy** — grab attention in the first 2 seconds
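
These constraints are easy to enforce mechanically. A hedged sketch (the scene names and the numeric `"energy"` score are illustrative, not part of the pipeline):

```python
import random

def order_scenes(scenes, calm_thr=0.3, seed=7):
    """Shuffle scenes so no two calm scenes are adjacent, the reel opens
    with energy, and the strongest scene closes."""
    rng = random.Random(seed)
    closer = max(scenes, key=lambda s: s["energy"])  # strongest ends the reel
    rest = [s for s in scenes if s is not closer]
    for _ in range(200):                             # retry until constraints hold
        rng.shuffle(rest)
        order = rest + [closer]
        if order[0]["energy"] < calm_thr:
            continue                                 # open with energy
        if any(a["energy"] < calm_thr and b["energy"] < calm_thr
               for a, b in zip(order, order[1:])):
            continue                                 # no adjacent calm scenes
        return order
    return rest + [closer]                           # fall back to any order

reel = order_scenes([
    {"name": "coral", "energy": 0.2},
    {"name": "tunnel_noise", "energy": 0.6},
    {"name": "cathedral", "energy": 0.9},
    {"name": "morphing", "energy": 0.5},
    {"name": "drift", "energy": 0.25},
])
```

Rejection sampling keeps the randomness (no "effect demo" grouping) while guaranteeing the opening and closing rules.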

---

## Scene Protocol

Scenes are the top-level creative unit. Each scene is a time-bounded segment with its own effect function, shader chain, feedback configuration, and tone-mapping gamma.

### Scene Protocol (v2)

### Function Signature

@@ -404,3 +600,412 @@ For each scene:
7. **Configure feedback** for trailing/recursive looks — or None for clean cuts
8. **Set gamma** if using destructive shaders (solarize, posterize)
9. **Test with --test-frame** at the scene's midpoint before full render

---

## Scene Examples

Copy-paste-ready scene functions at increasing complexity. Each is a complete, working v2 scene function that returns a pixel canvas. See the Scene Protocol section above for the contract, and `composition.md` for blend modes and tonemap.

---

### Minimal — Single Grid, Single Effect

### Breathing Plasma

One grid, one value field, one hue field. The simplest possible scene.

```python
def fx_breathing_plasma(r, f, t, S):
    """Plasma field with time-cycling hue. Audio modulates brightness."""
    canvas = _render_vf(r, "md",
        lambda g, f, t, S: vf_plasma(g, f, t, S) * 1.3,
        hf_time_cycle(0.08), PAL_DENSE, f, t, S, sat=0.8)
    return canvas
```

### Reaction-Diffusion Coral

Single grid, simulation-based field. Evolves organically over time.

```python
def fx_coral(r, f, t, S):
    """Gray-Scott reaction-diffusion — coral branching pattern.
    Slow-evolving, organic. Best for ambient/chill sections."""
    canvas = _render_vf(r, "sm",
        lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
            feed=0.037, kill=0.060, steps_per_frame=6, init_mode="center"),
        hf_distance(0.55, 0.015), PAL_DOTS, f, t, S, sat=0.7)
    return canvas
```

### SDF Geometry

Geometric shapes from SDFs. Clean, precise, graphic.

```python
def fx_sdf_rings(r, f, t, S):
    """Concentric SDF rings with smooth pulsing."""
    def val_fn(g, f, t, S):
        d1 = sdf_ring(g, radius=0.15 + f.get("bass", 0.3) * 0.05, thickness=0.015)
        d2 = sdf_ring(g, radius=0.25 + f.get("mid", 0.3) * 0.05, thickness=0.012)
        d3 = sdf_ring(g, radius=0.35 + f.get("hi", 0.3) * 0.04, thickness=0.010)
        combined = sdf_smooth_union(sdf_smooth_union(d1, d2, 0.05), d3, 0.05)
        return sdf_glow(combined, falloff=0.08) * (0.5 + f.get("rms", 0.3) * 0.8)
    canvas = _render_vf(r, "md", val_fn, hf_angle(0.0), PAL_STARS, f, t, S, sat=0.85)
    return canvas
```

---

### Standard — Two Grids + Blend

### Tunnel Through Noise

Two grids at different densities, screen blended. The fine noise texture shows through the coarser tunnel characters.

```python
def fx_tunnel_noise(r, f, t, S):
    """Tunnel depth on md grid + fBM noise on sm grid, screen blended."""
    canvas_a = _render_vf(r, "md",
        lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=4.0, complexity=8) * 1.2,
        hf_distance(0.5, 0.02), PAL_BLOCKS, f, t, S, sat=0.7)

    canvas_b = _render_vf(r, "sm",
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=4, freq=0.05, speed=0.15) * 1.3,
        hf_time_cycle(0.06), PAL_RUNE, f, t, S, sat=0.6)

    return blend_canvas(canvas_a, canvas_b, "screen", 0.7)
```

### Voronoi Cells + Spiral Overlay

Voronoi cell edges with a spiral arm pattern overlaid.

```python
def fx_voronoi_spiral(r, f, t, S):
    """Voronoi edge detection on md + logarithmic spiral on lg."""
    canvas_a = _render_vf(r, "md",
        lambda g, f, t, S: vf_voronoi(g, f, t, S,
            n_cells=15, mode="edge", edge_width=2.0, speed=0.4),
        hf_angle(0.2), PAL_CIRCUIT, f, t, S, sat=0.75)

    canvas_b = _render_vf(r, "lg",
        lambda g, f, t, S: vf_spiral(g, f, t, S, n_arms=4, tightness=3.0) * 1.2,
        hf_distance(0.1, 0.03), PAL_BLOCKS, f, t, S, sat=0.9)

    return blend_canvas(canvas_a, canvas_b, "exclusion", 0.6)
```

### Domain-Warped fBM

Two layers of the same fBM, one domain-warped, difference-blended for psychedelic organic texture.

```python
def fx_organic_warp(r, f, t, S):
    """Clean fBM vs domain-warped fBM, difference blended."""
    canvas_a = _render_vf(r, "sm",
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04, speed=0.1),
        hf_plasma(0.2), PAL_DENSE, f, t, S, sat=0.6)

    canvas_b = _render_vf(r, "md",
        lambda g, f, t, S: vf_domain_warp(g, f, t, S,
            warp_strength=20.0, freq=0.05, speed=0.15),
        hf_time_cycle(0.05), PAL_BRAILLE, f, t, S, sat=0.7)

    return blend_canvas(canvas_a, canvas_b, "difference", 0.7)
```

---

### Complex — Three Grids + Conditional + Feedback

### Psychedelic Cathedral

Three-grid composition with beat-triggered kaleidoscope and feedback zoom tunnel. The most visually complex pattern.

```python
def fx_cathedral(r, f, t, S):
    """Three-layer cathedral: interference + rings + noise, kaleidoscope on beat,
    feedback zoom tunnel."""
    # Layer 1: interference pattern on sm grid
    canvas_a = _render_vf(r, "sm",
        lambda g, f, t, S: vf_interference(g, f, t, S, n_waves=7) * 1.3,
        hf_angle(0.0), PAL_MATH, f, t, S, sat=0.8)

    # Layer 2: pulsing rings on md grid
    canvas_b = _render_vf(r, "md",
        lambda g, f, t, S: vf_rings(g, f, t, S, n_base=10, spacing_base=3) * 1.4,
        hf_distance(0.3, 0.02), PAL_STARS, f, t, S, sat=0.9)

    # Layer 3: temporal noise on lg grid (slow morph)
    canvas_c = _render_vf(r, "lg",
        lambda g, f, t, S: vf_temporal_noise(g, f, t, S,
            freq=0.04, t_freq=0.2, octaves=3),
        hf_time_cycle(0.12), PAL_BLOCKS, f, t, S, sat=0.7)

    # Blend: A screen B, then difference with C
    result = blend_canvas(canvas_a, canvas_b, "screen", 0.8)
    result = blend_canvas(result, canvas_c, "difference", 0.5)

    # Beat-triggered kaleidoscope
    if f.get("bdecay", 0) > 0.3:
        folds = 6 if f.get("sub_r", 0.3) > 0.4 else 8
        result = sh_kaleidoscope(result.copy(), folds=folds)

    return result

# Scene table entry with feedback:
# {"start": 30.0, "end": 50.0, "name": "cathedral", "fx": fx_cathedral,
#  "gamma": 0.65, "shaders": [("bloom", {"thr": 110}), ("chromatic", {"amt": 4}),
#                             ("vignette", {"s": 0.2}), ("grain", {"amt": 8})],
#  "feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
#               "transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}}
```

### Masked Reaction-Diffusion with Attractor Overlay

Reaction-diffusion visible only through an animated iris mask, with a strange attractor density field underneath.

```python
def fx_masked_life(r, f, t, S):
    """Attractor base + reaction-diffusion visible through iris mask + particles."""
    g_sm = r.get_grid("sm")
    g_md = r.get_grid("md")

    # Layer 1: strange attractor density field (background)
    canvas_bg = _render_vf(r, "sm",
        lambda g, f, t, S: vf_strange_attractor(g, f, t, S,
            attractor="clifford", n_points=30000),
        hf_time_cycle(0.04), PAL_DOTS, f, t, S, sat=0.5)

    # Layer 2: reaction-diffusion (foreground, will be masked)
    canvas_rd = _render_vf(r, "md",
        lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
            feed=0.046, kill=0.063, steps_per_frame=4, init_mode="ring"),
        hf_angle(0.15), PAL_HALFFILL, f, t, S, sat=0.85)

    # Animated iris mask — opens over first 5 seconds of scene
    scene_start = S.setdefault("_scene_start", t)
    mask = mask_iris(g_md, t, scene_start, scene_start + 5.0,
                     max_radius=0.6)
    canvas_rd = apply_mask_canvas(canvas_rd, mask, bg_canvas=canvas_bg)

    # Layer 3: flow-field particles following the R-D gradient
    rd_field = vf_reaction_diffusion(g_sm, f, t, S,
        feed=0.046, kill=0.063, steps_per_frame=0)  # read without stepping
    ch_p, co_p = update_flow_particles(S, g_sm, f, rd_field,
        n=300, speed=0.8, char_set=list("·•◦∘°"))
    canvas_p = g_sm.render(ch_p, co_p)

    result = blend_canvas(canvas_rd, canvas_p, "add", 0.7)
    return result
```

### Morphing Field Sequence with Eased Keyframes

Demonstrates temporal coherence: smooth morphing between effects with keyframed parameters.

```python
def fx_morphing_journey(r, f, t, S):
    """Morphs through 4 value fields over 20 seconds with eased transitions.
    Parameters (twist, arm count) also keyframed."""
    # Keyframed twist parameter
    twist = keyframe(t, [(0, 1.0), (5, 5.0), (10, 2.0), (15, 8.0), (20, 1.0)],
                     ease_fn=ease_in_out_cubic, loop=True)

    # Sequence of value fields with 2s crossfade
    fields = [
        lambda g, f, t, S: vf_plasma(g, f, t, S),
        lambda g, f, t, S: vf_vortex(g, f, t, S, twist=twist),
        lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04),
        lambda g, f, t, S: vf_domain_warp(g, f, t, S, warp_strength=15),
    ]
    durations = [5.0, 5.0, 5.0, 5.0]

    val_fn = lambda g, f, t, S: vf_sequence(g, f, t, S, fields, durations,
                                            crossfade=2.0)

    # Render with slowly rotating hue
    canvas = _render_vf(r, "md", val_fn, hf_time_cycle(0.06),
        PAL_DENSE, f, t, S, sat=0.8)

    # Second layer: tiled version of same sequence at smaller grid
    tiled_fn = lambda g, f, t, S: vf_sequence(
        make_tgrid(g, *uv_tile(g, 3, 3, mirror=True)),
        f, t, S, fields, durations, crossfade=2.0)
    canvas_b = _render_vf(r, "sm", tiled_fn, hf_angle(0.1),
        PAL_RUNE, f, t, S, sat=0.6)

    return blend_canvas(canvas, canvas_b, "screen", 0.5)
```

---

### Specialized — Unique State Patterns

### Game of Life with Ghost Trails

Cellular automaton with analog fade trails. Beat injects random cells.

```python
def fx_life(r, f, t, S):
    """Conway's Game of Life with fading ghost trails.
    Beat events inject random live cells for disruption."""
    canvas = _render_vf(r, "sm",
        lambda g, f, t, S: vf_game_of_life(g, f, t, S,
            rule="life", steps_per_frame=1, fade=0.92, density=0.25),
        hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.8)

    # Overlay: coral automaton on lg grid for chunky texture
    canvas_b = _render_vf(r, "lg",
        lambda g, f, t, S: vf_game_of_life(g, f, t, S,
            rule="coral", steps_per_frame=1, fade=0.85, density=0.15, seed=99),
        hf_time_cycle(0.1), PAL_HATCH, f, t, S, sat=0.6)

    return blend_canvas(canvas, canvas_b, "screen", 0.5)
```

### Boids Flock Over Voronoi

Emergent swarm movement over a cellular background.

```python
def fx_boid_swarm(r, f, t, S):
    """Flocking boids over animated voronoi cells."""
    # Background: voronoi cells
    canvas_bg = _render_vf(r, "md",
        lambda g, f, t, S: vf_voronoi(g, f, t, S,
            n_cells=20, mode="distance", speed=0.2),
        hf_distance(0.4, 0.02), PAL_CIRCUIT, f, t, S, sat=0.5)

    # Foreground: boids
    g = r.get_grid("md")
    ch_b, co_b = update_boids(S, g, f, n_boids=150, perception=6.0,
                              max_speed=1.5, char_set=list("▸▹►▻→⟶"))
    canvas_boids = g.render(ch_b, co_b)

    # Trails for the boids
    # (boid positions are stored in S["boid_x"], S["boid_y"])
    S["px"] = list(S.get("boid_x", []))
    S["py"] = list(S.get("boid_y", []))
    ch_t, co_t = draw_particle_trails(S, g, max_trail=6, fade=0.6)
    canvas_trails = g.render(ch_t, co_t)

    result = blend_canvas(canvas_bg, canvas_trails, "add", 0.3)
    result = blend_canvas(result, canvas_boids, "add", 0.9)
    return result
```

### Fire Rising Through SDF Text Stencil

Fire effect visible only through text letterforms.

```python
def fx_fire_text(r, f, t, S):
    """Fire columns visible through text stencil. Text acts as window."""
    g = r.get_grid("lg")

    # Full-screen fire (will be masked)
    canvas_fire = _render_vf(r, "sm",
        lambda g, f, t, S: np.clip(
            vf_fbm(g, f, t, S, octaves=4, freq=0.08, speed=0.8) *
            (1.0 - g.rr / g.rows) *  # fade toward top
            (0.6 + f.get("bass", 0.3) * 0.8), 0, 1),
        hf_fixed(0.05), PAL_BLOCKS, f, t, S, sat=0.9)  # fire hue

    # Background: dark domain warp
    canvas_bg = _render_vf(r, "md",
        lambda g, f, t, S: vf_domain_warp(g, f, t, S,
            warp_strength=8, freq=0.03, speed=0.05) * 0.3,
        hf_fixed(0.6), PAL_DENSE, f, t, S, sat=0.4)

    # Text stencil mask
    mask = mask_text(g, "FIRE", row_frac=0.45)
    # Expand vertically for multi-row coverage
    for offset in range(-2, 3):
        shifted = mask_text(g, "FIRE", row_frac=0.45 + offset / g.rows)
        mask = mask_union(mask, shifted)

    canvas_masked = apply_mask_canvas(canvas_fire, mask, bg_canvas=canvas_bg)
    return canvas_masked
```

### Portrait Mode: Vertical Rain + Quote

Optimized for 9:16. Uses vertical space for long rain trails and stacked text.

```python
def fx_portrait_rain_quote(r, f, t, S):
    """Portrait-optimized: matrix rain (long vertical trails) with stacked quote.
    Designed for 1080x1920 (9:16)."""
    g = r.get_grid("md")  # ~112x100 in portrait

    # Matrix rain — long trails benefit from portrait's extra rows
    ch, co, S = eff_matrix_rain(g, f, t, S,
        hue=0.33, bri=0.6, pal=PAL_KATA, speed_base=0.4, speed_beat=2.5)
    canvas_rain = g.render(ch, co)

    # Tunnel depth underneath for texture
    canvas_tunnel = _render_vf(r, "sm",
        lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=3.0, complexity=6) * 0.8,
        hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.5)

    result = blend_canvas(canvas_tunnel, canvas_rain, "screen", 0.8)

    # Quote text — portrait layout: short lines, many of them
    g_text = r.get_grid("lg")  # ~90x80 in portrait
    quote_lines = layout_text_portrait(
        "The code is the art and the art is the code",
        max_chars_per_line=20)
    # Center vertically
    block_start = (g_text.rows - len(quote_lines)) // 2
    ch_t = np.full((g_text.rows, g_text.cols), " ", dtype="U1")
    co_t = np.zeros((g_text.rows, g_text.cols, 3), dtype=np.uint8)
    total_chars = sum(len(line) for line in quote_lines)
    scene_start = S.setdefault("_scene_start", t)
    progress = min(1.0, (t - scene_start) / 3.0)
    render_typewriter(ch_t, co_t, quote_lines, block_start, g_text.cols,
                      progress, total_chars, (200, 255, 220), t)
    canvas_text = g_text.render(ch_t, co_t)

    result = blend_canvas(result, canvas_text, "add", 0.9)
    return result
```

---

### Scene Table Template

Wire scenes into a complete video:

```python
SCENES = [
    {"start": 0.0, "end": 5.0, "name": "coral",
     "fx": fx_coral, "grid": "sm", "gamma": 0.70,
     "shaders": [("bloom", {"thr": 110}), ("vignette", {"s": 0.2})],
     "feedback": {"decay": 0.8, "blend": "screen", "opacity": 0.3,
                  "transform": "zoom", "transform_amt": 0.01}},

    {"start": 5.0, "end": 15.0, "name": "tunnel_noise",
     "fx": fx_tunnel_noise, "grid": "md", "gamma": 0.75,
     "shaders": [("chromatic", {"amt": 3}), ("bloom", {"thr": 120}),
                 ("scanlines", {"intensity": 0.06}), ("grain", {"amt": 8})],
     "feedback": None},

    {"start": 15.0, "end": 35.0, "name": "cathedral",
     "fx": fx_cathedral, "grid": "sm", "gamma": 0.65,
     "shaders": [("bloom", {"thr": 100}), ("chromatic", {"amt": 5}),
                 ("color_wobble", {"amt": 0.2}), ("vignette", {"s": 0.18})],
     "feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
                  "transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}},

    {"start": 35.0, "end": 50.0, "name": "morphing",
     "fx": fx_morphing_journey, "grid": "md", "gamma": 0.70,
     "shaders": [("bloom", {"thr": 110}), ("grain", {"amt": 6})],
     "feedback": {"decay": 0.7, "blend": "screen", "opacity": 0.25,
                  "transform": "rotate_cw", "transform_amt": 0.003}},
]
```
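
A minimal sketch of how a render loop might route a timestamp to its SCENES entry (the dispatcher itself is not shown in this file; the field names follow the table above):

```python
def active_scene(scenes, t):
    """Return the SCENES entry whose [start, end) window contains t."""
    for sc in scenes:
        if sc["start"] <= t < sc["end"]:
            return sc
    return scenes[-1]  # past the end: hold the final scene

# Example against a stripped-down table:
SCENES = [
    {"start": 0.0, "end": 5.0, "name": "coral"},
    {"start": 5.0, "end": 15.0, "name": "tunnel_noise"},
]
```

Using half-open windows means a boundary timestamp belongs to exactly one scene, so no frame renders twice at a cut.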

@@ -2,14 +2,9 @@

Post-processing effects applied to the pixel canvas (`numpy uint8` array, shape `(H, W, 3)`) after character rendering and before encoding. Also covers **pixel-level blend modes**, **feedback buffers**, and the **ShaderChain** compositor.

**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Complete scene examples with shader usage: `examples.md`
- Performance tuning (frame budget, worker count): `optimization.md`
- Encoding pitfalls (ffmpeg flags, color space): `troubleshooting.md`

> **See also:** composition.md (blend modes, tonemap) · effects.md · scenes.md · architecture.md · optimization.md · troubleshooting.md
>
> **Blend modes:** For the 20 pixel blend modes and `blend_canvas()`, see `composition.md`. All blending uses `blend_canvas(base, top, mode, opacity)`.

## Design Philosophy

@@ -1,14 +1,19 @@
# Troubleshooting Reference

**Cross-references:**
- Grid system, palettes, font selection: `architecture.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- `_render_vf()`, blend modes, tonemap: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer, encoding: `shaders.md`
- Input sources (audio, video, TTS): `inputs.md`
- Performance tuning, hardware detection: `optimization.md`
- Complete scene examples: `examples.md`

> **See also:** composition.md · architecture.md · shaders.md · scenes.md · optimization.md

## Quick Diagnostic

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| All black output | tonemap gamma too high or no effects rendering | Lower gamma to 0.5, check scene_fn returns non-zero canvas |
| Washed out / too bright | Linear brightness multiplier instead of tonemap | Replace `canvas * N` with `tonemap(canvas, gamma=0.75)` |
| ffmpeg hangs mid-render | `stderr=subprocess.PIPE` deadlock | Redirect stderr to a file |
| "read-only" array error | `broadcast_to` view without `.copy()` | Add `.copy()` after `broadcast_to` |
| PicklingError | Lambda or closure in SCENES table | Define all fx_* at module level |
| Random dark holes in output | Font missing Unicode glyphs | Validate palettes at init |
| Audio-visual desync | Frame timing accumulation | Use integer frame counter, compute t fresh each frame |
| Single-color flat output | Hue field shape mismatch | Ensure h, s, v arrays are all (rows, cols) before hsv2rgb |
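
The "frame timing accumulation" fix from the table can be made concrete (a sketch; the variable names are illustrative):

```python
# Derive t from an integer frame counter; never accumulate a float dt.
FPS = 30.0

def frame_time(frame_idx, fps=FPS):
    return frame_idx / fps  # exact per-frame time, no cumulative drift

# Accumulating `t += 1 / 30.0` compounds rounding error over thousands
# of frames; frame_time(n) recomputes t fresh from the counter instead.
```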

Common bugs, gotchas, and platform-specific issues encountered during ASCII video development.

@@ -339,3 +344,22 @@ val = np.clip(vf_plasma(g, f, t, S) * 1.5, 0, 1)
```

The `_render_vf()` helper clips automatically, but if you're building custom scenes, clip explicitly.
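
For custom scenes, the tonemap step the diagnostic table refers to looks roughly like this. A hedged sketch: the real `tonemap()` lives in `composition.md` and its normalization details (the "subsampled percentile" mentioned in the optimization cross-references) may differ:

```python
import numpy as np

def tonemap(canvas, gamma=0.75, pctl=99.0):
    """Percentile-normalized gamma tonemap for a uint8 (H, W, 3) canvas."""
    x = canvas.astype(np.float64) / 255.0
    peak = max(float(np.percentile(x, pctl)), 1e-6)  # robust peak, ignores outliers
    x = np.clip(x / peak, 0.0, 1.0) ** gamma         # gamma < 1 lifts midtones
    return (x * 255.0).astype(np.uint8)
```

Replacing a bare `canvas * N` with a tonemap like this lifts midtones without the clipped, washed-out highlights the table warns about.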

## Brightness Best Practices

- Dense animated backgrounds — never flat black, always fill the grid
- Vignette minimum clamped to 0.15 (not 0.12)
- Bloom threshold 130 (not 170) so more pixels contribute to glow
- Use `screen` blend mode (not `overlay`) for dark ASCII layers — overlay squares dark values: `2 * 0.12 * 0.12 ≈ 0.03`
- FeedbackBuffer decay minimum 0.5 — below that, feedback disappears too fast to see
- Value field floor: `vf * 0.8 + 0.05` ensures no cell is truly zero
- Per-scene gamma overrides: default 0.75, solarize 0.55, posterize 0.50, bright scenes 0.85
- Test frames early: render single frames at key timestamps before committing to full render
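
The screen-vs-overlay point checks out numerically (blend math only; this is not the pipeline's `blend_canvas` implementation):

```python
def screen(a, b):
    return 1 - (1 - a) * (1 - b)

def overlay(a, b):
    # Standard overlay: multiply branch below 0.5, screen branch above.
    return 2 * a * b if a < 0.5 else 1 - 2 * (1 - a) * (1 - b)

dark = 0.12
# overlay(dark, dark) -> 0.0288: overlay squares dark values, crushing them
# screen(dark, dark)  -> 0.2256: screen accumulates light from both layers
```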

**Quick checklist before full render:**
1. Render 3 test frames (start, middle, end)
2. Check `canvas.mean() > 8` after tonemap
3. Check no scene is visually flat black
4. Verify per-section variation (different bg/palette/color per scene)
5. Confirm shader chain includes bloom (threshold 130)
6. Confirm vignette strength ≤ 0.25
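
Steps 1–3 automate naturally. A hedged sketch (`render_frame` is a hypothetical stand-in for whatever renders a single frame in your pipeline):

```python
import numpy as np

def preflight(render_frame, timestamps=(0.5, 15.0, 45.0), floor=8.0):
    """Render a few probe frames and flag any that come out flat black."""
    problems = []
    for t in timestamps:
        canvas = render_frame(t)  # expected: uint8 (H, W, 3) pixel canvas
        mean = float(np.asarray(canvas, dtype=np.float64).mean())
        if mean <= floor:         # fails the `canvas.mean() > 8` check
            problems.append((t, mean))
    return problems
```

An empty return means every probe frame cleared the brightness floor; anything else names the timestamps to debug before committing to a full render.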