update ascii-video skill: design patterns, local time, examples

- New references/design-patterns.md: layer hierarchy (bg/content/accent),
  directional parameter arcs, scene concepts and visual metaphors,
  counter-rotating systems, wave collision, progressive fragmentation,
  entropy/consumption, staggered crescendo buildup, scene ordering
- New references/examples.md: copy-paste-ready scenes at every complexity
- Update scenes.md: local time convention (t=0 at scene start)
- Update SKILL.md: add design-patterns.md to reference table
- Add README.md to hermes-agent copy
- Sync all reference docs with canonical source (SHL0MS/ascii-video)
SHL0MS
2026-03-13 19:13:12 -04:00
parent bfb82b5cee
commit cda5910ab0
12 changed files with 3503 additions and 95 deletions


@@ -0,0 +1,249 @@
# ☤ ASCII Video
Renders any content as colored ASCII character video. Audio, video, images, text, or pure math in, MP4/GIF/PNG sequence out. Full RGB color per character cell, 1080p 24fps default. No GPU.
Built for [Hermes Agent](https://github.com/NousResearch/hermes-agent). Usable in any coding agent. Canonical source lives here; synced to [`NousResearch/hermes-agent/skills/creative/ascii-video`](https://github.com/NousResearch/hermes-agent/tree/main/skills/creative/ascii-video) via PR.
## What this is
A skill that teaches an agent how to build single-file Python renderers for ASCII video from scratch. The agent gets the full pipeline: grid system, font rasterization, effect library, shader chain, audio analysis, parallel encoding. It writes the renderer, runs it, gets video.
The output is actual video. Not terminal escape codes. Frames are computed as grids of colored characters, composited onto pixel canvases with pre-rasterized font bitmaps, post-processed through shaders, piped to ffmpeg.
## Modes
| Mode | Input | Output |
|------|-------|--------|
| Video-to-ASCII | A video file | ASCII recreation of the footage |
| Audio-reactive | An audio file | Visuals driven by frequency bands, beats, energy |
| Generative | Nothing | Procedural animation from math |
| Hybrid | Video + audio | ASCII video with audio-reactive overlays |
| Lyrics/text | Audio + timed text (SRT) | Karaoke-style text with effects |
| TTS narration | Text quotes + API key | Narrated video with typewriter text and generated speech |
## Pipeline
Every mode follows the same 6-stage path:
```
INPUT --> ANALYZE --> SCENE_FN --> TONEMAP --> SHADE --> ENCODE
```
1. **Input** loads source material (or nothing for generative).
2. **Analyze** extracts per-frame features. Audio gets 6-band FFT, RMS, spectral centroid, flatness, flux, beat detection with exponential decay. Video gets luminance, edges, motion.
3. **Scene function** returns a pixel canvas directly. Composes multiple character grids at different densities, value/hue fields, pixel blend modes. This is where the visuals happen.
4. **Tonemap** does adaptive percentile-based brightness normalization with per-scene gamma. ASCII on black is inherently dark. Linear multipliers don't work. This does.
5. **Shade** runs a `ShaderChain` (38 composable shaders) plus a `FeedbackBuffer` for temporal recursion with spatial transforms.
6. **Encode** pipes raw RGB frames to ffmpeg for H.264 encoding. Segments concatenated, audio muxed.
## Grid system
Characters render on fixed-size grids. Layer multiple densities for depth.
| Size | Font | Grid at 1080p | Use |
|------|------|---------------|-----|
| xs | 8px | 400x108 | Ultra-dense data fields |
| sm | 10px | 320x83 | Rain, starfields |
| md | 16px | 192x56 | Default balanced |
| lg | 20px | 160x45 | Readable text |
| xl | 24px | 137x37 | Large titles |
| xxl | 40px | 80x22 | Giant minimal |
Rendering the same scene on `sm` and `lg` then screen-blending them creates natural texture interference. Fine detail shows through gaps in coarse characters. Most scenes use two or three grids.
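The interference effect is easy to see in miniature. As a sketch (the field names and shapes here are illustrative, not the skill's API), screen-blending a coarse field with a fine one keeps the bright detail of both layers:

```python
import numpy as np

def screen_blend(a, b):
    """Screen blend two float fields in [0, 1]: bright areas of either layer survive."""
    return 1.0 - (1.0 - a) * (1.0 - b)

# Coarse and fine sine fields standing in for lg-grid and sm-grid renders
y, x = np.mgrid[0:45, 0:160].astype(np.float32)
coarse = 0.5 + 0.5 * np.sin(x / 16.0)            # lg-scale structure
fine   = 0.5 + 0.5 * np.sin(x / 3.0 + y / 2.0)   # sm-scale texture
combined = screen_blend(coarse, fine)            # fine detail shows through coarse gaps
```

Because screen never darkens, `combined` is at least as bright as either input everywhere, which is what lets fine texture survive inside the gaps of coarse characters.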
## Character palettes (20+)
Each sorted dark-to-bright, each a different visual texture. Validated against the font at init so broken glyphs get dropped silently.
| Family | Examples | Feel |
|--------|----------|------|
| Density ramps | ` .:-=+#@█` | Classic ASCII art gradient |
| Block elements | ` ░▒▓█▄▀▐▌` | Chunky, digital |
| Braille | ` ⠁⠂⠃...⠿` | Fine-grained pointillism |
| Dots | ` ⋅∘∙●◉◎` | Smooth, organic |
| Stars | ` ·✧✦✩✨★✶` | Sparkle, celestial |
| Half-fills | ` ◔◑◕◐◒◓◖◗◙` | Directional fill progression |
| Crosshatch | ` ▣▤▥▦▧▨▩` | Hatched density ramp |
| Math | ` ·∘∙•°±×÷≈≠≡∞∫∑Ω` | Scientific, abstract |
| Box drawing | ` ─│┌┐└┘├┤┬┴┼` | Structural, circuit-like |
| Katakana | ` ·ヲァィゥェォャュ...` | Matrix rain |
| Greek | ` αβγδεζηθ...ω` | Classical, academic |
| Runes | ` ᚠᚢᚦᚱᚷᛁᛇᛒᛖᛚᛞᛟ` | Mystical, ancient |
| Alchemical | ` ☉☽♀♂♃♄♅♆♇` | Esoteric |
| Arrows | ` ←↑→↓↔↕↖↗↘↙` | Directional, kinetic |
| Music | ` ♪♫♬♩♭♮♯○●` | Musical |
| Project-specific | ` .·~=≈∞⚡☿✦★⊕◊◆▲▼●■` | Themed per project |
Custom palettes are built per project to match the content.
## Color strategies
| Strategy | How it maps hue | Good for |
|----------|----------------|----------|
| Angle-mapped | Position angle from center | Rainbow radial effects |
| Distance-mapped | Distance from center | Depth, tunnels |
| Frequency-mapped | Audio spectral centroid | Timbral shifting |
| Value-mapped | Brightness level | Heat maps, fire |
| Time-cycled | Slow rotation over time | Ambient, chill |
| Source-sampled | Original video pixel colors | Video-to-ASCII |
| Palette-indexed | Discrete lookup table | Retro, flat graphic |
| Temperature | Warm-to-cool blend | Emotional tone |
| Complementary | Hue + opposite | Bold, dramatic |
| Triadic | Three equidistant hues | Psychedelic, vibrant |
| Analogous | Neighboring hues | Harmonious, subtle |
| Monochrome | Fixed hue, vary S/V | Noir, focused |
Plus 10 discrete RGB palettes (neon, pastel, cyberpunk, vaporwave, earth, ice, blood, forest, mono-green, mono-amber).
## Effects
### Backgrounds
| Effect | Description | Parameters |
|--------|-------------|------------|
| Sine field | Layered sinusoidal interference | freq, speed, octave count |
| Smooth noise | Multi-octave Perlin approximation | octaves, scale |
| Cellular | Voronoi-like moving cells | n_centers, speed |
| Noise/static | Random per-cell flicker | density |
| Video source | Downsampled video frame | brightness |
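As a sketch of the first row above, a minimal vectorized sine field (parameter names are illustrative, not the skill's exact signature):

```python
import numpy as np

def sine_field(rows, cols, t, freq=0.15, speed=1.0, octaves=3):
    """Layered sinusoidal interference: rotated sine waves at doubling frequencies."""
    yy, xx = np.mgrid[0:rows, 0:cols].astype(np.float32)
    out = np.zeros((rows, cols), dtype=np.float32)
    for o in range(octaves):
        f = freq * (2 ** o)
        ang = 0.7 * o                          # rotate each octave so waves interfere
        u = xx * np.cos(ang) + yy * np.sin(ang)
        out += np.sin(u * f + t * speed * (o + 1)) / (o + 1)
    out -= out.min()
    return out / max(float(out.max()), 1e-6)   # normalize to [0, 1]
```

Note the per-octave loop runs only `octaves` times; everything per-cell stays in numpy, matching the vectorization requirement in the pitfalls section.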
### Primary effects
| Effect | Description |
|--------|-------------|
| Concentric rings | Bass-driven pulsing rings with wobble |
| Radial rays | Spoke pattern, beat-triggered |
| Spiral arms | Logarithmic spiral, configurable arm count/tightness |
| Tunnel | Infinite depth perspective |
| Vortex | Twisting radial distortion |
| Frequency waves | Per-band sine waves at different heights |
| Interference | Overlapping sine waves creating moire |
| Aurora | Horizontal flowing bands |
| Ripple | Point-source concentric waves |
| Fire columns | Rising flames with heat-color gradient |
| Spectrum bars | Mirrored frequency visualizer |
| Waveform | Oscilloscope-style trace |
### Particle systems
| Type | Behavior | Character sets |
|------|----------|---------------|
| Explosion | Beat-triggered radial burst | `*+#@⚡✦★█▓` |
| Sparks | Short-lived bright dots | `·•●★✶*+` |
| Embers | Rising from bottom with drift | `·•●★` |
| Snow | Falling with wind sway | `❄❅❆·•*○` |
| Rain | Fast vertical streaks | `│┃║/\` |
| Bubbles | Rising, expanding | `○◎◉●∘∙°` |
| Data | Falling hex/binary | `01{}[]<>/\` |
| Runes | Mystical floating symbols | `ᚠᚢᚦᚱᚷᛁ✦★` |
| Orbit | Circular/elliptical paths | `·•●` |
| Gravity well | Attracted to point sources | configurable |
| Dissolve | Spread across screen, fade | configurable |
| Starfield | 3D projected, approaching | configurable |
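All of these systems share the same vectorized core: arrays of positions, velocities, and ages advanced once per frame, with expired particles culled. A minimal sketch (names and signature are illustrative):

```python
import numpy as np

def step_particles(pos, vel, age, dt, lifetime, gravity=0.0):
    """Advance particles one frame and drop the dead ones.
    pos, vel: (N, 2) float32 arrays; age: (N,) float32 array."""
    vel = vel + np.array([0.0, gravity], dtype=np.float32) * dt
    pos = pos + vel * dt
    age = age + dt
    alive = age < lifetime                 # boolean mask culls expired particles
    return pos[alive], vel[alive], age[alive]
```

Each behavior in the table is then just a different initializer (burst velocities for explosions, downward drift for snow) plus a force term.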
## Shader pipeline
38 composable shaders, applied to the pixel canvas after character rendering. Configurable per section.
| Category | Shaders |
|----------|---------|
| Geometry | CRT barrel, pixelate, wave distort, displacement map, kaleidoscope, mirror (h/v/quad/diag) |
| Channel | Chromatic aberration (beat-reactive), channel shift, channel swap, RGB split radial |
| Color | Invert, posterize, threshold, solarize, hue rotate, saturation, color grade, color wobble, color ramp |
| Glow/Blur | Bloom, edge glow, soft focus, radial blur |
| Noise | Film grain (beat-reactive), static noise |
| Lines/Patterns | Scanlines, halftone |
| Tone | Vignette, contrast, gamma, levels, brightness |
| Glitch/Data | Glitch bands (beat-reactive), block glitch, pixel sort, data bend |
12 color tint presets: warm, cool, matrix green, amber, sepia, neon pink, ice, blood, forest, void, sunset, neutral.
7 mood presets for common shader combos:
| Mood | Shaders |
|------|---------|
| Retro terminal | CRT + scanlines + grain + amber/green tint |
| Clean modern | Light bloom + subtle vignette |
| Glitch art | Heavy chromatic + glitch bands + color wobble |
| Cinematic | Bloom + vignette + grain + color grade |
| Dreamy | Heavy bloom + soft focus + color wobble |
| Harsh/industrial | High contrast + grain + scanlines, no bloom |
| Psychedelic | Color wobble + chromatic + kaleidoscope mirror |
## Blend modes and composition
20 pixel blend modes for layering canvases: normal, add, subtract, multiply, screen, overlay, softlight, hardlight, difference, exclusion, colordodge, colorburn, linearlight, vividlight, pin_light, hard_mix, lighten, darken, grain_extract, grain_merge.
Mirror modes: horizontal, vertical, quad, diagonal, kaleidoscope (6-fold radial). Beat-triggered.
Transitions: crossfade, directional wipe, radial wipe, dissolve, glitch cut.
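Two of these transitions as minimal numpy sketches (hypothetical helpers, not the skill's exact functions):

```python
import numpy as np

def crossfade(a, b, t):
    """Linear crossfade between two uint8 RGB canvases, t in [0, 1]."""
    return (a.astype(np.float32) * (1.0 - t) + b.astype(np.float32) * t).astype(np.uint8)

def wipe_lr(a, b, t):
    """Left-to-right directional wipe: b is revealed over a as t goes 0 -> 1."""
    h, w = a.shape[:2]
    edge = int(w * t)
    out = a.copy()
    out[:, :edge] = b[:, :edge]
    return out
```

A glitch cut would follow the same shape but copy randomized horizontal bands of `b` instead of a clean edge.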
## Hardware adaptation
Auto-detects CPU count, RAM, platform, ffmpeg. Adapts worker count, resolution, FPS.
| Profile | Resolution | FPS | When |
|---------|-----------|-----|------|
| `draft` | 960x540 | 12 | Check timing/layout |
| `preview` | 1280x720 | 15 | Review effects |
| `production` | 1920x1080 | 24 | Final output |
| `max` | 3840x2160 | 30 | Ultra-high |
| `auto` | Detected | 24 | Adapts to hardware + duration |
`auto` estimates render time and downgrades if it would take over an hour. Low-memory systems drop to 720p automatically.
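The downgrade decision might look roughly like this; the thresholds and the per-frame cost constant are illustrative assumptions consistent with the timing table below, not the skill's actual values:

```python
def pick_profile(duration_s, fps=24, workers=8,
                 sec_per_frame_per_worker=1.2, max_render_s=3600):
    """Estimate wall-clock render time; drop to a lighter profile if over budget."""
    frames = duration_s * fps
    est = frames * sec_per_frame_per_worker / workers
    if est <= max_render_s:
        return "production", est
    # 720p roughly halves the per-frame cost
    return "preview", est / 2
```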
### Render times (1080p 24fps, ~1.2 s/frame per worker)
| Duration | 4 workers | 8 workers | 16 workers |
|----------|-----------|-----------|------------|
| 30s | ~3 min | ~2 min | ~1 min |
| 2 min | ~13 min | ~7 min | ~4 min |
| 5 min | ~33 min | ~17 min | ~9 min |
| 10 min | ~65 min | ~33 min | ~17 min |
720p roughly halves these. 4K roughly quadruples them.
## Known pitfalls
**Brightness.** ASCII characters are small bright dots on black. Most frame pixels are background. Linear `* N` multipliers clip highlights and wash out. Use `tonemap()` with per-scene gamma instead. Default gamma 0.75, solarize scenes 0.55, posterize 0.50.
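A minimal sketch of what percentile-based tonemapping looks like under these constraints (the real helper's signature and defaults may differ):

```python
import numpy as np

def tonemap(canvas, gamma=0.75, pct=99.0):
    """Normalize so the pct-th percentile of lit pixels maps to full white,
    then apply gamma to lift the midtones that ASCII-on-black crushes."""
    f = canvas.astype(np.float32) / 255.0
    lit = f[f > 0]
    if lit.size == 0:
        return canvas                      # fully black frame: nothing to normalize
    scale = float(np.percentile(lit, pct))
    f = np.clip(f / max(scale, 1e-6), 0.0, 1.0) ** gamma
    return (f * 255).astype(np.uint8)
```

Scaling by a percentile of the *lit* pixels is the key move: the sea of black background pixels no longer drags the normalization down the way a global mean or a linear multiplier would.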
**Render bottleneck.** The per-cell Python loop compositing font bitmaps runs at ~100-150ms/frame. Unavoidable without Cython/C. Everything else must be vectorized numpy. Python for-loops over rows/cols in effect functions will tank performance.
**ffmpeg deadlock.** Never `stderr=subprocess.PIPE` on long-running encodes. Buffer fills at ~64KB, process hangs. Redirect stderr to a file.
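A sketch of the safe pattern (a representative raw-RGB encode command; the skill's exact flags may differ):

```python
import subprocess

def ffmpeg_cmd(out_path, w, h, fps):
    """Command for encoding raw RGB24 frames piped on stdin to H.264."""
    return ["ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "rgb24",
            "-s", f"{w}x{h}", "-r", str(fps), "-i", "-",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]

def start_encode(out_path, w, h, fps, log_path="ffmpeg.log"):
    """stderr goes to a file, NOT subprocess.PIPE: ffmpeg's progress chatter
    fills a PIPE's ~64KB buffer and the encode hangs mid-render."""
    log = open(log_path, "wb")
    return subprocess.Popen(ffmpeg_cmd(out_path, w, h, fps),
                            stdin=subprocess.PIPE, stderr=log)

# proc = start_encode("out.mp4", 1920, 1080, 24)
# for frame in frames: proc.stdin.write(frame.tobytes())
# proc.stdin.close(); proc.wait()
```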
**Font cell height.** Pillow's `textbbox()` returns wrong height on macOS. Use `font.getmetrics()` for `ascent + descent`.
**Font compatibility.** Not all Unicode renders in all fonts. Palettes validated at init, blank glyphs silently removed.
## Requirements
◆ Python 3.10+
◆ NumPy, Pillow, SciPy (audio modes)
◆ ffmpeg on PATH
◆ A monospace font (Menlo, Courier, Monaco, auto-detected)
◆ Optional: OpenCV, ElevenLabs API key (TTS mode)
## File structure
```
├── SKILL.md # Modes, workflow, creative direction
├── README.md # This file
└── references/
    ├── architecture.md    # Grid system, fonts, palettes, color, _render_vf()
    ├── effects.md         # Value fields, hue fields, backgrounds, particles
    ├── shaders.md         # 38 shaders, ShaderChain, tint presets, transitions
    ├── composition.md     # Blend modes, multi-grid, tonemap, FeedbackBuffer
    ├── scenes.md          # Scene protocol, SCENES table, render_clip(), examples
    ├── design-patterns.md # Layer hierarchy, directional arcs, scene concepts
    ├── inputs.md          # Audio analysis, video sampling, text, TTS
    ├── optimization.md    # Hardware detection, vectorized patterns, parallelism
    └── troubleshooting.md # Broadcasting traps, blend pitfalls, diagnostics
```
## Projects built with this
✦ 85-second highlight reel. 15 scenes (14×5s + 15s crescendo finale), randomized order, directional parameter arcs, layer hierarchy composition. Showcases the full effect vocabulary: fBM, voronoi fragmentation, reaction-diffusion, cellular automata, dual counter-rotating spirals, wave collision, domain warping, tunnel descent, kaleidoscope symmetry, boid flocking, fire simulation, glitch corruption, and a 7-layer crescendo buildup.
✦ Audio-reactive music visualizer. 3.5 min, 8 sections with distinct effects, beat-triggered particles and glitch, cycling palettes.
✦ TTS narrated testimonial video. 23 quotes, per-quote ElevenLabs voices, background music at 15% wide stereo, per-clip re-rendering for iterative editing.


@@ -59,16 +59,20 @@ Every mode follows the same 6-stage pipeline. See `references/architecture.md` f
| Dimension | Options | Reference |
|-----------|---------|-----------|
| **Character palette** | Density ramps, block elements, symbols, scripts (katakana, Greek, runes, braille), dots, project-specific | `architecture.md` § Character Palettes |
| **Color strategy** | HSV (angle/distance/time/value mapped), OKLAB/OKLCH (perceptually uniform), discrete RGB palettes, auto-generated harmony (complementary/triadic/analogous/tetradic), monochrome, temperature | `architecture.md` § Color System |
| **Color tint** | Warm, cool, amber, matrix green, neon pink, sepia, ice, blood, void, sunset | `shaders.md` § Color Grade |
| **Background texture** | Sine fields, fBM noise, domain warp, voronoi cells, reaction-diffusion, cellular automata, video source | `effects.md` § Background Fills, Noise-Based Fields, Simulation-Based Fields |
| **Primary effects** | Rings, spirals, tunnel, vortex, waves, interference, aurora, ripple, fire, strange attractors, SDFs (geometric shapes with smooth booleans) | `effects.md` § Radial / Wave / Fire / SDF-Based Fields |
| **Particles** | Energy sparks, snow, rain, bubbles, runes, binary data, orbits, gravity wells, flocking boids, flow-field followers, trail-drawing particles | `effects.md` § Particle Systems |
| **Shader mood** | Retro CRT, clean modern, glitch art, cinematic, dreamy, harsh industrial, psychedelic | `shaders.md` § Design Philosophy |
| **Grid density** | xs(8px) through xxl(40px), mixed per layer | `architecture.md` § Grid System |
| **Font** | Menlo, Monaco, Courier, SF Mono, JetBrains Mono, Fira Code, IBM Plex | `architecture.md` § Font Selection |
| **Coordinate space** | Cartesian, polar, tiled, rotated, skewed, fisheye, twisted, Möbius, domain-warped | `effects.md` § Coordinate Transforms |
| **Mirror mode** | None, horizontal, vertical, quad, diagonal, kaleidoscope | `shaders.md` § Mirror Effects |
| **Masking** | Circle, rect, ring, gradient, text stencil, value-field-as-mask, animated iris/wipe/dissolve | `composition.md` § Masking |
| **Temporal motion** | Static, audio-reactive, eased keyframes, morphing between fields, temporal noise (smooth in-place evolution) | `effects.md` § Temporal Coherence |
| **Transition style** | Crossfade, wipe (directional/radial), dissolve, glitch cut, iris open/close, mask-based reveal | `shaders.md` § Transitions, `composition.md` § Animated Masks |
| **Aspect ratio** | Landscape (16:9), portrait (9:16), square (1:1), ultrawide (21:9) | `architecture.md` § Resolution Presets |
### Per-Section Variation
@@ -95,10 +99,11 @@ Establish with user:
- **Input source** — file path, format, duration
- **Mode** — which of the 6 modes above
- **Sections** — time-mapped style changes (timestamps → effect names)
- **Resolution** — landscape 1920x1080 (default), portrait 1080x1920, square 1080x1080 @ 24fps; GIFs typically 640x360 @ 15fps
- **Style direction** — dense/sparse, bright/dark, chaotic/minimal, color palette
- **Text/branding** — easter eggs, overlays, credits, themed character sets
- **Output format** — MP4 (default), GIF, PNG sequence
- **Aspect ratio** — landscape (16:9), portrait (9:16 for TikTok/Reels/Stories), square (1:1 for IG feed)
### Step 2: Detect Hardware and Set Quality
@@ -240,11 +245,12 @@ Image.fromarray(canvas).save("test.png")
| File | Contents |
|------|----------|
| `references/architecture.md` | Grid system (landscape/portrait/square resolution presets), font selection, character palettes (library of 20+), color system (HSV + OKLAB/OKLCH + discrete RGB + color harmony generation + perceptual gradient interpolation), `_render_vf()` helper, compositing, v2 effect function contract |
| `references/inputs.md` | All input sources: audio analysis, video sampling, image conversion, text/lyrics, TTS integration (ElevenLabs, voice assignment, audio mixing) |
| `references/effects.md` | Effect building blocks: 20+ value field generators (trig, noise/fBM, domain warp, voronoi, reaction-diffusion, cellular automata, strange attractors, SDFs), 8 hue field generators, coordinate transforms (rotate/tile/polar/Möbius), temporal coherence (easing, keyframes, morphing), radial/wave/fire effects, advanced particles (flocking, flow fields, trails), composing guide |
| `references/shaders.md` | 38 shader implementations (geometry, channel, color, glow, noise, pattern, tone, glitch, mirror), `ShaderChain` class, full `_apply_shader_step()` dispatch, audio-reactive scaling, transitions, tint presets |
| `references/composition.md` | **v2 core**: pixel blend modes (20 modes with implementations), multi-grid composition, `_render_vf()` helper, adaptive `tonemap()`, per-scene gamma, `FeedbackBuffer` with spatial transforms, `PixelBlendStack`, masking/stencil system (shape masks, text stencils, animated masks, boolean ops) |
| `references/scenes.md` | **v2 scene protocol**: scene function contract (local time convention), `Renderer` class, `SCENES` table structure, `render_clip()` loop, beat-synced cutting, parallel rendering + pickling constraints, 4 complete scene examples, scene design checklist |
| `references/design-patterns.md` | **Scene composition patterns**: layer hierarchy (bg/content/accent), directional parameter arcs vs oscillation, scene concepts and visual metaphors, counter-rotating dual systems, wave collision, progressive fragmentation, entropy/consumption, staggered layer entry (crescendo), scene ordering |
| `references/troubleshooting.md` | NumPy broadcasting traps, blend mode pitfalls, multiprocessing/pickling issues, brightness diagnostics, ffmpeg deadlocks, font issues, performance bottlenecks, common mistakes |
| `references/optimization.md` | Hardware detection, adaptive quality profiles (draft/preview/production/max), CLI integration, vectorized effect patterns, parallel rendering, memory management |


@@ -1,12 +1,43 @@
# Architecture Reference
**Cross-references:**
- Effect building blocks (value fields, noise, SDFs, particles): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer, output encoding: `shaders.md`
- Complete scene examples: `examples.md`
- Input sources (audio analysis, video, TTS): `inputs.md`
- Performance tuning, hardware detection: `optimization.md`
- Common bugs (broadcasting, font, encoding): `troubleshooting.md`
## Grid System
### Resolution Presets
```python
RESOLUTION_PRESETS = {
    "landscape":   (1920, 1080),  # 16:9 — YouTube, default
    "portrait":    (1080, 1920),  # 9:16 — TikTok, Reels, Stories
    "square":      (1080, 1080),  # 1:1 — Instagram feed
    "ultrawide":   (2560, 1080),  # 21:9 — cinematic
    "landscape4k": (3840, 2160),  # 16:9 — 4K
    "portrait4k":  (2160, 3840),  # 9:16 — 4K portrait
}

def get_resolution(preset="landscape", custom=None):
    """Returns (VW, VH) tuple."""
    if custom:
        return custom
    return RESOLUTION_PRESETS.get(preset, RESOLUTION_PRESETS["landscape"])
```
### Multi-Density Grids
Pre-initialize multiple grid sizes. Switch per section for visual variety. Grid dimensions auto-compute from resolution:
**Landscape (1920x1080):**
| Key | Font Size | Grid (cols x rows) | Use |
|-----|-----------|-------------------|-----|
| xs | 8 | 400x108 | Ultra-dense data fields |
| sm | 10 | 320x83 | Dense detail, rain, starfields |
@@ -15,7 +46,34 @@ Pre-initialize multiple grid sizes. Switch per section for visual variety.
| xl | 24 | 137x37 | Short quotes, large titles |
| xxl | 40 | 80x22 | Giant text, minimal |
**Grid sizing for text-heavy content**: When displaying readable text (quotes, lyrics, testimonials), use 20px (`lg`) as the primary grid. This gives 160 columns -- plenty for lines up to ~50 chars centered. For very short quotes (< 60 chars, <= 3 lines), 24px (`xl`) makes them more impactful. Only init the grids you actually use -- each grid pre-rasterizes all characters which costs ~0.3-0.5s.
**Portrait (1080x1920):**
| Key | Font Size | Grid (cols x rows) | Use |
|-----|-----------|-------------------|-----|
| xs | 8 | 225x192 | Ultra-dense, tall data columns |
| sm | 10 | 180x148 | Dense detail, vertical rain |
| md | 16 | 112x100 | Default balanced |
| lg | 20 | 90x80 | Readable text (~30 chars/line centered) |
| xl | 24 | 75x66 | Short quotes, stacked |
| xxl | 40 | 45x39 | Giant text, minimal |
**Square (1080x1080):**
| Key | Font Size | Grid (cols x rows) | Use |
|-----|-----------|-------------------|-----|
| sm | 10 | 180x83 | Dense detail |
| md | 16 | 112x56 | Default balanced |
| lg | 20 | 90x45 | Readable text |
**Key differences in portrait mode:**
- Fewer columns (90 at `lg` vs 160) — lines must be shorter or wrap
- Many more rows (80 at `lg` vs 45) — vertical stacking is natural
- Aspect ratio correction still works (`asp = cw / ch`), but the visual emphasis flips to vertical
- Radial effects appear as tall ellipses unless corrected
- Vertical effects (rain, embers, fire columns) are naturally enhanced
- Horizontal effects (spectrum bars, waveforms) need rotation or compression
**Grid sizing for text in portrait**: Use `lg` (20px) for 2-3 word lines. Max comfortable line length is ~25-30 chars. For longer quotes, break aggressively into many short lines stacked vertically — portrait has vertical space to spare. `xl` (24px) works for single words or very short phrases.
Grid dimensions: `cols = VW // cell_width`, `rows = VH // cell_height`.
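Character cells are taller than wide, so radial effects need the aspect correction (`asp = cw / ch`) to render as circles rather than tall ellipses. A minimal sketch (illustrative helper, not the skill's exact code):

```python
import numpy as np

def radial_dist(rows, cols, cw, ch):
    """Distance-from-center field in cell units, aspect-corrected so
    rings come out circular instead of stretched along the rows axis."""
    asp = cw / ch                             # e.g. ~0.5 for a 2:1 tall cell
    yy, xx = np.mgrid[0:rows, 0:cols].astype(np.float32)
    dx = (xx - cols / 2) * asp                # compress x into square units
    dy = yy - rows / 2
    return np.sqrt(dx * dx + dy * dy)
```

With `asp = 0.5`, four columns of horizontal offset equal two rows of vertical offset, which matches their actual on-screen distance.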
@@ -59,7 +117,23 @@ FONT_PREFS_LINUX = [
("Noto Sans Mono", "/usr/share/fonts/truetype/noto/NotoSansMono-Regular.ttf"),
("Ubuntu Mono", "/usr/share/fonts/truetype/ubuntu/UbuntuMono-R.ttf"),
]
FONT_PREFS_WINDOWS = [
    ("Consolas", r"C:\Windows\Fonts\consola.ttf"),
    ("Courier New", r"C:\Windows\Fonts\cour.ttf"),
    ("Lucida Console", r"C:\Windows\Fonts\lucon.ttf"),
    ("Cascadia Code", os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\Windows\Fonts\CascadiaCode.ttf")),
    ("Cascadia Mono", os.path.expandvars(r"%LOCALAPPDATA%\Microsoft\Windows\Fonts\CascadiaMono.ttf")),
]

def _get_font_prefs():
    s = platform.system()
    if s == "Darwin":
        return FONT_PREFS_MACOS
    elif s == "Windows":
        return FONT_PREFS_WINDOWS
    return FONT_PREFS_LINUX

FONT_PREFS = _get_font_prefs()
```
**Multi-font rendering**: use different fonts for different layers (e.g., monospace for background, a bolder variant for overlay text). Each GridLayer owns its own font:
@@ -77,8 +151,8 @@ Before initializing grids, gather all characters that need bitmap pre-rasterizat
all_chars = set()
for pal in [PAL_DEFAULT, PAL_DENSE, PAL_BLOCKS, PAL_RUNE, PAL_KATA,
            PAL_GREEK, PAL_MATH, PAL_DOTS, PAL_BRAILLE, PAL_STARS,
            PAL_HALFFILL, PAL_HATCH, PAL_BINARY, PAL_MUSIC, PAL_BOX,
            PAL_CIRCUIT, PAL_ARROWS, PAL_HERMES]:  # ... all palettes used in project
    all_chars.update(pal)
# Add any overlay text characters
all_chars.update("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 .,-:;!?/|")
@@ -87,21 +161,31 @@ all_chars.discard(" ") # space is never rendered
### GridLayer Initialization
Each grid pre-computes coordinate arrays for vectorized effect math. The grid automatically adapts to any resolution (landscape, portrait, square):
```python
class GridLayer:
    def __init__(self, font_path, font_size, vw=None, vh=None):
        """Initialize grid for any resolution.
        vw, vh: video width/height in pixels. Defaults to global VW, VH."""
        vw = vw or VW; vh = vh or VH
        self.vw = vw; self.vh = vh
        self.font = ImageFont.truetype(font_path, font_size)
        asc, desc = self.font.getmetrics()
        bbox = self.font.getbbox("M")
        self.cw = bbox[2] - bbox[0]   # character cell width
        self.ch = asc + desc          # CRITICAL: not textbbox height
        self.cols = vw // self.cw
        self.rows = vh // self.ch
        self.ox = (vw - self.cols * self.cw) // 2  # centering
        self.oy = (vh - self.rows * self.ch) // 2
        # Aspect ratio metadata
        self.aspect = vw / vh         # >1 = landscape, <1 = portrait, 1 = square
        self.is_portrait = vw < vh
        self.is_landscape = vw > vh
        # Index arrays
        self.rr = np.arange(self.rows, dtype=np.float32)[:, None]
@@ -219,9 +303,11 @@ PAL_ARABIC = " \u0627\u0628\u062a\u062b\u062c\u062d\u062e\u062f\u0630\u0631\u0
#### Dot / Point Progressions
```python
PAL_DOTS = " \u22c5\u2218\u2219\u25cf\u25c9\u25ce\u25c6\u2726\u2605" # dot size progression
PAL_BRAILLE = " \u2801\u2802\u2803\u2804\u2805\u2806\u2807\u2808\u2809\u280a\u280b\u280c\u280d\u280e\u280f\u2810\u2811\u2812\u2813\u2814\u2815\u2816\u2817\u2818\u2819\u281a\u281b\u281c\u281d\u281e\u281f\u283f" # braille patterns
PAL_STARS = " \u00b7\u2727\u2726\u2729\u2728\u2605\u2736\u2733\u2738" # star progression
PAL_DOTS = " ⋅∘∙●◉◎◆✦★" # dot size progression
PAL_BRAILLE = " ⠁⠂⠃⠄⠅⠆⠇⠈⠉⠊⠋⠌⠍⠎⠏⠐⠑⠒⠓⠔⠕⠖⠗⠘⠙⠚⠛⠜⠝⠞⠟⠿" # braille patterns
PAL_STARS = " ·✧✦✩✨★✶✳✸" # star progression
PAL_HALFFILL = " ◔◑◕◐◒◓◖◗◙" # directional half-fill progression
PAL_HATCH = " ▣▤▥▦▧▨▩" # crosshatch density ramp
```
#### Project-Specific (examples -- invent new ones per project)
@@ -353,6 +439,202 @@ def rgb_palette_map(val, mask, palette):
return R, G, B
```
### OKLAB Color Space (Perceptually Uniform)
HSV hue is perceptually non-uniform: green occupies far more visual range than blue. OKLAB / OKLCH provide perceptually even color steps — hue increments of 0.1 look equally different regardless of starting hue. Use OKLAB for:
- Gradient interpolation (no unwanted intermediate hues)
- Color harmony generation (perceptually balanced palettes)
- Smooth color transitions over time
```python
# --- sRGB <-> Linear sRGB ---
def srgb_to_linear(c):
    """Convert sRGB [0,1] to linear light. c: float32 array."""
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Convert linear light to sRGB [0,1]."""
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * np.power(np.maximum(c, 0), 1/2.4) - 0.055)

# --- Linear sRGB <-> OKLAB ---
def linear_rgb_to_oklab(r, g, b):
    """Linear sRGB to OKLAB. r,g,b: float32 arrays [0,1].
    Returns (L, a, b) where L=[0,1], a,b=[-0.4, 0.4] approx."""
    l_ = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m_ = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s_ = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_c = np.cbrt(l_); m_c = np.cbrt(m_); s_c = np.cbrt(s_)
    L = 0.2104542553 * l_c + 0.7936177850 * m_c - 0.0040720468 * s_c
    a = 1.9779984951 * l_c - 2.4285922050 * m_c + 0.4505937099 * s_c
    b_ = 0.0259040371 * l_c + 0.7827717662 * m_c - 0.8086757660 * s_c
    return L, a, b_

def oklab_to_linear_rgb(L, a, b):
    """OKLAB to linear sRGB. Returns (r, g, b) float32 arrays [0,1]."""
    l_ = L + 0.3963377774 * a + 0.2158037573 * b
    m_ = L - 0.1055613458 * a - 0.0638541728 * b
    s_ = L - 0.0894841775 * a - 1.2914855480 * b
    l_c = l_ ** 3; m_c = m_ ** 3; s_c = s_ ** 3
    r = +4.0767416621 * l_c - 3.3077115913 * m_c + 0.2309699292 * s_c
    g = -1.2684380046 * l_c + 2.6097574011 * m_c - 0.3413193965 * s_c
    b_ = -0.0041960863 * l_c - 0.7034186147 * m_c + 1.7076147010 * s_c
    return np.clip(r, 0, 1), np.clip(g, 0, 1), np.clip(b_, 0, 1)

# --- Convenience: sRGB uint8 <-> OKLAB ---
def rgb_to_oklab(R, G, B):
    """sRGB uint8 arrays to OKLAB."""
    r = srgb_to_linear(R.astype(np.float32) / 255.0)
    g = srgb_to_linear(G.astype(np.float32) / 255.0)
    b = srgb_to_linear(B.astype(np.float32) / 255.0)
    return linear_rgb_to_oklab(r, g, b)

def oklab_to_rgb(L, a, b):
    """OKLAB to sRGB uint8 arrays."""
    r, g, b_ = oklab_to_linear_rgb(L, a, b)
    R = np.clip(linear_to_srgb(r) * 255, 0, 255).astype(np.uint8)
    G = np.clip(linear_to_srgb(g) * 255, 0, 255).astype(np.uint8)
    B = np.clip(linear_to_srgb(b_) * 255, 0, 255).astype(np.uint8)
    return R, G, B

# --- OKLCH (cylindrical form of OKLAB) ---
def oklab_to_oklch(L, a, b):
    """OKLAB to OKLCH. Returns (L, C, H) where H is in [0, 1] (normalized)."""
    C = np.sqrt(a**2 + b**2)
    H = (np.arctan2(b, a) / (2 * np.pi)) % 1.0
    return L, C, H

def oklch_to_oklab(L, C, H):
    """OKLCH to OKLAB. H in [0, 1]."""
    angle = H * 2 * np.pi
    a = C * np.cos(angle)
    b = C * np.sin(angle)
    return L, a, b
```
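A self-contained demonstration of the non-uniformity claim, using only the standard library: every fully saturated HSV color has V=1, yet actual luminance (Rec. 709 weights) varies by more than an order of magnitude across hue.

```python
import colorsys

# All twelve of these are "maximum brightness" colors in HSV (S=1, V=1)...
hues = [h / 12.0 for h in range(12)]
rgbs = [colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues]

# ...yet their actual luminance (Rec. 709 weights) is wildly uneven:
lumas = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in rgbs]
spread = max(lumas) / min(lumas)   # yellow is more than 10x brighter than blue
```

This is why equal hue steps in HSV look unequal on screen, and why OKLAB/OKLCH gradients read more evenly.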
### Gradient Interpolation (OKLAB vs HSV)
Interpolating colors through OKLAB avoids the hue detours that HSV produces:
```python
def lerp_oklab(color_a, color_b, t_array):
    """Interpolate between two sRGB colors through OKLAB.
    color_a, color_b: (R, G, B) tuples 0-255
    t_array: float32 array [0,1] — interpolation parameter per pixel.
    Returns (R, G, B) uint8 arrays."""
    La, aa, ba = rgb_to_oklab(
        np.full_like(t_array, color_a[0], dtype=np.uint8),
        np.full_like(t_array, color_a[1], dtype=np.uint8),
        np.full_like(t_array, color_a[2], dtype=np.uint8))
    Lb, ab, bb = rgb_to_oklab(
        np.full_like(t_array, color_b[0], dtype=np.uint8),
        np.full_like(t_array, color_b[1], dtype=np.uint8),
        np.full_like(t_array, color_b[2], dtype=np.uint8))
    L = La + (Lb - La) * t_array
    a = aa + (ab - aa) * t_array
    b = ba + (bb - ba) * t_array
    return oklab_to_rgb(L, a, b)

def lerp_oklch(color_a, color_b, t_array, short_path=True):
    """Interpolate through OKLCH (preserves chroma, smooth hue path).
    short_path: take the shorter arc around the hue wheel."""
    La, aa, ba = rgb_to_oklab(
        np.full_like(t_array, color_a[0], dtype=np.uint8),
        np.full_like(t_array, color_a[1], dtype=np.uint8),
        np.full_like(t_array, color_a[2], dtype=np.uint8))
    Lb, ab, bb = rgb_to_oklab(
        np.full_like(t_array, color_b[0], dtype=np.uint8),
        np.full_like(t_array, color_b[1], dtype=np.uint8),
        np.full_like(t_array, color_b[2], dtype=np.uint8))
    L1, C1, H1 = oklab_to_oklch(La, aa, ba)
    L2, C2, H2 = oklab_to_oklch(Lb, ab, bb)
    # Shortest hue path
    if short_path:
        dh = H2 - H1
        dh = np.where(dh > 0.5, dh - 1.0, np.where(dh < -0.5, dh + 1.0, dh))
        H = (H1 + dh * t_array) % 1.0
    else:
        H = H1 + (H2 - H1) * t_array
    L = L1 + (L2 - L1) * t_array
    C = C1 + (C2 - C1) * t_array
    Lout, aout, bout = oklch_to_oklab(L, C, H)
    return oklab_to_rgb(Lout, aout, bout)
```
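The shortest-arc rule in `lerp_oklch` is the part that trips people up, so here it is isolated as a standalone check (`shortest_hue_delta` is an illustrative name, not part of the pipeline):

```python
import numpy as np

def shortest_hue_delta(h1, h2):
    """Signed hue difference in [-0.5, 0.5] — the arc the lerp follows."""
    dh = h2 - h1
    return np.where(dh > 0.5, dh - 1.0, np.where(dh < -0.5, dh + 1.0, dh))

# Red-ish (H=0.95) to orange (H=0.05): naive lerp would sweep 0.9 of the
# wheel through cyan; the short path crosses 0.0 with |dh| = 0.1.
dh = shortest_hue_delta(0.95, 0.05)
assert np.isclose(dh, 0.10)
mid = (0.95 + dh * 0.5) % 1.0   # halfway point lands on pure red
assert np.isclose(mid, 0.0)
```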
### Color Harmony Generation
Auto-generate harmonious palettes from a seed color:
```python
def harmony_complementary(seed_rgb):
"""Two colors: seed + opposite hue."""
L, a, b = rgb_to_oklab(np.array([seed_rgb[0]]), np.array([seed_rgb[1]]), np.array([seed_rgb[2]]))
_, C, H = oklab_to_oklch(L, a, b)
return [seed_rgb, _oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.5) % 1.0)]
def harmony_triadic(seed_rgb):
"""Three colors: seed + two at 120-degree offsets."""
L, a, b = rgb_to_oklab(np.array([seed_rgb[0]]), np.array([seed_rgb[1]]), np.array([seed_rgb[2]]))
_, C, H = oklab_to_oklch(L, a, b)
return [seed_rgb,
_oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.333) % 1.0),
_oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.667) % 1.0)]
def harmony_analogous(seed_rgb, spread=0.08, n=5):
"""N colors spread evenly around seed hue."""
L, a, b = rgb_to_oklab(np.array([seed_rgb[0]]), np.array([seed_rgb[1]]), np.array([seed_rgb[2]]))
_, C, H = oklab_to_oklch(L, a, b)
offsets = np.linspace(-spread * (n-1)/2, spread * (n-1)/2, n)
return [_oklch_to_srgb_tuple(L[0], C[0], (H[0] + off) % 1.0) for off in offsets]
def harmony_split_complementary(seed_rgb, split=0.08):
"""Three colors: seed + two flanking the complement."""
L, a, b = rgb_to_oklab(np.array([seed_rgb[0]]), np.array([seed_rgb[1]]), np.array([seed_rgb[2]]))
_, C, H = oklab_to_oklch(L, a, b)
comp = (H[0] + 0.5) % 1.0
return [seed_rgb,
_oklch_to_srgb_tuple(L[0], C[0], (comp - split) % 1.0),
_oklch_to_srgb_tuple(L[0], C[0], (comp + split) % 1.0)]
def harmony_tetradic(seed_rgb):
"""Four colors: two complementary pairs at 90-degree offset."""
L, a, b = rgb_to_oklab(np.array([seed_rgb[0]]), np.array([seed_rgb[1]]), np.array([seed_rgb[2]]))
_, C, H = oklab_to_oklch(L, a, b)
return [seed_rgb,
_oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.25) % 1.0),
_oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.5) % 1.0),
_oklch_to_srgb_tuple(L[0], C[0], (H[0] + 0.75) % 1.0)]
def _oklch_to_srgb_tuple(L, C, H):
"""Helper: single OKLCH -> sRGB (R,G,B) int tuple."""
La = np.array([L]); Ca = np.array([C]); Ha = np.array([H])
Lo, ao, bo = oklch_to_oklab(La, Ca, Ha)
R, G, B = oklab_to_rgb(Lo, ao, bo)
return (int(R[0]), int(G[0]), int(B[0]))
```
### OKLAB Hue Fields
Drop-in replacements for `hf_*` generators that produce perceptually uniform hue variation:
```python
def hf_oklch_angle(offset=0.0, chroma=0.12, lightness=0.7):
"""OKLCH hue mapped to angle from center. Perceptually uniform rainbow.
Returns (R, G, B) uint8 color array instead of a float hue.
NOTE: Use with _render_vf_rgb() variant, not standard _render_vf()."""
def fn(g, f, t, S):
H = (g.angle / (2 * np.pi) + offset + t * 0.05) % 1.0
L = np.full_like(H, lightness)
C = np.full_like(H, chroma)
Lo, ao, bo = oklch_to_oklab(L, C, H)
R, G, B = oklab_to_rgb(Lo, ao, bo)
return mkc(R, G, B, g.rows, g.cols)
return fn
```
### Compositing Helpers
### v2 Protocol (Current)
Every scene function: `(r, f, t, S) -> canvas_uint8` — where `r` = Renderer, `f` = features dict, `t` = time float, `S` = persistent state dict
```python
def fx_example(r, f, t, S):
    ...
```
# Composition & Brightness Reference
The composable system is the core of visual complexity. It operates at three levels: pixel-level blend modes, multi-grid composition, and adaptive brightness management. This document covers all three, plus the masking/stencil system for spatial control.
**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, hue fields, particles): `effects.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer: `shaders.md`
- Complete scene examples with blend/mask usage: `examples.md`
- Blend mode pitfalls (overlay crush, division by zero): `troubleshooting.md`
## Pixel-Level Blend Modes
Order matters: `screen(A, B)` is commutative, but `difference(screen(A,B), C)` differs from `difference(A, screen(B,C))`.
### Linear-Light Blend Modes
Standard `blend_canvas()` operates in sRGB space — the raw byte values. This is fine for most uses, but sRGB is perceptually non-linear: blending in sRGB darkens midtones and shifts hues slightly. For physically accurate blending (matching how light actually combines), convert to linear light first.
Uses `srgb_to_linear()` / `linear_to_srgb()` from `architecture.md` § OKLAB Color System.
```python
def blend_canvas_linear(base, top, mode="normal", opacity=1.0):
"""Blend in linear light space for physically accurate results.
Identical API to blend_canvas(), but converts sRGB → linear before
blending and linear → sRGB after. More expensive (~2x) due to the
gamma conversions, but produces correct results for additive blending,
screen, and any mode where brightness matters.
"""
af = srgb_to_linear(base.astype(np.float32) / 255.0)
bf = srgb_to_linear(top.astype(np.float32) / 255.0)
fn = BLEND_MODES.get(mode, BLEND_MODES["normal"])
result = fn(af, bf)
if opacity < 1.0:
result = af * (1 - opacity) + result * opacity
result = linear_to_srgb(np.clip(result, 0, 1))
return np.clip(result * 255, 0, 255).astype(np.uint8)
```
**When to use `blend_canvas_linear()` vs `blend_canvas()`:**
| Scenario | Use | Why |
|----------|-----|-----|
| Screen-blending two bright layers | `linear` | sRGB screen over-brightens highlights |
| Add mode for glow/bloom effects | `linear` | Additive light follows linear physics |
| Blending text overlay at low opacity | `srgb` | Perceptual blending looks more natural for text |
| Multiply for shadow/darkening | `srgb` | Differences are minimal for darken ops |
| Color-critical work (matching reference) | `linear` | Avoids sRGB hue shifts in midtones |
| Performance-critical inner loop | `srgb` | ~2x faster, good enough for most ASCII art |
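The midtone difference is easy to demonstrate numerically. A standalone comparison using the standard sRGB transfer functions (inlined here so it runs without the rest of the pipeline):

```python
import numpy as np

# Standard sRGB transfer functions, inlined for a self-contained demo.
def srgb_to_linear(x):
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(x):
    return np.where(x <= 0.0031308, x * 12.92, 1.055 * x ** (1 / 2.4) - 0.055)

screen = lambda a, b: 1 - (1 - a) * (1 - b)

g = 128 / 255.0                       # mid-gray in sRGB
srgb_result = screen(g, g)            # blend raw byte values
lin_result = linear_to_srgb(screen(srgb_to_linear(g), srgb_to_linear(g)))
# sRGB-space screen lands noticeably brighter than the physically
# correct linear-light result for midtones (~0.75 vs ~0.65).
assert srgb_result > lin_result
```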
**Batch version** for compositing many layers (converts once, blends multiple, converts back):
```python
def blend_many_linear(layers, modes, opacities):
"""Blend a stack of layers in linear light space.
Args:
layers: list of uint8 (H,W,3) canvases
modes: list of blend mode strings (len = len(layers) - 1)
opacities: list of floats (len = len(layers) - 1)
Returns:
uint8 (H,W,3) canvas
"""
# Convert all to linear at once
linear = [srgb_to_linear(l.astype(np.float32) / 255.0) for l in layers]
result = linear[0]
for i in range(1, len(linear)):
fn = BLEND_MODES.get(modes[i-1], BLEND_MODES["normal"])
blended = fn(result, linear[i])
op = opacities[i-1]
if op < 1.0:
blended = result * (1 - op) + blended * op
result = np.clip(blended, 0, 1)
result = linear_to_srgb(result)
return np.clip(result * 255, 0, 255).astype(np.uint8)
```
---
## Multi-Grid Composition
```python
def tonemap(canvas, target_mean=90, gamma=0.75, black_point=2, white_point=253):
"""Adaptive tone-mapping: normalizes + gamma-corrects so no frame is
fully dark or washed out.
1. Compute 1st and 99.5th percentile on 4x subsample (16x fewer values,
negligible accuracy loss, major speedup at 1080p+)
2. Stretch that range to [0, 1]
3. Apply gamma curve (< 1 lifts shadows, > 1 darkens)
4. Rescale to [black_point, white_point]
"""
f = canvas.astype(np.float32)
sub = f[::4, ::4] # 4x subsample: ~390K values vs ~6.2M at 1080p
lo = np.percentile(sub, 1)
hi = np.percentile(sub, 99.5)
if hi - lo < 10:
hi = max(hi, lo + 10) # near-uniform frame fallback
f = np.clip((f - lo) / (hi - lo), 0.0, 1.0)
np.power(f, gamma, out=f) # in-place: avoids allocation
np.multiply(f, (white_point - black_point), out=f)
np.add(f, black_point, out=f)
return np.clip(f, 0, 255).astype(np.uint8)
```
---
## Masking / Stencil System
Masks are float32 arrays `(rows, cols)` or `(VH, VW)` in range [0, 1]. They control where effects are visible: 1.0 = fully visible, 0.0 = fully hidden. Use masks to create figure/ground relationships, focal points, and shaped reveals.
### Shape Masks
```python
def mask_circle(g, cx_frac=0.5, cy_frac=0.5, radius=0.3, feather=0.05):
"""Circular mask centered at (cx_frac, cy_frac) in normalized coords.
feather: width of soft edge (0 = hard cutoff)."""
asp = g.cw / g.ch if hasattr(g, 'cw') else 1.0
dx = (g.cc / g.cols - cx_frac)
dy = (g.rr / g.rows - cy_frac) * asp
d = np.sqrt(dx**2 + dy**2)
if feather > 0:
return np.clip(1.0 - (d - radius) / feather, 0, 1)
return (d <= radius).astype(np.float32)
def mask_rect(g, x0=0.2, y0=0.2, x1=0.8, y1=0.8, feather=0.03):
"""Rectangular mask. Coordinates in [0,1] normalized."""
dx = np.maximum(x0 - g.cc / g.cols, g.cc / g.cols - x1)
dy = np.maximum(y0 - g.rr / g.rows, g.rr / g.rows - y1)
d = np.maximum(dx, dy)
if feather > 0:
return np.clip(1.0 - d / feather, 0, 1)
return (d <= 0).astype(np.float32)
def mask_ring(g, cx_frac=0.5, cy_frac=0.5, inner_r=0.15, outer_r=0.35,
feather=0.03):
"""Ring / annulus mask."""
inner = mask_circle(g, cx_frac, cy_frac, inner_r, feather)
outer = mask_circle(g, cx_frac, cy_frac, outer_r, feather)
return outer - inner
def mask_gradient_h(g, start=0.0, end=1.0):
"""Left-to-right gradient mask."""
return np.clip((g.cc / g.cols - start) / (end - start + 1e-10), 0, 1).astype(np.float32)
def mask_gradient_v(g, start=0.0, end=1.0):
"""Top-to-bottom gradient mask."""
return np.clip((g.rr / g.rows - start) / (end - start + 1e-10), 0, 1).astype(np.float32)
def mask_gradient_radial(g, cx_frac=0.5, cy_frac=0.5, inner=0.0, outer=0.5):
"""Radial gradient mask — bright at center, dark at edges."""
d = np.sqrt((g.cc / g.cols - cx_frac)**2 + (g.rr / g.rows - cy_frac)**2)
return np.clip(1.0 - (d - inner) / (outer - inner + 1e-10), 0, 1)
```
### Value Field as Mask
Use any `vf_*` function's output as a spatial mask:
```python
def mask_from_vf(vf_result, threshold=0.5, feather=0.1):
"""Convert a value field to a mask by thresholding.
feather: smooth edge width around threshold."""
if feather > 0:
return np.clip((vf_result - threshold + feather) / (2 * feather), 0, 1)
return (vf_result > threshold).astype(np.float32)
def mask_select(mask, vf_a, vf_b):
"""Spatial conditional: show vf_a where mask is 1, vf_b where mask is 0.
mask: float32 [0,1] array. Intermediate values blend."""
return vf_a * mask + vf_b * (1 - mask)
```
### Text Stencil
Render text to a mask. Effects are visible only through the letterforms:
```python
def mask_text(grid, text, row_frac=0.5, font=None, font_size=None):
"""Render text string as a float32 mask [0,1] at grid resolution.
Characters = 1.0, background = 0.0.
row_frac: vertical position as fraction of grid height.
font: PIL ImageFont (defaults to grid's font if None).
font_size: override font size for the mask text (for larger stencil text).
"""
from PIL import Image, ImageDraw, ImageFont
f = font or grid.font
if font_size:
    f = ImageFont.truetype(f.path, font_size)
# Render text to image at pixel resolution, then downsample to grid
img = Image.new("L", (grid.cols * grid.cw, grid.ch), 0)
draw = ImageDraw.Draw(img)
bbox = draw.textbbox((0, 0), text, font=f)
tw = bbox[2] - bbox[0]
x = (grid.cols * grid.cw - tw) // 2
draw.text((x, 0), text, fill=255, font=f)
row_mask = np.array(img, dtype=np.float32) / 255.0
# Place in full grid mask
mask = np.zeros((grid.rows, grid.cols), dtype=np.float32)
target_row = int(grid.rows * row_frac)
# Downsample rendered text to grid cells
for c in range(grid.cols):
px = c * grid.cw
if px + grid.cw <= row_mask.shape[1]:
cell = row_mask[:, px:px + grid.cw]
if cell.mean() > 0.1:
mask[target_row, c] = cell.mean()
return mask
def mask_text_block(grid, lines, start_row_frac=0.3, font=None):
"""Multi-line text stencil. Returns full grid mask."""
mask = np.zeros((grid.rows, grid.cols), dtype=np.float32)
for i, line in enumerate(lines):
row_frac = start_row_frac + i / grid.rows
line_mask = mask_text(grid, line, row_frac, font)
mask = np.maximum(mask, line_mask)
return mask
```
### Animated Masks
Masks that change over time for reveals, wipes, and morphing:
```python
def mask_iris(g, t, t_start, t_end, cx_frac=0.5, cy_frac=0.5,
max_radius=0.7, ease_fn=None):
"""Iris open/close: circle that grows from 0 to max_radius.
ease_fn: easing function (default: ease_in_out_cubic from effects.md)."""
if ease_fn is None:
ease_fn = lambda x: x * x * (3 - 2 * x) # smoothstep fallback
progress = np.clip((t - t_start) / (t_end - t_start), 0, 1)
radius = ease_fn(progress) * max_radius
return mask_circle(g, cx_frac, cy_frac, radius, feather=0.03)
def mask_wipe_h(g, t, t_start, t_end, direction="right"):
"""Horizontal wipe reveal."""
progress = np.clip((t - t_start) / (t_end - t_start), 0, 1)
if direction == "left":
progress = 1 - progress
return mask_gradient_h(g, start=progress - 0.05, end=progress + 0.05)
def mask_wipe_v(g, t, t_start, t_end, direction="down"):
"""Vertical wipe reveal."""
progress = np.clip((t - t_start) / (t_end - t_start), 0, 1)
if direction == "up":
progress = 1 - progress
return mask_gradient_v(g, start=progress - 0.05, end=progress + 0.05)
def mask_dissolve(g, t, t_start, t_end, seed=42):
"""Random pixel dissolve — noise threshold sweeps from 0 to 1."""
progress = np.clip((t - t_start) / (t_end - t_start), 0, 1)
rng = np.random.RandomState(seed)
noise = rng.random((g.rows, g.cols)).astype(np.float32)
return (noise < progress).astype(np.float32)
```
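Because the threshold sweeps uniform noise, the visible fraction of `mask_dissolve` tracks `progress` almost exactly (P(noise < p) = p). A quick standalone check:

```python
import numpy as np

# Same noise construction as mask_dissolve, default seed.
rng = np.random.RandomState(42)
noise = rng.random((90, 160)).astype(np.float32)
for progress in (0.25, 0.5, 0.75):
    visible = (noise < progress).mean()
    # 14,400 cells: sampling error on the mean is well under 3%.
    assert abs(visible - progress) < 0.03
```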
### Mask Boolean Operations
```python
def mask_union(a, b):
"""OR — visible where either mask is active."""
return np.maximum(a, b)
def mask_intersect(a, b):
"""AND — visible only where both masks are active."""
return np.minimum(a, b)
def mask_subtract(a, b):
"""A minus B — visible where A is active but B is not."""
return np.clip(a - b, 0, 1)
def mask_invert(m):
"""NOT — flip mask."""
return 1.0 - m
```
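For binary masks these operations obey the usual set identities — e.g. subtract is intersect-with-inverse, which is why a ring is simply outer circle minus inner circle. A standalone check (operations re-declared inline):

```python
import numpy as np

# The four ops above, inlined so the check runs standalone.
mask_union     = np.maximum
mask_intersect = np.minimum
mask_subtract  = lambda a, b: np.clip(a - b, 0, 1)
mask_invert    = lambda m: 1.0 - m

rng = np.random.RandomState(0)
a = (rng.random((20, 40)) > 0.5).astype(np.float32)
b = (rng.random((20, 40)) > 0.5).astype(np.float32)

# For binary masks: A minus B == A AND (NOT B).
assert np.array_equal(mask_subtract(a, b), mask_intersect(a, mask_invert(b)))
```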
### Applying Masks to Canvases
```python
def apply_mask_canvas(canvas, mask, bg_canvas=None):
"""Apply a grid-resolution mask to a pixel canvas.
Expands mask from (rows, cols) to (VH, VW) via nearest-neighbor.
canvas: uint8 (VH, VW, 3)
mask: float32 (rows, cols) [0,1]
bg_canvas: what shows through where mask=0. None = black.
"""
# Expand mask to pixel resolution
mask_px = np.repeat(np.repeat(mask, canvas.shape[0] // mask.shape[0] + 1, axis=0),
canvas.shape[1] // mask.shape[1] + 1, axis=1)
mask_px = mask_px[:canvas.shape[0], :canvas.shape[1]]
if bg_canvas is not None:
return np.clip(canvas * mask_px[:, :, None] +
bg_canvas * (1 - mask_px[:, :, None]), 0, 255).astype(np.uint8)
return np.clip(canvas * mask_px[:, :, None], 0, 255).astype(np.uint8)
def apply_mask_vf(vf_a, vf_b, mask):
"""Apply mask at value-field level — blend two value fields spatially.
All arrays are (rows, cols) float32."""
return vf_a * mask + vf_b * (1 - mask)
```
---
## PixelBlendStack
Higher-level wrapper for multi-layer compositing:
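The implementation is elided from this diff. As a placeholder, here is a minimal sketch of what such a wrapper could look like — the class name comes from the heading, but every method name and the inlined blend modes are illustrative assumptions, not the actual API:

```python
import numpy as np

class PixelBlendStack:
    """Accumulate layers, composite once. Sketch only — the real class
    may differ; blend modes are inlined here rather than shared with
    BLEND_MODES."""

    _MODES = {
        "normal": lambda a, b: b,
        "screen": lambda a, b: 1 - (1 - a) * (1 - b),
        "add":    lambda a, b: np.clip(a + b, 0, 1),
    }

    def __init__(self):
        self.layers = []   # (canvas_float, mode, opacity)

    def push(self, canvas, mode="normal", opacity=1.0):
        self.layers.append((canvas.astype(np.float32) / 255.0, mode, opacity))
        return self   # chainable

    def composite(self):
        result = self.layers[0][0]
        for canvas, mode, opacity in self.layers[1:]:
            blended = self._MODES[mode](result, canvas)
            result = result * (1 - opacity) + blended * opacity
        return np.clip(result * 255, 0, 255).astype(np.uint8)
```

Usage would read as a chain: `PixelBlendStack().push(bg).push(fg, "screen", 0.7).composite()`.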
# Scene Design Patterns
**Cross-references:**
- Scene protocol, SCENES table: `scenes.md`
- Blend modes, multi-grid composition, tonemap: `composition.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- Shader pipeline, feedback buffer: `shaders.md`
- Complete scene examples: `examples.md`
Higher-order patterns for composing scenes that feel intentional rather than random. These patterns use the existing building blocks (value fields, blend modes, shaders, feedback) but organize them with compositional intent.
## Layer Hierarchy
Every scene should have clear visual layers with distinct roles:
| Layer | Grid | Brightness | Purpose |
|-------|------|-----------|---------|
| **Background** | xs or sm (dense) | 0.1–0.25 | Atmosphere, texture. Never competes with content. |
| **Content** | md (balanced) | 0.4–0.8 | The main visual idea. Carries the scene's concept. |
| **Accent** | lg or sm (sparse) | 0.5–1.0 (sparse coverage) | Highlights, punctuation, sparse bright points. |
The background sets mood. The content layer is what the scene *is about*. The accent adds visual interest without overwhelming.
```python
def fx_example(r, f, t, S):
local = t
progress = min(local / 5.0, 1.0)
g_bg = r.get_grid("sm")
g_main = r.get_grid("md")
g_accent = r.get_grid("lg")
# --- Background: dim atmosphere ---
bg_val = vf_smooth_noise(g_bg, f, t * 0.3, S, octaves=2, bri=0.15)
# ... render bg to canvas
# --- Content: the main visual idea ---
content_val = vf_spiral(g_main, f, t, S, n_arms=4, tightness=2.0 + progress * 3.0)
# ... render content on top of canvas
# --- Accent: sparse highlights ---
accent_val = vf_noise_static(g_accent, f, t, S, density=0.05)
# ... render accent on top
return canvas
```
## Directional Parameter Arcs
Parameters should *go somewhere* over the scene's duration — not oscillate aimlessly with `sin(t * N)`.
**Bad:** `twist = 3.0 + 2.0 * math.sin(t * 0.6)` — wobbles back and forth, feels aimless.
**Good:** `twist = 2.0 + progress * 5.0` — starts gentle, ends intense. The scene *builds*.
Use `progress = min(local / duration, 1.0)` (0→1 over the scene) to drive directional change:
| Pattern | Formula | Feel |
|---------|---------|------|
| Linear ramp | `progress * range` | Steady buildup |
| Ease-out | `1 - (1 - progress) ** 2` | Fast start, gentle finish |
| Ease-in | `progress ** 2` | Slow start, accelerating |
| Step reveal | `np.clip((progress - 0.5) / 0.25, 0, 1)` | Nothing until 50%, then fades in |
| Build + plateau | `min(1.0, progress * 1.5)` | Reaches full at 67%, holds |
Oscillation is fine for *secondary* parameters (saturation shimmer, hue drift). But the *defining* parameter of the scene should have a direction.
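The table's formulas, collected as helpers (names are illustrative):

```python
import numpy as np

def arc_linear(p):        return p
def arc_ease_out(p):      return 1 - (1 - p) ** 2
def arc_ease_in(p):       return p ** 2
def arc_step_reveal(p):   return float(np.clip((p - 0.5) / 0.25, 0, 1))
def arc_build_plateau(p): return min(1.0, p * 1.5)

# All map progress in [0, 1] to [0, 1]; drive the scene's defining
# parameter with one of them, e.g. twist = 2.0 + arc_ease_out(progress) * 5.0
assert arc_step_reveal(0.4) == 0.0 and arc_step_reveal(0.75) == 1.0
assert arc_build_plateau(0.67) == 1.0 and arc_build_plateau(0.9) == 1.0
```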
### Examples of Directional Arcs
| Scene concept | Parameter | Arc |
|--------------|-----------|-----|
| Emergence | Ring radius | 0 → max (ease-out) |
| Shatter | Voronoi cell count | 8 → 38 (linear) |
| Descent | Tunnel speed | 2.0 → 10.0 (linear) |
| Mandala | Shape complexity | ring → +polygon → +star → +rosette (step reveals) |
| Crescendo | Layer count | 1 → 7 (staggered entry) |
| Entropy | Geometry visibility | 1.0 → 0.0 (consumed) |
## Scene Concepts
Each scene should be built around a *visual idea*, not an effect name.
**Bad:** "fx_plasma_cascade" — named after the effect. No concept.
**Good:** "fx_emergence" — a point of light expands into a field. The name tells you *what happens*.
Good scene concepts have:
1. A **visual metaphor** (emergence, descent, collision, entropy)
2. A **directional arc** (things change from A to B, not oscillate)
3. **Motivated layer choices** (each layer serves the concept)
4. **Motivated feedback** (transform direction matches the metaphor)
| Concept | Metaphor | Feedback transform | Why |
|---------|----------|-------------------|-----|
| Emergence | Birth, expansion | zoom-out | Past frames expand outward |
| Descent | Falling, acceleration | zoom-in | Past frames rush toward center |
| Inferno | Rising fire | shift-up | Past frames rise with the flames |
| Entropy | Decay, dissolution | none | Clean, no persistence — things disappear |
| Crescendo | Accumulation | zoom + hue_shift | Everything compounds and shifts |
## Compositional Techniques
### Counter-Rotating Dual Systems
Two instances of the same effect rotating in opposite directions create visual interference:
```python
# Primary spiral (clockwise)
s1_val = vf_spiral(g_main, f, t * 1.5, S, n_arms=n_arms_1, tightness=tightness_1)
# Counter-rotating spiral (counter-clockwise via negative time)
s2_val = vf_spiral(g_accent, f, -t * 1.2, S, n_arms=n_arms_2, tightness=tightness_2)
# Screen blend creates bright interference at crossing points
canvas = blend_canvas(canvas_with_s1, c2, "screen", 0.7)
```
Works with spirals, vortexes, rings. The counter-rotation creates constantly shifting interference patterns.
### Wave Collision
Two wave fronts converging from opposite sides, meeting at a collision point:
```python
collision_phase = abs(progress - 0.5) * 2 # 1→0→1 (0 at collision)
# Wave A approaches from left
offset_a = (1 - progress) * g.cols * 0.4
wave_a = np.sin((g.cc + offset_a) * 0.08 + t * 2) * 0.5 + 0.5
# Wave B approaches from right
offset_b = -(1 - progress) * g.cols * 0.4
wave_b = np.sin((g.cc + offset_b) * 0.08 - t * 2) * 0.5 + 0.5
# Interference peaks at collision
combined = wave_a * 0.5 + wave_b * 0.5 + np.abs(wave_a - wave_b) * (1 - collision_phase) * 0.5
```
### Progressive Fragmentation
Voronoi with cell count increasing over time — visual shattering:
```python
n_pts = int(8 + progress * 30) # 8 cells → 38 cells
# Pre-generate enough points, slice to n_pts
px = base_x[:n_pts] + np.sin(t * 0.3 + np.arange(n_pts) * 0.7) * (3 + progress * 3)
```
The edge glow width can also increase with progress to emphasize the cracks.
### Entropy / Consumption
A clean geometric pattern being overtaken by an organic process:
```python
# Geometry fades out
geo_val = clean_pattern * max(0.05, 1.0 - progress * 0.9)
# Organic process grows in
rd_val = vf_reaction_diffusion(g, f, t, S) * min(1.0, progress * 1.5)
# Render geometry first, organic on top — organic consumes geometry
```
### Staggered Layer Entry (Crescendo)
Layers enter one at a time, building to overwhelming density:
```python
def layer_strength(enter_t, ramp=1.5):
"""0.0 until enter_t, ramps to 1.0 over ramp seconds."""
return max(0.0, min(1.0, (local - enter_t) / ramp))
# Layer 1: always present
s1 = layer_strength(0.0)
# Layer 2: enters at 2s
s2 = layer_strength(2.0)
# Layer 3: enters at 4s
s3 = layer_strength(4.0)
# ... etc
# Each layer uses a different effect, grid, palette, and blend mode
# Screen blend between layers so they accumulate light
```
For a 15-second crescendo, 7 layers entering every 2 seconds works well. Use different blend modes (screen for most, add for energy, colordodge for the final wash).
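The snippet above closes over `local`; a standalone version that takes it as an argument, with the 7-layer/2-second stagger worked through:

```python
def layer_strength(local, enter_t, ramp=1.5):
    """0.0 until enter_t, then ramps linearly to 1.0 over ramp seconds."""
    return max(0.0, min(1.0, (local - enter_t) / ramp))

# 7 layers entering every 2 s over a 15 s crescendo:
enters = [i * 2.0 for i in range(7)]
at_5s = [layer_strength(5.0, e) for e in enters]
# Layers 0-1 are fully in, layer 2 is mid-ramp, layer 3+ not yet entered.
assert at_5s[0] == 1.0 and 0.0 < at_5s[2] < 1.0 and at_5s[3] == 0.0
```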
## Scene Ordering
For a multi-scene reel or video:
- **Vary mood between adjacent scenes** — don't put two calm scenes next to each other
- **Randomize order** rather than grouping by type — prevents "effect demo" feel
- **End on the strongest scene** — crescendo or something with a clear payoff
- **Open with energy** — grab attention in the first 2 seconds
# Scene Examples
**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs, particles): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer, ShaderChain: `shaders.md`
- Input sources (audio features, video features): `inputs.md`
- Performance tuning: `optimization.md`
- Common bugs: `troubleshooting.md`
Copy-paste-ready scene functions at increasing complexity. Each is a complete, working v2 scene function that returns a pixel canvas. See `scenes.md` for the scene protocol and `composition.md` for blend modes and tonemap.
---
## Minimal — Single Grid, Single Effect
### Breathing Plasma
One grid, one value field, one hue field. The simplest possible scene.
```python
def fx_breathing_plasma(r, f, t, S):
"""Plasma field with time-cycling hue. Audio modulates brightness."""
canvas = _render_vf(r, "md",
lambda g, f, t, S: vf_plasma(g, f, t, S) * 1.3,
hf_time_cycle(0.08), PAL_DENSE, f, t, S, sat=0.8)
return canvas
```
### Reaction-Diffusion Coral
Single grid, simulation-based field. Evolves organically over time.
```python
def fx_coral(r, f, t, S):
"""Gray-Scott reaction-diffusion — coral branching pattern.
Slow-evolving, organic. Best for ambient/chill sections."""
canvas = _render_vf(r, "sm",
lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
feed=0.037, kill=0.060, steps_per_frame=6, init_mode="center"),
hf_distance(0.55, 0.015), PAL_DOTS, f, t, S, sat=0.7)
return canvas
```
### SDF Geometry
Geometric shapes from SDFs. Clean, precise, graphic.
```python
def fx_sdf_rings(r, f, t, S):
"""Concentric SDF rings with smooth pulsing."""
def val_fn(g, f, t, S):
d1 = sdf_ring(g, radius=0.15 + f.get("bass", 0.3) * 0.05, thickness=0.015)
d2 = sdf_ring(g, radius=0.25 + f.get("mid", 0.3) * 0.05, thickness=0.012)
d3 = sdf_ring(g, radius=0.35 + f.get("hi", 0.3) * 0.04, thickness=0.010)
combined = sdf_smooth_union(sdf_smooth_union(d1, d2, 0.05), d3, 0.05)
return sdf_glow(combined, falloff=0.08) * (0.5 + f.get("rms", 0.3) * 0.8)
canvas = _render_vf(r, "md", val_fn, hf_angle(0.0), PAL_STARS, f, t, S, sat=0.85)
return canvas
```
---
## Standard — Two Grids + Blend
### Tunnel Through Noise
Two grids at different densities, screen blended. The fine noise texture shows through the coarser tunnel characters.
```python
def fx_tunnel_noise(r, f, t, S):
"""Tunnel depth on md grid + fBM noise on sm grid, screen blended."""
canvas_a = _render_vf(r, "md",
lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=4.0, complexity=8) * 1.2,
hf_distance(0.5, 0.02), PAL_BLOCKS, f, t, S, sat=0.7)
canvas_b = _render_vf(r, "sm",
lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=4, freq=0.05, speed=0.15) * 1.3,
hf_time_cycle(0.06), PAL_RUNE, f, t, S, sat=0.6)
return blend_canvas(canvas_a, canvas_b, "screen", 0.7)
```
### Voronoi Cells + Spiral Overlay
Voronoi cell edges with a spiral arm pattern overlaid.
```python
def fx_voronoi_spiral(r, f, t, S):
"""Voronoi edge detection on md + logarithmic spiral on lg."""
canvas_a = _render_vf(r, "md",
lambda g, f, t, S: vf_voronoi(g, f, t, S,
n_cells=15, mode="edge", edge_width=2.0, speed=0.4),
hf_angle(0.2), PAL_CIRCUIT, f, t, S, sat=0.75)
canvas_b = _render_vf(r, "lg",
lambda g, f, t, S: vf_spiral(g, f, t, S, n_arms=4, tightness=3.0) * 1.2,
hf_distance(0.1, 0.03), PAL_BLOCKS, f, t, S, sat=0.9)
return blend_canvas(canvas_a, canvas_b, "exclusion", 0.6)
```
### Domain-Warped fBM
Two layers of the same fBM, one domain-warped, difference-blended for psychedelic organic texture.
```python
def fx_organic_warp(r, f, t, S):
"""Clean fBM vs domain-warped fBM, difference blended."""
canvas_a = _render_vf(r, "sm",
lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04, speed=0.1),
hf_plasma(0.2), PAL_DENSE, f, t, S, sat=0.6)
canvas_b = _render_vf(r, "md",
lambda g, f, t, S: vf_domain_warp(g, f, t, S,
warp_strength=20.0, freq=0.05, speed=0.15),
hf_time_cycle(0.05), PAL_BRAILLE, f, t, S, sat=0.7)
return blend_canvas(canvas_a, canvas_b, "difference", 0.7)
```
---
## Complex — Three Grids + Conditional + Feedback
### Psychedelic Cathedral
Three-grid composition with beat-triggered kaleidoscope and feedback zoom tunnel. The most visually complex pattern.
```python
def fx_cathedral(r, f, t, S):
"""Three-layer cathedral: interference + rings + noise, kaleidoscope on beat,
feedback zoom tunnel."""
# Layer 1: interference pattern on sm grid
canvas_a = _render_vf(r, "sm",
lambda g, f, t, S: vf_interference(g, f, t, S, n_waves=7) * 1.3,
hf_angle(0.0), PAL_MATH, f, t, S, sat=0.8)
# Layer 2: pulsing rings on md grid
canvas_b = _render_vf(r, "md",
lambda g, f, t, S: vf_rings(g, f, t, S, n_base=10, spacing_base=3) * 1.4,
hf_distance(0.3, 0.02), PAL_STARS, f, t, S, sat=0.9)
# Layer 3: temporal noise on lg grid (slow morph)
canvas_c = _render_vf(r, "lg",
lambda g, f, t, S: vf_temporal_noise(g, f, t, S,
freq=0.04, t_freq=0.2, octaves=3),
hf_time_cycle(0.12), PAL_BLOCKS, f, t, S, sat=0.7)
# Blend: A screen B, then difference with C
result = blend_canvas(canvas_a, canvas_b, "screen", 0.8)
result = blend_canvas(result, canvas_c, "difference", 0.5)
# Beat-triggered kaleidoscope
if f.get("bdecay", 0) > 0.3:
folds = 6 if f.get("sub_r", 0.3) > 0.4 else 8
result = sh_kaleidoscope(result.copy(), folds=folds)
return result
# Scene table entry with feedback:
# {"start": 30.0, "end": 50.0, "name": "cathedral", "fx": fx_cathedral,
# "gamma": 0.65, "shaders": [("bloom", {"thr": 110}), ("chromatic", {"amt": 4}),
# ("vignette", {"s": 0.2}), ("grain", {"amt": 8})],
# "feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
# "transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}}
```
### Masked Reaction-Diffusion with Attractor Overlay
Reaction-diffusion visible only through an animated iris mask, with a strange attractor density field underneath.
```python
def fx_masked_life(r, f, t, S):
"""Attractor base + reaction-diffusion visible through iris mask + particles."""
g_sm = r.get_grid("sm")
g_md = r.get_grid("md")
# Layer 1: strange attractor density field (background)
canvas_bg = _render_vf(r, "sm",
lambda g, f, t, S: vf_strange_attractor(g, f, t, S,
attractor="clifford", n_points=30000),
hf_time_cycle(0.04), PAL_DOTS, f, t, S, sat=0.5)
# Layer 2: reaction-diffusion (foreground, will be masked)
canvas_rd = _render_vf(r, "md",
lambda g, f, t, S: vf_reaction_diffusion(g, f, t, S,
feed=0.046, kill=0.063, steps_per_frame=4, init_mode="ring"),
hf_angle(0.15), PAL_HALFFILL, f, t, S, sat=0.85)
# Animated iris mask — opens over first 5 seconds of scene
scene_start = S.get("_scene_start", t)
if "_scene_start" not in S:
S["_scene_start"] = t
mask = mask_iris(g_md, t, scene_start, scene_start + 5.0,
max_radius=0.6)
canvas_rd = apply_mask_canvas(canvas_rd, mask, bg_canvas=canvas_bg)
# Layer 3: flow-field particles following the R-D gradient
rd_field = vf_reaction_diffusion(g_sm, f, t, S,
feed=0.046, kill=0.063, steps_per_frame=0) # read without stepping
ch_p, co_p = update_flow_particles(S, g_sm, f, rd_field,
n=300, speed=0.8, char_set=list("·•◦∘°"))
canvas_p = g_sm.render(ch_p, co_p)
result = blend_canvas(canvas_rd, canvas_p, "add", 0.7)
return result
```
### Morphing Field Sequence with Eased Keyframes
Demonstrates temporal coherence: smooth morphing between effects with keyframed parameters.
```python
def fx_morphing_journey(r, f, t, S):
"""Morphs through 4 value fields over 20 seconds with eased transitions.
Parameters (twist, arm count) also keyframed."""
# Keyframed twist parameter
twist = keyframe(t, [(0, 1.0), (5, 5.0), (10, 2.0), (15, 8.0), (20, 1.0)],
ease_fn=ease_in_out_cubic, loop=True)
# Sequence of value fields with 2s crossfade
fields = [
lambda g, f, t, S: vf_plasma(g, f, t, S),
lambda g, f, t, S: vf_vortex(g, f, t, S, twist=twist),
lambda g, f, t, S: vf_fbm(g, f, t, S, octaves=5, freq=0.04),
lambda g, f, t, S: vf_domain_warp(g, f, t, S, warp_strength=15),
]
durations = [5.0, 5.0, 5.0, 5.0]
val_fn = lambda g, f, t, S: vf_sequence(g, f, t, S, fields, durations,
crossfade=2.0)
# Render with slowly rotating hue
canvas = _render_vf(r, "md", val_fn, hf_time_cycle(0.06),
PAL_DENSE, f, t, S, sat=0.8)
# Second layer: tiled version of same sequence at smaller grid
tiled_fn = lambda g, f, t, S: vf_sequence(
make_tgrid(g, *uv_tile(g, 3, 3, mirror=True)),
f, t, S, fields, durations, crossfade=2.0)
canvas_b = _render_vf(r, "sm", tiled_fn, hf_angle(0.1),
PAL_RUNE, f, t, S, sat=0.6)
return blend_canvas(canvas, canvas_b, "screen", 0.5)
```
---
## Specialized — Unique State Patterns
### Game of Life with Ghost Trails
Cellular automaton with analog fade trails. Beat injects random cells.
```python
def fx_life(r, f, t, S):
"""Conway's Game of Life with fading ghost trails.
Beat events inject random live cells for disruption."""
canvas = _render_vf(r, "sm",
lambda g, f, t, S: vf_game_of_life(g, f, t, S,
rule="life", steps_per_frame=1, fade=0.92, density=0.25),
hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.8)
# Overlay: coral automaton on lg grid for chunky texture
canvas_b = _render_vf(r, "lg",
lambda g, f, t, S: vf_game_of_life(g, f, t, S,
rule="coral", steps_per_frame=1, fade=0.85, density=0.15, seed=99),
hf_time_cycle(0.1), PAL_HATCH, f, t, S, sat=0.6)
return blend_canvas(canvas, canvas_b, "screen", 0.5)
```
### Boids Flock Over Voronoi
Emergent swarm movement over a cellular background.
```python
def fx_boid_swarm(r, f, t, S):
"""Flocking boids over animated voronoi cells."""
# Background: voronoi cells
canvas_bg = _render_vf(r, "md",
lambda g, f, t, S: vf_voronoi(g, f, t, S,
n_cells=20, mode="distance", speed=0.2),
hf_distance(0.4, 0.02), PAL_CIRCUIT, f, t, S, sat=0.5)
# Foreground: boids
g = r.get_grid("md")
ch_b, co_b = update_boids(S, g, f, n_boids=150, perception=6.0,
max_speed=1.5, char_set=list("▸▹►▻→⟶"))
canvas_boids = g.render(ch_b, co_b)
# Trails for the boids
# (boid positions are stored in S["boid_x"], S["boid_y"])
S["px"] = list(S.get("boid_x", []))
S["py"] = list(S.get("boid_y", []))
ch_t, co_t = draw_particle_trails(S, g, max_trail=6, fade=0.6)
canvas_trails = g.render(ch_t, co_t)
result = blend_canvas(canvas_bg, canvas_trails, "add", 0.3)
result = blend_canvas(result, canvas_boids, "add", 0.9)
return result
```
### Fire Rising Through SDF Text Stencil
Fire effect visible only through text letterforms.
```python
def fx_fire_text(r, f, t, S):
"""Fire columns visible through text stencil. Text acts as window."""
g = r.get_grid("lg")
# Full-screen fire (will be masked)
canvas_fire = _render_vf(r, "sm",
lambda g, f, t, S: np.clip(
vf_fbm(g, f, t, S, octaves=4, freq=0.08, speed=0.8) *
(1.0 - g.rr / g.rows) * # fade toward top
(0.6 + f.get("bass", 0.3) * 0.8), 0, 1),
hf_fixed(0.05), PAL_BLOCKS, f, t, S, sat=0.9) # fire hue
# Background: dark domain warp
canvas_bg = _render_vf(r, "md",
lambda g, f, t, S: vf_domain_warp(g, f, t, S,
warp_strength=8, freq=0.03, speed=0.05) * 0.3,
hf_fixed(0.6), PAL_DENSE, f, t, S, sat=0.4)
# Text stencil mask
mask = mask_text(g, "FIRE", row_frac=0.45)
# Expand vertically for multi-row coverage
for offset in range(-2, 3):
shifted = mask_text(g, "FIRE", row_frac=0.45 + offset / g.rows)
mask = mask_union(mask, shifted)
canvas_masked = apply_mask_canvas(canvas_fire, mask, bg_canvas=canvas_bg)
return canvas_masked
```
### Portrait Mode: Vertical Rain + Quote
Optimized for 9:16. Uses vertical space for long rain trails and stacked text.
```python
def fx_portrait_rain_quote(r, f, t, S):
"""Portrait-optimized: matrix rain (long vertical trails) with stacked quote.
Designed for 1080x1920 (9:16)."""
g = r.get_grid("md") # ~112x100 in portrait
# Matrix rain — long trails benefit from portrait's extra rows
ch, co, S = eff_matrix_rain(g, f, t, S,
hue=0.33, bri=0.6, pal=PAL_KATA, speed_base=0.4, speed_beat=2.5)
canvas_rain = g.render(ch, co)
# Tunnel depth underneath for texture
canvas_tunnel = _render_vf(r, "sm",
lambda g, f, t, S: vf_tunnel(g, f, t, S, speed=3.0, complexity=6) * 0.8,
hf_fixed(0.33), PAL_BLOCKS, f, t, S, sat=0.5)
result = blend_canvas(canvas_tunnel, canvas_rain, "screen", 0.8)
# Quote text — portrait layout: short lines, many of them
g_text = r.get_grid("lg") # ~90x80 in portrait
quote_lines = layout_text_portrait(
"The code is the art and the art is the code",
max_chars_per_line=20)
# Center vertically
block_start = (g_text.rows - len(quote_lines)) // 2
ch_t = np.full((g_text.rows, g_text.cols), " ", dtype="U1")
co_t = np.zeros((g_text.rows, g_text.cols, 3), dtype=np.uint8)
total_chars = sum(len(l) for l in quote_lines)
progress = min(1.0, (t - S.get("_scene_start", t)) / 3.0)
if "_scene_start" not in S: S["_scene_start"] = t
render_typewriter(ch_t, co_t, quote_lines, block_start, g_text.cols,
progress, total_chars, (200, 255, 220), t)
canvas_text = g_text.render(ch_t, co_t)
result = blend_canvas(result, canvas_text, "add", 0.9)
return result
```
---
## Scene Table Template
Wire scenes into a complete video:
```python
SCENES = [
{"start": 0.0, "end": 5.0, "name": "coral",
"fx": fx_coral, "grid": "sm", "gamma": 0.70,
"shaders": [("bloom", {"thr": 110}), ("vignette", {"s": 0.2})],
"feedback": {"decay": 0.8, "blend": "screen", "opacity": 0.3,
"transform": "zoom", "transform_amt": 0.01}},
{"start": 5.0, "end": 15.0, "name": "tunnel_noise",
"fx": fx_tunnel_noise, "grid": "md", "gamma": 0.75,
"shaders": [("chromatic", {"amt": 3}), ("bloom", {"thr": 120}),
("scanlines", {"intensity": 0.06}), ("grain", {"amt": 8})],
"feedback": None},
{"start": 15.0, "end": 35.0, "name": "cathedral",
"fx": fx_cathedral, "grid": "sm", "gamma": 0.65,
"shaders": [("bloom", {"thr": 100}), ("chromatic", {"amt": 5}),
("color_wobble", {"amt": 0.2}), ("vignette", {"s": 0.18})],
"feedback": {"decay": 0.75, "blend": "screen", "opacity": 0.35,
"transform": "zoom", "transform_amt": 0.012, "hue_shift": 0.015}},
{"start": 35.0, "end": 50.0, "name": "morphing",
"fx": fx_morphing_journey, "grid": "md", "gamma": 0.70,
"shaders": [("bloom", {"thr": 110}), ("grain", {"amt": 6})],
"feedback": {"decay": 0.7, "blend": "screen", "opacity": 0.25,
"transform": "rotate_cw", "transform_amt": 0.003}},
]
```
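A quick sanity check catches gaps and overlaps in the table before committing to a long render. This is a sketch, not part of the pipeline; `validate_scenes` and its signature are illustrative:

```python
def validate_scenes(scenes, total_duration, eps=1e-6):
    """Assert the SCENES table is contiguous, non-overlapping,
    and covers [0, total_duration]."""
    ordered = sorted(scenes, key=lambda s: s["start"])
    assert ordered[0]["start"] <= eps, "first scene must start at t=0"
    for a, b in zip(ordered, ordered[1:]):
        assert abs(a["end"] - b["start"]) <= eps, (
            f"gap or overlap between {a['name']!r} and {b['name']!r}")
    assert abs(ordered[-1]["end"] - total_duration) <= eps, (
        "last scene must end at the track duration")
```

Run it once at startup, before spawning workers; a failed assert costs seconds instead of a wasted render.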

---
# Input Sources
**Cross-references:**
- Grid system, resolution presets: `architecture.md`
- Effect building blocks (audio-reactive modulation): `effects.md`
- Scene protocol, SCENES table (feature routing): `scenes.md`
- Shader pipeline, output encoding: `shaders.md`
- Performance tuning (audio chunking, WAV caching): `optimization.md`
- Common bugs (sample rate, dtype, silence handling): `troubleshooting.md`
- Complete scene examples with feature usage: `examples.md`
## Audio Analysis
### Loading
For narrated videos (testimonials, quotes, storytelling), generate speech audio:
### ElevenLabs Voice Generation
```python
import requests, time, os
def generate_tts(text, voice_id, api_key, output_path, model="eleven_multilingual_v2"):
"""Generate TTS audio via ElevenLabs API. Streams response to disk."""
# Skip if already generated (idempotent re-runs)
if os.path.exists(output_path) and os.path.getsize(output_path) > 1000:
return
url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
data = {
"text": text,
"model_id": model,
"voice_settings": {
"stability": 0.65,
"similarity_boost": 0.80,
"style": 0.15,
"use_speaker_boost": True,
},
}
resp = requests.post(url, json=data, headers=headers, stream=True, timeout=30)
resp.raise_for_status()
with open(output_path, "wb") as f:
for chunk in resp.iter_content(chunk_size=4096):
f.write(chunk)
time.sleep(0.3) # rate limit: avoid 429s on batch generation
```
Voice settings notes:
- `stability` 0.65 gives natural variation without drift. Lower (0.3-0.5) for more expressive reads, higher (0.7-0.9) for monotone/narration.
- `similarity_boost` 0.80 keeps it close to the voice profile. Lower for more generic sound.
- `style` 0.15 adds slight stylistic variation. Keep low (0-0.2) for straightforward reads.
- `use_speaker_boost` True improves clarity at the cost of slightly more processing time.
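The notes above can be collapsed into named presets; the preset names and values here are illustrative defaults derived from those notes, not ElevenLabs API constants:

```python
# Illustrative presets derived from the settings notes above; tune per voice.
VOICE_PRESETS = {
    "narration":  {"stability": 0.80, "similarity_boost": 0.80,
                   "style": 0.05, "use_speaker_boost": True},
    "default":    {"stability": 0.65, "similarity_boost": 0.80,
                   "style": 0.15, "use_speaker_boost": True},
    "expressive": {"stability": 0.40, "similarity_boost": 0.70,
                   "style": 0.20, "use_speaker_boost": True},
}

def voice_settings(read_style="default"):
    """Return a copy of the preset so callers can tweak it per quote."""
    return dict(VOICE_PRESETS.get(read_style, VOICE_PRESETS["default"]))
```

Pass the result as the `voice_settings` field of the request body in `generate_tts`.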
### Voice Pool
ElevenLabs has ~20 built-in voices. Use multiple voices for variety across quotes. Reference pool:
```python
VOICE_POOL = [
("JBFqnCBsd6RMkjVDRZzb", "George"),
("nPczCjzI2devNBz1zQrb", "Brian"),
("pqHfZKP75CvOlQylNhV4", "Bill"),
("CwhRBWXzGAHq8TQ4Fs17", "Roger"),
("cjVigY5qzO86Huf0OWal", "Eric"),
("onwK4e9ZLuTAKqWW03F9", "Daniel"),
("IKne3meq5aSn9XLyUdCD", "Charlie"),
("iP95p4xoKVk53GoZ742B", "Chris"),
("bIHbv24MWmeRgasZH58o", "Will"),
("TX3LPaxmHKxFdv7VOQHJ", "Liam"),
("SAz9YHcvj6GT2YYXdXww", "River"),
("EXAVITQu4vr4xnSDxMaL", "Sarah"),
("Xb7hH8MSUJpSbSDYk0k2", "Alice"),
("pFZP5JQG7iQjIQuC4Bku", "Lily"),
("XrExE9yKIg1WjnnlVkGX", "Matilda"),
("FGY2WhTYpPnrIDTdsKH5", "Laura"),
("SOYHLrjzK2X1ezoPC6cr", "Harry"),
("hpp4J3VqNfWAUOO0d1Us", "Bella"),
("N2lVS1w4EtoT3dr4eOWO", "Callum"),
("cgSgspJ2msm6clMCkdW9", "Jessica"),
("pNInz6obpgDQGcFmaJgB", "Adam"),
]
```
### Voice Assignment
Shuffle deterministically so re-runs produce the same voice mapping:
```python
import random as _rng
def assign_voices(n_quotes, voice_pool, seed=42):
"""Assign a different voice to each quote, cycling if needed."""
r = _rng.Random(seed)
ids = [v[0] for v in voice_pool]
r.shuffle(ids)
return [ids[i % len(ids)] for i in range(n_quotes)]
```
### Pronunciation Control
TTS text must be separate from display text. The display text has line breaks for visual layout; the TTS text is a flat sentence with phonetic fixes.
Common fixes:
- Brand names: spell phonetically ("Nous" -> "Noose", "nginx" -> "engine-x")
- Abbreviations: expand ("API" -> "A P I", "CLI" -> "C L I")
- Technical terms: add phonetic hints
- Punctuation for pacing: periods create pauses, commas create slight pauses
```python
# Display text: line breaks control visual layout
QUOTES = [
("It can do far more than the Claws,\nand you don't need to buy a Mac Mini.\nNous Research has a winner here.", "Brian Roemmele"),
]
# TTS text: flat, phonetically corrected for speech
QUOTES_TTS = [
"It can do far more than the Claws, and you don't need to buy a Mac Mini. Noose Research has a winner here.",
]
# Keep both arrays in sync -- same indices
```
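A startup assertion keeps the two arrays from drifting apart; `check_quote_tables` is a hypothetical helper, not part of the skill:

```python
def check_quote_tables(quotes, quotes_tts):
    """Fail fast when display and TTS arrays fall out of sync."""
    assert len(quotes) == len(quotes_tts), (
        f"QUOTES has {len(quotes)} entries, QUOTES_TTS has {len(quotes_tts)}")
    for i, tts in enumerate(quotes_tts):
        assert tts.strip(), f"empty TTS text at index {i}"
        # TTS text is a flat sentence; line breaks belong only in display text
        assert "\n" not in tts, f"TTS text at index {i} must be flat"
```

Call it right after defining both arrays, before any API requests are made.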
### Audio Pipeline
1. Generate individual TTS clips (MP3 per quote, skipping existing)
2. Convert each to WAV (mono, 22050 Hz) for duration measurement and concatenation
3. Calculate timing: intro pad + speech + gaps + outro pad = target duration
4. Concatenate into single TTS track with silence padding
5. Mix with background music
```python
def build_tts_track(tts_clips, target_duration, intro_pad=5.0, outro_pad=4.0):
"""Concatenate TTS clips with calculated gaps, pad to target duration.
Returns:
timing: list of (start_time, end_time, quote_index) tuples
"""
sr = 22050
# Convert MP3s to WAV for duration and sample-level concatenation
durations = []
for clip in tts_clips:
wav = clip.replace(".mp3", ".wav")
subprocess.run(
["ffmpeg", "-y", "-i", clip, "-ac", "1", "-ar", str(sr),
"-sample_fmt", "s16", wav],
capture_output=True, check=True)
result = subprocess.run(
["ffprobe", "-v", "error", "-show_entries", "format=duration",
"-of", "csv=p=0", wav],
capture_output=True, text=True)
durations.append(float(result.stdout.strip()))
# Calculate gap to fill target duration
total_speech = sum(durations)
n_gaps = len(tts_clips) - 1
remaining = target_duration - total_speech - intro_pad - outro_pad
gap = max(1.0, remaining / max(1, n_gaps))
# Build timing and concatenate samples
timing = []
t = intro_pad
all_audio = [np.zeros(int(sr * intro_pad), dtype=np.int16)]
for i, dur in enumerate(durations):
wav = tts_clips[i].replace(".mp3", ".wav")
with wave.open(wav) as wf:
samples = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
timing.append((t, t + dur, i))
all_audio.append(samples)
t += dur
if i < len(tts_clips) - 1:
all_audio.append(np.zeros(int(sr * gap), dtype=np.int16))
t += gap
all_audio.append(np.zeros(int(sr * outro_pad), dtype=np.int16))
# Pad or trim to exactly target_duration
full = np.concatenate(all_audio)
target_samples = int(sr * target_duration)
if len(full) < target_samples:
full = np.pad(full, (0, target_samples - len(full)))
else:
full = full[:target_samples]
# Write concatenated TTS track
with wave.open("tts_full.wav", "w") as wf:
wf.setnchannels(1)
wf.setsampwidth(2)
wf.setframerate(sr)
wf.writeframes(full.tobytes())
return timing
```
### Audio Mixing
Mix TTS (center) with background music (wide stereo, low volume). The filter chain:
1. TTS mono duplicated to both channels (centered)
2. BGM loudness-normalized, volume reduced to 15%, stereo widened with `extrastereo`
3. Mixed together with dropout transition for smooth endings
```python
def mix_audio(tts_path, bgm_path, output_path, bgm_volume=0.15):
"""Mix TTS centered with BGM panned wide stereo."""
filter_complex = (
# TTS: mono -> stereo center
"[0:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=mono,"
"pan=stereo|c0=c0|c1=c0[tts];"
# BGM: normalize loudness, reduce volume, widen stereo
f"[1:a]aformat=sample_fmts=fltp:sample_rates=44100:channel_layouts=stereo,"
f"loudnorm=I=-16:TP=-1.5:LRA=11,"
f"volume={bgm_volume},"
f"extrastereo=m=2.5[bgm];"
# Mix with smooth dropout at end
"[tts][bgm]amix=inputs=2:duration=longest:dropout_transition=3,"
"aformat=sample_fmts=s16:sample_rates=44100:channel_layouts=stereo[out]"
)
cmd = [
"ffmpeg", "-y",
"-i", tts_path,
"-i", bgm_path,
"-filter_complex", filter_complex,
"-map", "[out]", output_path,
]
subprocess.run(cmd, capture_output=True, check=True)
```
### Per-Quote Visual Style
Cycle through visual presets per quote for variety. Each preset defines a background effect, color scheme, and text color:
```python
QUOTE_STYLES = [
{"hue": 0.08, "accent": 0.7, "bg": "spiral", "text_rgb": (255, 220, 140)}, # warm gold
{"hue": 0.55, "accent": 0.6, "bg": "rings", "text_rgb": (180, 220, 255)}, # cool blue
{"hue": 0.75, "accent": 0.7, "bg": "wave", "text_rgb": (220, 180, 255)}, # purple
{"hue": 0.35, "accent": 0.6, "bg": "matrix", "text_rgb": (140, 255, 180)}, # green
{"hue": 0.95, "accent": 0.8, "bg": "fire", "text_rgb": (255, 180, 160)}, # red/coral
{"hue": 0.12, "accent": 0.5, "bg": "interference", "text_rgb": (255, 240, 200)}, # amber
{"hue": 0.60, "accent": 0.7, "bg": "tunnel", "text_rgb": (160, 210, 255)}, # cyan
{"hue": 0.45, "accent": 0.6, "bg": "aurora", "text_rgb": (180, 255, 220)}, # teal
]
style = QUOTE_STYLES[quote_index % len(QUOTE_STYLES)]
```
This guarantees no two adjacent quotes share the same look, even without randomness.
### Typewriter Text Rendering
Display quote text character-by-character synced to speech progress. Recently revealed characters are brighter, creating a "just typed" glow:
```python
def render_typewriter(ch, co, lines, block_start, cols, progress, total_chars, text_rgb, t):
"""Overlay typewriter text onto character/color grids.
progress: 0.0 (nothing visible) to 1.0 (all text visible)."""
chars_visible = int(total_chars * min(1.0, progress * 1.2)) # slight overshoot for snappy feel
tr, tg, tb = text_rgb
char_count = 0
for li, line in enumerate(lines):
row = block_start + li
col = (cols - len(line)) // 2
for ci, c in enumerate(line):
if char_count < chars_visible:
age = chars_visible - char_count
bri_factor = min(1.0, 0.5 + 0.5 / (1 + age * 0.015)) # newer = brighter
hue_shift = math.sin(char_count * 0.3 + t * 2) * 0.05
stamp(ch, co, c, row, col + ci,
(int(min(255, tr * bri_factor * (1.0 + hue_shift))),
int(min(255, tg * bri_factor)),
int(min(255, tb * bri_factor * (1.0 - hue_shift)))))
char_count += 1
# Blinking cursor at insertion point
if progress < 1.0 and int(t * 3) % 2 == 0:
# Find cursor position (char_count == chars_visible)
cc = 0
for li, line in enumerate(lines):
for ci, c in enumerate(line):
if cc == chars_visible:
stamp(ch, co, "\u258c", block_start + li,
(cols - len(line)) // 2 + ci, (255, 220, 100))
return
cc += 1
```
### Feature Analysis on Mixed Audio
Run the standard audio analysis (FFT, beat detection) on the final mixed track so visual effects react to both TTS and music:
features = analyze_audio("mixed_final.wav", fps=24)
```
Visuals pulse with both the music beats and the speech energy.
---
## Audio-Video Sync Verification
After rendering, verify that visual beat markers align with actual audio beats. Drift accumulates from frame timing errors, ffmpeg concat boundaries, and rounding in `fi / fps`.
### Beat Timestamp Extraction
```python
def extract_beat_timestamps(features, fps, threshold=0.5):
"""Extract timestamps where beat feature exceeds threshold."""
beat = features["beat"]
timestamps = []
for fi in range(len(beat)):
if beat[fi] > threshold:
timestamps.append(fi / fps)
return timestamps
def extract_visual_beat_timestamps(video_path, fps, brightness_jump=30):
"""Detect visual beats by brightness jumps between consecutive frames.
Returns timestamps where mean brightness increases by more than threshold."""
import subprocess
cmd = ["ffmpeg", "-i", video_path, "-f", "rawvideo", "-pix_fmt", "gray", "-"]
proc = subprocess.run(cmd, capture_output=True)
frames = np.frombuffer(proc.stdout, dtype=np.uint8)
# Infer frame dimensions from total byte count
n_pixels = len(frames)
# For 1080p: 1920*1080 pixels per frame
# Auto-detect from video metadata is more robust:
probe = subprocess.run(
["ffprobe", "-v", "error", "-select_streams", "v:0",
"-show_entries", "stream=width,height",
"-of", "csv=p=0", video_path],
capture_output=True, text=True)
w, h = map(int, probe.stdout.strip().split(","))
ppf = w * h # pixels per frame
n_frames = n_pixels // ppf
frames = frames[:n_frames * ppf].reshape(n_frames, ppf)
means = frames.mean(axis=1)
timestamps = []
for i in range(1, len(means)):
if means[i] - means[i-1] > brightness_jump:
timestamps.append(i / fps)
return timestamps
```
### Sync Report
```python
def sync_report(audio_beats, visual_beats, tolerance_ms=50):
"""Compare audio beat timestamps to visual beat timestamps.
Args:
audio_beats: list of timestamps (seconds) from audio analysis
visual_beats: list of timestamps (seconds) from video brightness analysis
tolerance_ms: max acceptable drift in milliseconds
Returns:
dict with matched/unmatched/drift statistics
"""
tolerance = tolerance_ms / 1000.0
matched = []
unmatched_audio = []
unmatched_visual = list(visual_beats)
for at in audio_beats:
best_match = None
best_delta = float("inf")
for vt in unmatched_visual:
delta = abs(at - vt)
if delta < best_delta:
best_delta = delta
best_match = vt
if best_match is not None and best_delta < tolerance:
matched.append({"audio": at, "visual": best_match, "drift_ms": best_delta * 1000})
unmatched_visual.remove(best_match)
else:
unmatched_audio.append(at)
drifts = [m["drift_ms"] for m in matched]
return {
"matched": len(matched),
"unmatched_audio": len(unmatched_audio),
"unmatched_visual": len(unmatched_visual),
"total_audio_beats": len(audio_beats),
"total_visual_beats": len(visual_beats),
"mean_drift_ms": np.mean(drifts) if drifts else 0,
"max_drift_ms": np.max(drifts) if drifts else 0,
"p95_drift_ms": np.percentile(drifts, 95) if len(drifts) > 1 else 0,
}
# Usage:
audio_beats = extract_beat_timestamps(features, fps=24)
visual_beats = extract_visual_beat_timestamps("output.mp4", fps=24)
report = sync_report(audio_beats, visual_beats)
print(f"Matched: {report['matched']}/{report['total_audio_beats']} beats")
print(f"Mean drift: {report['mean_drift_ms']:.1f}ms, Max: {report['max_drift_ms']:.1f}ms")
# Target: mean drift < 20ms, max drift < 42ms (1 frame at 24fps)
```
### Common Sync Issues
| Symptom | Cause | Fix |
|---------|-------|-----|
| Consistent late visual beats | ffmpeg concat adds frames at boundaries | Use `-vsync cfr` flag; pad segments to exact frame count |
| Drift increases over time | Floating-point accumulation in `t = fi / fps` | Use integer frame counter, compute `t` fresh each frame |
| Random missed beats | Beat threshold too high / feature smoothing too aggressive | Lower threshold; reduce EMA alpha for beat feature |
| Beats land on wrong frame | Off-by-one in frame indexing | Verify: frame 0 = t=0, frame 1 = t=1/fps (not t=0) |
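The drift fix in the second row reduces to one rule: derive `t` fresh from the integer frame index every frame, never accumulate a float step. A minimal sketch with a hypothetical helper:

```python
FPS = 24

def frame_times(n_frames, fps=FPS):
    """Timestamps for frames 0..n-1, each computed fresh from the integer index.
    Accumulating t += 1/fps instead lets float error grow over long renders."""
    return [fi / fps for fi in range(n_frames)]
```

This also encodes the off-by-one convention in the last row: frame 0 is t=0, frame 1 is t=1/fps.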

---
# Optimization Reference
**Cross-references:**
- Grid system, resolution presets, portrait GridLayer: `architecture.md`
- Effect building blocks (pre-computation strategies): `effects.md`
- `_render_vf()`, tonemap (subsampled percentile): `composition.md`
- Scene protocol, render_clip: `scenes.md`
- Shader pipeline, encoding (ffmpeg flags): `shaders.md`
- Input sources (audio chunking, WAV extraction): `inputs.md`
- Common bugs (memory, OOM, frame drops): `troubleshooting.md`
- Complete scene examples: `examples.md`
## Hardware Detection
Detect the user's hardware at script startup and adapt rendering parameters automatically. Never hardcode worker counts or resolution.
parser = argparse.ArgumentParser()
parser.add_argument("--quality", choices=["draft", "preview", "production", "max", "auto"],
default="auto", help="Render quality preset")
parser.add_argument("--aspect", choices=["landscape", "portrait", "square"],
default="landscape", help="Aspect ratio preset")
parser.add_argument("--workers", type=int, default=0, help="Override worker count (0=auto)")
parser.add_argument("--resolution", type=str, default="", help="Override resolution e.g. 1280x720")
args = parser.parse_args()
hw = detect_hardware()
if args.workers > 0:
hw["workers"] = args.workers
profile = quality_profile(hw, target_duration, args.quality)
# Apply aspect ratio preset (before manual resolution override)
ASPECT_PRESETS = {
"landscape": (1920, 1080),
"portrait": (1080, 1920),
"square": (1080, 1080),
}
if args.aspect != "landscape" and not args.resolution:
profile["vw"], profile["vh"] = ASPECT_PRESETS[args.aspect]
if args.resolution:
w, h = args.resolution.split("x")
profile["vw"], profile["vh"] = int(w), int(h)
log(f"Render: {profile['vw']}x{profile['vh']} @{profile['fps']}fps, "
f"CRF {profile['crf']}, {profile['workers']} workers")
```
### Portrait Mode Considerations
Portrait (1080x1920) has the same pixel count as landscape 1080p, so performance is equivalent. But composition patterns differ:
| Concern | Landscape | Portrait |
|---------|-----------|----------|
| Grid cols at `lg` | 160 | 90 |
| Grid rows at `lg` | 45 | 80 |
| Max text line chars | ~50 centered | ~25-30 centered |
| Vertical rain | Short travel | Long, dramatic travel |
| Horizontal spectrum | Full width | Needs rotation or compression |
| Radial effects | Natural circles | Tall ellipses (aspect correction handles this) |
| Particle explosions | Wide spread | Tall spread |
| Text stacking | 3-4 lines comfortable | 8-10 lines comfortable |
| Quote layout | 2-3 wide lines | 5-6 short lines |
**Portrait-optimized patterns:**
- Vertical rain/matrix effects are naturally enhanced — longer column travel
- Fire columns rise through more screen space
- Rising embers/particles have more vertical runway
- Text can be stacked more aggressively with more lines
- Radial effects work if aspect correction is applied (GridLayer handles this automatically)
- Spectrum bars can be rotated 90 degrees (vertical bars from bottom)
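The rotated-spectrum bullet can be sketched as a mask builder; `vertical_spectrum` and its signature are illustrative, not part of the skill's API:

```python
import numpy as np

def vertical_spectrum(bands, rows, cols):
    """Bottom-anchored vertical bars: each column displays one frequency band.
    bands: 1D array of band energies in [0, 1]. Returns a bool mask of lit cells."""
    n = len(bands)
    lit = np.zeros((rows, cols), dtype=bool)
    col_band = (np.arange(cols) * n) // cols       # which band each column shows
    heights = np.clip((bands[col_band] * rows).astype(int), 0, rows)
    for c in range(cols):
        if heights[c] > 0:
            lit[rows - heights[c]:, c] = True      # fill from the bottom up
    return lit
```

Map the mask to bar characters and a per-column hue, then render as usual.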
**Portrait text layout:**
```python
def layout_text_portrait(text, max_chars_per_line=25, grid=None):
"""Break text into short lines for portrait display."""
words = text.split()
lines = []; current = ""
for w in words:
if len(current) + len(w) + 1 > max_chars_per_line:
lines.append(current.strip())
current = w + " "
else:
current += w + " "
if current.strip():
lines.append(current.strip())
return lines
```
## Performance Budget
Target: 100-200ms per frame (5-10 fps single-threaded, 40-80 fps across 8 workers).
Collect all characters from all palettes + overlay text into the init set. Lazy-init for any missed characters.
## Pre-Rendered Background Textures
Alternative to `_render_vf()` for backgrounds where characters don't need to change every frame. Pre-bake a static ASCII texture once at init, then multiply by a per-cell color field each frame. One matrix multiply vs thousands of bitmap blits.
Use when: background layer uses a fixed character palette and only color/brightness varies per frame. NOT suitable for layers where character selection depends on a changing value field.
### Init: Bake the Texture
```python
# In GridLayer.__init__:
self._bg_row_idx = np.clip(
(np.arange(VH) - self.oy) // self.ch, 0, self.rows - 1
)
self._bg_col_idx = np.clip(
(np.arange(VW) - self.ox) // self.cw, 0, self.cols - 1
)
self._bg_textures = {}
def make_bg_texture(self, palette):
"""Pre-render a static ASCII texture (grayscale float32) once."""
if palette not in self._bg_textures:
texture = np.zeros((VH, VW), dtype=np.float32)
rng = random.Random(12345)
ch_list = [c for c in palette if c != " " and c in self.bm]
if not ch_list:
ch_list = list(self.bm.keys())[:5]
for row in range(self.rows):
y = self.oy + row * self.ch
if y + self.ch > VH:
break
for col in range(self.cols):
x = self.ox + col * self.cw
if x + self.cw > VW:
break
bm = self.bm[rng.choice(ch_list)]
texture[y:y+self.ch, x:x+self.cw] = bm
self._bg_textures[palette] = texture
return self._bg_textures[palette]
```
### Render: Color Field x Cached Texture
```python
def render_bg(self, color_field, palette=PAL_CIRCUIT):
"""Fast background: pre-rendered ASCII texture * per-cell color field.
color_field: (rows, cols, 3) uint8. Returns (VH, VW, 3) uint8."""
texture = self.make_bg_texture(palette)
# Expand cell colors to pixel coords via pre-computed index maps
color_px = color_field[
self._bg_row_idx[:, None], self._bg_col_idx[None, :]
].astype(np.float32)
return (texture[:, :, None] * color_px).astype(np.uint8)
```
### Usage in a Scene
```python
# Build per-cell color from effect fields (cheap — rows*cols, not VH*VW)
hue = ((t * 0.05 + val * 0.2) % 1.0).astype(np.float32)
R, G, B = hsv2rgb(hue, np.full_like(val, 0.5), val)
color_field = mkc(R, G, B, g.rows, g.cols) # (rows, cols, 3) uint8
# Render background — single matrix multiply, no per-cell loop
canvas_bg = g.render_bg(color_field, PAL_DENSE)
```
The texture init loop runs once and is cached per palette. Per-frame cost is one fancy-index lookup + one broadcast multiply — orders of magnitude faster than the per-cell bitmap blit loop in `render()` for dense backgrounds.
## Coordinate Array Caching
Pre-compute all grid-relative coordinate arrays at init, not per-frame:
all_rows = []
all_cols = []
all_fades = []
for c in range(cols):
head = int(S["ry"][c])
trail_len = S["rln"][c]
for i in range(trail_len):
row = head - i
if 0 <= row < rows:
for fi in range(n_cols):
# Now map fire_val to chars and colors in one vectorized pass
```
## PIL String Rendering for Text-Heavy Scenes
Alternative to per-cell bitmap blitting when rendering many long text strings (scrolling tickers, typewriter sequences, idea floods). Uses PIL's native `ImageDraw.text()` which renders an entire string in one C call, vs one Python-loop bitmap blit per character.
Typical win: a scene with 56 ticker rows renders 56 PIL `text()` calls instead of ~10K individual bitmap blits.
Use when: scene renders many rows of readable text strings. NOT suitable for sparse or spatially-scattered single characters (use normal `render()` for those).
```python
from PIL import Image, ImageDraw
def render_text_layer(grid, rows_data, font):
"""Render dense text rows via PIL instead of per-cell bitmap blitting.
Args:
grid: GridLayer instance (for oy, ch, ox, font metrics)
rows_data: list of (row_index, text_string, rgb_tuple) — one per row
font: PIL ImageFont instance (grid.font)
Returns:
uint8 array (VH, VW, 3) — canvas with rendered text
"""
img = Image.new("RGB", (VW, VH), (0, 0, 0))
draw = ImageDraw.Draw(img)
for row_idx, text, color in rows_data:
y = grid.oy + row_idx * grid.ch
if y + grid.ch > VH:
break
draw.text((grid.ox, y), text, fill=color, font=font)
return np.array(img)
```
### Usage in a Ticker Scene
```python
# Build ticker data (text + color per row)
rows_data = []
for row in range(n_tickers):
text = build_ticker_text(row, t) # scrolling substring
color = hsv2rgb_scalar(hue, 0.85, bri) # (R, G, B) tuple
rows_data.append((row, text, color))
# One PIL pass instead of thousands of bitmap blits
canvas_tickers = render_text_layer(g_md, rows_data, g_md.font)
# Blend with other layers normally
result = blend_canvas(canvas_bg, canvas_tickers, "screen", 0.9)
```
This is purely a rendering optimization — same visual output, fewer draw calls. The grid's `render()` method is still needed for sparse character fields where characters are placed individually based on value fields.
## Bloom Optimization
**Do NOT use `scipy.ndimage.uniform_filter`** -- measured at 424ms/frame.
Scale with hardware. Baseline: 1080p, 24fps, ~180ms/frame/worker.
At 720p: multiply times by ~0.5. At 4K: multiply by ~4.
Heavier effects (many particles, dense grids, extra shader passes) add ~20-50%.
---
## Temp File Cleanup
Rendering generates intermediate files that accumulate across runs. Clean up after the final concat/mux step.
### Files to Clean
| File type | Source | Location |
|-----------|--------|----------|
| WAV extracts | `ffmpeg -i input.mp3 ... tmp.wav` | `tempfile.mktemp()` or project dir |
| Segment clips | `render_clip()` output | `segments/seg_00.mp4` etc. |
| Concat list | ffmpeg concat demuxer input | `segments/concat.txt` |
| ffmpeg stderr logs | piped to file for debugging | `*.log` in project dir |
| Feature cache | pickled numpy arrays | `*.pkl` or `*.npz` |
### Cleanup Function
```python
import glob
import os
import shutil

def cleanup_render_artifacts(segments_dir="segments", keep_final=True):
    """Remove intermediate files after successful render.

    Call this AFTER verifying the final output exists and plays correctly.

    Args:
        segments_dir: directory containing segment clips and concat list
        keep_final: if True, only delete intermediates (not the final output)
    """
    removed = []
    # 1. Segment clips
    if os.path.isdir(segments_dir):
        shutil.rmtree(segments_dir)
        removed.append(f"directory: {segments_dir}")
    # 2. Temporary WAV files
    for wav in glob.glob("*.wav"):
        if wav.startswith("tmp") or wav.startswith("extracted_"):
            os.remove(wav)
            removed.append(wav)
    # 3. ffmpeg stderr logs
    for log in glob.glob("ffmpeg_*.log"):
        os.remove(log)
        removed.append(log)
    # 4. Feature cache (optional — useful to keep for re-renders)
    # for cache in glob.glob("features_*.npz"):
    #     os.remove(cache)
    #     removed.append(cache)
    print(f"Cleaned {len(removed)} artifacts: {removed}")
    return removed
```
### Integration with Render Pipeline
Call cleanup at the end of the main render script, after the final output is verified:
```python
# At end of main()
if os.path.exists(output_path) and os.path.getsize(output_path) > 1000:
    cleanup_render_artifacts(segments_dir="segments")
    print(f"Done. Output: {output_path}")
else:
    print("WARNING: final output missing or empty — skipping cleanup")
```
### Temp File Best Practices
- Use `tempfile.mkdtemp()` for segment directories — avoids polluting the project dir
- Name WAV extracts with `tempfile.mktemp(suffix=".wav")` so they're in the OS temp dir
- For debugging, set `KEEP_INTERMEDIATES=1` env var to skip cleanup
- Feature caches (`.npz`) are cheap to store and expensive to recompute — default to keeping them

View File

@@ -1,5 +1,15 @@
# Scene System Reference
**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs, particles): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Shader pipeline, feedback buffer, ShaderChain: `shaders.md`
- Complete scene examples at every complexity level: `examples.md`
- Input sources (audio features, video features): `inputs.md`
- Performance tuning, portrait CLI: `optimization.md`
- Common bugs (state leaks, frame drops): `troubleshooting.md`
Scenes are the top-level creative unit. Each scene is a time-bounded segment with its own effect function, shader chain, feedback configuration, and tone-mapping gamma.
## Scene Protocol (v2)
@@ -12,7 +22,7 @@ def fx_scene_name(r, f, t, S) -> canvas:
Args:
r: Renderer instance — access multiple grids via r.get_grid("sm")
f: dict of audio/video features, all values normalized to [0, 1]
t: time in seconds local to scene (0.0 at scene start)
S: dict for persistent state (particles, rain columns, etc.)
Returns:
@@ -20,6 +30,20 @@ def fx_scene_name(r, f, t, S) -> canvas:
"""
```
**Local time convention:** Scene functions receive `t` starting at 0.0 for the first frame of the scene, regardless of where the scene appears in the timeline. The render loop subtracts the scene's start time before calling the function:
```python
# In render_clip:
t_local = fi / FPS - scene_start
canvas = fx_fn(r, feat, t_local, S)
```
This makes scenes reorderable without modifying their code. Compute scene progress as:
```python
progress = min(t / scene_duration, 1.0) # 0→1 over the scene
```
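Local time also makes fade envelopes trivial to express. A minimal sketch — `scene_envelope` is an illustrative helper, not part of the scene protocol:

```python
import numpy as np

def scene_envelope(t, scene_duration, fade_in=0.5, fade_out=0.5):
    """0→1→0 brightness envelope over a scene, driven by local time t."""
    rise = np.clip(t / fade_in, 0.0, 1.0)
    fall = np.clip((scene_duration - t) / fade_out, 0.0, 1.0)
    return float(min(rise, fall))
```

Multiply a scene's value field by this envelope to get clean fades at both scene boundaries.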
This replaces the v1 protocol where scenes returned `(chars, colors)` tuples. The v2 protocol gives scenes full control over multi-grid rendering and pixel-level composition internally.
### The Renderer Class

View File

@@ -2,6 +2,15 @@
Post-processing effects applied to the pixel canvas (`numpy uint8 array, shape (H,W,3)`) after character rendering and before encoding. Also covers **pixel-level blend modes**, **feedback buffers**, and the **ShaderChain** compositor.
**Cross-references:**
- Grid system, palettes, color (HSV + OKLAB): `architecture.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- `_render_vf()`, blend modes, tonemap, masking: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Complete scene examples with shader usage: `examples.md`
- Performance tuning (frame budget, worker count): `optimization.md`
- Encoding pitfalls (ffmpeg flags, color space): `troubleshooting.md`
## Design Philosophy
The shader pipeline turns raw ASCII renders into cinematic output. The system is designed for **composability** — every shader, blend mode, and feedback transform is an independent building block. Combining them creates infinite visual variety from a small set of primitives.
@@ -1025,3 +1034,324 @@ cmd = ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
"-vf", f"fps={fps},scale={W}:{H}:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse",
"-loop", "0", output_gif]
```
### PNG Sequence
For frame-accurate editing, compositing in external tools (After Effects, Nuke), or lossless archival:
```python
import os
import subprocess

def output_png_sequence(frames, output_dir, W, H, fps, prefix="frame"):
    """Write frames as numbered PNGs via PIL (no ffmpeg dependency).
    frames = iterable of uint8 (H,W,3) arrays."""
    os.makedirs(output_dir, exist_ok=True)
    from PIL import Image
    for i, frame in enumerate(frames):
        Image.fromarray(frame).save(os.path.join(output_dir, f"{prefix}_{i:06d}.png"))

def output_png_sequence_ffmpeg(frames, output_dir, W, H, fps, prefix="frame"):
    """Alternative: pipe raw frames to ffmpeg — faster for large sequences."""
    os.makedirs(output_dir, exist_ok=True)
    cmd = ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "rgb24",
           "-s", f"{W}x{H}", "-r", str(fps), "-i", "pipe:0",
           os.path.join(output_dir, f"{prefix}_%06d.png")]
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame.tobytes())
    proc.stdin.close()
    proc.wait()
```
Reassemble PNG sequence to video:
```bash
ffmpeg -framerate 24 -i frame_%06d.png -c:v libx264 -crf 18 -pix_fmt yuv420p output.mp4
```
### Alpha Channel / Transparent Background (RGBA)
For compositing ASCII art over other video or images. Uses RGBA canvas (4 channels) instead of RGB (3 channels):
```python
def create_rgba_canvas(H, W):
    """Transparent canvas — alpha channel starts at 0 (fully transparent)."""
    return np.zeros((H, W, 4), dtype=np.uint8)

def render_char_rgba(canvas, row, col, char_img, color_rgb, alpha=255):
    """Render a character with alpha. char_img = PIL glyph mask (grayscale).
    Alpha comes from the glyph mask scaled by `alpha` — background stays transparent.
    cell_h, cell_w are the module-level glyph cell dimensions in pixels."""
    r, g, b = color_rgb
    y0, x0 = row * cell_h, col * cell_w
    glyph = np.array(char_img).astype(np.float32) * (alpha / 255.0)  # 0-255 coverage
    cell = canvas[y0:y0+cell_h, x0:x0+cell_w]  # view into the canvas
    cell[:, :, 0] = np.maximum(cell[:, :, 0], (glyph * r / 255).astype(np.uint8))
    cell[:, :, 1] = np.maximum(cell[:, :, 1], (glyph * g / 255).astype(np.uint8))
    cell[:, :, 2] = np.maximum(cell[:, :, 2], (glyph * b / 255).astype(np.uint8))
    cell[:, :, 3] = np.maximum(cell[:, :, 3], glyph.astype(np.uint8))

def blend_onto_background(rgba_canvas, bg_rgb):
    """Composite RGBA canvas over a solid or image background."""
    alpha = rgba_canvas[:, :, 3:4].astype(np.float32) / 255.0
    fg = rgba_canvas[:, :, :3].astype(np.float32)
    bg = bg_rgb.astype(np.float32)
    result = fg * alpha + bg * (1.0 - alpha)
    return result.astype(np.uint8)
```
RGBA output via ffmpeg (ProRes 4444 for editing, WebM VP9 for web):
```bash
# ProRes 4444 — preserves alpha, widely supported in NLEs
ffmpeg -y -f rawvideo -pix_fmt rgba -s {W}x{H} -r {fps} -i pipe:0 \
-c:v prores_ks -profile:v 4444 -pix_fmt yuva444p10le output.mov
# WebM VP9 — alpha support for web/browser compositing
ffmpeg -y -f rawvideo -pix_fmt rgba -s {W}x{H} -r {fps} -i pipe:0 \
-c:v libvpx-vp9 -pix_fmt yuva420p -crf 30 -b:v 0 output.webm
# PNG sequence with alpha (lossless)
ffmpeg -y -f rawvideo -pix_fmt rgba -s {W}x{H} -r {fps} -i pipe:0 \
frame_%06d.png
```
**Key constraint**: shaders that operate on `(H,W,3)` arrays need adaptation for RGBA. Either apply shaders to the RGB channels only and preserve alpha, or write RGBA-aware versions:
```python
def apply_shader_rgba(canvas_rgba, shader_fn, **kwargs):
    """Apply an RGB shader to the color channels of an RGBA canvas."""
    rgb = canvas_rgba[:, :, :3]
    alpha = canvas_rgba[:, :, 3:4]
    rgb_out = shader_fn(rgb, **kwargs)
    return np.concatenate([rgb_out, alpha], axis=2)
```
---
## Real-Time Terminal Rendering
Live ASCII display in the terminal using ANSI escape codes. Useful for previewing scenes during development, live performances, and interactive parameter tuning.
### ANSI Color Escape Codes
```python
def rgb_to_ansi(r, g, b):
    """24-bit true color ANSI escape (supported by most modern terminals)."""
    return f"\033[38;2;{r};{g};{b}m"
ANSI_RESET = "\033[0m"
ANSI_CLEAR = "\033[2J\033[H" # clear screen + cursor home
ANSI_HIDE_CURSOR = "\033[?25l"
ANSI_SHOW_CURSOR = "\033[?25h"
```
### Frame-to-ANSI Conversion
```python
def frame_to_ansi(chars, colors):
    """Convert char+color arrays to a single ANSI string for terminal output.

    Args:
        chars: (rows, cols) array of single characters
        colors: (rows, cols, 3) uint8 RGB array

    Returns:
        str: ANSI-encoded frame ready for sys.stdout.write()
    """
    rows, cols = chars.shape
    lines = []
    for r in range(rows):
        parts = []
        prev_color = None
        for c in range(cols):
            rgb = tuple(colors[r, c])
            ch = chars[r, c]
            if ch == " " or rgb == (0, 0, 0):
                parts.append(" ")
            else:
                if rgb != prev_color:
                    parts.append(rgb_to_ansi(*rgb))
                    prev_color = rgb
                parts.append(ch)
        parts.append(ANSI_RESET)
        lines.append("".join(parts))
    return "\n".join(lines)
```
### Optimized: Delta Updates
Only redraw characters that changed since the last frame. Eliminates redundant terminal writes for static regions:
```python
def frame_to_ansi_delta(chars, colors, prev_chars, prev_colors):
    """Emit ANSI escapes only for cells that changed."""
    rows, cols = chars.shape
    parts = []
    for r in range(rows):
        for c in range(cols):
            if (chars[r, c] != prev_chars[r, c] or
                    not np.array_equal(colors[r, c], prev_colors[r, c])):
                parts.append(f"\033[{r+1};{c+1}H")  # move cursor
                rgb = tuple(colors[r, c])
                parts.append(rgb_to_ansi(*rgb))
                parts.append(chars[r, c])
    return "".join(parts)
```
### Live Render Loop
```python
import sys
import time

def render_live(scene_fn, r, fps=24, duration=None):
    """Render a scene function live in the terminal.

    Args:
        scene_fn: v2 scene function (r, f, t, S) -> canvas
                  OR v1-style function that populates a grid
        r: Renderer instance
        fps: target frame rate
        duration: seconds to run (None = run until Ctrl+C)
    """
    frame_time = 1.0 / fps
    S = {}
    sys.stdout.write(ANSI_HIDE_CURSOR + ANSI_CLEAR)
    sys.stdout.flush()
    t0 = time.monotonic()
    frame_count = 0
    try:
        while True:
            t = time.monotonic() - t0
            if duration and t > duration:
                break
            # Synthesize features from time (or connect to live audio via pyaudio)
            f = synthesize_features(t)
            # For terminal display, render chars+colors directly on a small grid,
            # bypassing the pixel canvas — the terminal works in character cells
            g = r.get_grid("sm")
            chars, colors = scene_to_terminal(scene_fn, r, f, t, S, g)
            frame_str = ANSI_CLEAR + frame_to_ansi(chars, colors)
            sys.stdout.write(frame_str)
            sys.stdout.flush()
            # Frame timing: sleep off the remainder of this frame's budget
            elapsed = time.monotonic() - t0 - (frame_count * frame_time)
            sleep_time = frame_time - elapsed
            if sleep_time > 0:
                time.sleep(sleep_time)
            frame_count += 1
    except KeyboardInterrupt:
        pass
    finally:
        sys.stdout.write(ANSI_SHOW_CURSOR + ANSI_RESET + "\n")
        sys.stdout.flush()
def scene_to_terminal(scene_fn, r, f, t, S, g):
    """Run effect functions and return (chars, colors) for terminal display.
    For terminal mode, skip the pixel canvas and work with character arrays directly."""
    # Effects that return (chars, colors) work directly.
    # For vf-based effects, render the value field + hue field to chars/colors:
    val = vf_plasma(g, f, t, S)
    hue = hf_time_cycle(0.08)(g, t)
    mask = val > 0.03
    chars = val2char(val, mask, PAL_DENSE)
    R, G, B = hsv2rgb(hue, np.full_like(val, 0.8), val)
    colors = mkc(R, G, B, g.rows, g.cols)
    return chars, colors
```
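Both live renderers call `synthesize_features(t)` without defining it. A minimal stand-in that fakes the normalized feature dict from low-frequency oscillators — the feature keys here are assumptions, so use whatever keys your scenes actually read:

```python
import math

def synthesize_features(t):
    """Fake normalized [0,1] features for previewing scenes without audio.
    Keys are illustrative, not a fixed schema."""
    return {
        "bass":   0.5 + 0.5 * math.sin(t * 2.0),
        "mids":   0.5 + 0.5 * math.sin(t * 3.1 + 1.0),
        "highs":  0.5 + 0.5 * math.sin(t * 5.3 + 2.0),
        "energy": 0.5 + 0.5 * math.sin(t * 0.7),
        "beat":   1.0 if (t % 0.5) < 0.05 else 0.0,  # fake 120 BPM pulse
    }
```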
### Curses-Based Rendering (More Robust)
For full-featured terminal UIs with proper resize handling and input:
```python
import curses
import time

def render_curses(scene_fn, r, fps=24):
    """Curses-based live renderer with resize handling and key input."""
    def _main(stdscr):
        curses.start_color()
        curses.use_default_colors()
        curses.curs_set(0)      # hide cursor
        stdscr.nodelay(True)    # non-blocking input
        # Map RGB to curses color pairs lazily (curses supports 256 colors)
        color_cache = {}
        next_pair = [1]

        def get_color_pair(cr, cg, cb):
            key = (cr >> 4, cg >> 4, cb >> 4)  # quantize to reduce pair count
            if key not in color_cache:
                if next_pair[0] < curses.COLOR_PAIRS - 1:
                    # nearest color in the 6x6x6 xterm cube (indices 16-231)
                    ci = 16 + (cr // 51) * 36 + (cg // 51) * 6 + (cb // 51)
                    curses.init_pair(next_pair[0], ci, -1)
                    color_cache[key] = next_pair[0]
                    next_pair[0] += 1
                else:
                    return 0  # pair table full — fall back to default colors
            return curses.color_pair(color_cache[key])

        S = {}
        frame_time = 1.0 / fps
        t0 = time.monotonic()
        while True:
            t = time.monotonic() - t0
            f = synthesize_features(t)
            # Adapt grid to terminal size
            max_y, max_x = stdscr.getmaxyx()
            g = r.get_grid_for_size(max_x, max_y)  # dynamic grid sizing
            chars, colors = scene_to_terminal(scene_fn, r, f, t, S, g)
            rows, cols = chars.shape
            for row in range(min(rows, max_y - 1)):
                for col in range(min(cols, max_x - 1)):
                    ch = chars[row, col]
                    rgb = tuple(colors[row, col])
                    try:
                        stdscr.addch(row, col, ch, get_color_pair(*rgb))
                    except curses.error:
                        pass  # ignore writes outside terminal bounds
            stdscr.refresh()
            # Handle input
            if stdscr.getch() == ord('q'):
                break
            time.sleep(max(0, frame_time - (time.monotonic() - t0 - t)))
    curses.wrapper(_main)
```
### Terminal Rendering Constraints
| Constraint | Value | Notes |
|-----------|-------|-------|
| Max practical grid | ~200x60 | Depends on terminal size |
| Color support | 24-bit (modern), 256 (fallback), 16 (minimal) | Check `$COLORTERM` for truecolor |
| Frame rate ceiling | ~30 fps | Terminal I/O is the bottleneck |
| Delta updates | 2-5x faster | Only worth it when <30% of cells change per frame |
| SSH latency | Kills performance | Local terminals only for real-time |
**Detect color support:**
```python
import os

def get_terminal_color_depth():
    """Return approximate color bit depth supported by the terminal."""
    ct = os.environ.get("COLORTERM", "")
    if ct in ("truecolor", "24bit"):
        return 24
    term = os.environ.get("TERM", "")
    if "256color" in term:
        return 8   # 256 colors
    return 4       # 16-color basic ANSI
```
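On the 256-color fallback tier from the constraints table, truecolor `38;2` escapes won't render. A sketch mapping RGB to the xterm 6x6x6 cube (the same cube arithmetic the curses path uses); ignoring the grayscale ramp (indices 232-255) is a simplification:

```python
def rgb_to_ansi256(r, g, b):
    """Nearest xterm-256 color in the 6x6x6 cube (indices 16-231)."""
    ci = 16 + (r // 51) * 36 + (g // 51) * 6 + (b // 51)
    return f"\033[38;5;{ci}m"
```

Dispatch on `get_terminal_color_depth()` to choose between `rgb_to_ansi` (24-bit) and this fallback.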

View File

@@ -1,5 +1,15 @@
# Troubleshooting Reference
**Cross-references:**
- Grid system, palettes, font selection: `architecture.md`
- Effect building blocks (value fields, noise, SDFs): `effects.md`
- `_render_vf()`, blend modes, tonemap: `composition.md`
- Scene protocol, render_clip, SCENES table: `scenes.md`
- Shader pipeline, feedback buffer, encoding: `shaders.md`
- Input sources (audio, video, TTS): `inputs.md`
- Performance tuning, hardware detection: `optimization.md`
- Complete scene examples: `examples.md`
Common bugs, gotchas, and platform-specific issues encountered during ASCII video development.
## NumPy Broadcasting