the-nexus/concept-packs/genie-nano-banana/pipeline.md
Alexander Whitestone 67d3b784fd feat: Project Genie + Nano Banana concept pack for The Nexus (closes #680)
Complete concept generation pipeline:
- shot-list.yaml: 17 shots across 5 priorities (environments, portals, landmarks, skyboxes, textures)
- prompts/: 5 YAML prompt packs with 17 detailed generation prompts
- pipeline.md: Concept-to-Three.js translation workflow
- storage-policy.md: Repo vs local split for binary media
- references/palette.md: Canonical Nexus color/material/lighting spec

All prompts match existing Nexus visual language (Orbitron/JetBrains,
#4af0c0/#7b5cff/#ffd700 palette, cyberpunk cathedral mood).
Genie world prompts designed for explorable 3D prototyping.
Nano Banana prompts designed for concept art that translates to
specific Three.js geometry, materials, and post-processing.
2026-04-12 12:36:03 -04:00


# Concept-to-Three.js Pipeline

*How Generated Assets Flow Into Code*

## Step 1: Generate

Run prompts from `prompts/*.yaml` through:

- **Nano Banana Pro** → static concept images (PNG)
- **Project Genie** → explorable 3D worlds (record as video + screenshots)

Batch runs are tracked in `shot-list.yaml`. Check off each shot as it is generated.

## Step 2: Capture & Store

For Nano Banana images:

```
local-only-path: ~/nexus-concepts/nano-banana/{shot-id}/
  ├── shot-id_v1.png
  ├── shot-id_v2.png
  ├── shot-id_v3.png
  └── shot-id_v4.png
```

Do NOT commit PNG files to the repo; they are heavyweight binary media. Store them locally and reference them by path in design notes.

For Project Genie worlds:

```
local-only-path: ~/nexus-concepts/genie-worlds/{shot-id}/
  ├── walkthrough.mp4       (screen recording)
  ├── screenshot_01.png     (key angles)
  ├── screenshot_02.png
  └── notes.md              (scale observations, spatial notes)
```

Do NOT commit video or large screenshots to the repo.

## Step 3: Translate — Image to Three.js

Each concept image becomes one or more of these Three.js artifacts:

| Concept Feature | Three.js Translation | File |
| --- | --- | --- |
| Platform shape/size | `THREE.CylinderGeometry` or custom `BufferGeometry` | `app.js` |
| Platform material | `THREE.MeshStandardMaterial` with color, roughness, metalness | `app.js` |
| Grid lines on platform | Custom shader or texture map (UV reference from concept) | `app.js` / `style.css` |
| Portal ring shape | `THREE.TorusGeometry` with emissive material | `app.js` |
| Portal inner glow | Custom shader material (swirl + transparency) | `app.js` |
| Portal color | `NEXUS.colors` map + per-portal color in `portals.json` | `portals.json` |
| Crystal geometry | `THREE.OctahedronGeometry` or `THREE.IcosahedronGeometry` | `app.js` |
| Crystal glow | `THREE.MeshStandardMaterial` emissive + bloom post-processing | `app.js` |
| Particle streams | `THREE.Points` with custom `BufferGeometry` and velocity | `app.js` |
| Skybox | `THREE.CubeTextureLoader` or `THREE.EquirectangularReflectionMapping` | `app.js` |
| Fog | `scene.fog = new THREE.FogExp2(color, density)` | `app.js` |
| Lighting | `THREE.PointLight`, `THREE.AmbientLight`; match concept color temperature | `app.js` |
| Bloom | `UnrealBloomPass`; threshold/strength tuned to concept glow levels | `app.js` |
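
One way to keep these translations honest is to express each one as plain data before any `THREE.*` call is made. A minimal sketch in plain JavaScript — the `planFromConcept` helper and its spec fields are hypothetical, not the repo's API; the constructor argument order follows the standard Three.js signatures:

```javascript
// Hypothetical sketch: describe a concept translation as plain data,
// then hand the plan to the real THREE.* constructors inside app.js.
function planFromConcept(spec) {
  return {
    // CylinderGeometry(radiusTop, radiusBottom, height, radialSegments)
    geometry: {
      ctor: "CylinderGeometry",
      args: [spec.radius, spec.radius, spec.height, spec.segments],
    },
    // MeshStandardMaterial({ color, roughness, metalness })
    material: {
      ctor: "MeshStandardMaterial",
      args: [{ color: spec.color, roughness: spec.roughness, metalness: spec.metalness }],
    },
  };
}

// Example: a platform as read off a Nano Banana concept image
// (all numbers illustrative).
const platformPlan = planFromConcept({
  radius: 8, height: 0.3, segments: 64,
  color: 0x4af0c0, roughness: 0.4, metalness: 0.7,
});
console.log(platformPlan.geometry.ctor); // "CylinderGeometry"
```

Keeping the plan as data makes it easy to diff the implemented values against the numbers recorded in the Step 4 design note.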

## Step 4: Design Notes Format

For each concept that gets translated, create a short design note:

```markdown
# Design: {concept-name}
Source: concept-packs/genie-nano-banana/references/{shot-id}_selected.png
Generated: {date}
Translated by: {agent or human}

## Geometry
- Shape: {CylinderGeometry, radius=8, height=0.3, segments=64}
- Position: {x, y, z}

## Material
- Base color: #{hex}
- Roughness: 0.{N}
- Metalness: 0.{N}
- Emissive: #{hex}, intensity: 0.{N}

## Lighting
- Point lights: [{color, intensity, position}, ...]
- Matches concept at: {what angle/aspect}

## Post-processing
- Bloom threshold: {N}
- Bloom strength: {N}
- Matches concept at: {what brightness level}

## Notes
- Concept shows {feature} but Three.js approximates with {approach}
- Deviation from concept: {what's different and why}
```

Store design notes in `concept-packs/genie-nano-banana/references/design-{shot-id}.md`.
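
The numeric fields in a design note's Material section map directly onto `THREE.MeshStandardMaterial` options. A small sketch of that mapping — the `materialFromNote` helper and its field names are assumptions mirroring the template above, not an established schema:

```javascript
// Hypothetical helper: convert the "## Material" section of a design note
// into the options object you would pass to THREE.MeshStandardMaterial.
function materialFromNote(note) {
  const hex = (s) => parseInt(s.replace("#", ""), 16);
  return {
    color: hex(note.baseColor),
    roughness: note.roughness,
    metalness: note.metalness,
    emissive: hex(note.emissive),
    emissiveIntensity: note.emissiveIntensity,
  };
}

// Example values illustrative only.
const params = materialFromNote({
  baseColor: "#4af0c0",
  roughness: 0.4,
  metalness: 0.2,
  emissive: "#4af0c0",
  emissiveIntensity: 0.6,
});
console.log(params.color.toString(16)); // "4af0c0"
```

Deriving the options object from the note (rather than hand-typing hex literals twice) keeps the implementation and the design note from drifting apart.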

## Step 5: Build

Implement in `app.js` (repo root). Follow the existing patterns:

- Geometry created in init functions
- Materials reference `NEXUS.colors`
- Portals registered in the `portals` array
- Vision points registered in the `visionPoints` array
- Post-processing via `EffectComposer`
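
As a rough illustration of the registration pattern, in plain JavaScript — only the array names `portals` and `visionPoints` and the `NEXUS.colors` map come from the checklist above; the color keys, object fields, and helper names are assumptions:

```javascript
// Sketch of the registration pattern; field names are hypothetical.
// Palette values taken from the documented Nexus colors.
const NEXUS = {
  colors: { teal: 0x4af0c0, violet: 0x7b5cff, gold: 0xffd700 },
};
const portals = [];
const visionPoints = [];

function registerPortal(id, colorKey, position) {
  const portal = { id, color: NEXUS.colors[colorKey], position };
  portals.push(portal);
  return portal;
}

function registerVisionPoint(id, position, text) {
  const point = { id, position, text };
  visionPoints.push(point);
  return point;
}

registerPortal("archive", "violet", { x: 12, y: 1.5, z: 0 });
registerVisionPoint("origin", { x: 0, y: 2, z: 0 }, "Where the Nexus began");
console.log(portals.length, visionPoints.length); // 1 1
```

Routing every portal color through `NEXUS.colors` (rather than inline hex values) is what lets a palette change in one place propagate to all registered objects.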

## Validation

After implementing a concept translation:

1. Serve the app locally
2. Compare the live render against the concept art
3. Adjust materials and lighting until the match is acceptable
4. Document remaining deviations in the design notes