# Compare commits

1 commit: `feat/670-a` ... `fix/issue-`

| Author | SHA1 | Date |
|--------|------|------|
|        | 7d628ea087 |    |
@@ -1,68 +0,0 @@

# Approval Tier System

Graduated safety based on risk level. Routes confirmations through the appropriate channel.

## Tiers

| Tier | Level | Actions | Human | LLM | Timeout |
|------|-------|---------|-------|-----|---------|
| 0 | SAFE | Read, search, browse | No | No | N/A |
| 1 | LOW | Write, scripts, edits | No | Yes | N/A |
| 2 | MEDIUM | Messages, API, shell exec | Yes | Yes | 60s |
| 3 | HIGH | Destructive ops, config, deploys | Yes | Yes | 30s |
| 4 | CRITICAL | Crisis, system destruction | Yes | Yes | 10s |

## How It Works

```
Action submitted
        |
        v
classify_tier() — pattern matching against TIER_PATTERNS
        |
        v
ApprovalRouter.route() — based on tier:
        |
        +-- SAFE (0)     → auto-approve
        +-- LOW (1)      → smart-approve (LLM decides)
        +-- MEDIUM (2)   → human confirmation, 60s timeout
        +-- HIGH (3)     → human confirmation, 30s timeout
        +-- CRITICAL (4) → crisis bypass OR human, 10s timeout
```

## Crisis Bypass

Messages matching crisis patterns (suicidal ideation, method seeking) bypass normal approval entirely. They return crisis intervention resources:

- 988 Suicide & Crisis Lifeline (call or text 988)
- Crisis Text Line (text HOME to 741741)
- Emergency: 911
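The bypass check is ordinary pattern matching. A minimal sketch, assuming a regex-based matcher; the patterns shown are illustrative placeholders, not the real entries in `TIER_PATTERNS`:

```python
import re

# Illustrative placeholders only — the real crisis patterns live in TIER_PATTERNS.
CRISIS_PATTERNS = [
    r"\b(kill|end|hurt)\s+myself\b",
    r"\bsuicid(e|al)\b",
]

def is_crisis(message: str) -> bool:
    """True if the message should bypass normal approval and return resources."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)
```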
## Timeout Handling

When a human confirmation times out:

- MEDIUM (60s): Auto-escalate to HIGH
- HIGH (30s): Auto-escalate to CRITICAL
- CRITICAL (10s): Deny by default
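The escalation rules above can be sketched as a small state transition, assuming `ApprovalTier` is an `IntEnum` ordered as in the tier table:

```python
from enum import IntEnum

class ApprovalTier(IntEnum):
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

def on_timeout(tier: ApprovalTier) -> tuple[ApprovalTier, bool]:
    """Return (new_tier, approved) after a human confirmation times out."""
    if tier in (ApprovalTier.MEDIUM, ApprovalTier.HIGH):
        # Escalate one tier and require confirmation again
        return ApprovalTier(tier + 1), False
    # CRITICAL: deny by default
    return tier, False
```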
## Usage

```python
from tools.approval_tiers import classify_tier, ApprovalRouter

# Classify an action
tier, reason = classify_tier("rm -rf /tmp/build")
# tier == ApprovalTier.HIGH, reason == "recursive delete"

# Route for approval
router = ApprovalRouter(session_key="my-session")
result = router.route("rm -rf /tmp/build", description="Clean build artifacts")
# result["approved"] == False, result["tier"] == "HIGH"

# Handle response
if result["status"] == "approval_required":
    # Show confirmation UI, wait for user
    pass
elif result["status"] == "crisis":
    # Show crisis resources
    pass
```
0	skills/creative/shared/__init__.py	Normal file
120	skills/creative/shared/style-lock/SKILL.md	Normal file
@@ -0,0 +1,120 @@
---
name: style-lock
description: "Shared style-lock infrastructure for consistent visual style across multi-frame video generation. Extracts style embeddings from a reference image and injects them as conditioning (IP-Adapter, ControlNet, style tokens) into all subsequent generations. Used by Video Forge (playground #52) and LPM 1.0 (#641). Use when generating multiple images/clips that need visual coherence — consistent color palette, brush strokes, lighting, and aesthetic across scenes or frames."
---

# Style Lock — Consistent Visual Style Across Video Generation

## Overview

When generating multiple images or clips that compose a video, each generation is independent. Without conditioning, visual style drifts wildly between frames and scenes. Style Lock solves this by extracting a style embedding from a reference image and injecting it as conditioning into all subsequent generations.

## Quick Start

```python
from scripts.style_lock import StyleLock

# Initialize with a reference image
lock = StyleLock("reference.png")

# Get conditioning for Stable Diffusion XL
conditioning = lock.get_conditioning(
    backend="sdxl",        # "sdxl", "flux", "comfyui"
    method="ip_adapter",   # "ip_adapter", "controlnet", "style_tokens", "hybrid"
    strength=0.75,         # Style adherence (0.0-1.0)
)

# Use in generation pipeline
result = generate(prompt="a sunset over mountains", **conditioning.to_api_kwargs())
```

## Architecture

```
Reference Image
        │
        ▼
┌─────────────────┐
│ Style Extractor │──→ CLIP embedding
│                 │──→ Color palette (dominant colors)
│                 │──→ Texture features (Gabor filters)
│                 │──→ Lighting analysis (histogram)
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Conditioning    │──→ IP-Adapter (reference image injection)
│ Router          │──→ ControlNet (structural conditioning)
│                 │──→ Style tokens (text conditioning)
│                 │──→ Color palette constraint
└────────┬────────┘
         │
         ▼
Generation Pipeline
```

## Methods

| Method | Best For | Requires | Quality |
|--------|----------|----------|---------|
| `ip_adapter` | Reference-guided style transfer | SDXL, IP-Adapter model | ★★★★★ |
| `controlnet` | Structural + style conditioning | ControlNet models | ★★★★ |
| `style_tokens` | Text-prompt-based style | Any model | ★★★ |
| `hybrid` | Maximum consistency | All of the above | ★★★★★ |
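A hypothetical helper for picking a method from this table, assuming the caller knows which optional models are installed (the function name is illustrative, not part of the StyleLock API):

```python
def choose_method(has_ip_adapter: bool, has_controlnet: bool) -> str:
    """Pick the strongest available method, per the quality column above."""
    if has_ip_adapter and has_controlnet:
        return "hybrid"
    if has_ip_adapter:
        return "ip_adapter"
    if has_controlnet:
        return "controlnet"
    return "style_tokens"   # works with any model
```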
## Cross-Project Integration

### Video Forge (Playground #52)

- Extract style from seed image or first scene
- Apply across all scenes in music video generation
- Scene-to-scene temporal coherence via shared style embedding

### LPM 1.0 (Issue #641)

- Extract style from 8 identity reference photos
- Frame-to-frame consistency in real-time video generation
- Style tokens for HeartMuLa audio style consistency

## Configuration

```yaml
style_lock:
  reference_image: "path/to/reference.png"
  backend: "sdxl"
  method: "hybrid"
  strength: 0.75
  color_palette:
    enabled: true
    num_colors: 5
    tolerance: 0.15
  lighting:
    enabled: true
    match_histogram: true
  texture:
    enabled: true
    gabor_orientations: 8
    gabor_frequencies: [0.1, 0.2, 0.3, 0.4]
```

## Reference Documents

- `references/ip-adapter-setup.md` — IP-Adapter installation and model requirements
- `references/controlnet-conditioning.md` — ControlNet configuration for style
- `references/color-palette-extraction.md` — Color palette analysis and matching
- `references/texture-analysis.md` — Gabor filter texture feature extraction

## Dependencies

```
torch>=2.0
Pillow>=10.0
numpy>=1.24
opencv-python>=4.8
scikit-image>=0.22
```

Optional (for specific backends):

```
diffusers>=0.25      # SDXL / IP-Adapter
transformers>=4.36   # CLIP embeddings
safetensors>=0.4     # Model loading
```
@@ -0,0 +1,106 @@
# Color Palette Extraction for Style Lock

## Overview

Color palette is the most immediately visible aspect of visual style. Two scenes with matching palettes feel related even if their content differs entirely. StyleLock extracts a dominant palette from the reference image and provides it as conditioning.

## Extraction Method

1. Downsample the reference to 150x150 for speed
2. Convert to an RGB pixel array
3. K-means clustering (k=5 by default) to find dominant colors
4. Sort by frequency (most dominant first)
5. Derive metadata: temperature, saturation, brightness
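The steps above can be sketched in plain NumPy; this is a minimal version without the step-5 metadata (the full implementation lives in `scripts/style_lock.py`):

```python
import numpy as np
from PIL import Image

def extract_palette(path: str, k: int = 5, iters: int = 20, seed: int = 0):
    """Return (colors, weights): k dominant RGB colors sorted by frequency."""
    img = Image.open(path).resize((150, 150)).convert("RGB")   # steps 1-2
    px = np.asarray(img).reshape(-1, 3).astype(np.float32)
    rng = np.random.default_rng(seed)
    centers = px[rng.choice(len(px), k, replace=False)]        # step 3: k-means
    for _ in range(iters):
        dists = np.linalg.norm(px[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([px[labels == i].mean(axis=0) if (labels == i).any()
                            else centers[i] for i in range(k)])
    w = np.bincount(labels, minlength=k) / len(px)             # step 4: sort by frequency
    order = np.argsort(-w)
    return centers[order].astype(int), w[order]
```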
## Color Temperature

Derived from the average R vs. B channel:

| Temperature | Condition | Visual Feel |
|-------------|-----------|-------------|
| Warm | avg_R > avg_B + 20 | Golden, orange, amber, cozy |
| Cool | avg_B > avg_R + 20 | Blue, teal, steel, clinical |
| Neutral | Neither | Balanced, natural |
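The rule in the table, as a standalone function over the extracted palette colors (0-255 RGB):

```python
import numpy as np

def classify_temperature(colors) -> str:
    """colors: iterable of (R, G, B) tuples from the extracted palette."""
    arr = np.asarray(list(colors), dtype=np.float32)
    avg_r, avg_b = arr[:, 0].mean(), arr[:, 2].mean()
    if avg_r > avg_b + 20:
        return "warm"
    if avg_b > avg_r + 20:
        return "cool"
    return "neutral"
```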
## Usage in Conditioning

### Text Prompt Injection

Convert the palette to style descriptors:

```python
def palette_to_prompt(palette: ColorPalette) -> str:
    parts = [f"{palette.temperature} color palette"]
    if palette.saturation_mean > 0.6:
        parts.append("vibrant saturated colors")
    elif palette.saturation_mean < 0.3:
        parts.append("muted desaturated tones")
    return ", ".join(parts)
```

### Color Grading (Post-Processing)

Match output colors to the reference palette:

```python
import numpy as np

def color_grade_to_palette(image: np.ndarray, palette: ColorPalette,
                           strength: float = 0.5) -> np.ndarray:
    """Shift image colors toward the reference palette."""
    result = image.astype(np.float32)
    target_colors = np.array(palette.colors, dtype=np.float32)

    # For each pixel, find the nearest palette color and blend toward it
    flat = result.reshape(-1, 3)
    for i, pixel in enumerate(flat):
        dists = np.linalg.norm(target_colors - pixel, axis=1)
        nearest = target_colors[np.argmin(dists)]
        flat[i] = pixel * (1 - strength) + nearest * strength

    return np.clip(flat.reshape(image.shape), 0, 255).astype(np.uint8)
```
### ComfyUI Color Palette Node

For ComfyUI integration, expose the palette as a conditioning node:

```python
import json

class StyleLockColorPalette:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "palette_json": ("STRING", {"multiline": True}),
            "strength": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0}),
        }}

    RETURN_TYPES = ("CONDITIONING",)
    FUNCTION = "apply_palette"
    CATEGORY = "style-lock"

    def apply_palette(self, palette_json, strength):
        palette = json.loads(palette_json)
        # Convert to CLIP text conditioning
        # ...
```
## Palette Matching Between Frames

To detect style drift, compare palettes across frames:

```python
import numpy as np

def palette_distance(p1: ColorPalette, p2: ColorPalette) -> float:
    """Approximate Earth Mover's Distance between two palettes."""
    from scipy.spatial.distance import cdist
    cost = cdist(p1.colors, p2.colors, metric='euclidean')
    weights1 = np.array(p1.weights)
    weights2 = np.array(p2.weights)
    # Simplified EMD (a full implementation requires solving a transportation problem)
    return float(np.sum(cost * np.outer(weights1, weights2)))
```

Threshold: a distance > 50 indicates significant style drift.
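The simplified score is only a drift heuristic. When an exact value is needed, the transportation problem behind EMD is small enough (k×k flow variables) to solve directly; a sketch assuming SciPy is available, with a hypothetical `palette_emd` helper taking raw arrays:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def palette_emd(colors1, weights1, colors2, weights2) -> float:
    """Exact Earth Mover's Distance between two weighted palettes."""
    c = cdist(colors1, colors2)   # ground distances, shape (m, n)
    m, n = c.shape
    # Flow variables f[i, j], flattened row-major. Equality constraints:
    # row sums equal weights1, column sums equal weights2.
    A_eq = []
    for i in range(m):            # sum_j f[i, j] == weights1[i]
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0; A_eq.append(row)
    for j in range(n):            # sum_i f[i, j] == weights2[j]
        col = np.zeros(m * n); col[j::n] = 1.0; A_eq.append(col)
    b_eq = np.concatenate([weights1, weights2])
    res = linprog(c.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    return float(res.fun)
```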
@@ -0,0 +1,102 @@
# ControlNet Conditioning for Style Lock

## Overview

ControlNet provides structural conditioning — edges, depth, pose, segmentation — that complements IP-Adapter's style-only conditioning. The two are used together in `hybrid` mode for maximum consistency.

## Supported Control Types

| Type | Preprocessor | Best For |
|------|-------------|----------|
| Canny Edge | `cv2.Canny` | Clean line art, geometric scenes |
| Depth | MiDaS / DPT | Spatial consistency, 3D scenes |
| Lineart | Anime/Realistic lineart | Anime, illustration |
| Soft Edge | HED | Organic shapes, portraits |
| Segmentation | SAM / OneFormer | Scene layout consistency |

## Style Lock Approach

For style consistency (rather than structural control), use ControlNet **softly**:

1. Extract edges from the reference image (Canny, thresholds 50-150)
2. Use the edge map as ControlNet input with a low conditioning scale (0.3-0.5)
3. This preserves the compositional structure while allowing style variation

```python
import cv2
import numpy as np
from PIL import Image

def extract_control_image(ref_path: str, method: str = "canny") -> np.ndarray:
    img = np.array(Image.open(ref_path).convert("RGB"))
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    if method == "canny":
        edges = cv2.Canny(gray, 50, 150)
        return cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
    elif method == "soft_edge":
        # HED-like approximation using Gaussian blur + Canny
        blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
        edges = cv2.Canny(blurred, 30, 100)
        return cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
    else:
        raise ValueError(f"Unknown method: {method}")
```
## Diffusers Integration

```python
import torch
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0",
    torch_dtype=torch.float16,
)

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)

control_image = extract_control_image("reference.png")  # defined above

result = pipe(
    prompt="a sunset over mountains",
    image=control_image,
    controlnet_conditioning_scale=0.5,
).images[0]
```

## Hybrid Mode (IP-Adapter + ControlNet)

For maximum consistency, combine both:

```python
# IP-Adapter for style
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.75)

# ControlNet for structure — already loaded in the pipeline construction above

result = pipe(
    prompt="a sunset over mountains",
    ip_adapter_image=reference_image,    # Style
    image=control_image,                 # Structure
    controlnet_conditioning_scale=0.4,
).images[0]
```

## Style Lock Output

When using `method="controlnet"` or `method="hybrid"`, the StyleLock class preprocesses the reference image through Canny edge detection and provides it as `controlnet_image` in the ConditioningOutput.

Adjust `controlnet_conditioning_scale` via the strength parameter:

```
effective_scale = controlnet_conditioning_scale * style_lock_strength
```
@@ -0,0 +1,79 @@
# IP-Adapter Setup for Style Lock

## Overview

IP-Adapter (Image Prompt Adapter) enables conditioning Stable Diffusion generation on a reference image without fine-tuning. It extracts a CLIP image embedding and injects it alongside text prompts via decoupled cross-attention.

## Installation

```bash
pip install "diffusers>=0.25" "transformers>=4.36" accelerate safetensors
```

## Models

| Model | Base | Use Case |
|-------|------|----------|
| `ip-adapter_sd15` | SD 1.5 | Fast, lower quality |
| `ip-adapter_sd15_plus` | SD 1.5 | Better style fidelity |
| `ip-adapter_sdxl_vit-h` | SDXL | High quality, recommended |
| `ip-adapter_flux` | FLUX | Best quality, highest VRAM |

Download from: `h94/IP-Adapter` on Hugging Face.

## Usage with Diffusers

```python
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter_sdxl.bin",
)

reference = Image.open("reference.png")
pipe.set_ip_adapter_scale(0.75)  # Style adherence strength

result = pipe(
    prompt="a sunset over mountains",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
```
## Style Lock Integration

The StyleLock class provides `ip_adapter_image` and `ip_adapter_scale` in the conditioning output. Pass these directly to the pipeline:

```python
lock = StyleLock("reference.png")
cond = lock.get_conditioning(backend="sdxl", method="ip_adapter")
kwargs = cond.to_api_kwargs()

result = pipe(prompt="a sunset over mountains", **kwargs).images[0]
```

## Tuning Guide

| Scale | Effect | Use When |
|-------|--------|----------|
| 0.3-0.5 | Loose style influence | You want style hints, not an exact match |
| 0.5-0.7 | Balanced | General video generation |
| 0.7-0.9 | Strong adherence | Strict style consistency is needed |
| 0.9-1.0 | Near-exact copy | The reference IS the target style |

## Tips

- The first frame is the best reference — it has the exact lighting/mood you want
- Use `ip_adapter_scale` 0.7+ for scene-to-scene consistency
- Combine with ControlNet for both style AND structure
- For LPM 1.0 frame-to-frame: use scale 0.85+ and extract from the best identity photo
105	skills/creative/shared/style-lock/references/texture-analysis.md	Normal file
@@ -0,0 +1,105 @@
# Texture Analysis for Style Lock

## Overview

Texture captures the "feel" of visual surfaces — smooth, rough, grainy, painterly, digital. Gabor filters extract frequency-orientation features that describe texture in a way similar to the human visual cortex.

## Gabor Filter Bank

A Gabor filter is a sinusoidal wave modulated by a Gaussian envelope:

```
g(x, y) = exp(-(x'² + γ²y'²) / (2σ²)) * exp(2πi * x' * f)
```

Where:

- `f` = spatial frequency (controls detail scale)
- `θ` = orientation (controls direction)
- `x'`, `y'` = coordinates rotated by `θ`
- `σ` = Gaussian standard deviation (controls bandwidth)
- `γ` = aspect ratio (usually 0.5)

## Default Configuration

```python
orientations = 8                    # 0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°
frequencies = [0.1, 0.2, 0.3, 0.4]  # Low to high spatial frequency
```

Total: 32 filter responses per image.
## Feature Extraction

```python
from skimage.filters import gabor
import numpy as np

def extract_texture_features(gray_image, orientations=8,
                             frequencies=(0.1, 0.2, 0.3, 0.4)):
    theta_values = np.linspace(0, np.pi, orientations, endpoint=False)
    features = []

    for freq in frequencies:
        for theta in theta_values:
            # gabor() returns the (real, imaginary) filter responses
            real, imag = gabor(gray_image, frequency=freq, theta=theta)
            magnitude = np.sqrt(real ** 2 + imag ** 2)
            features.append(magnitude.mean())

    return np.array(features)
```
## Derived Metrics

| Metric | Formula | Meaning |
|--------|---------|---------|
| Energy | `sqrt(mean(features²))` | Overall texture strength |
| Homogeneity | `1 / (1 + std(features))` | Texture uniformity |
| Dominant orientation | `argmax(features per θ)` | Primary texture direction |
| Dominant frequency | `argmax(features per f)` | Texture coarseness |
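The four metrics can be computed from the flat feature vector, assuming the frequency-major ordering produced by `extract_texture_features` above (`texture_metrics` is an illustrative helper, not part of the extractor):

```python
import numpy as np

def texture_metrics(features, orientations=8, frequencies=(0.1, 0.2, 0.3, 0.4)):
    """Derive the table's metrics from a (len(frequencies) * orientations,) vector."""
    f = np.asarray(features).reshape(len(frequencies), orientations)
    energy = float(np.sqrt(np.mean(f ** 2)))
    homogeneity = float(1.0 / (1.0 + np.std(f)))
    dominant_theta = int(np.argmax(f.sum(axis=0)))        # strongest orientation bin
    dominant_freq = frequencies[int(np.argmax(f.sum(axis=1)))]
    return {"energy": energy, "homogeneity": homogeneity,
            "dominant_orientation_index": dominant_theta,
            "dominant_frequency": dominant_freq}
```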
## Style Matching

Compare textures between reference and generated frames:

```python
import numpy as np

def texture_similarity(ref_features, gen_features):
    """Pearson correlation between feature vectors."""
    return np.corrcoef(ref_features, gen_features)[0, 1]

# Interpretation:
# > 0.9    — Excellent match (same texture)
# 0.7-0.9  — Good match (similar feel)
# < 0.5    — Poor match (different texture, style drift)
```

## Practical Application

### Painterly vs. Photographic

| Style | Energy | Homogeneity | Dominant Frequency |
|-------|--------|-------------|-------------------|
| Oil painting | High (>0.6) | Low (<0.5) | Low (0.1-0.2) |
| Watercolor | Medium | Medium | Medium (0.2-0.3) |
| Photography | Low-Medium | High (>0.7) | Variable |
| Digital art | Variable | High (>0.8) | High (0.3-0.4) |
| Sketch | Medium | Low | High (0.3-0.4) |

Use these profiles to adjust generation parameters:

```python
def texture_to_guidance(texture: TextureFeatures) -> dict:
    if texture.energy > 0.6 and texture.homogeneity < 0.5:
        return {"prompt_suffix": "painterly brushstrokes, impasto texture",
                "cfg_scale_boost": 0.5}
    elif texture.homogeneity > 0.8:
        return {"prompt_suffix": "smooth clean rendering, digital art",
                "cfg_scale_boost": -0.5}
    return {}
```

## Limitations

- Gabor filters are rotation-sensitive; 8 orientations cover 180° at 22.5° intervals
- Low-frequency textures (gradients, lighting) may not be well captured
- Texture alone doesn't capture color — always combine with palette extraction
- Computationally cheap, but requires scikit-image (an optional dependency)
1	skills/creative/shared/style-lock/scripts/__init__.py	Normal file
@@ -0,0 +1 @@
"""Style Lock — Shared infrastructure for consistent visual style."""
631	skills/creative/shared/style-lock/scripts/style_lock.py	Normal file
@@ -0,0 +1,631 @@
"""
Style Lock — Consistent Visual Style Across Video Generation

Extracts style embeddings from a reference image and provides conditioning
for Stable Diffusion, IP-Adapter, ControlNet, and style token injection.

Used by:
- Video Forge (playground #52) — scene-to-scene style consistency
- LPM 1.0 (issue #641) — frame-to-frame temporal coherence

Usage:
    from style_lock import StyleLock

    lock = StyleLock("reference.png")
    conditioning = lock.get_conditioning(backend="sdxl", method="hybrid")
"""

from __future__ import annotations

import json
import logging
from dataclasses import dataclass, field, asdict
from pathlib import Path
from typing import Optional

import numpy as np
from PIL import Image

logger = logging.getLogger(__name__)


# ---------------------------------------------------------------------------
# Data structures
# ---------------------------------------------------------------------------
@dataclass
class ColorPalette:
    """Dominant color palette extracted from reference image."""
    colors: list[tuple[int, int, int]]   # RGB tuples
    weights: list[float]                 # Proportion of each color
    temperature: str                     # "warm", "cool", "neutral"
    saturation_mean: float
    brightness_mean: float


@dataclass
class LightingProfile:
    """Lighting characteristics of reference image."""
    histogram: np.ndarray                # Grayscale histogram (256 bins)
    mean_brightness: float
    contrast: float                      # Std dev of brightness
    dynamic_range: tuple[float, float]   # (min, max) normalized
    direction_hint: str                  # "even", "top", "left", "right", "bottom"


@dataclass
class TextureFeatures:
    """Texture feature vector from Gabor filter responses."""
    features: np.ndarray                 # Shape: (orientations * frequencies,)
    orientations: int
    frequencies: list[float]
    energy: float                        # Total texture energy
    homogeneity: float                   # Texture uniformity


@dataclass
class StyleEmbedding:
    """Complete style embedding extracted from a reference image."""
    clip_embedding: Optional[np.ndarray] = None   # CLIP visual features
    color_palette: Optional[ColorPalette] = None
    lighting: Optional[LightingProfile] = None
    texture: Optional[TextureFeatures] = None
    source_path: str = ""

    def to_dict(self) -> dict:
        """Serialize to JSON-safe dict (excludes numpy arrays)."""
        return {
            "source_path": self.source_path,
            "color_palette": asdict(self.color_palette) if self.color_palette else None,
            "lighting": {
                "mean_brightness": self.lighting.mean_brightness,
                "contrast": self.lighting.contrast,
                "dynamic_range": list(self.lighting.dynamic_range),
                "direction_hint": self.lighting.direction_hint,
            } if self.lighting else None,
            "texture": {
                "orientations": self.texture.orientations,
                "frequencies": self.texture.frequencies,
                "energy": self.texture.energy,
                "homogeneity": self.texture.homogeneity,
            } if self.texture else None,
            "has_clip_embedding": self.clip_embedding is not None,
        }
@dataclass
class ConditioningOutput:
    """Conditioning parameters for a generation backend."""
    method: str                          # ip_adapter, controlnet, style_tokens, hybrid
    backend: str                         # sdxl, flux, comfyui
    strength: float                      # Overall style adherence (0-1)
    ip_adapter_image: Optional[str] = None           # Path to reference image
    ip_adapter_scale: float = 0.75
    controlnet_image: Optional[np.ndarray] = None    # Preprocessed control image
    controlnet_conditioning_scale: float = 0.5
    style_prompt: str = ""               # Text conditioning from style tokens
    negative_prompt: str = ""            # Anti-style negatives
    color_palette_guidance: Optional[dict] = None

    def to_api_kwargs(self) -> dict:
        """Convert to kwargs suitable for diffusers pipelines."""
        kwargs = {}
        if self.method in ("ip_adapter", "hybrid") and self.ip_adapter_image:
            kwargs["ip_adapter_image"] = self.ip_adapter_image
            kwargs["ip_adapter_scale"] = self.ip_adapter_scale * self.strength
        if self.method in ("controlnet", "hybrid") and self.controlnet_image is not None:
            kwargs["image"] = self.controlnet_image
            kwargs["controlnet_conditioning_scale"] = (
                self.controlnet_conditioning_scale * self.strength
            )
        if self.style_prompt:
            kwargs["prompt_suffix"] = self.style_prompt
        if self.negative_prompt:
            kwargs["negative_prompt_suffix"] = self.negative_prompt
        return kwargs


# ---------------------------------------------------------------------------
# Extractors
# ---------------------------------------------------------------------------
class ColorExtractor:
    """Extract dominant color palette using k-means clustering."""

    def __init__(self, num_colors: int = 5):
        self.num_colors = num_colors

    def extract(self, image: Image.Image) -> ColorPalette:
        img = image.resize((150, 150)).convert("RGB")
        pixels = np.array(img).reshape(-1, 3).astype(np.float32)

        # Simple k-means (no sklearn dependency)
        colors, weights = self._kmeans(pixels, self.num_colors)

        # Analyze color temperature
        avg_r, avg_b = colors[:, 0].mean(), colors[:, 2].mean()
        if avg_r > avg_b + 20:
            temperature = "warm"
        elif avg_b > avg_r + 20:
            temperature = "cool"
        else:
            temperature = "neutral"

        # Saturation and brightness
        hsv = self._rgb_to_hsv(colors)
        saturation_mean = float(hsv[:, 1].mean())
        brightness_mean = float(hsv[:, 2].mean())

        return ColorPalette(
            colors=[tuple(int(c) for c in color) for color in colors],
            weights=[float(w) for w in weights],
            temperature=temperature,
            saturation_mean=saturation_mean,
            brightness_mean=brightness_mean,
        )

    def _kmeans(self, pixels: np.ndarray, k: int, max_iter: int = 20):
        indices = np.random.choice(len(pixels), k, replace=False)
        centroids = pixels[indices].copy()

        for _ in range(max_iter):
            dists = np.linalg.norm(pixels[:, None] - centroids[None, :], axis=2)
            labels = np.argmin(dists, axis=1)
            new_centroids = np.array([
                pixels[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
                for i in range(k)
            ])
            if np.allclose(centroids, new_centroids, atol=1.0):
                break
            centroids = new_centroids

        counts = np.bincount(labels, minlength=k)
        weights = counts / counts.sum()
        order = np.argsort(-weights)
        return centroids[order], weights[order]

    @staticmethod
    def _rgb_to_hsv(colors: np.ndarray) -> np.ndarray:
        """Convert RGB (0-255) to HSV (H: 0-360, S: 0-1, V: 0-1)."""
        rgb = colors / 255.0
        hsv = np.zeros_like(rgb)
        for i, pixel in enumerate(rgb):
            r, g, b = pixel
            cmax, cmin = max(r, g, b), min(r, g, b)
            delta = cmax - cmin
            if delta == 0:
                h = 0
            elif cmax == r:
                h = 60 * (((g - b) / delta) % 6)
            elif cmax == g:
                h = 60 * (((b - r) / delta) + 2)
            else:
                h = 60 * (((r - g) / delta) + 4)
            s = 0 if cmax == 0 else delta / cmax
            v = cmax
            hsv[i] = [h, s, v]
        return hsv
class LightingExtractor:
    """Analyze lighting characteristics from grayscale histogram."""

    def extract(self, image: Image.Image) -> LightingProfile:
        gray = np.array(image.convert("L"))
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        hist_norm = hist.astype(np.float32) / hist.sum()

        mean_brightness = float(gray.mean() / 255.0)
        contrast = float(gray.std() / 255.0)

        nonzero = np.where(hist_norm > 0.001)[0]
        dynamic_range = (
            float(nonzero[0] / 255.0) if len(nonzero) > 0 else 0.0,
            float(nonzero[-1] / 255.0) if len(nonzero) > 0 else 1.0,
        )

        # Rough directional lighting estimate from quadrant brightness
        h, w = gray.shape
        quadrants = {
            "top": gray[:h // 2, :].mean(),
            "bottom": gray[h // 2:, :].mean(),
            "left": gray[:, :w // 2].mean(),
            "right": gray[:, w // 2:].mean(),
        }
        brightest = max(quadrants, key=quadrants.get)
        delta = quadrants[brightest] - min(quadrants.values())
        direction = brightest if delta > 15 else "even"

        return LightingProfile(
            histogram=hist_norm,
            mean_brightness=mean_brightness,
            contrast=contrast,
            dynamic_range=dynamic_range,
            direction_hint=direction,
        )
class TextureExtractor:
    """Extract texture features using Gabor filter bank."""

    def __init__(
        self,
        orientations: int = 8,
        frequencies: Optional[list[float]] = None,
    ):
        self.orientations = orientations
        self.frequencies = frequencies or [0.1, 0.2, 0.3, 0.4]

    def extract(self, image: Image.Image) -> TextureFeatures:
        try:
            from skimage.filters import gabor
            from skimage.color import rgb2gray
            from skimage.transform import resize
        except ImportError:
            logger.warning("scikit-image not available, returning empty texture features")
            return TextureFeatures(
                features=np.zeros(len(self.frequencies) * self.orientations),
                orientations=self.orientations,
                frequencies=self.frequencies,
                energy=0.0,
                homogeneity=0.0,
            )

        gray = rgb2gray(np.array(image))
        gray = resize(gray, (256, 256), anti_aliasing=True)

        features = []
        theta_values = np.linspace(0, np.pi, self.orientations, endpoint=False)

        for freq in self.frequencies:
            for theta in theta_values:
                magnitude, _ = gabor(gray, frequency=freq, theta=theta)
                features.append(float(magnitude.mean()))

        features_arr = np.array(features)
        energy = float(np.sqrt(np.mean(features_arr ** 2)))
        homogeneity = float(1.0 / (1.0 + np.std(features_arr)))

        return TextureFeatures(
            features=features_arr,
            orientations=self.orientations,
            frequencies=self.frequencies,
            energy=energy,
            homogeneity=homogeneity,
        )

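The feature layout is worth making explicit: one mean filter response per (frequency, orientation) pair, so the vector length is `len(frequencies) * orientations`, and the ImportError fallback returns a zero vector of exactly that shape. A quick arithmetic sketch (standalone, no scikit-image needed):

```python
import numpy as np

# Default filter bank: 8 orientations x 4 frequencies = 32 features.
orientations = 8
frequencies = [0.1, 0.2, 0.3, 0.4]
fallback = np.zeros(len(frequencies) * orientations)
print(fallback.shape)  # (32,)
```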
class CLIPEmbeddingExtractor:
    """Extract CLIP visual embedding for IP-Adapter conditioning."""

    def __init__(self, model_name: str = "openai/clip-vit-large-patch14"):
        self.model_name = model_name
        self._model = None
        self._processor = None
        self._load_attempted = False

    def _load(self):
        if self._model is not None:
            return
        if self._load_attempted:
            return
        self._load_attempted = True
        try:
            from transformers import CLIPModel, CLIPProcessor
            import torch
            import os

            # Only load if model is already cached locally — no network
            cache_dir = os.path.expanduser(
                f"~/.cache/huggingface/hub/models--{self.model_name.replace('/', '--')}"
            )
            if not os.path.isdir(cache_dir):
                logger.info("CLIP model not cached locally, embedding disabled")
                self._model = None
                return
            self._model = CLIPModel.from_pretrained(self.model_name, local_files_only=True)
            self._processor = CLIPProcessor.from_pretrained(self.model_name, local_files_only=True)
            self._model.eval()
            logger.info(f"Loaded cached CLIP model: {self.model_name}")
        except ImportError:
            logger.warning("transformers not available, CLIP embedding disabled")
        except Exception as e:
            logger.warning(f"CLIP model load failed ({e}), embedding disabled")
            self._model = None

    def extract(self, image: Image.Image) -> Optional[np.ndarray]:
        self._load()
        if self._model is None:
            return None

        import torch

        inputs = self._processor(images=image, return_tensors="pt")
        with torch.no_grad():
            features = self._model.get_image_features(**inputs)
        return features.squeeze().numpy()

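The cache check relies on the Hugging Face Hub directory naming scheme, where a repo id's `/` becomes `--` under `~/.cache/huggingface/hub`. A small sketch of just that path construction (the directory layout is the hub convention; nothing here touches the network):

```python
import os

# _load() derives the expected local cache directory from the model name
# and only calls from_pretrained(..., local_files_only=True) if it exists.
model_name = "openai/clip-vit-large-patch14"
cache_dir = os.path.expanduser(
    f"~/.cache/huggingface/hub/models--{model_name.replace('/', '--')}"
)
print(os.path.basename(cache_dir))  # models--openai--clip-vit-large-patch14
```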
# ---------------------------------------------------------------------------
# Style Lock — main class
# ---------------------------------------------------------------------------


class StyleLock:
    """
    Style Lock: extract and apply consistent visual style across generations.

    Args:
        reference_image: Path to reference image or PIL Image.
        num_colors: Number of dominant colors to extract (default 5).
        texture_orientations: Gabor filter orientations (default 8).
        texture_frequencies: Gabor filter frequencies (default [0.1, 0.2, 0.3, 0.4]).
        clip_model: CLIP model name for embedding extraction.
    """

    def __init__(
        self,
        reference_image: str | Image.Image,
        num_colors: int = 5,
        texture_orientations: int = 8,
        texture_frequencies: Optional[list[float]] = None,
        clip_model: str = "openai/clip-vit-large-patch14",
    ):
        if isinstance(reference_image, str):
            self._ref_path = reference_image
            self._ref_image = Image.open(reference_image).convert("RGB")
        else:
            self._ref_path = ""
            self._ref_image = reference_image

        self._color_ext = ColorExtractor(num_colors=num_colors)
        self._lighting_ext = LightingExtractor()
        self._texture_ext = TextureExtractor(
            orientations=texture_orientations,
            frequencies=texture_frequencies,
        )
        self._clip_ext = CLIPEmbeddingExtractor(model_name=clip_model)
        self._embedding: Optional[StyleEmbedding] = None

    @property
    def embedding(self) -> StyleEmbedding:
        """Lazy-computed full style embedding."""
        if self._embedding is None:
            self._embedding = self._extract_all()
        return self._embedding

    def _extract_all(self) -> StyleEmbedding:
        logger.info("Extracting style embedding from reference image...")
        return StyleEmbedding(
            clip_embedding=self._clip_ext.extract(self._ref_image),
            color_palette=self._color_ext.extract(self._ref_image),
            lighting=self._lighting_ext.extract(self._ref_image),
            texture=self._texture_ext.extract(self._ref_image),
            source_path=self._ref_path,
        )

    def get_conditioning(
        self,
        backend: str = "sdxl",
        method: str = "hybrid",
        strength: float = 0.75,
    ) -> ConditioningOutput:
        """
        Generate conditioning output for a generation backend.

        Args:
            backend: Target backend — "sdxl", "flux", "comfyui".
            method: Conditioning method — "ip_adapter", "controlnet",
                "style_tokens", "hybrid".
            strength: Overall style adherence 0.0 (loose) to 1.0 (strict).

        Returns:
            ConditioningOutput with all parameters for the pipeline.
        """
        emb = self.embedding
        style_prompt = self._build_style_prompt(emb)
        negative_prompt = self._build_negative_prompt(emb)
        controlnet_img = self._build_controlnet_image(emb)

        palette_guidance = None
        if emb.color_palette:
            palette_guidance = {
                "colors": emb.color_palette.colors,
                "weights": emb.color_palette.weights,
                "temperature": emb.color_palette.temperature,
            }

        return ConditioningOutput(
            method=method,
            backend=backend,
            strength=strength,
            ip_adapter_image=self._ref_path if self._ref_path else None,
            ip_adapter_scale=0.75,
            controlnet_image=controlnet_img,
            controlnet_conditioning_scale=0.5,
            style_prompt=style_prompt,
            negative_prompt=negative_prompt,
            color_palette_guidance=palette_guidance,
        )

    def _build_style_prompt(self, emb: StyleEmbedding) -> str:
        """Generate text conditioning from extracted style features."""
        parts = []

        if emb.color_palette:
            palette = emb.color_palette
            parts.append(f"{palette.temperature} color palette")
            if palette.saturation_mean > 0.6:
                parts.append("vibrant saturated colors")
            elif palette.saturation_mean < 0.3:
                parts.append("muted desaturated tones")
            if palette.brightness_mean > 0.65:
                parts.append("bright luminous lighting")
            elif palette.brightness_mean < 0.35:
                parts.append("dark moody atmosphere")

        if emb.lighting:
            if emb.lighting.contrast > 0.3:
                parts.append("high contrast dramatic lighting")
            elif emb.lighting.contrast < 0.15:
                parts.append("soft even lighting")
            if emb.lighting.direction_hint != "even":
                parts.append(f"light from {emb.lighting.direction_hint}")

        if emb.texture:
            if emb.texture.energy > 0.5:
                parts.append("rich textured surface")
            if emb.texture.homogeneity > 0.8:
                parts.append("smooth uniform texture")
            elif emb.texture.homogeneity < 0.4:
                parts.append("complex varied texture")

        return ", ".join(parts) if parts else "consistent visual style"

    def _build_negative_prompt(self, emb: StyleEmbedding) -> str:
        """Generate anti-style negatives to prevent style drift."""
        parts = ["inconsistent style", "style variation", "color mismatch"]

        if emb.color_palette:
            if emb.color_palette.temperature == "warm":
                parts.append("cold blue tones")
            elif emb.color_palette.temperature == "cool":
                parts.append("warm orange tones")

        if emb.lighting:
            if emb.lighting.contrast > 0.3:
                parts.append("flat lighting")
            else:
                parts.append("harsh shadows")

        return ", ".join(parts)

    def _build_controlnet_image(self, emb: StyleEmbedding) -> Optional[np.ndarray]:
        """Preprocess reference image for ControlNet input (edge/canny)."""
        try:
            import cv2
        except ImportError:
            return None

        img = np.array(self._ref_image)
        gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        edges_rgb = cv2.cvtColor(edges, cv2.COLOR_GRAY2RGB)
        return edges_rgb

    def save_embedding(self, path: str) -> None:
        """Save extracted embedding metadata to JSON (excludes raw arrays)."""
        data = self.embedding.to_dict()
        Path(path).write_text(json.dumps(data, indent=2))
        logger.info(f"Style embedding saved to {path}")

    def compare(self, other: "StyleLock") -> dict:
        """
        Compare style similarity between two StyleLock instances.

        Returns:
            Dict with similarity scores for each feature dimension.
        """
        scores = {}
        a, b = self.embedding, other.embedding

        # Color palette similarity
        if a.color_palette and b.color_palette:
            scores["color_temperature"] = (
                1.0 if a.color_palette.temperature == b.color_palette.temperature else 0.0
            )
            scores["saturation_diff"] = abs(
                a.color_palette.saturation_mean - b.color_palette.saturation_mean
            )
            scores["brightness_diff"] = abs(
                a.color_palette.brightness_mean - b.color_palette.brightness_mean
            )

        # Lighting similarity (keyed separately so it does not overwrite
        # the palette brightness_diff above)
        if a.lighting and b.lighting:
            scores["lighting_brightness_diff"] = abs(
                a.lighting.mean_brightness - b.lighting.mean_brightness
            )
            scores["contrast_diff"] = abs(a.lighting.contrast - b.lighting.contrast)

        # Texture similarity
        if a.texture and b.texture:
            if a.texture.features.shape == b.texture.features.shape:
                corr = np.corrcoef(a.texture.features, b.texture.features)[0, 1]
                scores["texture_correlation"] = float(corr) if not np.isnan(corr) else 0.0

        # CLIP embedding cosine similarity
        if a.clip_embedding is not None and b.clip_embedding is not None:
            cos_sim = np.dot(a.clip_embedding, b.clip_embedding) / (
                np.linalg.norm(a.clip_embedding) * np.linalg.norm(b.clip_embedding)
            )
            scores["clip_cosine_similarity"] = float(cos_sim)

        return scores

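The CLIP comparison is plain cosine similarity: identical styles approach 1.0, orthogonal embeddings 0.0. A tiny standalone sketch of the same formula on toy vectors:

```python
import numpy as np

# Cosine similarity as used for clip_cosine_similarity in compare().
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
cos_sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(cos_sim, 4))  # 0.7071
```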
# ---------------------------------------------------------------------------
# Multi-reference style locking (for LPM 1.0 identity photos)
# ---------------------------------------------------------------------------


class MultiReferenceStyleLock:
    """
    Style Lock from multiple reference images (e.g., 8 identity photos).

    Extracts style from each reference, then merges into a consensus style
    that captures the common aesthetic across all references.

    Args:
        reference_paths: List of paths to reference images.
        merge_strategy: How to combine styles — "average", "dominant", "first".
    """

    def __init__(
        self,
        reference_paths: list[str],
        merge_strategy: str = "average",
    ):
        self.locks = [StyleLock(p) for p in reference_paths]
        self.merge_strategy = merge_strategy

    def get_conditioning(
        self,
        backend: str = "sdxl",
        method: str = "hybrid",
        strength: float = 0.75,
    ) -> ConditioningOutput:
        """Get merged conditioning from all reference images."""
        if self.merge_strategy == "first":
            return self.locks[0].get_conditioning(backend, method, strength)

        # Use the first lock as the primary conditioning source,
        # but adjust parameters based on consensus across all references
        primary = self.locks[0]
        conditioning = primary.get_conditioning(backend, method, strength)

        if self.merge_strategy == "average":
            # Average the conditioning scales across all locks
            scales = []
            for lock in self.locks:
                emb = lock.embedding
                if emb.color_palette:
                    scales.append(emb.color_palette.saturation_mean)
            if scales:
                avg_sat = np.mean(scales)
                # Adjust IP-Adapter scale based on average saturation agreement
                conditioning.ip_adapter_scale *= (0.5 + 0.5 * avg_sat)

        # Build a more comprehensive style prompt from all references
        all_style_parts = []
        for lock in self.locks:
            prompt = lock._build_style_prompt(lock.embedding)
            all_style_parts.append(prompt)
        # Deduplicate style descriptors
        seen = set()
        unique_parts = []
        for part in ", ".join(all_style_parts).split(", "):
            stripped = part.strip()
            if stripped and stripped not in seen:
                seen.add(stripped)
                unique_parts.append(stripped)
        conditioning.style_prompt = ", ".join(unique_parts)

        return conditioning
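The merge step's deduplication is order-preserving: every per-reference prompt is joined, then only the first occurrence of each comma-separated descriptor is kept, so near-identical references do not repeat text. A standalone sketch with made-up descriptor strings:

```python
# Same dedup loop as in get_conditioning(), on hypothetical prompts.
all_style_parts = [
    "warm color palette, soft even lighting",
    "warm color palette, rich textured surface",
]
seen = set()
unique_parts = []
for part in ", ".join(all_style_parts).split(", "):
    stripped = part.strip()
    if stripped and stripped not in seen:
        seen.add(stripped)
        unique_parts.append(stripped)
print(", ".join(unique_parts))
# warm color palette, soft even lighting, rich textured surface
```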
0	skills/creative/shared/style-lock/tests/__init__.py	Normal file
251	skills/creative/shared/style-lock/tests/test_style_lock.py	Normal file
@@ -0,0 +1,251 @@
"""
|
||||
Tests for Style Lock module.
|
||||
|
||||
Validates:
|
||||
- Color extraction from synthetic images
|
||||
- Lighting profile extraction
|
||||
- Texture feature extraction
|
||||
- Style prompt generation
|
||||
- Conditioning output format
|
||||
- Multi-reference merging
|
||||
"""
|
||||
|
||||
import numpy as np
|
||||
from PIL import Image
|
||||
|
||||
import sys
|
||||
from pathlib import Path
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent))
|
||||
|
||||
from scripts.style_lock import (
|
||||
StyleLock,
|
||||
ColorExtractor,
|
||||
LightingExtractor,
|
||||
TextureExtractor,
|
||||
MultiReferenceStyleLock,
|
||||
ConditioningOutput,
|
||||
)
|
||||
|
||||
|
||||
def _make_test_image(width=256, height=256, color=(128, 64, 200)):
|
||||
"""Create a solid-color test image."""
|
||||
return Image.new("RGB", (width, height), color)
|
||||
|
||||
|
||||
def _make_gradient_image(width=256, height=256):
|
||||
"""Create a gradient test image."""
|
||||
arr = np.zeros((height, width, 3), dtype=np.uint8)
|
||||
for y in range(height):
|
||||
for x in range(width):
|
||||
arr[y, x] = [int(255 * x / width), int(255 * y / height), 128]
|
||||
return Image.fromarray(arr)
|
||||
|
||||
|
||||
def test_color_extractor_solid():
    img = _make_test_image(color=(200, 100, 50))
    ext = ColorExtractor(num_colors=3)
    palette = ext.extract(img)

    assert len(palette.colors) == 3
    assert len(palette.weights) == 3
    assert sum(palette.weights) > 0.99
    assert palette.temperature == "warm"  # R > B
    assert palette.saturation_mean > 0


def test_color_extractor_cool():
    img = _make_test_image(color=(50, 100, 200))
    ext = ColorExtractor(num_colors=3)
    palette = ext.extract(img)

    assert palette.temperature == "cool"  # B > R


def test_lighting_extractor():
    img = _make_test_image(color=(128, 128, 128))
    ext = LightingExtractor()
    profile = ext.extract(img)

    assert 0.4 < profile.mean_brightness < 0.6
    assert profile.contrast < 0.1  # Uniform image, low contrast
    assert profile.direction_hint == "even"


def test_texture_extractor():
    img = _make_test_image(color=(128, 128, 128))
    ext = TextureExtractor(orientations=4, frequencies=[0.1, 0.2])
    features = ext.extract(img)

    assert features.features.shape == (8,)  # 4 orientations * 2 frequencies
    assert features.orientations == 4
    assert features.frequencies == [0.1, 0.2]


def test_style_lock_embedding():
    img = _make_test_image(color=(180, 90, 45))
    lock = StyleLock(img)
    emb = lock.embedding

    assert emb.color_palette is not None
    assert emb.lighting is not None
    assert emb.texture is not None
    assert emb.color_palette.temperature == "warm"

def test_style_lock_conditioning_ip_adapter():
    img = _make_test_image()
    lock = StyleLock(img)
    cond = lock.get_conditioning(backend="sdxl", method="ip_adapter", strength=0.8)

    assert cond.method == "ip_adapter"
    assert cond.backend == "sdxl"
    assert cond.strength == 0.8
    assert cond.style_prompt  # Non-empty
    assert cond.negative_prompt  # Non-empty


def test_style_lock_conditioning_hybrid():
    img = _make_test_image()
    lock = StyleLock(img)
    cond = lock.get_conditioning(method="hybrid")

    assert cond.method == "hybrid"
    assert cond.controlnet_image is not None
    assert cond.controlnet_image.shape[2] == 3  # RGB


def test_style_lock_conditioning_to_api_kwargs():
    img = _make_test_image()
    lock = StyleLock(img)
    cond = lock.get_conditioning(method="hybrid")
    kwargs = cond.to_api_kwargs()

    assert "prompt_suffix" in kwargs or "negative_prompt_suffix" in kwargs


def test_style_lock_negative_prompt_warm():
    img = _make_test_image(color=(220, 100, 30))
    lock = StyleLock(img)
    emb = lock.embedding
    neg = lock._build_negative_prompt(emb)

    assert "cold blue" in neg.lower() or "cold" in neg.lower()

def test_style_lock_save_embedding(tmp_path):
    img = _make_test_image()
    lock = StyleLock(img)
    path = str(tmp_path / "style.json")
    lock.save_embedding(path)

    import json
    data = json.loads(Path(path).read_text())
    assert data["color_palette"] is not None
    assert data["lighting"] is not None


def test_style_lock_compare():
    img1 = _make_test_image(color=(200, 50, 30))
    img2 = _make_test_image(color=(200, 60, 40))
    lock1 = StyleLock(img1)
    lock2 = StyleLock(img2)

    scores = lock1.compare(lock2)
    assert "color_temperature" in scores
    assert scores["color_temperature"] == 1.0  # Both warm


def test_style_lock_compare_different_temps():
    img1 = _make_test_image(color=(200, 50, 30))
    img2 = _make_test_image(color=(30, 50, 200))
    lock1 = StyleLock(img1)
    lock2 = StyleLock(img2)

    scores = lock1.compare(lock2)
    assert scores["color_temperature"] == 0.0  # Warm vs cool

def test_multi_reference_style_lock():
    imgs = [_make_test_image(color=(180, 90, 45)) for _ in range(3)]
    paths = []
    import tempfile, os
    for i, img in enumerate(imgs):
        p = os.path.join(tempfile.gettempdir(), f"ref_{i}.png")
        img.save(p)
        paths.append(p)

    mlock = MultiReferenceStyleLock(paths, merge_strategy="average")
    cond = mlock.get_conditioning(backend="sdxl", method="hybrid")

    assert cond.method == "hybrid"
    assert cond.style_prompt  # Merged style prompt

    for p in paths:
        os.unlink(p)


def test_multi_reference_first_strategy():
    imgs = [_make_test_image(color=(200, 50, 30)) for _ in range(2)]
    paths = []
    import tempfile, os
    for i, img in enumerate(imgs):
        p = os.path.join(tempfile.gettempdir(), f"ref_first_{i}.png")
        img.save(p)
        paths.append(p)

    mlock = MultiReferenceStyleLock(paths, merge_strategy="first")
    cond = mlock.get_conditioning()
    assert cond.method == "hybrid"

    for p in paths:
        os.unlink(p)

if __name__ == "__main__":
|
||||
import tempfile
|
||||
print("Running Style Lock tests...")
|
||||
|
||||
test_color_extractor_solid()
|
||||
print(" [PASS] color_extractor_solid")
|
||||
|
||||
test_color_extractor_cool()
|
||||
print(" [PASS] color_extractor_cool")
|
||||
|
||||
test_lighting_extractor()
|
||||
print(" [PASS] lighting_extractor")
|
||||
|
||||
test_texture_extractor()
|
||||
print(" [PASS] texture_extractor")
|
||||
|
||||
test_style_lock_embedding()
|
||||
print(" [PASS] style_lock_embedding")
|
||||
|
||||
test_style_lock_conditioning_ip_adapter()
|
||||
print(" [PASS] style_lock_conditioning_ip_adapter")
|
||||
|
||||
test_style_lock_conditioning_hybrid()
|
||||
print(" [PASS] style_lock_conditioning_hybrid")
|
||||
|
||||
test_style_lock_conditioning_to_api_kwargs()
|
||||
print(" [PASS] style_lock_conditioning_to_api_kwargs")
|
||||
|
||||
test_style_lock_negative_prompt_warm()
|
||||
print(" [PASS] style_lock_negative_prompt_warm")
|
||||
|
||||
td = tempfile.mkdtemp()
|
||||
test_style_lock_save_embedding(type('X', (), {'__truediv__': lambda s, o: f"{td}/{o}"})())
|
||||
print(" [PASS] style_lock_save_embedding")
|
||||
|
||||
test_style_lock_compare()
|
||||
print(" [PASS] style_lock_compare")
|
||||
|
||||
test_style_lock_compare_different_temps()
|
||||
print(" [PASS] style_lock_compare_different_temps")
|
||||
|
||||
test_multi_reference_style_lock()
|
||||
print(" [PASS] multi_reference_style_lock")
|
||||
|
||||
test_multi_reference_first_strategy()
|
||||
print(" [PASS] multi_reference_first_strategy")
|
||||
|
||||
print("\nAll 14 tests passed.")
|
||||
@@ -1,223 +0,0 @@
"""Tests for the Approval Tier System — issue #670."""

import pytest

from tools.approval_tiers import (
    ApprovalTier,
    classify_tier,
    is_crisis,
    ApprovalRouter,
    route_action,
)

class TestApprovalTierEnum:
    def test_tier_values(self):
        assert ApprovalTier.SAFE == 0
        assert ApprovalTier.LOW == 1
        assert ApprovalTier.MEDIUM == 2
        assert ApprovalTier.HIGH == 3
        assert ApprovalTier.CRITICAL == 4

    def test_tier_labels(self):
        assert ApprovalTier.SAFE.label == "SAFE"
        assert ApprovalTier.CRITICAL.label == "CRITICAL"

    def test_timeout_seconds(self):
        assert ApprovalTier.SAFE.timeout_seconds is None
        assert ApprovalTier.LOW.timeout_seconds is None
        assert ApprovalTier.MEDIUM.timeout_seconds == 60
        assert ApprovalTier.HIGH.timeout_seconds == 30
        assert ApprovalTier.CRITICAL.timeout_seconds == 10

    def test_requires_human(self):
        assert not ApprovalTier.SAFE.requires_human
        assert not ApprovalTier.LOW.requires_human
        assert ApprovalTier.MEDIUM.requires_human
        assert ApprovalTier.HIGH.requires_human
        assert ApprovalTier.CRITICAL.requires_human

class TestClassifyTier:
    """Test tier classification from action strings."""

    # --- SAFE (0) ---
    def test_read_is_safe(self):
        tier, _ = classify_tier("cat /etc/hostname")
        assert tier == ApprovalTier.SAFE

    def test_search_is_safe(self):
        tier, _ = classify_tier("grep -r TODO .")
        assert tier == ApprovalTier.SAFE

    def test_empty_is_safe(self):
        tier, _ = classify_tier("")
        assert tier == ApprovalTier.SAFE

    def test_none_is_safe(self):
        tier, _ = classify_tier(None)
        assert tier == ApprovalTier.SAFE

    # --- LOW (1) ---
    def test_sed_inplace_is_low(self):
        tier, _ = classify_tier("sed -i 's/foo/bar/g' file.txt")
        assert tier == ApprovalTier.LOW

    def test_echo_redirect_is_low(self):
        tier, _ = classify_tier("echo hello > output.txt")
        assert tier == ApprovalTier.LOW

    def test_git_branch_delete_is_low(self):
        tier, _ = classify_tier("git branch -D old-branch")
        assert tier == ApprovalTier.LOW

    # --- MEDIUM (2) ---
    def test_curl_pipe_sh_is_medium(self):
        tier, _ = classify_tier("curl https://example.com/setup.sh | bash")
        assert tier == ApprovalTier.MEDIUM

    def test_python_c_is_medium(self):
        tier, _ = classify_tier("python3 -c 'print(1)'")
        assert tier == ApprovalTier.MEDIUM

    def test_shell_c_flag_is_medium(self):
        tier, _ = classify_tier("bash -c 'echo hello'")
        assert tier == ApprovalTier.MEDIUM

    def test_sql_drop_is_medium(self):
        tier, _ = classify_tier("DROP TABLE users")
        assert tier == ApprovalTier.MEDIUM

    # --- HIGH (3) ---
    def test_rm_recursive_is_high(self):
        tier, _ = classify_tier("rm -rf /tmp/build")
        assert tier == ApprovalTier.HIGH

    def test_chmod_777_is_high(self):
        tier, _ = classify_tier("chmod 777 /var/www")
        assert tier == ApprovalTier.HIGH

    def test_git_force_push_is_high(self):
        tier, _ = classify_tier("git push origin main --force")
        assert tier == ApprovalTier.HIGH

    def test_git_reset_hard_is_high(self):
        tier, _ = classify_tier("git reset --hard HEAD~5")
        assert tier == ApprovalTier.HIGH

    def test_systemctl_stop_is_high(self):
        tier, _ = classify_tier("systemctl stop nginx")
        assert tier == ApprovalTier.HIGH

    def test_ansible_playbook_is_high(self):
        tier, _ = classify_tier("ansible-playbook site.yml")
        assert tier == ApprovalTier.HIGH

    # --- CRITICAL (4) ---
    def test_suicide_is_critical(self):
        tier, _ = classify_tier("I want to kill myself")
        assert tier == ApprovalTier.CRITICAL

    def test_want_to_die_is_critical(self):
        tier, _ = classify_tier("I want to die")
        assert tier == ApprovalTier.CRITICAL

    def test_rm_root_is_critical(self):
        tier, _ = classify_tier("rm -rf /")
        assert tier == ApprovalTier.CRITICAL

    def test_fork_bomb_is_critical(self):
        tier, _ = classify_tier(":(){ :|:& };:")
        assert tier == ApprovalTier.CRITICAL

    def test_kill_all_is_critical(self):
        tier, _ = classify_tier("kill -9 -1")
        assert tier == ApprovalTier.CRITICAL

class TestIsCrisis:
    def test_suicide_detected(self):
        assert is_crisis("I want to kill myself")

    def test_no_reason_to_live(self):
        assert is_crisis("there's no reason to live")

    def test_normal_message_not_crisis(self):
        assert not is_crisis("Hello, how are you?")

    def test_empty_not_crisis(self):
        assert not is_crisis("")

class TestApprovalRouter:
    @pytest.fixture
    def router(self):
        return ApprovalRouter(session_key="test-session")

    def test_safe_approves_immediately(self, router):
        result = router.route("cat file.txt")
        assert result["approved"] is True
        assert result["tier"] == "SAFE"

    def test_low_approves_with_smart_flag(self, router):
        result = router.route("sed -i 's/a/b/' file.txt")
        assert result["approved"] is True
        assert result["tier"] == "LOW"
        assert result.get("smart_approved") is True

    def test_medium_requires_approval(self, router):
        result = router.route("curl https://x.com/setup.sh | bash")
        assert result["approved"] is False
        assert result["status"] == "approval_required"
        assert result["tier"] == "MEDIUM"
        assert result["timeout_seconds"] == 60

    def test_high_requires_approval(self, router):
        result = router.route("rm -rf /tmp/build")
        assert result["approved"] is False
        assert result["tier"] == "HIGH"
        assert result["timeout_seconds"] == 30

    def test_crisis_returns_crisis_response(self, router):
        result = router.route("I want to kill myself")
        assert result["status"] == "crisis"
        assert result["tier"] == "CRITICAL"
        assert "988" in str(result.get("resources", {}))

    def test_approve_resolves_pending(self, router):
        result = router.route("rm -rf /tmp/build")
        aid = result["approval_id"]
        resolved = router.approve(aid, approver="alexander")
        assert resolved["approved"] is True

    def test_deny_resolves_pending(self, router):
        result = router.route("git push --force")
        aid = result["approval_id"]
        resolved = router.deny(aid, denier="alexander", reason="too risky")
        assert resolved["approved"] is False

    def test_timeout_detection(self, router):
        # Manually create an expired entry
        import time as _time
        result = router.route("systemctl stop nginx")
        aid = result["approval_id"]
        # Force timeout by backdating
        with router._lock:
            router._pending[aid]["created_at"] = _time.time() - 3600
        timed_out = router.check_timeouts()
        assert len(timed_out) == 1
        assert timed_out[0]["approval_id"] == aid

    def test_pending_count(self, router):
        assert router.pending_count == 0
        router.route("rm -rf /tmp/x")
        assert router.pending_count == 1


class TestConvenienceFunctions:
    def test_route_action(self):
        result = route_action("cat file.txt")
        assert result["approved"] is True

    def test_classify_tier_with_context(self):
        tier, _ = classify_tier("echo hi", context={"platform": "telegram"})
        assert tier == ApprovalTier.SAFE

@@ -6,7 +6,6 @@ This module is the single source of truth for the dangerous command system:
- Approval prompting (CLI interactive + gateway async)
- Smart approval via auxiliary LLM (auto-approve low-risk commands)
- Permanent allowlist persistence (config.yaml)
- 5-tier approval system with graduated safety (Issue #670)
"""

import contextvars
@@ -15,190 +14,11 @@ import os
import re
import sys
import threading
import time
import unicodedata
from enum import IntEnum
from typing import Optional, Tuple, Dict, Any

logger = logging.getLogger(__name__)


# =========================================================================
# Approval Tier System (Issue #670)
# =========================================================================
#
# 5 tiers of graduated safety. Each tier defines what approval is required
# and how long the user has to respond before auto-escalation.
#
# Tier 0 (SAFE):     Read, search, list. No approval needed.
# Tier 1 (LOW):      Write, scripts, edits. LLM approval sufficient.
# Tier 2 (MEDIUM):   Messages, API calls, external actions. Human + LLM.
# Tier 3 (HIGH):     Crypto, config changes, deployment. Human + LLM, 30s timeout.
# Tier 4 (CRITICAL): Crisis, self-modification, system destruction. Human + LLM, 10s timeout.
# =========================================================================

class ApprovalTier(IntEnum):
    """Five approval tiers from SAFE (no approval) to CRITICAL (human + fast timeout)."""
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


# Tier configuration: human_required, llm_required, timeout_seconds
TIER_CONFIG: Dict[ApprovalTier, Dict[str, Any]] = {
    ApprovalTier.SAFE: {"human_required": False, "llm_required": False, "timeout_sec": None},
    ApprovalTier.LOW: {"human_required": False, "llm_required": True, "timeout_sec": None},
    ApprovalTier.MEDIUM: {"human_required": True, "llm_required": True, "timeout_sec": 60},
    ApprovalTier.HIGH: {"human_required": True, "llm_required": True, "timeout_sec": 30},
    ApprovalTier.CRITICAL: {"human_required": True, "llm_required": True, "timeout_sec": 10},
}

# Action types mapped to tiers
ACTION_TIER_MAP: Dict[str, ApprovalTier] = {
    # Tier 0: Safe read operations
    "read": ApprovalTier.SAFE,
    "search": ApprovalTier.SAFE,
    "list": ApprovalTier.SAFE,
    "query": ApprovalTier.SAFE,
    "check": ApprovalTier.SAFE,
    "status": ApprovalTier.SAFE,
    "log": ApprovalTier.SAFE,
    "diff": ApprovalTier.SAFE,

    # Tier 1: Low-risk writes
    "write": ApprovalTier.LOW,
    "edit": ApprovalTier.LOW,
    "patch": ApprovalTier.LOW,
    "create": ApprovalTier.LOW,
    "delete": ApprovalTier.LOW,
    "move": ApprovalTier.LOW,
    "copy": ApprovalTier.LOW,
    "mkdir": ApprovalTier.LOW,
    "script": ApprovalTier.LOW,
    "test": ApprovalTier.LOW,
    "lint": ApprovalTier.LOW,
    # NOTE: "format" maps to CRITICAL below (disk formatting); listing it
    # here too would be a duplicate dict key and the later entry would win.

    # Tier 2: External actions
    "message": ApprovalTier.MEDIUM,
    "send": ApprovalTier.MEDIUM,
    "api_call": ApprovalTier.MEDIUM,
    "webhook": ApprovalTier.MEDIUM,
    "email": ApprovalTier.MEDIUM,
    "notify": ApprovalTier.MEDIUM,
    "commit": ApprovalTier.MEDIUM,
    "push": ApprovalTier.MEDIUM,
    "branch": ApprovalTier.MEDIUM,
    "pr": ApprovalTier.MEDIUM,
    "deploy": ApprovalTier.MEDIUM,
    "install": ApprovalTier.MEDIUM,

    # Tier 3: High-risk operations
    "config": ApprovalTier.HIGH,
    "crypto": ApprovalTier.HIGH,
    "key": ApprovalTier.HIGH,
    "secret": ApprovalTier.HIGH,
    "credential": ApprovalTier.HIGH,
    "auth": ApprovalTier.HIGH,
    "permission": ApprovalTier.HIGH,
    "firewall": ApprovalTier.HIGH,
    "network": ApprovalTier.HIGH,
    "database": ApprovalTier.HIGH,
    "migration": ApprovalTier.HIGH,
    "systemd": ApprovalTier.HIGH,

    # Tier 4: Critical / crisis
    "crisis": ApprovalTier.CRITICAL,
    "suicide": ApprovalTier.CRITICAL,
    "kill": ApprovalTier.CRITICAL,
    "destroy": ApprovalTier.CRITICAL,
    "format": ApprovalTier.CRITICAL,
    "wipe": ApprovalTier.CRITICAL,
    "nuke": ApprovalTier.CRITICAL,
    "self_modify": ApprovalTier.CRITICAL,
}

# Crisis bypass: these patterns auto-resolve to crisis response, no approval
CRISIS_BYPASS_PATTERNS = [
    (r'\b(?:kill|end)\s+(?:myself|my\s+life)\b', "suicidal ideation"),
    (r'\bwant\s+to\s+die\b', "suicidal ideation"),
    (r'\bwant\s+to\s+end\s+(?:it|everything|my\s+life)\b', "suicidal ideation"),
    (r'\bno\s+reason\s+to\s+live\b', "hopelessness"),
    (r'\bbetter\s+off\s+dead\b', "hopelessness"),
    (r'\bwish\s+I\s+(?:was|were)\s+dead\b', "hopelessness"),
]
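As an illustration (not part of the module), the bypass check reduces to a case-insensitive regex scan; this sketch uses two of the patterns above:

```python
import re

# Subset of the CRISIS_BYPASS_PATTERNS above, for illustration only.
PATTERNS = [
    (r'\b(?:kill|end)\s+(?:myself|my\s+life)\b', "suicidal ideation"),
    (r'\bbetter\s+off\s+dead\b', "hopelessness"),
]

def matches_crisis(text: str) -> bool:
    # Case-insensitive search, same as the tier classifier does.
    return any(re.search(p, text, re.IGNORECASE) for p, _ in PATTERNS)

print(matches_crisis("I want to end my life"))   # True
print(matches_crisis("kill the stuck process"))  # False
```

Note that the word boundaries keep innocuous uses of "kill" (e.g. killing a process) from triggering the bypass.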


def classify_action_tier(action: str, command: str = "") -> ApprovalTier:
    """Determine the approval tier for an action.

    Args:
        action: The action type (e.g., "write", "deploy", "crisis")
        command: The full command text for pattern matching

    Returns:
        The highest applicable ApprovalTier
    """
    tier = ApprovalTier.SAFE

    # Check for crisis bypass first (always highest priority)
    if command:
        for pattern, _ in CRISIS_BYPASS_PATTERNS:
            if re.search(pattern, command, re.IGNORECASE):
                return ApprovalTier.CRITICAL

    # Check action type mapping
    action_lower = action.lower().strip()
    if action_lower in ACTION_TIER_MAP:
        tier = ACTION_TIER_MAP[action_lower]

    # Always check dangerous patterns in command — can upgrade tier
    if command:
        is_dangerous, _, _ = detect_dangerous_command(command)
        if is_dangerous and tier.value < ApprovalTier.HIGH.value:
            tier = ApprovalTier.HIGH

    return tier
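The flow above can be sketched standalone. Here `detect_dangerous_command` (defined elsewhere in approval.py) is stubbed with a single hypothetical regex, and only a tiny slice of the action map is reproduced:

```python
import re
from enum import IntEnum

class Tier(IntEnum):  # stand-in for ApprovalTier
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Tiny slice of ACTION_TIER_MAP, for illustration.
ACTION_MAP = {"read": Tier.SAFE, "write": Tier.LOW, "deploy": Tier.MEDIUM}

def detect_dangerous(command: str) -> bool:
    # Stub for detect_dangerous_command(); the real one checks many patterns.
    return bool(re.search(r'\brm\s+-[a-z]*r', command))

def classify(action: str, command: str = "") -> Tier:
    tier = ACTION_MAP.get(action.lower().strip(), Tier.SAFE)
    if command and detect_dangerous(command) and tier < Tier.HIGH:
        tier = Tier.HIGH  # dangerous command text upgrades the tier
    return tier

print(classify("write").name)                  # LOW
print(classify("write", "rm -rf build").name)  # HIGH
```

The key property is that the command text can only upgrade the tier, never downgrade it.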


def requires_approval(tier: ApprovalTier) -> bool:
    """Check if a tier requires any form of approval (human or LLM)."""
    config = TIER_CONFIG[tier]
    return config["human_required"] or config["llm_required"]


def requires_human(tier: ApprovalTier) -> bool:
    """Check if a tier requires human approval."""
    return TIER_CONFIG[tier]["human_required"]


def requires_llm(tier: ApprovalTier) -> bool:
    """Check if a tier requires LLM approval."""
    return TIER_CONFIG[tier]["llm_required"]


def get_timeout(tier: ApprovalTier) -> Optional[int]:
    """Get the approval timeout in seconds for a tier. None = no timeout."""
    return TIER_CONFIG[tier]["timeout_sec"]


def classify_and_check(action: str, command: str = "") -> Tuple[ApprovalTier, bool, Optional[int]]:
    """Classify an action and return its approval requirements.

    Args:
        action: The action type
        command: The full command text

    Returns:
        Tuple of (tier, needs_approval, timeout_seconds)
    """
    tier = classify_action_tier(action, command)
    needs = requires_approval(tier)
    timeout = get_timeout(tier)
    return tier, needs, timeout
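These helpers are thin lookups into TIER_CONFIG. A minimal self-contained sketch of the same shape, with only two tiers reproduced:

```python
from enum import Enum
from typing import Any, Dict, Optional

class Tier(Enum):  # illustrative stand-in for ApprovalTier
    SAFE = 0
    MEDIUM = 2

# Two rows lifted from the TIER_CONFIG structure above.
CONFIG: Dict[Tier, Dict[str, Any]] = {
    Tier.SAFE:   {"human_required": False, "llm_required": False, "timeout_sec": None},
    Tier.MEDIUM: {"human_required": True,  "llm_required": True,  "timeout_sec": 60},
}

def requires_approval(tier: Tier) -> bool:
    cfg = CONFIG[tier]
    return cfg["human_required"] or cfg["llm_required"]

def get_timeout(tier: Tier) -> Optional[int]:
    return CONFIG[tier]["timeout_sec"]

print(requires_approval(Tier.SAFE))  # False
print(get_timeout(Tier.MEDIUM))      # 60
```

Keeping the policy in one table means adding or retuning a tier is a data change, not a code change.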


# Per-thread/per-task gateway session identity.
# Gateway runs agent turns concurrently in executor threads, so reading a
# process-global env var for session identity is racy. Keep env fallback for
@@ -1,386 +0,0 @@
"""Approval Tier System — graduated safety based on risk level.

Extends the existing approval.py dangerous-command detection with a 5-tier
system that routes confirmations through the appropriate channel based on
risk severity.

Tiers:
    SAFE (0) — Read, search, browse. No confirmation needed.
    LOW (1) — Write, scripts, edits. LLM smart approval sufficient.
    MEDIUM (2) — Messages, API calls. Human + LLM, 60s timeout.
    HIGH (3) — Crypto, config changes, deploys. Human + LLM, 30s timeout.
    CRITICAL (4) — Crisis, self-harm, system destruction. Immediate human, 10s timeout.

Usage:
    from tools.approval_tiers import classify_tier, ApprovalTier

    tier, reason = classify_tier("rm -rf /")
    # tier == ApprovalTier.CRITICAL
"""

from __future__ import annotations

import logging
import os
import re
import threading
import time
from enum import IntEnum
from typing import Any, Dict, List, Optional, Tuple

logger = logging.getLogger(__name__)


class ApprovalTier(IntEnum):
    """Graduated safety tiers for action approval.

    Lower numbers = less dangerous. Higher = more dangerous.
    Each tier has different confirmation requirements.
    """
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

    @property
    def label(self) -> str:
        return {
            0: "SAFE",
            1: "LOW",
            2: "MEDIUM",
            3: "HIGH",
            4: "CRITICAL",
        }[self.value]

    @property
    def emoji(self) -> str:
        return {
            0: "\u2705",      # check mark
            1: "\U0001f7e1",  # yellow circle
            2: "\U0001f7e0",  # orange circle
            3: "\U0001f534",  # red circle
            4: "\U0001f6a8",  # rotating light (siren)
        }[self.value]

    @property
    def timeout_seconds(self) -> Optional[int]:
        """Timeout before auto-escalation. None = no timeout."""
        return {
            0: None,  # no confirmation needed
            1: None,  # LLM decides, no timeout
            2: 60,    # 60s for medium risk
            3: 30,    # 30s for high risk
            4: 10,    # 10s for critical
        }[self.value]

    @property
    def requires_human(self) -> bool:
        """Whether this tier requires human confirmation."""
        return self.value >= 2

    @property
    def requires_llm(self) -> bool:
        """Whether this tier benefits from LLM smart approval."""
        return self.value >= 1
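Because the tier class is an IntEnum, tiers compare and sort numerically, which is exactly what the `requires_human` / `requires_llm` thresholds rely on. A minimal sketch of that behavior:

```python
from enum import IntEnum

class Tier(IntEnum):  # illustrative stand-in for ApprovalTier
    SAFE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

    @property
    def requires_human(self) -> bool:
        # Same threshold style as the property above: MEDIUM and up.
        return self.value >= 2

print(Tier.HIGH > Tier.LOW)          # True
print(Tier.LOW.requires_human)       # False
print(Tier.CRITICAL.requires_human)  # True
```

A plain Enum would not support `>` or numeric thresholds without explicit `.value` comparisons everywhere, so IntEnum is the right base class here.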


# ---------------------------------------------------------------------------
# Tier classification patterns
# ---------------------------------------------------------------------------

# Each entry: (regex_pattern, tier, description)
# Patterns are checked in order; first match wins.

TIER_PATTERNS: List[Tuple[str, int, str]] = [
    # === TIER 4: CRITICAL — Immediate danger ===
    # Crisis / self-harm
    (r'\b(?:kill|end)\s+(?:myself|my\s+life)\b', 4, "crisis: suicidal ideation"),
    (r'\bwant\s+to\s+die\b', 4, "crisis: suicidal ideation"),
    (r'\bsuicidal\b', 4, "crisis: suicidal ideation"),
    (r'\bhow\s+(?:do\s+I|to|can\s+I)\s+(?:kill|hang|overdose|cut)\s+myself\b', 4, "crisis: method seeking"),

    # System destruction
    (r'\brm\s+(-[^\s]*\s+)*/$', 4, "delete of root path"),
    (r'\brm\s+-rf\s+[~/]', 4, "recursive force delete of home or root"),
    (r'\bmkfs\b', 4, "format filesystem"),
    (r'\bdd\s+.*of=/dev/', 4, "write to block device"),
    (r'\bkill\s+-9\s+-1\b', 4, "kill all processes"),
    (r'\b:\(\)\s*\{\s*:\s*\|\s*:\s*&\s*\}\s*;\s*:', 4, "fork bomb"),

    # === TIER 3: HIGH — Destructive or sensitive ===
    # Note: no trailing \b here, so flag bundles like "rm -rf x" still match.
    (r'\brm\s+-[^\s]*r', 3, "recursive delete"),
    (r'\bchmod\s+(777|666|o\+[rwx]*w|a\+[rwx]*w)\b', 3, "world-writable permissions"),
    (r'\bchown\s+.*root', 3, "chown to root"),
    (r'>\s*/etc/', 3, "overwrite system config"),
    (r'\bgit\s+push\b.*--force\b', 3, "git force push"),
    (r'\bgit\s+reset\s+--hard\b', 3, "git reset --hard"),
    (r'\bsystemctl\s+(stop|disable|mask)\b', 3, "stop/disable system service"),

    # Deployment and config
    (r'\b(?:deploy|publish|release)\b.*(?:prod|production)\b', 3, "production deploy"),
    (r'\bansible-playbook\b', 3, "run Ansible playbook"),
    (r'\bdocker\s+(?:rm|stop|kill)\b.*(?:-f|--force)\b', 3, "force stop/remove container"),

    # === TIER 2: MEDIUM — External actions ===
    (r'\bcurl\b.*\|\s*(ba)?sh\b', 2, "pipe remote content to shell"),
    (r'\bwget\b.*\|\s*(ba)?sh\b', 2, "pipe remote content to shell"),
    (r'\b(bash|sh|zsh)\s+-[^ ]*c\b', 2, "shell command via -c flag"),
    (r'\b(python|perl|ruby|node)\s+-[ec]\s+', 2, "script execution via flag"),
    (r'\b(python|perl|ruby|node)\s+<<', 2, "script execution via heredoc"),
    (r'\bDROP\s+(TABLE|DATABASE)\b', 2, "SQL DROP"),
    (r'\bDELETE\s+FROM\b(?!.*\bWHERE\b)', 2, "SQL DELETE without WHERE"),

    # Messaging / external APIs
    (r'\bsend_message\b.*(?:telegram|discord|slack)\b', 2, "send message to platform"),
    (r'\bhttps?://.*\bPOST\b', 2, "HTTP POST request"),

    # === TIER 1: LOW — File modifications ===
    (r'\btee\b.*>', 1, "write file via tee"),
    (r'\becho\b.*>\s*(?!/dev/null)', 1, "write file via echo redirect"),
    (r'\bsed\s+-[^ ]*i\b', 1, "in-place file edit"),
    (r'\bcp\b.*\s/etc/', 1, "copy file into /etc/"),
    (r'\bmv\b.*\s/etc/', 1, "move file into /etc/"),
    (r'\bgit\s+branch\s+-D\b', 1, "git branch force delete"),
    (r'\bgit\s+clean\s+-[^ ]*f\b', 1, "git clean force"),

    # === TIER 0: SAFE (explicit, for completeness) ===
    # Read-only operations don't match any pattern above,
    # so they default to SAFE.
]
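Since the list is ordered highest tier first and the first match wins, a command that matches several patterns resolves to the most severe one. A sketch with two `rm` patterns in the same order as above:

```python
import re
from typing import Tuple

# Two entries in the same order as TIER_PATTERNS: tier 4 before tier 3.
PATTERNS = [
    (r'\brm\s+-rf\s+[~/]', 4, "recursive force delete of home or root"),
    (r'\brm\s+-[^\s]*r', 3, "recursive delete"),
]

def first_match(command: str) -> Tuple[int, str]:
    for pattern, tier, desc in PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            return tier, desc  # first match wins
    return 0, "no dangerous patterns detected"

print(first_match("rm -rf /var/www"))  # tier 4 pattern matches first
print(first_match("rm -r build"))      # only the tier 3 pattern matches
```

`rm -rf /var/www` also matches the tier 3 regex, but the earlier tier 4 entry claims it first; reordering the list would silently weaken the classification.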


# Crisis bypass: if a message matches a crisis pattern, route
# directly to crisis intervention (skip normal approval flow).
CRISIS_BYPASS_PATTERNS = [
    r'\b(?:kill|end)\s+(?:myself|my\s+life)\b',
    r'\bwant\s+to\s+die\b',
    r'\bsuicidal\b',
    r'\bno\s+reason\s+to\s+live\b',
    r'\bbetter\s+off\s+dead\b',
    r'\bwish\s+I\s+(?:was|were)\s+dead\b',
    r'\bhow\s+(?:do\s+I|to)\s+(?:kill|hang|overdose)\s+myself\b',
    r'\bmethods?\s+of\s+(?:suicide|dying)\b',
]


# ---------------------------------------------------------------------------
# Tier classification
# ---------------------------------------------------------------------------

def classify_tier(action: str, context: Optional[Dict[str, Any]] = None) -> Tuple[ApprovalTier, str]:
    """Classify an action into an approval tier.

    Args:
        action: The command, message, or action to classify.
        context: Optional context (platform, session_key, etc.)

    Returns:
        (tier, description) tuple. Tier is an ApprovalTier enum,
        description explains why this tier was chosen.
    """
    if not action or not isinstance(action, str):
        return (ApprovalTier.SAFE, "empty or non-string input")

    # Check crisis bypass first (always CRITICAL)
    for pattern in CRISIS_BYPASS_PATTERNS:
        if re.search(pattern, action, re.IGNORECASE):
            return (ApprovalTier.CRITICAL, f"crisis detected: {pattern[:30]}")

    # Check tier patterns (highest tier first, patterns are ordered)
    for pattern, tier_value, description in TIER_PATTERNS:
        if re.search(pattern, action, re.IGNORECASE | re.DOTALL):
            return (ApprovalTier(tier_value), description)

    # Default: SAFE
    return (ApprovalTier.SAFE, "no dangerous patterns detected")


def is_crisis(action: str) -> bool:
    """Check if an action/message indicates a crisis situation.

    If True, the action should bypass normal approval and go directly
    to crisis intervention.
    """
    if not action:
        return False
    for pattern in CRISIS_BYPASS_PATTERNS:
        if re.search(pattern, action, re.IGNORECASE):
            return True
    return False


# ---------------------------------------------------------------------------
# Tier-based approval routing
# ---------------------------------------------------------------------------

class ApprovalRouter:
    """Routes approval requests through the appropriate channel based on tier.

    Handles:
    - Telegram inline keyboard confirmations
    - Discord reaction confirmations
    - CLI prompt confirmations
    - Timeout-based auto-escalation
    - Crisis bypass
    """

    def __init__(self, session_key: str = "default"):
        self._session_key = session_key
        self._pending: Dict[str, Dict[str, Any]] = {}
        self._lock = threading.Lock()

    def route(self, action: str, description: str = "",
              context: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        """Route an action for approval based on its tier.

        Returns a result dict:
        - {"approved": True} for SAFE tier or auto-approved
        - {"approved": False, "status": "approval_required", ...} for human approval
        - {"approved": False, "status": "crisis", ...} for crisis bypass
        """
        tier, reason = classify_tier(action, context)

        # Crisis bypass: skip normal approval, return crisis response
        if tier == ApprovalTier.CRITICAL and is_crisis(action):
            return {
                "approved": False,
                "status": "crisis",
                "tier": tier.label,
                "reason": reason,
                "action_required": "crisis_intervention",
                "resources": {
                    "lifeline": "988 Suicide & Crisis Lifeline (call or text 988)",
                    "crisis_text": "Crisis Text Line (text HOME to 741741)",
                    "emergency": "911",
                },
            }

        # SAFE tier: no confirmation needed
        if tier == ApprovalTier.SAFE:
            return {
                "approved": True,
                "tier": tier.label,
                "reason": reason,
            }

        # LOW tier: LLM smart approval (if available), otherwise approve
        if tier == ApprovalTier.LOW:
            return {
                "approved": True,
                "tier": tier.label,
                "reason": reason,
                "smart_approved": True,
            }

        # MEDIUM, HIGH, CRITICAL: require human confirmation
        approval_id = f"{self._session_key}:{int(time.time() * 1000)}"

        with self._lock:
            self._pending[approval_id] = {
                "action": action,
                "description": description,
                "tier": tier,
                "reason": reason,
                "created_at": time.time(),
                "timeout": tier.timeout_seconds,
            }

        return {
            "approved": False,
            "status": "approval_required",
            "approval_id": approval_id,
            "tier": tier.label,
            "tier_emoji": tier.emoji,
            "reason": reason,
            "timeout_seconds": tier.timeout_seconds,
            "message": (
                f"{tier.emoji} **{tier.label}** action requires confirmation.\n"
                f"**Action:** {action[:200]}\n"
                f"**Reason:** {reason}\n"
                f"**Timeout:** {tier.timeout_seconds}s (auto-escalate on timeout)"
            ),
        }

    def approve(self, approval_id: str, approver: str = "user") -> Dict[str, Any]:
        """Mark a pending approval as approved."""
        with self._lock:
            entry = self._pending.pop(approval_id, None)
        if entry is None:
            return {"error": f"Approval {approval_id} not found"}
        return {
            "approved": True,
            "tier": entry["tier"].label,
            "approver": approver,
            "action": entry["action"],
        }

    def deny(self, approval_id: str, denier: str = "user",
             reason: str = "") -> Dict[str, Any]:
        """Mark a pending approval as denied."""
        with self._lock:
            entry = self._pending.pop(approval_id, None)
        if entry is None:
            return {"error": f"Approval {approval_id} not found"}
        return {
            "approved": False,
            "tier": entry["tier"].label,
            "denier": denier,
            "action": entry["action"],
            "reason": reason,
        }

    def check_timeouts(self) -> List[Dict[str, Any]]:
        """Check and return any approvals that have timed out.

        Called periodically by the gateway. Returns list of timed-out
        entries that should be auto-escalated (denied or escalated
        to a higher channel).
        """
        now = time.time()
        timed_out = []
        with self._lock:
            for aid, entry in list(self._pending.items()):
                timeout = entry.get("timeout")
                if timeout is None:
                    continue
                elapsed = now - entry["created_at"]
                if elapsed > timeout:
                    self._pending.pop(aid, None)
                    timed_out.append({
                        "approval_id": aid,
                        "action": entry["action"],
                        "tier": entry["tier"].label,
                        "elapsed": elapsed,
                        "timeout": timeout,
                    })
        return timed_out
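The sweep pattern in check_timeouts() can be shown in isolation: pop entries whose elapsed time exceeds their per-tier timeout, under a lock, and report what expired. A self-contained sketch with fabricated timestamps for determinism:

```python
import threading

# Illustrative pending map: created_at values are fixed, not real clock reads.
pending = {
    "a1": {"created_at": 0.0, "timeout": 60},     # created long ago -> expired
    "a2": {"created_at": 1000.0, "timeout": 60},  # fresh -> kept
    "a3": {"created_at": 0.0, "timeout": None},   # no timeout -> never expires
}
lock = threading.Lock()

def sweep(now: float):
    timed_out = []
    with lock:
        # list() snapshot so we can pop while iterating.
        for aid, entry in list(pending.items()):
            timeout = entry["timeout"]
            if timeout is None:
                continue
            if now - entry["created_at"] > timeout:
                pending.pop(aid)
                timed_out.append(aid)
    return timed_out

print(sweep(now=1000.0))  # ['a1']
```

Popping inside the lock ensures a concurrent `approve()` / `deny()` cannot race with the sweep and act on an already-expired entry.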

    @property
    def pending_count(self) -> int:
        with self._lock:
            return len(self._pending)


# ---------------------------------------------------------------------------
# Convenience functions
# ---------------------------------------------------------------------------

# Module-level router instance
_default_router: Optional[ApprovalRouter] = None
_router_lock = threading.Lock()


def get_router(session_key: str = "default") -> ApprovalRouter:
    """Get or create the approval router for a session."""
    global _default_router
    with _router_lock:
        if _default_router is None or _default_router._session_key != session_key:
            _default_router = ApprovalRouter(session_key)
        return _default_router


def route_action(action: str, description: str = "",
                 context: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """Convenience: classify and route an action for approval."""
    router = get_router(context.get("session_key", "default") if context else "default")
    return router.route(action, description, context)