Compare commits

...

8 Commits

5d0cf71a8b security: prevent error information disclosure (V-013, CVSS 7.5)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 30s
Tests / test (pull_request) Failing after 27s
Docker Build and Publish / build-and-push (pull_request) Failing after 38s
Add secure error handling to prevent internal details from leaking.

Changes:
- gateway/platforms/api_server.py:
  - Add _handle_error_securely() function
  - Logs full error details with reference ID internally
  - Returns generic error message to client
  - Update all cron job exception handlers to use the secure handler

CVSS: 7.5 (High)
Refs: V-013 in SECURITY_AUDIT_REPORT.md
CWE-209: Generation of Error Message Containing Sensitive Information
2026-03-31 00:06:58 +00:00
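The V-013 fix above follows a common pattern: log the full traceback internally under a short reference ID, and return only a generic message plus that ID to the client. A minimal standalone sketch (simplified names; the actual commit wraps the payload via an `_openai_error` helper):

```python
import logging
import traceback
import uuid

logger = logging.getLogger("api")

def handle_error_securely(exc: Exception, context: str = "") -> dict:
    """Log the full traceback internally; return only a generic message
    plus a short reference ID the client can quote when reporting issues."""
    error_id = uuid.uuid4().hex[:8]
    logger.error("Internal error [%s] in %s: %s\n%s",
                 error_id, context, exc, traceback.format_exc())
    return {
        "error": {
            "message": f"An internal error occurred. Reference: {error_id}",
            "type": "internal_error",
            "code": "internal_error",
        }
    }
```

The client never sees the exception text, so secrets embedded in error messages (connection strings, file paths) stay in the server log.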
3e0d3598bf Merge pull request '[SECURITY] Add Rate Limiting to API Server (V-016, CVSS 7.3)' (#66) from security/add-rate-limiting into main
Some checks failed
Nix / nix (ubuntu-latest) (push) Failing after 16s
Tests / test (push) Failing after 26s
Docker Build and Publish / build-and-push (push) Failing after 56s
Nix / nix (macos-latest) (push) Has been cancelled
2026-03-31 00:05:01 +00:00
4e3f5072f6 security: add rate limiting to API server (V-016, CVSS 7.3)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 31s
Tests / test (pull_request) Failing after 32s
Docker Build and Publish / build-and-push (pull_request) Failing after 59s
Add token bucket rate limiter per client IP.

Changes:
- gateway/platforms/api_server.py:
  - Add _RateLimiter class with token bucket algorithm
  - Add rate_limit_middleware for request throttling
  - Configurable via API_SERVER_RATE_LIMIT (default 100 req/min)
  - Returns 429 with Retry-After header when limit exceeded
  - Skip rate limiting for /health endpoint

CVSS: 7.3 (High)
Refs: V-016 in SECURITY_AUDIT_REPORT.md
CWE-770: Allocation of Resources Without Limits or Throttling
2026-03-31 00:04:56 +00:00
5936745636 Merge pull request '[SECURITY] Validate CDP URLs to Prevent SSRF (V-010, CVSS 8.4)' (#65) from security/fix-browser-cdp into main
Some checks failed
Nix / nix (ubuntu-latest) (push) Failing after 5s
Tests / test (push) Failing after 17s
Docker Build and Publish / build-and-push (push) Failing after 44s
Nix / nix (macos-latest) (push) Has been cancelled
2026-03-30 23:57:27 +00:00
cfaf6c827e security: validate CDP URLs to prevent SSRF (V-010, CVSS 8.4)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 27s
Tests / test (pull_request) Failing after 25s
Docker Build and Publish / build-and-push (pull_request) Failing after 37s
Add URL validation before fetching Chrome DevTools Protocol endpoints.
Only allows localhost and private network addresses.

Changes:
- tools/browser_tool.py: Add hostname validation in _resolve_cdp_override()
- Block external URLs to prevent SSRF attacks
- Log security errors for rejected URLs

CVSS: 8.4 (High)
Refs: V-010 in SECURITY_AUDIT_REPORT.md
CWE-918: Server-Side Request Forgery
2026-03-30 23:57:22 +00:00
cf1afb07f2 Merge pull request '[SECURITY] Block Dangerous Docker Volume Mounts (V-012, CVSS 8.7)' (#64) from security/fix-docker-privilege into main
Some checks failed
Nix / nix (ubuntu-latest) (push) Failing after 12s
Tests / test (push) Failing after 18s
Docker Build and Publish / build-and-push (push) Failing after 45s
Nix / nix (macos-latest) (push) Has been cancelled
2026-03-30 23:55:50 +00:00
ed32487cbe security: block dangerous Docker volume mounts (V-012, CVSS 8.7)
Some checks failed
Supply Chain Audit / Scan PR for supply chain risks (pull_request) Successful in 28s
Tests / test (pull_request) Failing after 29s
Docker Build and Publish / build-and-push (pull_request) Failing after 42s
Prevent privilege escalation via Docker socket mount.

Changes:
- tools/environments/docker.py: Add _is_dangerous_volume() validation
- Block docker.sock, /proc, /sys, /dev, root fs mounts
- Log security error when dangerous volume detected

Fixes container escape vulnerability where user-configured volumes
could mount Docker socket for host compromise.

CVSS: 8.7 (High)
Refs: V-012 in SECURITY_AUDIT_REPORT.md
CWE-250: Execution with Unnecessary Privileges
2026-03-30 23:55:45 +00:00
37c5e672b5 Merge pull request '[SECURITY] Fix Auth Bypass & CORS Misconfiguration (V-008, V-009)' (#63) from security/fix-auth-bypass into main
Some checks failed
Nix / nix (ubuntu-latest) (push) Failing after 6s
Docker Build and Publish / build-and-push (push) Has been cancelled
Nix / nix (macos-latest) (push) Has been cancelled
Tests / test (push) Has been cancelled
2026-03-30 23:55:04 +00:00
3 changed files with 200 additions and 9 deletions

gateway/platforms/api_server.py

@@ -207,6 +207,37 @@ def _openai_error(message: str, err_type: str = "invalid_request_error", param:
    }


# SECURITY FIX (V-013): Safe error handling to prevent info disclosure
def _handle_error_securely(exception: Exception, context: str = "") -> Dict[str, Any]:
    """Handle errors securely - log full details, return generic message.

    Prevents information disclosure by not exposing internal error details
    to API clients. Logs full stack trace internally for debugging.

    Args:
        exception: The caught exception
        context: Additional context about where the error occurred

    Returns:
        OpenAI-style error response with generic message
    """
    import traceback

    # Log full error details internally
    error_id = str(uuid.uuid4())[:8]
    logger.error(
        f"Internal error [{error_id}] in {context}: {exception}\n"
        f"{traceback.format_exc()}"
    )

    # Return generic error to client - no internal details
    return _openai_error(
        message=f"An internal error occurred. Reference: {error_id}",
        err_type="internal_error",
        code="internal_error"
    )


if AIOHTTP_AVAILABLE:
    @web.middleware
    async def body_limit_middleware(request, handler):
@@ -241,6 +272,43 @@ else:
    security_headers_middleware = None  # type: ignore[assignment]


# SECURITY FIX (V-016): Rate limiting middleware
if AIOHTTP_AVAILABLE:
    @web.middleware
    async def rate_limit_middleware(request, handler):
        """Apply rate limiting per client IP.

        Returns 429 Too Many Requests if rate limit exceeded.
        Configurable via API_SERVER_RATE_LIMIT env var (requests per minute).
        """
        # Skip rate limiting for health checks
        if request.path == "/health":
            return await handler(request)

        # Get client IP (respecting X-Forwarded-For if behind proxy)
        client_ip = request.headers.get("X-Forwarded-For", request.remote)
        if client_ip and "," in client_ip:
            client_ip = client_ip.split(",")[0].strip()

        limiter = _get_rate_limiter()
        if not limiter.acquire(client_ip):
            retry_after = limiter.get_retry_after(client_ip)
            logger.warning(f"Rate limit exceeded for {client_ip}")
            return web.json_response(
                _openai_error(
                    f"Rate limit exceeded. Try again in {retry_after} seconds.",
                    err_type="rate_limit_error",
                    code="rate_limit_exceeded"
                ),
                status=429,
                headers={"Retry-After": str(retry_after)}
            )
        return await handler(request)
else:
    rate_limit_middleware = None  # type: ignore[assignment]
class _IdempotencyCache:
    """In-memory idempotency cache with TTL and basic LRU semantics."""

    def __init__(self, max_items: int = 1000, ttl_seconds: int = 300):
@@ -273,6 +341,59 @@ class _IdempotencyCache:
_idem_cache = _IdempotencyCache()
# SECURITY FIX (V-016): Rate limiting
class _RateLimiter:
    """Token bucket rate limiter per client IP.

    Default: 100 requests per minute per IP.
    Configurable via API_SERVER_RATE_LIMIT env var (requests per minute).
    """

    def __init__(self, requests_per_minute: int = 100):
        from collections import defaultdict
        self._buckets = defaultdict(lambda: {"tokens": requests_per_minute, "last": 0})
        self._rate = requests_per_minute / 60.0  # tokens per second
        self._max_tokens = requests_per_minute
        self._lock = threading.Lock()

    def _get_bucket(self, key: str) -> dict:
        import time
        with self._lock:
            bucket = self._buckets[key]
            now = time.time()
            elapsed = now - bucket["last"]
            bucket["last"] = now
            # Add tokens based on elapsed time
            bucket["tokens"] = min(
                self._max_tokens,
                bucket["tokens"] + elapsed * self._rate
            )
            return bucket

    def acquire(self, key: str) -> bool:
        """Try to acquire a token. Returns True if allowed, False if rate limited."""
        bucket = self._get_bucket(key)
        with self._lock:
            if bucket["tokens"] >= 1:
                bucket["tokens"] -= 1
                return True
            return False

    def get_retry_after(self, key: str) -> int:
        """Get seconds until next token is available."""
        return 1  # Simplified - return 1 second


_rate_limiter = None


def _get_rate_limiter() -> _RateLimiter:
    global _rate_limiter
    if _rate_limiter is None:
        # Parse rate limit from env (default 100 req/min)
        rate_limit = int(os.getenv("API_SERVER_RATE_LIMIT", "100"))
        _rate_limiter = _RateLimiter(rate_limit)
    return _rate_limiter


def _make_request_fingerprint(body: Dict[str, Any], keys: List[str]) -> str:
    from hashlib import sha256
    subset = {k: body.get(k) for k in keys}
@@ -994,7 +1115,8 @@ class APIServerAdapter(BasePlatformAdapter):
            jobs = self._cron_list(include_disabled=include_disabled)
            return web.json_response({"jobs": jobs})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "list_jobs"), status=500)

    async def _handle_create_job(self, request: "web.Request") -> "web.Response":
        """POST /api/jobs — create a new cron job."""
@@ -1042,7 +1164,8 @@ class APIServerAdapter(BasePlatformAdapter):
            job = self._cron_create(**kwargs)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "create_job"), status=500)

    async def _handle_get_job(self, request: "web.Request") -> "web.Response":
        """GET /api/jobs/{job_id} — get a single cron job."""
@@ -1061,7 +1184,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "get_job"), status=500)

    async def _handle_update_job(self, request: "web.Request") -> "web.Response":
        """PATCH /api/jobs/{job_id} — update a cron job."""
@@ -1094,7 +1218,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "update_job"), status=500)

    async def _handle_delete_job(self, request: "web.Request") -> "web.Response":
        """DELETE /api/jobs/{job_id} — delete a cron job."""
@@ -1113,7 +1238,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"ok": True})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "delete_job"), status=500)

    async def _handle_pause_job(self, request: "web.Request") -> "web.Response":
        """POST /api/jobs/{job_id}/pause — pause a cron job."""
@@ -1132,7 +1258,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "pause_job"), status=500)

    async def _handle_resume_job(self, request: "web.Request") -> "web.Response":
        """POST /api/jobs/{job_id}/resume — resume a paused cron job."""
@@ -1151,7 +1278,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "resume_job"), status=500)

    async def _handle_run_job(self, request: "web.Request") -> "web.Response":
        """POST /api/jobs/{job_id}/run — trigger immediate execution."""
@@ -1170,7 +1298,8 @@ class APIServerAdapter(BasePlatformAdapter):
                return web.json_response({"error": "Job not found"}, status=404)
            return web.json_response({"job": job})
        except Exception as e:
            return web.json_response({"error": str(e)}, status=500)
            # SECURITY FIX (V-013): Use secure error handling
            return web.json_response(_handle_error_securely(e, "run_job"), status=500)

    # ------------------------------------------------------------------
    # Output extraction helper
@@ -1282,7 +1411,8 @@ class APIServerAdapter(BasePlatformAdapter):
            return False
        try:
            mws = [mw for mw in (cors_middleware, body_limit_middleware, security_headers_middleware) if mw is not None]
            # SECURITY FIX (V-016): Add rate limiting middleware
            mws = [mw for mw in (cors_middleware, body_limit_middleware, security_headers_middleware, rate_limit_middleware) if mw is not None]
            self._app = web.Application(middlewares=mws)
            self._app["api_server_adapter"] = self
            self._app.router.add_get("/health", self._handle_health)
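The `_RateLimiter` refill arithmetic in the diff above can be sketched as a minimal standalone token bucket (simplified to a single bucket, no locking):

```python
import time

class TokenBucket:
    """Minimal token bucket: refills at `rate` tokens/sec up to `capacity`."""
    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity       # maximum burst size
        self.rate = rate               # refill rate, tokens per second
        self.tokens = capacity         # start full
        self.last = time.monotonic()

    def acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)   # 3-request burst, 1 token/sec
results = [bucket.acquire() for _ in range(5)]
# The first three back-to-back acquisitions drain the full bucket; further
# requests are throttled until elapsed time refills it.
```

The server variant keeps one such bucket per client IP in a `defaultdict` and guards refill/decrement with a `threading.Lock`.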

tools/browser_tool.py

@@ -170,6 +170,9 @@ def _resolve_cdp_override(cdp_url: str) -> str:
    For discovery-style endpoints we fetch /json/version and return the
    webSocketDebuggerUrl so downstream tools always receive a concrete browser
    websocket instead of an ambiguous host:port URL.

    SECURITY FIX (V-010): Validates URLs before fetching to prevent SSRF.
    Only allows localhost/private network addresses for CDP connections.
    """
    raw = (cdp_url or "").strip()
    if not raw:
@@ -191,6 +194,35 @@
    else:
        version_url = discovery_url.rstrip("/") + "/json/version"

    # SECURITY FIX (V-010): Validate URL before fetching
    # Only allow localhost and private networks for CDP
    from urllib.parse import urlparse
    parsed = urlparse(version_url)
    hostname = parsed.hostname or ""

    # Allow only safe hostnames for CDP
    allowed_hostnames = ["localhost", "127.0.0.1", "0.0.0.0", "::1"]
    if hostname not in allowed_hostnames:
        # Check if it's a private IP
        try:
            import ipaddress
            ip = ipaddress.ip_address(hostname)
            if not (ip.is_private or ip.is_loopback):
                logger.error(
                    "SECURITY: Rejecting CDP URL '%s' - only localhost and private "
                    "networks are allowed to prevent SSRF attacks.",
                    raw
                )
                return raw  # Return original without fetching
        except ValueError:
            # Not an IP - reject unknown hostnames
            logger.error(
                "SECURITY: Rejecting CDP URL '%s' - unknown hostname '%s'. "
                "Only localhost and private IPs are allowed.",
                raw, hostname
            )
            return raw

    try:
        response = requests.get(version_url, timeout=10)
        response.raise_for_status()
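The hostname screening in this hunk reduces to an `ipaddress` classification. A standalone sketch using the same allow-list (the helper name `is_safe_cdp_host` is illustrative, not the function from `browser_tool.py`):

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTNAMES = {"localhost", "127.0.0.1", "0.0.0.0", "::1"}

def is_safe_cdp_host(url: str) -> bool:
    """Allow only localhost, loopback, and private-network CDP endpoints."""
    hostname = urlparse(url).hostname or ""
    if hostname in ALLOWED_HOSTNAMES:
        return True
    try:
        ip = ipaddress.ip_address(hostname)
    except ValueError:
        return False  # not an IP literal: reject unknown DNS names
    return ip.is_private or ip.is_loopback
```

RFC 1918 addresses and loopback pass; public IPs and arbitrary DNS names are rejected, so a crafted CDP URL cannot make the tool fetch attacker-chosen internal or external endpoints.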

tools/environments/docker.py

@@ -253,6 +253,26 @@ class DockerEnvironment(BaseEnvironment):
        # mode uses tmpfs (ephemeral, fast, gone on cleanup).
        from tools.environments.base import get_sandbox_dir

        # SECURITY FIX (V-012): Block dangerous volume mounts
        # Prevent privilege escalation via Docker socket or sensitive paths
        _BLOCKED_VOLUME_PATTERNS = [
            "/var/run/docker.sock",
            "/run/docker.sock",
            "/var/run/docker.pid",
            "/proc", "/sys", "/dev",
        ]

        def _is_dangerous_volume(vol_spec: str) -> bool:
            """Check if volume spec is dangerous (docker socket, root fs, etc)."""
            host_path = vol_spec.split(":", 1)[0].strip()
            # Check for docker socket variations
            if "docker.sock" in vol_spec.lower():
                return True
            # Host root filesystem mount (e.g. "/:/host")
            if host_path == "/":
                return True
            # Sensitive host paths, matched as exact path or parent directory.
            # (Plain substring matching would also flag safe specs such as
            # "/home/user/data:/data", which contains ":/".)
            return any(
                host_path == p or host_path.startswith(p + "/")
                for p in _BLOCKED_VOLUME_PATTERNS
            )

        # User-configured volume mounts (from config.yaml docker_volumes)
        volume_args = []
        workspace_explicitly_mounted = False
@@ -263,6 +283,15 @@ class DockerEnvironment(BaseEnvironment):
            vol = vol.strip()
            if not vol:
                continue

            # SECURITY FIX (V-012): Block dangerous volumes
            if _is_dangerous_volume(vol):
                logger.error(
                    f"SECURITY: Refusing to mount dangerous volume '{vol}'. "
                    f"Docker socket and system paths are blocked to prevent container escape."
                )
                continue  # Skip this dangerous volume

            if ":" in vol:
                volume_args.extend(["-v", vol])
                if ":/workspace" in vol:
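The volume screening above can be sketched as a standalone check on the host-path component of a `host:container` spec (names are illustrative, not the exact helpers from `docker.py`):

```python
# Host paths that must never be mounted into a sandbox container.
BLOCKED_HOST_PREFIXES = (
    "/var/run/docker.sock", "/run/docker.sock", "/var/run/docker.pid",
    "/proc", "/sys", "/dev",
)

def is_dangerous_volume(vol_spec: str) -> bool:
    """Flag specs whose host side is the Docker socket, a kernel
    pseudo-filesystem, or the host root filesystem."""
    host_path = vol_spec.split(":", 1)[0].strip()
    if "docker.sock" in host_path.lower():
        return True
    if host_path == "/":
        return True  # mounting the whole host root filesystem
    return any(host_path == p or host_path.startswith(p + "/")
               for p in BLOCKED_HOST_PREFIXES)
```

Comparing the parsed host path (exact match or subdirectory) rather than doing raw substring matching avoids false positives like `/procfiles:/data` while still catching `/proc:/host/proc` and `/:/host`.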