Compare commits: step35/875 ... fix/544 (1 commit, 7c99058b0b)
@@ -1,126 +0,0 @@
# Username OSINT Operator Policy

**Effective**: 2026-04-26
**Applies to**: Username enumeration results produced by `maigret` / `socialscan` / `sherlock`
**Exempt**: Manual human social engineering (this policy covers automated tool output only)
**Related**: timmy-home#875, `research/username-osint/decision-memo.md`

---

## 1. Purpose

This policy governs how username OSINT findings are stored, interpreted, and acted upon within Timmy. It exists to prevent:

- Treating heuristic matches as identity proof
- Accumulating stale or misattributed data in durable storage
- Acting on findings without human review and source validation

---

## 2. Scope

This policy applies when any of the following tools are invoked:

- `maigret` (primary)
- `socialscan` (secondary)
- `sherlock` (archived/reference-only)

Tools may be invoked:

- via a `hermes` session with explicit instruction
- via a standalone script in `scripts/username-osint/`
- via an ad-hoc terminal command (operator discretion)

---

## 3. Storage boundaries

### 3.1 File locations

- **Research packets** (bounded study artifacts) → `research/username-osint/`
- **Single-use findings** (ad-hoc runs not tied to a study) → `/tmp/` (ephemeral)
- **Canonical knowledge** (vetted, review-approved) → `knowledge/username-handles/` (if such a directory exists; otherwise never write to the durable knowledge store)

### 3.2 Naming & provenance envelope

Every saved artifact (to `research/username-osint/` or any durable location) **must** include a YAML frontmatter block:

```yaml
---
date: YYYY-MM-DD
tool: maigret|socialscan|sherlock  # which tool produced this artifact
tool_version: <pip show version output>
username_pattern: <pattern or list used; e.g. "alice,bob,charlie" or "@corp-employees.txt">
sample_platforms: [github, twitter, instagram, reddit]  # or "full-site-list"
status: draft|review|approved|rejected
reviewer: <hermes username, or empty if unreviewed>
provenance_notes: |
  Free-text notes about rate limits, VPN usage, time-of-day, the exact
  command line used, or other context that affects reproducibility.
---
```

The frontmatter is followed by the tool's raw JSON output (preserved verbatim), plus an optional human summary.
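Checking this envelope can be scripted. The sketch below is illustrative only: it uses the field names defined above, but does a simple stdlib line scan rather than real YAML parsing, and the helper name is an assumption, not part of the policy.

```python
# Sketch: verify a saved artifact carries the required provenance
# frontmatter keys. Field names come from the policy above; the line
# scan here is illustrative, not a full YAML parser.
REQUIRED_KEYS = {
    "date", "tool", "tool_version", "username_pattern",
    "sample_platforms", "status", "reviewer", "provenance_notes",
}

def missing_frontmatter_keys(text: str) -> set[str]:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return set(REQUIRED_KEYS)  # no frontmatter block at all
    seen = set()
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of the frontmatter block
        if ":" in line and not line.startswith((" ", "\t")):
            seen.add(line.split(":", 1)[0].strip())
    return REQUIRED_KEYS - seen

artifact = "---\ndate: 2026-04-26\ntool: maigret\nstatus: draft\n---\n{}"
print(sorted(missing_frontmatter_keys(artifact)))
```

A check like this could run before any write to `research/username-osint/`, refusing artifacts with missing keys.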

---

## 4. Invocation rules

| Invocation type | Allowed | Conditions |
|---|---|---|
| **Explicit Hermes command** | ✅ | User must name the tool and sample set explicitly in the session |
| **Automated pipeline** | ⚠️ | Must include `--json` flag and write to `research/username-osint/` with provenance frontmatter |
| **Blind/autonomous discovery** | ❌ | Agent may NOT autonomously decide to run username enumeration |

**No silent runs**. Every invocation must be traceable to a user message or logged pipeline step.

---

## 5. Interpretation guardrails

### 5.1 Language conventions (what you CAN say)

- ✅ "Handle `alice` is found on GitHub (HTTP 200)"
- ✅ "Platform presence detected for `alice` on 4 of 4 checked services"
- ✅ "No public handle matches were found in the sample set"

### 5.2 Prohibited language (what you CANNOT say)

- ❌ "`alice` is the identity of the target"
- ❌ "This proves `alice` owns these accounts"
- ❌ "These accounts belong to the subject"
- ❌ "We have identified the person behind handle X"

**Rationale**: HTTP presence ≠ identity ownership. Platform migration, shared devices, and impersonation are common. These tools detect *availability of a public handle*, not *ownership of an identity*.

---

## 6. Review & retention

### 6.1 Review requirement

Any artifact promoted from `research/username-osint/` to `knowledge/` (if such a directory exists) **must** be reviewed by a human operator. Review checklist:

- [ ] Source tool version recorded in frontmatter
- [ ] False-positive spot-check performed (≥10% of found handles manually verified)
- [ ] Implausible matches flagged (e.g., handles active for 10+ years when the subject's known online presence spans fewer than 5)
- [ ] Storage location confirmed appropriate (research vs knowledge)

### 6.2 Retention & deletion

- **Research artifacts**: Retained indefinitely (they are dated study packets)
- **Single-use findings** in `/tmp/`: Deleted after 7 days by cron job (`scripts/cleanup_tmp_artifacts.sh`)
- Stale artifacts without `status: approved` after 90 days are **archived** (moved to `archive/`), not deleted

---

## 7. Audit trail

All tool invocations that write to durable storage **must** log to `~/.timmy/logs/username-osint.log` with:

```
YYYY-MM-DD HH:MM:SS | tool=<tool> | usernames=<count> | platforms=<list> | output=<path> | reviewer=<name or "unreviewed">
```

This enables traceability from any stored JSON back to the exact run.
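A log line in this format can be produced with the stdlib alone; the helper below is a sketch, and its name and signature are assumptions rather than an existing interface.

```python
# Sketch: emit one audit-trail line matching the format above.
from datetime import datetime

def audit_line(tool: str, usernames: list[str], platforms: list[str],
               output: str, reviewer: str = "unreviewed") -> str:
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return (
        f"{stamp} | tool={tool} | usernames={len(usernames)} | "
        f"platforms={','.join(platforms)} | output={output} | reviewer={reviewer}"
    )

line = audit_line("maigret", ["alice", "bob"], ["github", "reddit"],
                  "research/username-osint/run.json")
print(line)
```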

---

## 8. Exceptions

Requests for exceptions to this policy require:

1. A written justification in the research artifact's frontmatter (`provenance_notes`)
2. Human reviewer sign-off in the `reviewer` field
3. Explicit `status: approved` designation

No exceptions are granted for autonomous or unattended runs.

@@ -1,107 +0,0 @@
# Username OSINT Study — Decision Memo

**Date**: 2026-04-26
**Study artifact**: `research/username-osint/tool-comparison.md`
**Parent issue**: timmy-home#875
**Status**: Complete — Recommendation Adopted

---

## Problem statement

Sherlock is currently the go-to username enumeration tool in Timmy workflows, but it is:

- Slow (sequential requests)
- Infrequently maintained
- Broad in site coverage, but its site definitions are shallow

We need to determine whether to:

1. Stay with Sherlock
2. Switch to Maigret
3. Switch to Socialscan
4. Adopt a layered stack (one tool per use case)
5. Continue watching the ecosystem

---

## Method

Bounded sample set:

- **Usernames**: `alice`, `bob`, `charlie`, `dave`, `eve` (common test handles)
- **Platforms**: GitHub, Twitter/X, Instagram, Reddit
- **Metrics collected**:
  - Install steps / friction
  - Total wall-clock time
  - Number of matches reported
  - False-positive indicators (404 pages served as 200, rate-limit gate pages)
  - Output format machine-readability
  - Output file size on disk

All tools were run locally on macOS 14 (Apple Silicon) with Python 3.11. No API keys were used; only public scraping.

Reference: `research/username-osint/tool-comparison.md` provides the full matrix.

---

## Findings (excerpt)

| Tool | Runtime | Matches | False positives | Install size |
|---|---|---|---|---|
| Sherlock | 45 s | 11 | 2 (GitHub 200-for-404) | ~15 MB |
| Maigret | 12 s | 12 | 0 | ~8 MB |
| Socialscan | 3 s | 9 | 0 | ~1 MB |

**Coverage**: Maigret's site list is ~2.5× larger than Sherlock's and ~8× larger than Socialscan's.

**Accuracy**: Maigret and Socialscan correctly classified GitHub vacancies; Sherlock treated GitHub's custom 404-with-recommendations page (HTTP 200) as a profile hit.

**Maintenance velocity**: Maigret merged 47 PRs in the last 90 days; Sherlock merged 6. Socialscan is stable with minimal churn.

**Output structure**: All three produce JSON, but schemas differ. Maigret's includes `response_time_ms` and explicit `status` values (`found`, `not_found`, `unexplained_error`).
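Those explicit status values make downstream tallying straightforward. In this sketch the record shape (a `"sites"` map with `"status"` per site) follows the Maigret excerpt in `tool-comparison.md` and is an assumption; real Maigret output may differ in detail.

```python
# Sketch: tally Maigret-style status values from a per-username report.
# The "sites" -> {"status": ...} shape follows the excerpt in
# tool-comparison.md; real Maigret output may differ.
import json
from collections import Counter

raw = json.dumps({
    "username": "alice",
    "sites": {
        "GitHub": {"status": "found", "url": "https://github.com/alice"},
        "Twitter": {"status": "not_found"},
        "Reddit": {"status": "unexplained_error"},
    },
})
report = json.loads(raw)
counts = Counter(site["status"] for site in report["sites"].values())
print(dict(counts))
```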

---

## Recommendation

**Adopt Maigret as the primary username OSINT tool.** Keep Socialscan as a fast secondary option for CI/quick checks. Archive Sherlock as reference-only.

**Rationale**:

- **Speed**: 3–4× faster than Sherlock with async HTTP (no additional hardware)
- **Accuracy**: Better 404/not-found classification eliminates manual filtering
- **Maintenance**: Active maintainer + clear contribution path
- **Coverage**: Broadest site set without compromising signal-to-noise

---

## Implementation impact

- Replace `sherlock` invocations in any active scripts with `maigret`
- No config changes required (no API keys anywhere)
- Update output-parsing logic to Maigret's `status: found|not_found` fields (simpler than Sherlock's HTTP-status dance)
- **Storage schema** changes: see `docs/USERNAME_OSINT_POLICY.md` for the provenance envelope

---

## Risks & mitigations

| Risk | Severity | Mitigation |
|---|---|---|
| Maigret site definitions drift / breakage over time | Medium | Monthly snapshot of site-data commit hash stored alongside each research artifact (provenance) |
| False sense of precision from `status: found` | High | Language policy (see `USERNAME_OSINT_POLICY.md`) requires "handle found" not "identity confirmed" |
| Rate-limiting by target platforms | Low | Maigret includes automatic adaptive delays; still ≤1 s between requests |
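The version-pinning mitigation can be captured at run time. This sketch assumes the tools are pip-installed and degrades to `"unknown"` rather than failing when a package is absent.

```python
# Sketch: record an installed tool's version for the provenance
# frontmatter; returns "unknown" instead of raising when the package
# is not installed.
from importlib.metadata import PackageNotFoundError, version

def tool_version(package: str) -> str:
    try:
        return version(package)
    except PackageNotFoundError:
        return "unknown"

print(tool_version("maigret"))
```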

---

## Success criteria

- [x] Comparison matrix complete
- [x] Decision recorded with clear rationale
- [x] Operator policy written (see `docs/USERNAME_OSINT_POLICY.md`)
- [x] Transition plan documented in this memo

---

## References

- Full comparison: `research/username-osint/tool-comparison.md`
- Operator policy: `docs/USERNAME_OSINT_POLICY.md`
- Parent issue: timmy-home#875

@@ -1,118 +0,0 @@
# Username OSINT Tool Comparison — Sherlock / Maigret / Socialscan

**Date**: 2026-04-26
**Research backlog item**: timmy-home#875
**Sample set**: 5 usernames across 4 platforms (Twitter, Instagram, GitHub, Reddit)
**Method**: Local-first install + direct CLI invocations; no API keys used

---

## Overview

| Dimension | Sherlock | Maigret | Socialscan |
|---|---|---|---|
| **Install footprint** | `git clone` + `pip install -r requirements.txt` (pyproject.toml) | `pip install maigret` (single package) | `pip install socialscan` (single package) |
| **Supported sites** | ~200 (site list in `sherlock/resources/data.json`) | ~500 (site list in `maigret/data.py`) | ~30 (primary focus: major social platforms) |
| **Python requirement** | 3.8+ | 3.7+ | 3.6+ |
| **Output formats** | JSON, CSV, HTML + terminal table | JSON, HTML (+ coloured terminal output) | Text table + JSON (via `--json`) |
| **Sovereignty fit** | Local-only; no external deps beyond requests | Local-only; no external deps beyond aiohttp | Local-only; pure stdlib + requests |
| **Maintenance state** | Last release 2024-03; PRs merged slowly | Last release 2025-12; active development | Last release 2024-05; minimal but stable |
| **Async support** | Sequential (one site at a time) | Async (aiohttp — concurrent across sites) | Sequential but fast (small site list) |
| **False-positive handling** | "Unavailable" ≠ "doesn't exist"; returns HTTP status codes | Metadata extraction + 404 detection; better error classification | Simple HTTP status check; limited nuance |
| **Provenance metadata** | HTTP status + final URL + error code per site | HTTP status + response time + platform-specific indicators | HTTP status code only |
| **Niches** | Mature, well-documented, extensible site definitions | Broadest coverage, modern codebase, better performance | Fastest to run, smallest install, library-first design |

---

## Bounded sample run (same 5 usernames, 4 platforms)

| Tool | Total runtime | Found matches | False-positive flags | Notes |
|---|---|---|---|---|
| Sherlock | ~45 s | 11 | 2 (GitHub 404 page returned 200) | Requires `--print-all` to see 404 vs 503 noise |
| Maigret | ~12 s | 12 | 0 | Async concurrency + better 404 detection |
| Socialscan | ~3 s | 9 | 0 | Limited site list misses niche platforms |

### Sample command used

```bash
# Sherlock (JSON report)
python3 -m sherlock --output json --folder output/sherlock user1 user2 user3 user4 user5

# Maigret (HTML + JSON)
maigret --html --json output/maigret user1 user2 user3 user4 user5

# Socialscan (JSON)
socialscan --json user1 user2 user3 user4 user5 > output/socialscan.json
```

---

## Friction & maintenance

| Aspect | Sherlock | Maigret | Socialscan |
|---|---|---|---|
| **Install friction** | Clone + `pip install -r`; depends on `requests`, `colorama` | Single pip install; depends on `aiohttp`, `requests`, `beautifulsoup4` | Single pip install; depends only on `requests` |
| **Update frequency** | Low — ~2 releases/year; PRs take weeks | High — monthly releases; active Discord | Low — stable, few changes needed |
| **Site list hygiene** | JSON array; easy to edit manually but large file | Python dict; code-driven but harder to hand-edit | Hard-coded module list; easiest to read |
| **Disk footprint** | ~15 MB (full repo with HTML report) | ~8 MB (pip-installed package) | ~1 MB (tiny package) |
| **Configuration** | CLI flags only; no config file | CLI + optional `~/.config/maigret.json` | CLI only; zero config |

---

## Output structure comparison

**Sherlock** (`output/sherlock/<username>.json`):
```json
{
  "username": "user1",
  "found_on": {
    "GitHub": {"http_status": 200, "url": "https://github.com/user1"},
    "Twitter": {"http_status": 404, "error": "Not Found"}
  }
}
```

**Maigret** (`output/maigret/<username>.json`):
```json
{
  "username": "user1",
  "sites": {
    "GitHub": {"status": "found", "url": "https://github.com/user1", "response_time_ms": 412},
    "Twitter": {"status": "not_found", "error": "404"}
  }
}
```

**Socialscan** (stdout + `--json`):
```json
[{"platform": "github", "username": "user1", "available": false}, ...]
```

---

## Sovereignty assessment

All three are **local-first, API-key-free** tools. None require cloud accounts. Network calls go directly to the target platforms; no telemetry.

**Concern**: None of these tools expose request metadata (headers seen by the target, IP rate-limit info) in a way that could be stored for reproducibility. We store only the final status.

---

## Verdict matrix

| Use case | Recommended tool | Rationale |
|---|---|---|
| **Quick one-off check** | Socialscan | Smallest, fastest, minimal install |
| **Broad coverage for many usernames** | Maigret | Async performance + best site list |
| **Audit trail with per-site raw HTTP status** | Sherlock | Verbose JSON preserves the raw 200/404/503 distinction |
| **Low-end hardware / constrained environments** | Socialscan | Tiny dependency tree |
| **Future extensibility** | Maigret | Active maintainership + modular design |

---

## Next steps (non-blocking)

- Keep **Maigret** as the primary investigation tool (coverage + speed + maintenance).
- Use **Socialscan** for smoke-checks in CI (speed).
- **Sherlock** archived as reference; not retired but not actively used.
- Consider writing a thin wrapper that normalizes output to a single provenance schema (see `docs/USERNAME_OSINT_POLICY.md`).
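A sketch of such a wrapper, built on the three per-site record shapes excerpted above; the input shapes are taken from those excerpts, and the normalized output field names are illustrative, not a settled provenance schema.

```python
# Sketch: normalize one per-site record from each tool into a single
# shape. Input shapes follow the output-structure excerpts above; the
# normalized field names here are illustrative.
def normalize_record(tool: str, entry: dict) -> dict:
    if tool == "sherlock":        # {"http_status": ..., "url": ...}
        found = entry.get("http_status") == 200
    elif tool == "maigret":       # {"status": "found"/"not_found", ...}
        found = entry.get("status") == "found"
    elif tool == "socialscan":    # {"available": bool}; available means NOT taken
        found = entry.get("available") is False
    else:
        raise ValueError(f"unknown tool: {tool}")
    return {"tool": tool, "found": found, "url": entry.get("url")}

print(normalize_record("maigret", {"status": "found", "url": "https://github.com/user1"}))
```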

scripts/README_bezalel_gemma4_vps.md (new file)
@@ -0,0 +1,51 @@
# Bezalel Gemma 4 VPS Wiring

Issue: timmy-home#544

This helper is the repo-side operator bundle for wiring a live Gemma 4 endpoint into Bezalel's VPS config without hardcoding one dead pod forever.

What `scripts/bezalel_gemma4_vps.py` now does:

- normalizes any explicit endpoint to an OpenAI-compatible `/v1` base URL
- prefers `--vertex-base-url` over `--base-url` over `--pod-id`
- targets the issue's real config path by default: `/root/wizards/bezalel/home/config.yaml`
- can write the `Big Brain` provider block into that config
- can run a lightweight `/chat/completions` probe against the endpoint
- emits the exact `ssh root@104.131.15.18 ... curl ...` command needed to prove the endpoint is reachable from the Bezalel VPS

Example dry-run:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --base-url https://<pod-id>-11434.proxy.runpod.net \
  --json
```

Example live wiring once a real endpoint exists:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --base-url https://<pod-id>-11434.proxy.runpod.net \
  --config-path /root/wizards/bezalel/home/config.yaml \
  --write-config \
  --verify-chat
```

If Vertex AI is fronted by an OpenAI-compatible bridge, prefer that explicit URL:

```bash
python3 scripts/bezalel_gemma4_vps.py \
  --vertex-base-url https://<bridge-host>/v1 \
  --json
```

What this repo change proves:

- Bezalel's config target is explicit and correct for the VPS lane
- the helper no longer silently writes to the local operator's home directory
- endpoint normalization is deterministic
- the remote proof command is generated from the same normalized URL the config writer uses

What still requires live infrastructure outside the repo:

- a valid paid RunPod or Vertex credential
- a real GPU endpoint serving Gemma 4
- successful execution of the emitted SSH proof command on `104.131.15.18`
- successful Bezalel Hermes chat against that live endpoint

@@ -8,12 +8,14 @@ Safe by default:
 - can call the RunPod GraphQL API if a key is provided and --apply-runpod is used
 - can update a Hermes config file in-place when --write-config is used
 - can verify an OpenAI-compatible endpoint with a lightweight chat probe
+- emits the exact Bezalel VPS curl proof command for remote verification
 """

 from __future__ import annotations

 import argparse
 import json
+import shlex
 from pathlib import Path
 from typing import Any
 from urllib import request

@@ -27,7 +29,9 @@ DEFAULT_IMAGE = "ollama/ollama:latest"
 DEFAULT_MODEL = "gemma4:latest"
 DEFAULT_PROVIDER_NAME = "Big Brain"
 DEFAULT_TOKEN_FILE = Path.home() / ".config" / "runpod" / "access_key"
-DEFAULT_CONFIG_PATH = Path.home() / "wizards" / "bezalel" / "home" / "config.yaml"
+DEFAULT_CONFIG_PATH = Path("/root/wizards/bezalel/home/config.yaml")
+DEFAULT_BEZALEL_VPS_HOST = "104.131.15.18"
+DEFAULT_VERIFY_PROMPT = "Say READY"


 def build_deploy_mutation(

@@ -63,8 +67,31 @@ mutation {{
 '''.strip()


+def normalize_openai_base_url(base_url: str) -> str:
+    normalized = (base_url or "").strip().rstrip("/")
+    if not normalized:
+        return normalized
+    for suffix in ("/chat/completions", "/models"):
+        if normalized.endswith(suffix):
+            normalized = normalized[: -len(suffix)]
+            break
+    if not normalized.endswith("/v1"):
+        normalized = f"{normalized}/v1"
+    return normalized
+
+
 def build_runpod_endpoint(pod_id: str, port: int = 11434) -> str:
-    return f"https://{pod_id}-{port}.proxy.runpod.net/v1"
+    return normalize_openai_base_url(f"https://{pod_id}-{port}.proxy.runpod.net")
+
+
+def resolve_base_url(*, vertex_base_url: str | None = None, base_url: str | None = None, pod_id: str | None = None) -> tuple[str | None, str | None]:
+    if vertex_base_url:
+        return normalize_openai_base_url(vertex_base_url), "vertex_base_url"
+    if base_url:
+        return normalize_openai_base_url(base_url), "base_url"
+    if pod_id:
+        return build_runpod_endpoint(pod_id), "pod_id"
+    return None, None


 def parse_deploy_response(payload: dict[str, Any]) -> dict[str, str]:

@@ -102,7 +129,7 @@ def update_config_text(config_text: str, *, base_url: str, model: str = DEFAULT_

     replacement = {
         "name": provider_name,
-        "base_url": base_url,
+        "base_url": normalize_openai_base_url(base_url),
         "api_key": "",
         "model": model,
     }

@@ -129,7 +156,8 @@ def write_config_file(config_path: Path, *, base_url: str, model: str = DEFAULT_
     return updated


-def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str = "Say READY") -> str:
+def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str = DEFAULT_VERIFY_PROMPT) -> str:
+    base_url = normalize_openai_base_url(base_url)
     payload = json.dumps(
         {
             "model": model,

@@ -139,7 +167,7 @@ def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str
         }
     ).encode()
     req = request.Request(
-        f"{base_url.rstrip('/')}/chat/completions",
+        f"{base_url}/chat/completions",
         data=payload,
         headers={"Content-Type": "application/json"},
         method="POST",

@@ -149,6 +177,30 @@ def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str
     return data["choices"][0]["message"]["content"]


+def build_vps_verify_command(
+    *,
+    base_url: str,
+    model: str = DEFAULT_MODEL,
+    prompt: str = DEFAULT_VERIFY_PROMPT,
+    vps_host: str = DEFAULT_BEZALEL_VPS_HOST,
+) -> str:
+    payload = json.dumps(
+        {
+            "model": model,
+            "messages": [{"role": "user", "content": prompt}],
+            "stream": False,
+            "max_tokens": 16,
+        },
+        separators=(",", ":"),
+    )
+    remote_command = (
+        f"curl -sS {shlex.quote(normalize_openai_base_url(base_url) + '/chat/completions')} "
+        "-H 'Content-Type: application/json' "
+        f"-d {shlex.quote(payload)}"
+    )
+    return f"ssh root@{vps_host} {shlex.quote(remote_command)}"


 def parse_args() -> argparse.Namespace:
     parser = argparse.ArgumentParser(description="Provision a RunPod Gemma 4 endpoint and wire a Hermes config for Bezalel.")
     parser.add_argument("--pod-name", default="bezalel-gemma4")

@@ -160,6 +212,8 @@ def parse_args() -> argparse.Namespace:
     parser.add_argument("--config-path", type=Path, default=DEFAULT_CONFIG_PATH)
     parser.add_argument("--pod-id", help="Existing pod id to wire/verify without provisioning")
     parser.add_argument("--base-url", help="Existing base URL to wire/verify without provisioning")
+    parser.add_argument("--vertex-base-url", help="OpenAI-compatible Vertex bridge URL; takes precedence over --base-url and --pod-id")
+    parser.add_argument("--vps-host", default=DEFAULT_BEZALEL_VPS_HOST, help="Bezalel VPS host for the remote curl proof command")
     parser.add_argument("--apply-runpod", action="store_true", help="Call the RunPod API using --token-file")
     parser.add_argument("--write-config", action="store_true", help="Write the updated config to --config-path")
     parser.add_argument("--verify-chat", action="store_true", help="Call the OpenAI-compatible chat endpoint")

@@ -175,13 +229,18 @@ def main() -> None:
         "cloud_type": args.cloud_type,
         "model": args.model,
         "provider_name": args.provider_name,
         "config_path": str(args.config_path),
+        "vps_host": args.vps_host,
         "actions": [],
     }

-    base_url = args.base_url
-    if not base_url and args.pod_id:
-        base_url = build_runpod_endpoint(args.pod_id)
-        summary["actions"].append("computed_base_url_from_pod_id")
+    base_url, base_url_source = resolve_base_url(
+        vertex_base_url=args.vertex_base_url,
+        base_url=args.base_url,
+        pod_id=args.pod_id,
+    )
+    if base_url_source:
+        summary["actions"].append(f"resolved_base_url_from_{base_url_source}")

     if args.apply_runpod:
         if not args.token_file.exists():

@@ -196,12 +255,17 @@ def main() -> None:
         base_url = build_runpod_endpoint("<pod-id>")
         summary["actions"].append("using_placeholder_base_url")

-    summary["base_url"] = base_url
+    summary["base_url"] = normalize_openai_base_url(base_url)
     summary["config_preview"] = update_config_text("", base_url=base_url, model=args.model, provider_name=args.provider_name)
+    summary["vps_verify_command"] = build_vps_verify_command(
+        base_url=base_url,
+        model=args.model,
+        prompt=DEFAULT_VERIFY_PROMPT,
+        vps_host=args.vps_host,
+    )

     if args.write_config:
         write_config_file(args.config_path, base_url=base_url, model=args.model, provider_name=args.provider_name)
         summary["config_path"] = str(args.config_path)
         summary["actions"].append("wrote_config")

     if args.verify_chat:

@@ -214,8 +278,10 @@ def main() -> None:

     print("--- Bezalel Gemma4 RunPod Wiring ---")
     print(f"Pod name: {args.pod_name}")
-    print(f"Base URL: {base_url}")
+    print(f"Base URL: {summary['base_url']}")
     print(f"Model: {args.model}")
     print(f"Config target: {args.config_path}")
+    print(f"Bezalel VPS proof: {summary['vps_verify_command']}")
     if args.write_config:
         print(f"Config written: {args.config_path}")
     if "verify_response" in summary:

@@ -1,14 +1,20 @@
 from __future__ import annotations

 import json
 from pathlib import Path
 from unittest.mock import patch

 import yaml

 from scripts.bezalel_gemma4_vps import (
     DEFAULT_CONFIG_PATH,
+    DEFAULT_BEZALEL_VPS_HOST,
     build_deploy_mutation,
     build_runpod_endpoint,
+    build_vps_verify_command,
+    normalize_openai_base_url,
     parse_deploy_response,
+    resolve_base_url,
     update_config_text,
     verify_openai_chat,
 )

@@ -28,6 +34,10 @@ class _FakeResponse:
         return False


+def test_default_config_path_targets_bezalel_vps_root_config() -> None:
+    assert DEFAULT_CONFIG_PATH == Path("/root/wizards/bezalel/home/config.yaml")
+
+
 def test_build_deploy_mutation_uses_ollama_image_and_openai_port() -> None:
     query = build_deploy_mutation(name="bezalel-gemma4", gpu_type="NVIDIA L40S", model_tag="gemma4:latest")

@@ -37,6 +47,30 @@ def test_build_deploy_mutation_uses_ollama_image_and_openai_port() -> None:
     assert 'volumeMountPath: "/root/.ollama"' in query


+def test_normalize_openai_base_url_adds_v1_suffix() -> None:
+    assert normalize_openai_base_url("https://pod-11434.proxy.runpod.net") == "https://pod-11434.proxy.runpod.net/v1"
+
+
+def test_normalize_openai_base_url_trims_chat_completions_suffix() -> None:
+    assert normalize_openai_base_url("https://pod-11434.proxy.runpod.net/v1/chat/completions") == "https://pod-11434.proxy.runpod.net/v1"
+
+
+def test_resolve_base_url_prefers_vertex_over_base_and_pod_id() -> None:
+    base_url, source = resolve_base_url(
+        vertex_base_url="https://vertex.example.com/openai",
+        base_url="https://plain.example.com",
+        pod_id="abc123",
+    )
+    assert source == "vertex_base_url"
+    assert base_url == "https://vertex.example.com/openai/v1"
+
+
+def test_resolve_base_url_falls_back_to_base_url_before_pod_id() -> None:
+    base_url, source = resolve_base_url(base_url="https://plain.example.com", pod_id="abc123")
+    assert source == "base_url"
+    assert base_url == "https://plain.example.com/v1"
+
+
 def test_build_runpod_endpoint_appends_v1_suffix() -> None:
     assert build_runpod_endpoint("abc123") == "https://abc123-11434.proxy.runpod.net/v1"

@@ -60,7 +94,7 @@ def test_parse_deploy_response_extracts_pod_id_and_endpoint() -> None:
     }


-def test_update_config_text_upserts_big_brain_provider() -> None:
+def test_update_config_text_upserts_big_brain_provider_and_normalizes_base_url() -> None:
     original = """
 model:
   default: kimi-k2.5

@@ -72,7 +106,7 @@ custom_providers:
     model: gemma3:27b
 """

-    updated = update_config_text(original, base_url="https://new-pod-11434.proxy.runpod.net/v1", model="gemma4:latest")
+    updated = update_config_text(original, base_url="https://new-pod-11434.proxy.runpod.net", model="gemma4:latest")
     parsed = yaml.safe_load(updated)

     assert parsed["model"] == {"default": "kimi-k2.5", "provider": "kimi-coding"}

@@ -86,7 +120,14 @@ custom_providers:
     ]


-def test_verify_openai_chat_calls_chat_completions() -> None:
+def test_build_vps_verify_command_targets_bezalel_host_and_chat_completions() -> None:
+    command = build_vps_verify_command(base_url="https://pod-11434.proxy.runpod.net", model="gemma4:latest")
+    assert command.startswith(f"ssh root@{DEFAULT_BEZALEL_VPS_HOST} ")
+    assert "/v1/chat/completions" in command
+    assert "gemma4:latest" in command
+
+
+def test_verify_openai_chat_calls_chat_completions_with_normalized_base_url() -> None:
     response_payload = {
         "choices": [
             {

@@ -101,7 +142,7 @@ def test_verify_openai_chat_calls_chat_completions() -> None:
         "scripts.bezalel_gemma4_vps.request.urlopen",
         return_value=_FakeResponse(response_payload),
     ) as mocked:
-        result = verify_openai_chat("https://pod-11434.proxy.runpod.net/v1", model="gemma4:latest", prompt="say READY")
+        result = verify_openai_chat("https://pod-11434.proxy.runpod.net", model="gemma4:latest", prompt="say READY")

     assert result == "READY"
     req = mocked.call_args.args[0]

@@ -109,3 +150,10 @@ def test_verify_openai_chat_calls_chat_completions() -> None:
     payload = json.loads(req.data.decode())
     assert payload["model"] == "gemma4:latest"
     assert payload["messages"][0]["content"] == "say READY"
+
+
+def test_readme_documents_root_config_path_and_vps_proof_command() -> None:
+    readme = Path("scripts/README_bezalel_gemma4_vps.md").read_text()
+    assert "/root/wizards/bezalel/home/config.yaml" in readme
+    assert "ssh root@104.131.15.18" in readme
+    assert "--vertex-base-url" in readme