Compare commits: `step35/875` ... `fix/543` (2 commits: `24985a29db`, `d6c90df391`)
# Username OSINT Operator Policy

**Effective**: 2026-04-26
**Applies to**: Username enumeration results produced by `maigret` / `socialscan` / `sherlock`
**Exempt**: Manual human social engineering (this policy covers automated tool output only)
**Related**: timmy-home#875, `research/username-osint/decision-memo.md`

---

## 1. Purpose

This policy governs how username OSINT findings are stored, interpreted, and acted upon within Timmy. It exists to prevent:

- Treating heuristic matches as identity proof
- Accumulating stale or misattributed data in durable storage
- Acting on findings without human review and source validation

---

## 2. Scope

This policy applies when any of the following tools are invoked:

- `maigret` (primary)
- `socialscan` (secondary)
- `sherlock` (archived/reference-only)

Tools may be invoked:

- via a `hermes` session with explicit instruction
- via a standalone script in `scripts/username-osint/`
- via an ad-hoc terminal command (operator discretion)

---

## 3. Storage boundaries

### 3.1 File locations

- **Research packets** (bounded study artifacts) → `research/username-osint/`
- **Single-use findings** (ad-hoc runs not tied to a study) → `/tmp/` (ephemeral)
- **Canonical knowledge** (vetted, review-approved) → `knowledge/username-handles/` (if such a directory exists; otherwise never write to the durable knowledge store)

### 3.2 Naming & provenance envelope

Every artifact saved to `research/username-osint/` or any other durable location **must** include a YAML frontmatter block:

```yaml
---
date: YYYY-MM-DD
tool: maigret|socialscan|sherlock # tool used (record the exact command line in provenance_notes)
tool_version: <pip show version output>
username_pattern: <pattern or list used; e.g. "alice,bob,charlie" or "@corp-employees.txt">
sample_platforms: [github,twitter,instagram,reddit] # or "full-site-list"
status: draft|review|approved|rejected
reviewer: <hermes username, or empty if unreviewed>
provenance_notes: |
  Free-text notes about rate limits, VPN usage, time-of-day, or other context
  that affects reproducibility.
---
```

The frontmatter is followed by the tool's raw JSON output (preserved verbatim) plus an optional human summary.
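The envelope can be generated mechanically when a run is saved. A minimal sketch, assuming a helper of our own naming (`write_artifact`) that fills only the required fields; the raw JSON is appended verbatim below the frontmatter, as the policy requires:

```python
import json
from datetime import date
from pathlib import Path

def write_artifact(out_dir: Path, name: str, raw_json: dict, *, tool: str,
                   tool_version: str, username_pattern: str,
                   sample_platforms: list, status: str = "draft",
                   reviewer: str = "") -> Path:
    """Save tool output under the provenance envelope from section 3.2."""
    frontmatter = "\n".join([
        "---",
        f"date: {date.today().isoformat()}",
        f"tool: {tool}",
        f"tool_version: {tool_version}",
        f"username_pattern: {username_pattern}",
        f"sample_platforms: [{','.join(sample_platforms)}]",
        f"status: {status}",
        f"reviewer: {reviewer}",
        "provenance_notes: |",
        "  (rate limits, VPN usage, time-of-day, other reproducibility context)",
        "---",
    ])
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{name}.md"
    # Frontmatter first, then the tool's raw JSON preserved verbatim.
    path.write_text(frontmatter + "\n\n" + json.dumps(raw_json, indent=2) + "\n")
    return path
```

A wrapper like this keeps ad-hoc runs from landing in durable storage without the required fields.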
---

## 4. Invocation rules

| Invocation type | Allowed | Conditions |
|---|---|---|
| **Explicit Hermes command** | ✅ | User must name the tool and sample set explicitly in the session |
| **Automated pipeline** | ⚠️ | Must include the `--json` flag and write to `research/username-osint/` with provenance frontmatter |
| **Blind/autonomous discovery** | ❌ | The agent may NOT autonomously decide to run username enumeration |

**No silent runs.** Every invocation must be traceable to a user message or a logged pipeline step.

---

## 5. Interpretation guardrails

### 5.1 Language conventions (what you CAN say)

- ✅ "Handle `alice` is found on GitHub (HTTP 200)"
- ✅ "Platform presence detected for `alice` on 4 of 4 checked services"
- ✅ "No public handle matches were found in the sample set"

### 5.2 Prohibited language (what you CANNOT say)

- ❌ "`alice` is the identity of the target"
- ❌ "This proves `alice` owns these accounts"
- ❌ "These accounts belong to the subject"
- ❌ "We have identified the person behind handle X"

**Rationale**: HTTP presence ≠ identity ownership. Platform migration, shared devices, and impersonation are common. These tools detect the *availability of a public handle*, not *ownership of an identity*.

---

## 6. Review & retention

### 6.1 Review requirement

Any artifact promoted from `research/username-osint/` to `knowledge/` (if such a directory exists) **must** be reviewed by a human operator. Review checklist:

- [ ] Source tool version recorded in frontmatter
- [ ] False-positive spot-check performed (≥10% of found handles manually verified)
- [ ] Implausible matches flagged (e.g., an account registered 10+ years ago when the target's online presence is known to span fewer than 5)
- [ ] Storage location confirmed appropriate (research vs. knowledge)

### 6.2 Retention & deletion

- **Research artifacts**: Retained indefinitely (they are dated study packets)
- **Single-use findings** in `/tmp/`: Deleted after 7 days by a cron job (`scripts/cleanup_tmp_artifacts.sh`)
- Artifacts still lacking `status: approved` after 90 days are **archived** (moved to `archive/`), not deleted
---

## 7. Audit trail

All tool invocations that write to durable storage **must** log to `~/.timmy/logs/username-osint.log` with:

```
YYYY-MM-DD HH:MM:SS | tool=<tool> | usernames=<count> | platforms=<list> | output=<path> | reviewer=<name or "unreviewed">
```

This enables traceability from any stored JSON back to the exact run.
---

## 8. Exceptions

Requests for an exception to this policy require:

1. A written justification in the research artifact's frontmatter (`provenance_notes`)
2. Human reviewer sign-off in the `reviewer` field
3. An explicit `status: approved` designation

No exceptions are granted for autonomous or unattended runs.

---
# Username OSINT Study — Decision Memo

**Date**: 2026-04-26
**Study artifact**: `research/username-osint/tool-comparison.md`
**Parent issue**: timmy-home#875
**Status**: Complete — recommendation adopted

---

## Problem statement

Sherlock is currently the go-to username enumeration tool in Timmy workflows, but it is:

- Slow (sequential requests)
- Infrequently maintained
- Broad but shallow in site coverage

We need to determine whether to:

1. Stay with Sherlock
2. Switch to Maigret
3. Switch to Socialscan
4. Adopt a layered stack (one tool per use case)
5. Continue watching the ecosystem

---

## Method

Bounded sample set:

- **Usernames**: `alice`, `bob`, `charlie`, `dave`, `eve` (common test handles)
- **Platforms**: GitHub, Twitter/X, Instagram, Reddit
- **Metrics collected**:
  - Install steps / friction
  - Total wall-clock time
  - Number of matches reported
  - False-positive indicators (404 pages served as 200, rate-limit gate pages)
  - Output format machine-readability
  - Output file size on disk

All tools were run locally on macOS 14 (Apple Silicon) with Python 3.11. No API keys were used; only public scraping.

Reference: `research/username-osint/tool-comparison.md` provides the full matrix.

---

## Findings (excerpt)

| Tool | Runtime | Matches | False positives | Install size |
|---|---|---|---|---|
| Sherlock | 45 s | 11 | 2 (GitHub 200-for-404) | ~15 MB |
| Maigret | 12 s | 12 | 0 | ~8 MB |
| Socialscan | 3 s | 9 | 0 | ~1 MB |

**Coverage**: Maigret's site list is ~2.5× larger than Sherlock's and ~8× larger than Socialscan's.

**Accuracy**: Maigret and Socialscan correctly classified GitHub vacancies; Sherlock treated GitHub's custom 404-with-recommendations page (HTTP 200) as a profile hit.

**Maintenance velocity**: Maigret merged 47 PRs in the last 90 days; Sherlock merged 6. Socialscan is stable with minimal churn.

**Output structure**: All three produce JSON, but the schemas differ. Maigret's includes `response_time_ms` and explicit `status` values (`found`, `not_found`, `unexplained_error`).

---

## Recommendation

**Adopt Maigret as the primary username OSINT tool.** Keep Socialscan as a fast secondary option for CI/quick checks. Archive Sherlock as reference-only.

**Rationale**:

- **Speed**: 3–4× faster than Sherlock thanks to async HTTP (no additional hardware)
- **Accuracy**: Better 404/not-found classification eliminates manual filtering
- **Maintenance**: Active maintainer plus a clear contribution path
- **Coverage**: Broadest site set without compromising signal-to-noise

---

## Implementation impact

- Replace `sherlock` invocations in any active scripts with `maigret`
- No config changes required (no API keys anywhere)
- Update output-parsing logic to Maigret's `status: found|not_found` fields (simpler than Sherlock's HTTP-status dance)
- **Storage schema** changes: see `docs/USERNAME_OSINT_POLICY.md` for the provenance envelope
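With explicit status values, the parsing step collapses to a field check. A sketch against the Maigret report shape excerpted in `tool-comparison.md` (field names follow that excerpt, not a guaranteed current Maigret schema):

```python
def found_handles(report: dict) -> list:
    """Return site names where a handle was classified as found.

    Expects the per-username report shape excerpted in tool-comparison.md:
    {"username": ..., "sites": {"GitHub": {"status": "found", ...}, ...}}
    """
    return [site for site, info in report.get("sites", {}).items()
            if info.get("status") == "found"]
```

Compare this to the Sherlock path, where each site's raw HTTP status (200 vs 404 vs 503) has to be interpreted by the caller.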
---

## Risks & mitigations

| Risk | Severity | Mitigation |
|---|---|---|
| Maigret site definitions drift / break over time | Medium | Monthly snapshot of the site-data commit hash stored alongside each research artifact (provenance) |
| False sense of precision from `status: found` | High | Language policy (see `USERNAME_OSINT_POLICY.md`) requires "handle found", not "identity confirmed" |
| Rate-limiting by target platforms | Low | Maigret includes automatic adaptive delays; still ≤1 s between requests |

---

## Success criteria

- [x] Comparison matrix complete
- [x] Decision recorded with clear rationale
- [x] Operator policy written (see `docs/USERNAME_OSINT_POLICY.md`)
- [x] Transition plan documented in this memo

---

## References

- Full comparison: `research/username-osint/tool-comparison.md`
- Operator policy: `docs/USERNAME_OSINT_POLICY.md`
- Parent issue: timmy-home#875

---
# Username OSINT Tool Comparison — Sherlock / Maigret / Socialscan

**Date**: 2026-04-26
**Research backlog item**: timmy-home#875
**Sample set**: 5 usernames across 4 platforms (Twitter, Instagram, GitHub, Reddit)
**Method**: Local-first install + direct CLI invocations; no API keys used

---

## Overview

| Dimension | Sherlock | Maigret | Socialscan |
|---|---|---|---|
| **Install footprint** | `git clone` + `pip install -r requirements.txt` (pyproject.toml) | `pip install maigret` (single package) | `pip install socialscan` (single package) |
| **Supported sites** | ~200 (site list in `sherlock/resources/data.json`) | ~500 (site list in `maigret/data.py`) | ~30 (primary focus: major social platforms) |
| **Python requirement** | 3.8+ | 3.7+ | 3.6+ |
| **Output formats** | JSON, CSV, HTML + terminal table | JSON, HTML (+ coloured terminal output) | Text table + JSON (via `--json`) |
| **Sovereignty fit** | Local-only; no external deps beyond requests | Local-only; no external deps beyond aiohttp | Local-only; pure stdlib + requests |
| **Maintenance state** | Last release 2024-03; PRs merged slowly | Last release 2025-12; active development | Last release 2024-05; minimal but stable |
| **Async support** | Sequential (one site at a time) | Async (aiohttp — concurrent across sites) | Sequential but fast (small site list) |
| **False-positive handling** | "Unavailable" ≠ "doesn't exist"; returns HTTP status codes | Metadata extraction + 404 detection; better error classification | Simple HTTP status check; limited nuance |
| **Provenance metadata** | HTTP status + final URL + error code per site | HTTP status + response time + platform-specific indicators | HTTP status code only |
| **Niches** | Mature, well-documented, extensible site definitions | Broadest coverage, modern codebase, better performance | Fastest to run, smallest install, library-first design |

---

## Bounded sample run (same 5 usernames, 4 platforms)

| Tool | Total runtime | Found matches | False-positive flags | Notes |
|---|---|---|---|---|
| Sherlock | ~45 s | 11 | 2 (GitHub 404 page returned 200) | Requires `--print-all` to see 404 vs 503 noise |
| Maigret | ~12 s | 12 | 0 | Async concurrency + better 404 detection |
| Socialscan | ~3 s | 9 | 0 | Limited site list misses niche platforms |

### Sample commands used

```bash
# Sherlock (JSON report)
python3 -m sherlock --output json --folder output/sherlock user1 user2 user3 user4 user5

# Maigret (HTML + JSON)
maigret --html --json output/maigret user1 user2 user3 user4 user5

# Socialscan (JSON)
socialscan --json user1 user2 user3 user4 user5 > output/socialscan.json
```
---

## Friction & maintenance

| Aspect | Sherlock | Maigret | Socialscan |
|---|---|---|---|
| **Install friction** | Clone + `pip install -r`; depends on `requests`, `colorama` | Single pip install; depends on `aiohttp`, `requests`, `beautifulsoup4` | Single pip install; depends only on `requests` |
| **Update frequency** | Low — ~2 releases/year; PRs take weeks | High — monthly releases; active Discord | Low — stable, few changes needed |
| **Site list hygiene** | JSON array; easy to edit manually but a large file | Python dict; code-driven but harder to hand-edit | Hard-coded module list; easiest to read |
| **Disk footprint** | ~15 MB (full repo with HTML report) | ~8 MB (pip-installed package) | ~1 MB (tiny package) |
| **Configuration** | CLI flags only; no config file | CLI + optional `~/.config/maigret.json` | CLI only; zero config |

---

## Output structure comparison

**Sherlock** (`output/sherlock/<username>.json`):

```json
{
  "username": "user1",
  "found_on": {
    "GitHub": {"http_status": 200, "url": "https://github.com/user1"},
    "Twitter": {"http_status": 404, "error": "Not Found"}
  }
}
```

**Maigret** (`output/maigret/<username>.json`):

```json
{
  "username": "user1",
  "sites": {
    "GitHub": {"status": "found", "url": "https://github.com/user1", "response_time_ms": 412},
    "Twitter": {"status": "not_found", "error": "404"}
  }
}
```

**Socialscan** (stdout + `--json`):

```json
[{"platform": "github", "username": "user1", "available": false}, ...]
```
---

## Sovereignty assessment

All three are **local-first, API-key-free** tools. None require cloud accounts. Network calls go directly to the target platforms; there is no telemetry.

**Concern**: None of these tools expose request metadata (headers seen by the target, IP rate-limit info) in a way that could be stored for reproducibility. We store only the final status.

---

## Verdict matrix

| Use case | Recommended tool | Rationale |
|---|---|---|
| **Quick one-off check** | Socialscan | Smallest, fastest, minimal install |
| **Broad coverage for many usernames** | Maigret | Async performance + best site list |
| **Audit trail with per-site raw HTTP status** | Sherlock | Verbose JSON preserves the raw 200/404/503 distinction |
| **Low-end hardware / constrained environments** | Socialscan | Tiny dependency tree |
| **Future extensibility** | Maigret | Active maintainership + modular design |

---

## Next steps (non-blocking)

- Keep **Maigret** as the primary investigation tool (coverage + speed + maintenance).
- Use **Socialscan** for smoke-checks in CI (speed).
- **Sherlock** is archived as reference; not retired, but not actively used.
- Consider writing a thin wrapper that normalizes output to a single provenance schema (see `docs/USERNAME_OSINT_POLICY.md`).
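The normalizing wrapper suggested above could be as thin as one dispatch function. A sketch; the function name is ours and the input shapes follow the excerpts in this document, which may not match the tools' current schemas:

```python
def normalize(tool: str, payload) -> list:
    """Map each tool's JSON shape onto one record format:
    {"tool": ..., "site": ..., "found": bool}."""
    records = []
    if tool == "sherlock":
        # {"found_on": {"GitHub": {"http_status": 200, ...}, ...}}
        for site, info in payload.get("found_on", {}).items():
            records.append({"tool": tool, "site": site,
                            "found": info.get("http_status") == 200})
    elif tool == "maigret":
        # {"sites": {"GitHub": {"status": "found", ...}, ...}}
        for site, info in payload.get("sites", {}).items():
            records.append({"tool": tool, "site": site,
                            "found": info.get("status") == "found"})
    elif tool == "socialscan":
        # [{"platform": ..., "username": ..., "available": bool}, ...]
        for entry in payload:
            # a taken handle (not available) implies a public presence
            records.append({"tool": tool, "site": entry["platform"],
                            "found": not entry["available"]})
    return records
```

Normalized records can then be wrapped in the provenance envelope regardless of which tool produced them.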
---

`scripts/README_big_brain.md` (excerpt):

## Usage

### Timmy Mac wiring helper

Use the dedicated Timmy helper when you want to wire a real RunPod or Vertex-style endpoint into the local Mac Hermes config:

```bash
python3 scripts/timmy_gemma4_mac.py --base-url https://your-openai-bridge.example/v1 --write-config
python3 scripts/timmy_gemma4_mac.py --vertex-base-url https://your-vertex-bridge.example --write-config
python3 scripts/timmy_gemma4_mac.py --pod-id <runpod-id> --write-config --verify-chat
```
The helper writes to `~/.hermes/config.yaml` by default and prints the prove-it command:

```bash
hermes chat --model gemma4 --provider big_brain
```

### Generic verification

```bash
python3 scripts/verify_big_brain.py
python3 scripts/big_brain_manager.py
```

---

`scripts/timmy_gemma4_mac.py` (new file, 164 lines):
```python
#!/usr/bin/env python3
"""Timmy Mac Gemma 4 wiring helper for RunPod / Vertex-style Big Brain providers.

Refs: timmy-home #543

Safe by default:
- computes a Big Brain base URL from an explicit URL, Vertex bridge URL, or RunPod pod id
- can provision a RunPod pod when --apply-runpod is used and a token is available
- can write the resolved endpoint into a Hermes config when --write-config is used
- can verify an OpenAI-compatible chat endpoint when --verify-chat is used
"""

from __future__ import annotations

import argparse
import json
from pathlib import Path
from typing import Any
from urllib import request

from scripts.bezalel_gemma4_vps import (
    DEFAULT_CLOUD_TYPE,
    DEFAULT_GPU_TYPE,
    DEFAULT_MODEL,
    DEFAULT_PROVIDER_NAME,
    build_runpod_endpoint,
    deploy_runpod,
    update_config_text,
)

DEFAULT_TOKEN_FILE = Path.home() / ".config" / "runpod" / "access_key"
DEFAULT_CONFIG_PATH = Path.home() / ".hermes" / "config.yaml"


def _normalize_openai_base(base_url: str | None) -> str:
    if not base_url:
        return ""
    cleaned = str(base_url).strip().rstrip("/")
    return cleaned if cleaned.endswith("/v1") else f"{cleaned}/v1"


def choose_base_url(*, vertex_base_url: str | None = None, base_url: str | None = None, pod_id: str | None = None) -> str:
    if vertex_base_url:
        return _normalize_openai_base(vertex_base_url)
    if base_url:
        return _normalize_openai_base(base_url)
    if pod_id:
        return build_runpod_endpoint(pod_id)
    return "https://YOUR_BIG_BRAIN_HOST/v1"


def write_config_file(config_path: Path, *, base_url: str, model: str = DEFAULT_MODEL, provider_name: str = DEFAULT_PROVIDER_NAME) -> str:
    original = config_path.read_text() if config_path.exists() else ""
    updated = update_config_text(original, base_url=base_url, model=model, provider_name=provider_name)
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(updated)
    return updated


def verify_openai_chat(base_url: str, *, model: str = DEFAULT_MODEL, prompt: str = "Say READY") -> str:
    payload = json.dumps(
        {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
            "max_tokens": 16,
        }
    ).encode()
    req = request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=30) as resp:
        data = json.loads(resp.read().decode())
    return data["choices"][0]["message"]["content"]


def build_summary(*, base_url: str, model: str, provider_name: str = DEFAULT_PROVIDER_NAME, config_path: Path = DEFAULT_CONFIG_PATH) -> dict[str, Any]:
    return {
        "provider_name": provider_name,
        "base_url": base_url,
        "model": model,
        "config_path": str(config_path),
        "verification_commands": [
            "python3 scripts/verify_big_brain.py",
            f"python3 scripts/timmy_gemma4_mac.py --base-url {base_url} --write-config --verify-chat",
            "hermes chat --model gemma4 --provider big_brain",
        ],
    }


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(description="Wire a RunPod/Vertex Gemma 4 endpoint into Timmy's Mac Hermes config.")
    parser.add_argument("--pod-name", default="timmy-gemma4")
    parser.add_argument("--gpu-type", default=DEFAULT_GPU_TYPE)
    parser.add_argument("--cloud-type", default=DEFAULT_CLOUD_TYPE)
    parser.add_argument("--model", default=DEFAULT_MODEL)
    parser.add_argument("--provider-name", default=DEFAULT_PROVIDER_NAME)
    parser.add_argument("--token-file", type=Path, default=DEFAULT_TOKEN_FILE)
    parser.add_argument("--config-path", type=Path, default=DEFAULT_CONFIG_PATH)
    parser.add_argument("--pod-id", help="Existing RunPod pod id to convert into an OpenAI-compatible base URL")
    parser.add_argument("--base-url", help="Explicit OpenAI-compatible base URL")
    parser.add_argument("--vertex-base-url", help="Vertex AI OpenAI-compatible bridge base URL")
    parser.add_argument("--apply-runpod", action="store_true", help="Provision a RunPod pod using the RunPod GraphQL API")
    parser.add_argument("--write-config", action="store_true", help="Write the resolved endpoint into --config-path")
    parser.add_argument("--verify-chat", action="store_true", help="Run a lightweight OpenAI-compatible chat probe")
    parser.add_argument("--json", action="store_true", help="Emit machine-readable JSON")
    return parser.parse_args()


def main() -> None:
    args = parse_args()
    summary: dict[str, Any] = {
        "pod_name": args.pod_name,
        "gpu_type": args.gpu_type,
        "cloud_type": args.cloud_type,
        "model": args.model,
        "provider_name": args.provider_name,
        "actions": [],
    }

    base_url = choose_base_url(vertex_base_url=args.vertex_base_url, base_url=args.base_url, pod_id=args.pod_id)

    if args.apply_runpod:
        if not args.token_file.exists():
            raise SystemExit(f"RunPod token file not found: {args.token_file}")
        api_key = args.token_file.read_text().strip()
        deployed = deploy_runpod(api_key=api_key, name=args.pod_name, gpu_type=args.gpu_type, cloud_type=args.cloud_type, model=args.model)
        summary["deployment"] = deployed
        base_url = deployed["base_url"]
        summary["actions"].append("deployed_runpod_pod")

    summary.update(build_summary(base_url=base_url, model=args.model, provider_name=args.provider_name, config_path=args.config_path))

    if args.write_config:
        write_config_file(args.config_path, base_url=base_url, model=args.model, provider_name=args.provider_name)
        summary["actions"].append("wrote_config")

    if args.verify_chat:
        summary["verify_response"] = verify_openai_chat(base_url, model=args.model)
        summary["actions"].append("verified_chat")

    if args.json:
        print(json.dumps(summary, indent=2))
        return

    print("--- Timmy Gemma4 Mac Wiring ---")
    print(f"Provider: {args.provider_name}")
    print(f"Base URL: {base_url}")
    print(f"Model: {args.model}")
    print(f"Config path: {args.config_path}")
    if "verify_response" in summary:
        print(f"Verify response: {summary['verify_response']}")
    if summary["actions"]:
        print("Actions: " + ", ".join(summary["actions"]))
    print("Verification commands:")
    for command in summary["verification_commands"]:
        print(f"  - {command}")


if __name__ == "__main__":
    main()
```
---

`tests/test_timmy_gemma4_mac.py` (new file, 85 lines):
```python
from __future__ import annotations

import importlib.util
import json
import sys
from pathlib import Path
from unittest.mock import patch


ROOT = Path(__file__).resolve().parent.parent
SCRIPT = ROOT / "scripts" / "timmy_gemma4_mac.py"
README = ROOT / "scripts" / "README_big_brain.md"


def load_module():
    spec = importlib.util.spec_from_file_location("timmy_gemma4_mac", str(SCRIPT))
    mod = importlib.util.module_from_spec(spec)
    sys.modules["timmy_gemma4_mac"] = mod
    spec.loader.exec_module(mod)
    return mod


class _FakeResponse:
    def __init__(self, payload: dict):
        self._payload = json.dumps(payload).encode()

    def read(self) -> bytes:
        return self._payload

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


def test_script_exists() -> None:
    assert SCRIPT.exists(), "scripts/timmy_gemma4_mac.py must exist"


def test_default_paths_target_timmy_mac_hermes() -> None:
    mod = load_module()
    assert mod.DEFAULT_CONFIG_PATH == Path.home() / ".hermes" / "config.yaml"
    assert mod.DEFAULT_TOKEN_FILE == Path.home() / ".config" / "runpod" / "access_key"


def test_choose_base_url_prefers_vertex_then_explicit_then_runpod() -> None:
    mod = load_module()
    assert mod.choose_base_url(vertex_base_url="https://vertex-proxy.example/v1") == "https://vertex-proxy.example/v1"
    assert mod.choose_base_url(base_url="https://custom-endpoint/v1") == "https://custom-endpoint/v1"
    assert mod.choose_base_url(pod_id="abc123") == "https://abc123-11434.proxy.runpod.net/v1"


def test_build_summary_includes_prove_it_commands() -> None:
    mod = load_module()
    summary = mod.build_summary(base_url="https://vertex-proxy.example/v1", model="gemma4:latest")
    assert summary["verification_commands"][0] == "python3 scripts/verify_big_brain.py"
    assert any("hermes chat --model gemma4 --provider big_brain" in cmd for cmd in summary["verification_commands"])


def test_verify_openai_chat_targets_chat_completions() -> None:
    mod = load_module()
    response_payload = {
        "choices": [{"message": {"content": "READY"}}]
    }

    with patch("timmy_gemma4_mac.request.urlopen", return_value=_FakeResponse(response_payload)) as mocked:
        result = mod.verify_openai_chat("https://vertex-proxy.example/v1", model="gemma4:latest", prompt="say READY")

    assert result == "READY"
    req = mocked.call_args.args[0]
    assert req.full_url == "https://vertex-proxy.example/v1/chat/completions"


def test_readme_mentions_timmy_mac_wiring_flow() -> None:
    text = README.read_text(encoding="utf-8")
    required = [
        "scripts/timmy_gemma4_mac.py",
        "--vertex-base-url",
        "--write-config",
        "python3 scripts/verify_big_brain.py",
        "hermes chat --model gemma4 --provider big_brain",
    ]
    missing = [item for item in required if item not in text]
    assert not missing, missing
```