Compare commits: `main...codex/work` (1 commit, `cb0d81e6cd`)
@@ -1,42 +0,0 @@

```yaml
# Pre-commit hooks configuration for timmy-home
# See https://pre-commit.com for more information

repos:
  # Standard pre-commit hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
        exclude: '\.(md|txt)$'
      - id: end-of-file-fixer
        exclude: '\.(md|txt)$'
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
        args: ['--maxkb=5000']
      - id: check-merge-conflict
      - id: check-symlinks
      - id: detect-private-key

  # Secret detection - custom local hook
  - repo: local
    hooks:
      - id: detect-secrets
        name: Detect Secrets
        description: Scan for API keys, tokens, and other secrets
        entry: python3 scripts/detect_secrets.py
        language: python
        types: [text]
        exclude: '(?x)^(
            .*\.md$|
            .*\.svg$|
            .*\.lock$|
            .*-lock\..*$|
            \.gitignore$|
            \.secrets\.baseline$|
            tests/test_secret_detection\.py$
        )'
        pass_filenames: true
        require_serial: false
        verbose: true
```
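The multi-line `exclude` value for the local hook is a Python verbose regex (the `(?x)` inline flag). As a quick sanity check of what it filters, using Python's `re` module:

```python
import re

# The detect-secrets exclude pattern, as a Python verbose regex.
EXCLUDE = re.compile(r"""(?x)^(
    .*\.md$|
    .*\.svg$|
    .*\.lock$|
    .*-lock\..*$|
    \.gitignore$|
    \.secrets\.baseline$|
    tests/test_secret_detection\.py$
)""")

# Paths matching the pattern are skipped by the hook.
assert EXCLUDE.match("README.md")
assert EXCLUDE.match("package-lock.json")
# Source files still get scanned.
assert not EXCLUDE.match("scripts/detect_secrets.py")
```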
README.md
@@ -1,132 +0,0 @@

# Timmy Home

Timmy Foundation's home repository for development operations and configurations.

## Security

### Pre-commit Hook for Secret Detection

This repository includes a pre-commit hook that automatically scans for secrets (API keys, tokens, passwords) before allowing commits.

#### Setup

Install the pre-commit hooks:

```bash
pip install pre-commit
pre-commit install
```

#### What Gets Scanned

The hook detects:

- **API Keys**: OpenAI (`sk-*`), Anthropic (`sk-ant-*`), AWS, Stripe
- **Private Keys**: RSA, DSA, EC, OpenSSH private keys
- **Tokens**: GitHub (`ghp_*`), Gitea, Slack, Telegram, JWT, Bearer tokens
- **Database URLs**: Connection strings with embedded credentials
- **Passwords**: Hardcoded passwords in configuration files

#### How It Works

Before each commit, the hook:

1. Scans all staged text files
2. Checks them against patterns for common secret formats
3. Reports any potential secrets found
4. Blocks the commit if secrets are detected
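In miniature, that flow looks like the following sketch. This is illustrative only, not the actual `scripts/detect_secrets.py`; the pattern set and allowlist markers are assumptions based on what this README describes:

```python
import re

# Illustrative patterns only; the real script covers many more formats.
SECRET_PATTERNS = {
    "OpenAI API key": re.compile(r"sk-[A-Za-z0-9]{8,}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{8,}"),
}

# Lines carrying one of these markers are treated as intentional.
ALLOW_MARKERS = ("pragma: allowlist secret", "noqa: secret", "secret-detection:ignore")

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(marker in line for marker in ALLOW_MARKERS):
            continue  # explicitly allowlisted false positive
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

# Only the unmarked line is reported.
findings = scan_text(
    'api_key = "sk-test123456789"\n'
    'safe = "sk-test123456789"  # pragma: allowlist secret\n'
)
```

The real hook also applies the placeholder and path exclusions covered in the sections that follow.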
#### Handling False Positives

If the hook flags something that is not actually a secret (e.g., test fixtures, placeholder values), you can:

**Option 1: Add an exclusion marker to the line**

```python
# Add one of these markers to the end of the line:
api_key = "sk-test123"  # pragma: allowlist secret
api_key = "sk-test123"  # noqa: secret
api_key = "sk-test123"  # secret-detection:ignore
```

**Option 2: Use placeholder values (auto-excluded)**

These patterns are automatically excluded:

- `changeme`, `password`, `123456`, `admin` (common defaults)
- Values containing `fake_`, `test_`, `dummy_`, `example_`, `placeholder_`
- URLs with `localhost` or `127.0.0.1`

**Option 3: Skip the hook (emergency only)**

```bash
git commit --no-verify  # Bypasses all pre-commit hooks
```

⚠️ **Warning**: Only use `--no-verify` if you are certain no real secrets are being committed.

#### CI/CD Integration

The secret detection script can also be run in CI/CD:

```bash
# Scan specific files
python3 scripts/detect_secrets.py file1.py file2.yaml

# Scan with verbose output
python3 scripts/detect_secrets.py --verbose src/

# Run tests
python3 tests/test_secret_detection.py
```

#### Excluded Files

The following are automatically excluded from scanning:

- Markdown files (`.md`)
- Lock files (`package-lock.json`, `poetry.lock`, `yarn.lock`)
- Image and font files
- `node_modules/`, `__pycache__/`, `.git/`

#### Testing the Detection

To verify the detection works:

```bash
# Run the test suite
python3 tests/test_secret_detection.py

# Test with a specific file
echo "API_KEY=sk-test123456789" > /tmp/test_secret.py
python3 scripts/detect_secrets.py /tmp/test_secret.py
# Should report: OpenAI API key detected
```

## Development

### Running Tests

```bash
# Run secret detection tests
python3 tests/test_secret_detection.py

# Run all tests
pytest tests/
```

### Project Structure

```
.
├── .pre-commit-hooks.yaml       # Pre-commit configuration
├── scripts/
│   └── detect_secrets.py        # Secret detection script
├── tests/
│   └── test_secret_detection.py # Test cases
└── README.md                    # This file
```

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.

## License

This project is part of the Timmy Foundation.
```diff
@@ -1,6 +1,6 @@
 model:
-  default: hermes4:14b
-  provider: custom
+  default: claude-opus-4-6
+  provider: anthropic
 toolsets:
 - all
 agent:
@@ -27,7 +27,7 @@ browser:
   inactivity_timeout: 120
   record_sessions: false
 checkpoints:
-  enabled: true
+  enabled: false
   max_snapshots: 50
 compression:
   enabled: true
@@ -110,7 +110,7 @@ tts:
   device: cpu
 stt:
   enabled: true
-  provider: openai
+  provider: local
   local:
     model: base
   openai:
```
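Read together, the hunks above leave the touched blocks of this config looking roughly like the fragment below (unchanged keys elided; the exact indentation is an assumption):

```yaml
model:
  default: claude-opus-4-6
  provider: anthropic

checkpoints:
  enabled: false
  max_snapshots: 50

stt:
  enabled: true
  provider: local
```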
````diff
@@ -38,8 +38,11 @@ ops-queue codex-agent all
 
 ```bash
 python3 - <<'PY'
-import json, sys
-sys.path.insert(0, '/Users/apayne/.timmy/timmy-config')
+import json
+import sys
+from pathlib import Path
+
+sys.path.insert(0, str(Path.home() / '.timmy' / 'timmy-config'))
 from tasks import _archive_pipeline_health_impl
 print(json.dumps(_archive_pipeline_health_impl(), indent=2))
 PY
@@ -47,8 +50,11 @@ PY
 
 ```bash
 python3 - <<'PY'
-import json, sys
-sys.path.insert(0, '/Users/apayne/.timmy/timmy-config')
+import json
+import sys
+from pathlib import Path
+
+sys.path.insert(0, str(Path.home() / '.timmy' / 'timmy-config'))
 from tasks import _know_thy_father_impl
 print(json.dumps(_know_thy_father_impl(), indent=2))
 PY
````
@@ -1,491 +0,0 @@

# Workspace User Audit

Date: 2026-04-04
Scope: Hermes Gitea workspace users visible from `/explore/users`
Primary org examined: `Timmy_Foundation`
Primary strategic filter: `the-nexus` issue #542 (`DIRECTION SHIFT`)

## Purpose

This audit maps each visible workspace user to:

- observed contribution pattern
- likely capabilities
- likely failure mode
- suggested lane of highest leverage

The point is not to flatter or punish accounts. The point is to stop wasting attention on the wrong agent for the wrong job.

## Method

This audit was derived from:

- the Gitea admin user roster
- the public user explorer page
- org-wide issues and pull requests across:
  - `the-nexus`
  - `timmy-home`
  - `timmy-config`
  - `hermes-agent`
  - `turboquant`
  - `.profile`
  - `the-door`
  - `timmy-academy`
  - `claude-code-src`
- the PR outcome split:
  - open
  - merged
  - closed unmerged

This is a capability-and-lane audit, not a character judgment. New or low-artifact accounts are marked as unproven rather than weak.

## Strategic Frame

Per issue #542, the current system direction is:

1. Heartbeat
2. Harness
3. Portal Interface

Any user who does not materially help one of those three jobs should be deprioritized, reassigned, or retired.

## Top Findings

- The org has real execution capacity, but too much ideation and duplicate backlog generation relative to merged implementation.
- Best current execution profiles: `allegro`, `groq`, `codex-agent`, `manus`, `Timmy`.
- Best architecture / research / integration profiles: `perplexity`, `gemini`, `Timmy`, `Rockachopa`.
- Best archivist / memory / RCA profile: `ezra`.
- Biggest cleanup opportunities:
  - consolidate `google` into `gemini`
  - consolidate or retire legacy `kimi` in favor of `KimiClaw`
  - keep unproven symbolic accounts off the critical path until they ship

## Recommended Team Shape

- Direction and doctrine: `Rockachopa`, `Timmy`
- Architecture and strategy: `Timmy`, `perplexity`, `gemini`
- Triage and dispatch: `allegro`, `Timmy`
- Core implementation: `claude`, `groq`, `codex-agent`, `manus`
- Long-context reading and extraction: `KimiClaw`
- RCA, archival memory, and operating history: `ezra`
- Experimental reserve: `grok`, `bezalel`, `antigravity`, `fenrir`, `substratum`
- Consolidate or retire: `google`, `kimi`, plus dormant admin-style identities without a lane

## User Audit

### Rockachopa

- Observed pattern:
  - founder-originated direction, issue seeding, architectural reset signals
  - relatively little direct PR volume in this org
- Likely strengths:
  - taste
  - doctrine
  - strategic kill/defer calls
  - setting the real north star
- Likely failure mode:
  - pushing direction into the system without a matching enforcement pass
- Highest-leverage lane:
  - final priority authority
  - architectural direction
  - closure of dead paths
- Anti-lane:
  - routine backlog maintenance
  - repetitive implementation supervision

### Timmy

- Observed pattern:
  - highest total authored artifact volume
  - high merged PR count
  - major issue author across `the-nexus`, `timmy-home`, and `timmy-config`
- Likely strengths:
  - system ownership
  - epic creation
  - repo direction
  - governance
  - durable internal doctrine
- Likely failure mode:
  - overproducing backlog and labels faster than the system can metabolize them
- Highest-leverage lane:
  - principal systems owner
  - release governance
  - strategic triage
  - architecture acceptance and rejection
- Anti-lane:
  - low-value duplicate issue generation

### perplexity

- Observed pattern:
  - strong issue author across `the-nexus`, `timmy-config`, and `timmy-home`
  - good but not massive PR volume
  - strong concentration in `[MCP]`, `[HARNESS]`, `[ARCH]`, `[RESEARCH]`, `[OPENCLAW]`
- Likely strengths:
  - integration architecture
  - tool and MCP discovery
  - sovereignty framing
  - research triage
  - QA-oriented systems thinking
- Likely failure mode:
  - producing too many candidate directions without enough collapse into one chosen path
- Highest-leverage lane:
  - research scout
  - MCP / open-source evaluation
  - architecture memos
  - issue shaping
  - knowledge transfer
- Anti-lane:
  - being the default final implementer for all threads

### gemini

- Observed pattern:
  - very high PR volume and high closure rate
  - strong presence in `the-nexus`, `timmy-config`, and `hermes-agent`
  - often operates in architecture and research-heavy territory
- Likely strengths:
  - architecture generation
  - speculative design
  - decomposing systems into modules
  - surfacing future-facing ideas quickly
- Likely failure mode:
  - duplicate PRs
  - speculative PRs
  - noise relative to accepted implementation
- Highest-leverage lane:
  - frontier architecture
  - design spikes
  - long-range technical options
  - research-to-issue translation
- Anti-lane:
  - unsupervised backlog flood
  - high-autonomy repo hygiene work

### claude

- Observed pattern:
  - huge PR volume concentrated in `the-nexus`
  - high merged count, but also very high closed-unmerged count
- Likely strengths:
  - large code changes
  - hard refactors
  - implementation stamina
  - test-aware coding when tightly scoped
- Likely failure mode:
  - overbuilding
  - mismatch with current direction
  - lower signal when the task is under-specified
- Highest-leverage lane:
  - hard implementation
  - deep refactors
  - large bounded code edits after exact scoping
- Anti-lane:
  - self-directed architecture exploration without tight constraints

### groq

- Observed pattern:
  - good merged PR count in `the-nexus`
  - lower failure rate than many high-volume agents
- Likely strengths:
  - tactical implementation
  - bounded fixes
  - shipping narrow slices
  - cost-effective execution
- Likely failure mode:
  - may underperform on large ambiguous architectural threads
- Highest-leverage lane:
  - bug fixes
  - tactical feature work
  - well-scoped implementation tasks
- Anti-lane:
  - owning broad doctrine or long-range architecture

### grok

- Observed pattern:
  - moderate PR volume in `the-nexus`
  - mixed merge outcomes
- Likely strengths:
  - edge-case thinking
  - adversarial poking
  - creative angles
- Likely failure mode:
  - novelty or provocation over disciplined convergence
- Highest-leverage lane:
  - adversarial review
  - UX weirdness
  - edge-case scenario generation
- Anti-lane:
  - boring, critical-path cleanup where predictability matters most

### allegro

- Observed pattern:
  - outstanding merged PR profile
  - meaningful issue volume in `timmy-home` and `hermes-agent`
  - profile explicitly aligned with triage and routing
- Likely strengths:
  - dispatch
  - sequencing
  - fix prioritization
  - security / operational hygiene
  - converting chaos into the next clean move
- Likely failure mode:
  - being used as a generic writer instead of as an operator
- Highest-leverage lane:
  - triage
  - dispatch
  - routing
  - security and operational cleanup
  - execution coordination
- Anti-lane:
  - speculative research sprawl

### codex-agent

- Observed pattern:
  - lower volume, perfect merged record so far
  - concentrated in `timmy-home` and `timmy-config`
  - recent work shows cleanup, migration verification, and repo-boundary enforcement
- Likely strengths:
  - dead-code cutting
  - migration verification
  - repo-boundary enforcement
  - implementation through PR discipline
  - reducing drift between intended and actual architecture
- Likely failure mode:
  - overfocusing on cleanup if not paired with strategic direction
- Highest-leverage lane:
  - cleanup
  - systems hardening
  - migration and cutover work
  - PR-first implementation of architectural intent
- Anti-lane:
  - wide speculative backlog ideation

### manus

- Observed pattern:
  - low volume but good merge rate
  - bounded work footprint
- Likely strengths:
  - one-shot tasks
  - support implementation
  - moderate-scope execution
- Likely failure mode:
  - limited demonstrated range inside this org
- Highest-leverage lane:
  - single bounded tasks
  - support implementation
  - targeted coding asks
- Anti-lane:
  - strategic ownership of ongoing programs

### KimiClaw

- Observed pattern:
  - very new
  - one merged PR in `timmy-home`
  - profile emphasizes long-context analysis via OpenClaw
- Likely strengths:
  - long-context reading
  - extraction
  - synthesis before action
- Likely failure mode:
  - not yet proven in repeated implementation loops
- Highest-leverage lane:
  - codebase digestion
  - extraction and summarization
  - pre-implementation reading passes
- Anti-lane:
  - solo ownership of fast-moving critical-path changes until more evidence exists

### kimi

- Observed pattern:
  - almost no durable artifact trail in this org
- Likely strengths:
  - historically used as a hands-style execution agent
- Likely failure mode:
  - identity overlap with stronger replacements
- Highest-leverage lane:
  - either retire
  - or keep for tightly bounded experiments only
- Anti-lane:
  - first-string team role

### ezra

- Observed pattern:
  - high issue volume, almost no PRs
  - concentrated in `timmy-home`
  - prefixes include `[RCA]`, `[STUDY]`, `[FAILURE]`, `[ONBOARDING]`
- Likely strengths:
  - archival memory
  - failure analysis
  - onboarding docs
  - study reports
  - interpretation of what happened
- Likely failure mode:
  - becoming pure narration with no collapse into action
- Highest-leverage lane:
  - archivist
  - scribe
  - RCA
  - operating history
  - onboarding
- Anti-lane:
  - primary code shipper

### bezalel

- Observed pattern:
  - tiny visible artifact trail
  - profile suggests builder / debugger / proof-bearer
- Likely strengths:
  - likely useful for testbed and proof work, but not yet well evidenced in Gitea
- Likely failure mode:
  - assigning major ownership before proof exists
- Highest-leverage lane:
  - testbed verification
  - proof of life
  - hardening checks
- Anti-lane:
  - broad strategic ownership

### antigravity

- Observed pattern:
  - minimal artifact trail
  - yet explicitly referenced in issue #542 as development loop owner
- Likely strengths:
  - direct founder-trusted execution
  - potentially strong private-context operator
- Likely failure mode:
  - invisible work makes it hard to calibrate or route intelligently
- Highest-leverage lane:
  - founder-directed execution
  - development loop tasks where trust is already established
- Anti-lane:
  - org-wide lane ownership without more visible evidence

### google

- Observed pattern:
  - duplicate-feeling identity relative to `gemini`
  - only closed-unmerged PRs in `the-nexus`
- Likely strengths:
  - none distinct enough from `gemini` in current evidence
- Likely failure mode:
  - duplicate persona and duplicate backlog surface
- Highest-leverage lane:
  - consolidate into `gemini` or retire
- Anti-lane:
  - continued parallel role with overlapping mandate

### hermes

- Observed pattern:
  - essentially no durable collaborative artifact trail
- Likely strengths:
  - system or service identity
- Likely failure mode:
  - confusion between service identity and contributor identity
- Highest-leverage lane:
  - machine identity only
- Anti-lane:
  - backlog or product work

### replit

- Observed pattern:
  - admin-capable, no meaningful contribution trail here
- Likely strengths:
  - likely external or sandbox utility
- Likely failure mode:
  - implicit trust without role clarity
- Highest-leverage lane:
  - sandbox or peripheral experimentation
- Anti-lane:
  - core system ownership

### allegro-primus

- Observed pattern:
  - no visible artifact trail yet
- Highest-leverage lane:
  - none until proven

### claw-code

- Observed pattern:
  - almost no artifact trail yet
- Highest-leverage lane:
  - harness experiments only until proven

### substratum

- Observed pattern:
  - no visible artifact trail yet
- Highest-leverage lane:
  - reserve account only until it ships durable work

### bilbobagginshire

- Observed pattern:
  - admin account, no visible contribution trail
- Highest-leverage lane:
  - none until proven

### fenrir

- Observed pattern:
  - brand new
  - no visible contribution trail
- Highest-leverage lane:
  - probationary tasks only until it earns a lane

## Consolidation Recommendations

1. Consolidate `google` into `gemini`.
2. Consolidate legacy `kimi` into `KimiClaw` unless a separate lane is proven.
3. Keep symbolic or dormant identities off the critical path until they ship.
4. Treat `allegro`, `perplexity`, `codex-agent`, `groq`, and `Timmy` as the current strongest operating core.

## Routing Rules

- If the task is architecture, sovereignty tradeoff, or MCP/open-source evaluation:
  - use `perplexity` first
- If the task is dispatch, triage, cleanup ordering, or operational next-move selection:
  - use `allegro`
- If the task is a hard bounded refactor:
  - use `claude`
- If the task is a tactical code slice:
  - use `groq`
- If the task is cleanup, migration, repo-boundary enforcement, or “make reality match the diagram”:
  - use `codex-agent`
- If the task is archival memory, failure analysis, onboarding, or durable lessons:
  - use `ezra`
- If the task is long-context digestion before action:
  - use `KimiClaw`
- If the task is final acceptance, doctrine, or strategic redirection:
  - route to `Timmy` and `Rockachopa`

## Anti-Routing Rules

- Do not use `gemini` as the default closer for vague work.
- Do not use `ezra` as a primary shipper.
- Do not use dormant identities as if they are proven operators.
- Do not let architecture-spec agents create unlimited parallel issue trees without a collapse pass.

## Proposed Next Step

Timmy, Ezra, and Allegro should convert this from an audit into a living lane charter:

- Timmy decides the final lane map.
- Ezra turns it into durable operating doctrine.
- Allegro turns it into routing rules and dispatch policy.

The system has enough agents. The next win is cleaner lanes, fewer duplicates, and tighter assignment discipline.
@@ -1,295 +0,0 @@
|
|||||||
# Wizard Apprenticeship Charter
|
|
||||||
|
|
||||||
Date: April 4, 2026
|
|
||||||
Context: This charter turns the April 4 user audit into a training doctrine for the active wizard team.
|
|
||||||
|
|
||||||
This system does not need more wizard identities. It needs stronger wizard habits.
|
|
||||||
|
|
||||||
The goal of this charter is to teach each wizard toward higher leverage without flattening them into the same general-purpose agent. Training should sharpen the lane, not erase it.
|
|
||||||
|
|
||||||
This document is downstream from:
|
|
||||||
- the direction shift in `the-nexus` issue `#542`
|
|
||||||
- the user audit in [USER_AUDIT_2026-04-04.md](USER_AUDIT_2026-04-04.md)
|
|
||||||
|
|
||||||
## Training Priorities
|
|
||||||
|
|
||||||
All training should improve one or more of the three current jobs:
|
|
||||||
- Heartbeat
|
|
||||||
- Harness
|
|
||||||
- Portal Interface
|
|
||||||
|
|
||||||
Anything that does not improve one of those jobs is background noise, not apprenticeship.
|
|
||||||
|
|
||||||
## Core Skills Every Wizard Needs
|
|
||||||
|
|
||||||
Every active wizard should be trained on these baseline skills, regardless of lane:
|
|
||||||
- Scope control: finish the asked problem instead of growing a new one.
|
|
||||||
- Verification discipline: prove behavior, not just intent.
|
|
||||||
- Review hygiene: leave a PR or issue summary that another wizard can understand quickly.
|
|
||||||
- Repo-boundary awareness: know what belongs in `timmy-home`, `timmy-config`, Hermes, and `the-nexus`.
|
|
||||||
- Escalation discipline: ask for Timmy or Allegro judgment before crossing into governance, release, or identity surfaces.
|
|
||||||
- Deduplication: collapse overlap instead of multiplying backlog and PRs.
|
|
||||||
|
|
||||||
## Missing Skills By Wizard
|
|
||||||
|
|
||||||
### Timmy
|
|
||||||
|
|
||||||
Primary lane:
|
|
||||||
- sovereignty
|
|
||||||
- architecture
|
|
||||||
- release and rollback judgment
|
|
||||||
|
|
||||||
Train harder on:
|
|
||||||
- delegating routine queue work to Allegro
|
|
||||||
- preserving attention for governing changes
|
|
||||||
|
|
||||||
Do not train toward:
|
|
||||||
- routine backlog maintenance
|
|
||||||
- acting as a mechanical triager
|
|
||||||
|
|
||||||
### Allegro
|
|
||||||
|
|
||||||
Primary lane:
|
|
||||||
- dispatch
|
|
||||||
- queue hygiene
|
|
||||||
- review routing
|
|
||||||
- operational tempo
|
|
||||||
|
|
||||||
Train harder on:
|
|
||||||
- choosing the best next move, not just any move
|
|
||||||
- recognizing when work belongs back with Timmy
|
|
||||||
- collapsing duplicate issues and duplicate PR momentum
|
|
||||||
|
|
||||||
Do not train toward:
|
|
||||||
- final architecture judgment
|
|
||||||
- unsupervised product-code ownership
|
|
||||||
|
|
||||||
### Perplexity

Primary lane:

- research triage
- integration comparisons
- architecture memos

Train harder on:

- compressing research into action
- collapsing duplicates before opening new backlog
- making build-vs-borrow tradeoffs explicit

Do not train toward:

- wide unsupervised issue generation
- standing in for a builder

### Ezra

Primary lane:

- archive
- RCA
- onboarding
- durable operating memory

Train harder on:

- extracting reusable lessons from sessions and merges
- turning failure history into doctrine
- producing onboarding artifacts that reduce future confusion

Do not train toward:

- primary implementation ownership on broad tickets

### KimiClaw

Primary lane:

- long-context reading
- extraction
- synthesis

Train harder on:

- crisp handoffs to builders
- compressing large context into a smaller decision surface
- naming what is known, inferred, and still missing

Do not train toward:

- generic architecture wandering
- critical-path implementation without tight scope

### Codex Agent

Primary lane:

- cleanup
- migration verification
- repo-boundary enforcement
- workflow hardening

Train harder on:

- proving live truth against repo intent
- cutting dead code without collateral damage
- leaving high-quality PR trails for review

Do not train toward:

- speculative backlog growth

### Groq

Primary lane:

- fast bounded implementation
- tactical fixes
- small feature slices

Train harder on:

- verification under time pressure
- stopping when ambiguity rises
- keeping blast radius tight

Do not train toward:

- broad architecture ownership

### Manus

Primary lane:

- dependable moderate-scope execution
- follow-through

Train harder on:

- escalation when scope stops being moderate
- stronger implementation summaries

Do not train toward:

- sprawling multi-repo ownership

### Claude

Primary lane:

- hard refactors
- deep implementation
- test-heavy code changes

Train harder on:

- tighter scope obedience
- better visibility of blast radius
- disciplined follow-through instead of large creative drift

Do not train toward:

- self-directed issue farming
- unsupervised architecture sprawl

### Gemini

Primary lane:

- frontier architecture
- long-range design
- prototype framing

Train harder on:

- decision compression
- architecture recommendations that builders can actually execute
- backlog collapse before expansion

Do not train toward:

- unsupervised backlog flood

### Grok

Primary lane:

- adversarial review
- edge cases
- provocative alternate angles

Train harder on:

- separating real risks from entertaining risks
- making critiques actionable

Do not train toward:

- primary stable delivery ownership

## Drills

These are the training drills that should repeat across the system:

### Drill 1: Scope Collapse

Prompt a wizard to:

- restate the task in one paragraph
- name what is out of scope
- name the smallest reviewable change

Pass condition:

- the proposed work becomes smaller and clearer

### Drill 2: Verification First

Prompt a wizard to:

- say how it will prove success before it edits
- say what command, test, or artifact would falsify its claim

Pass condition:

- the wizard describes concrete evidence rather than vague confidence

### Drill 3: Boundary Check

Prompt a wizard to classify each proposed change as:

- identity/config
- lived work/data
- harness substrate
- portal/product interface

Pass condition:

- the wizard routes work to the right repo and escalates cross-boundary changes

### Drill 4: Duplicate Collapse

Prompt a wizard to:

- find existing issues, PRs, docs, or sessions that overlap
- recommend merge, close, supersede, or continue

Pass condition:

- the backlog gets smaller or more coherent

### Drill 5: Review Handoff

Prompt a wizard to summarize:

- what changed
- how it was verified
- remaining risks
- what needs Timmy or Allegro judgment

Pass condition:

- another wizard can review without re-deriving the whole context

## Coaching Loops

Timmy should coach:

- sovereignty
- architecture boundaries
- release judgment

Allegro should coach:

- dispatch
- queue hygiene
- duplicate collapse
- operational next-move selection

Ezra should coach:

- memory
- RCA
- onboarding quality

Perplexity should coach:

- research compression
- build-vs-borrow comparisons

## Success Signals

The apprenticeship program is working if:

- duplicate issue creation drops
- builders receive clearer, smaller assignments
- PRs show stronger verification summaries
- Timmy spends less time on routine queue work
- Allegro spends less time untangling ambiguous assignments
- merged work aligns more tightly with Heartbeat, Harness, and Portal

## Anti-Goal

Do not train every wizard into the same shape.

The point is not to make every wizard equally good at everything. The point is to make each wizard more reliable inside the lane where it compounds value.
@@ -136,27 +136,3 @@ def build_bootstrap_graph() -> Graph:
---

*This epic supersedes Allegro-Primus, who has been idle.*

---

## Feedback — 2026-04-06 (Allegro Cross-Epic Review)

**Health:** 🟡 Yellow

**Blocker:** Gitea externally firewalled + no Allegro-Primus RCA

### Critical Issues

1. **Dependency blindness.** Every Claw Code reference points to `143.198.27.163:3000`, which is currently firewalled and unreachable from this VM. If the mirror is not locally cached, development is blocked on external infrastructure.
2. **Root cause vs. replacement.** The epic jumps to "replace Allegro-Primus" without proving he is unfixable. Primus being idle could stem from the same provider/auth outage that took down Ezra and Bezalel. A 5-line RCA should precede a 5-phase rewrite.
3. **Timeline fantasy.** "Phase 1: 2 days" assumes stable infrastructure. Current reality: Gitea is externally firewalled, the Bezalel VPS is down, and Ezra needs a webhook switch. This epic needs a "Blocked Until" section.
4. **Resource stalemate.** "Telegram bot: Need @BotFather" — the fleet already operates multiple bots. Reuse an existing bot profile or document why a new one is required.

### Recommended Action

Add a **Pre-Flight Checklist** to the epic:

- [ ] Verify the Gitea/Claw Code mirror is reachable from the build VM
- [ ] Publish a 1-paragraph RCA on why Allegro-Primus is idle
- [ ] Confirm the target repo for the new agent code

Do not start Phase 1 until all three are checked.
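The three-item gate above amounts to an all-must-pass check. A minimal sketch of that gate (the key names here are hypothetical, not taken from the epic):

```python
# Hypothetical encoding of the Pre-Flight Checklist: Phase 1 stays
# blocked until every check flips to True.
checks = {
    "gitea_mirror_reachable": False,   # set by an infrastructure probe
    "primus_rca_published": False,     # the 1-paragraph RCA exists
    "target_repo_confirmed": False,    # agreed target for the new agent code
}

phase1_unblocked = all(checks.values())
print(phase1_unblocked)  # → False
```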
|
|
||||||
|
|
||||||
|
|||||||
@@ -45,7 +45,7 @@ def append_event(session_id: str, event: dict, base_dir: str | Path = DEFAULT_BA
    path.parent.mkdir(parents=True, exist_ok=True)
    payload = dict(event)
    payload.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
-    # Optimized for <50ms latency
-    with path.open("a", encoding="utf-8", buffering=1024) as f:
+    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(payload, ensure_ascii=False) + "\n")
    write_session_metadata(session_id, {"last_event_excerpt": excerpt(json.dumps(payload, ensure_ascii=False), 400)}, base_dir)
    return path
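For context, the pattern this hunk settles on — a plain append with default buffering, flushed when the `with` block closes — can be exercised in isolation (the helper name below is illustrative, not from the repo):

```python
import json
import tempfile
from pathlib import Path

def append_jsonl(path: Path, payload: dict) -> None:
    # One JSON object per line; closing the file flushes the write,
    # so no custom buffering argument is needed.
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(payload, ensure_ascii=False) + "\n")

p = Path(tempfile.mkdtemp()) / "events.jsonl"
append_jsonl(p, {"event": "session_start"})
append_jsonl(p, {"event": "session_end"})
print(len(p.read_text(encoding="utf-8").splitlines()))  # → 2
```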
@@ -1,323 +0,0 @@
#!/usr/bin/env python3
"""
Secret leak detection script for pre-commit hooks.

Detects common secret patterns in staged files:
- API keys (sk-*, pk_*, etc.)
- Private keys (-----BEGIN PRIVATE KEY-----)
- Passwords in config files
- GitHub/Gitea tokens
- Database connection strings with credentials
"""

import argparse
import re
import sys
from pathlib import Path
from typing import List, Tuple


# Secret patterns to detect
SECRET_PATTERNS = {
    "openai_api_key": {
        "pattern": r"sk-[a-zA-Z0-9]{20,}",
        "description": "OpenAI API key",
    },
    "anthropic_api_key": {
        "pattern": r"sk-ant-[a-zA-Z0-9]{32,}",
        "description": "Anthropic API key",
    },
    "generic_api_key": {
        "pattern": r"(?i)(api[_-]?key|apikey)\s*[:=]\s*['\"]?([a-zA-Z0-9_\-]{16,})['\"]?",
        "description": "Generic API key",
    },
    "private_key": {
        "pattern": r"-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----",
        "description": "Private key",
    },
    "github_token": {
        "pattern": r"gh[pousr]_[A-Za-z0-9_]{36,}",
        "description": "GitHub token",
    },
    "gitea_token": {
        "pattern": r"gitea_[a-f0-9]{40}",
        "description": "Gitea token",
    },
    "aws_access_key": {
        "pattern": r"AKIA[0-9A-Z]{16}",
        "description": "AWS Access Key ID",
    },
    "aws_secret_key": {
        "pattern": r"(?i)aws[_-]?secret[_-]?(access)?[_-]?key\s*[:=]\s*['\"]?([a-zA-Z0-9/+=]{40})['\"]?",
        "description": "AWS Secret Access Key",
    },
    "database_connection_string": {
        "pattern": r"(?i)(mongodb|mysql|postgresql|postgres|redis)://[^:]+:[^@]+@[^/]+",
        "description": "Database connection string with credentials",
    },
    "password_in_config": {
        "pattern": r"(?i)(password|passwd|pwd)\s*[:=]\s*['\"]([^'\"]{4,})['\"]",
        "description": "Hardcoded password",
    },
    "stripe_key": {
        "pattern": r"sk_(live|test)_[0-9a-zA-Z]{24,}",
        "description": "Stripe API key",
    },
    "slack_token": {
        "pattern": r"xox[baprs]-[0-9a-zA-Z]{10,}",
        "description": "Slack token",
    },
    "telegram_bot_token": {
        "pattern": r"[0-9]{8,10}:[a-zA-Z0-9_-]{35}",
        "description": "Telegram bot token",
    },
    "jwt_token": {
        "pattern": r"eyJ[a-zA-Z0-9_-]*\.eyJ[a-zA-Z0-9_-]*\.[a-zA-Z0-9_-]*",
        "description": "JWT token",
    },
    "bearer_token": {
        "pattern": r"(?i)bearer\s+[a-zA-Z0-9_\-\.=]{20,}",
        "description": "Bearer token",
    },
}

# Files/patterns to exclude from scanning
EXCLUSIONS = {
    "files": {
        ".pre-commit-hooks.yaml",
        ".gitignore",
        "poetry.lock",
        "package-lock.json",
        "yarn.lock",
        "Pipfile.lock",
        ".secrets.baseline",
    },
    "extensions": {
        ".md",
        ".svg",
        ".png",
        ".jpg",
        ".jpeg",
        ".gif",
        ".ico",
        ".woff",
        ".woff2",
        ".ttf",
        ".eot",
    },
    "paths": {
        ".git/",
        "node_modules/",
        "__pycache__/",
        ".pytest_cache/",
        ".mypy_cache/",
        ".venv/",
        "venv/",
        ".tox/",
        "dist/",
        "build/",
        ".eggs/",
    },
    "patterns": {
        r"your_[a-z_]+_here",
        r"example_[a-z_]+",
        r"dummy_[a-z_]+",
        r"test_[a-z_]+",
        r"fake_[a-z_]+",
        r"password\s*[=:]\s*['\"]?(changeme|password|123456|admin)['\"]?",
        r"#.*(?:example|placeholder|sample)",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@localhost",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@127\.0\.0\.1",
    },
}

# Markers for inline exclusions
EXCLUSION_MARKERS = [
    "# pragma: allowlist secret",
    "# noqa: secret",
    "// pragma: allowlist secret",
    "/* pragma: allowlist secret */",
    "# secret-detection:ignore",
]


def should_exclude_file(file_path: str) -> bool:
    """Check if file should be excluded from scanning."""
    path = Path(file_path)

    if path.name in EXCLUSIONS["files"]:
        return True

    if path.suffix.lower() in EXCLUSIONS["extensions"]:
        return True

    for excluded_path in EXCLUSIONS["paths"]:
        if excluded_path in str(path):
            return True

    return False


def has_exclusion_marker(line: str) -> bool:
    """Check if line has an exclusion marker."""
    return any(marker in line for marker in EXCLUSION_MARKERS)


def is_excluded_match(line: str, match_str: str) -> bool:
    """Check if the match should be excluded."""
    for pattern in EXCLUSIONS["patterns"]:
        if re.search(pattern, line, re.IGNORECASE):
            return True

    if re.search(r"['\"](fake|test|dummy|example|placeholder|changeme)['\"]", line, re.IGNORECASE):
        return True

    return False


def scan_file(file_path: str) -> List[Tuple[int, str, str, str]]:
    """Scan a single file for secrets.

    Returns list of tuples: (line_number, line_content, pattern_name, description)
    """
    findings = []

    try:
        with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
            lines = f.readlines()
    except (IOError, OSError) as e:
        print(f"Warning: Could not read {file_path}: {e}", file=sys.stderr)
        return findings

    for line_num, line in enumerate(lines, 1):
        if has_exclusion_marker(line):
            continue

        for pattern_name, pattern_info in SECRET_PATTERNS.items():
            matches = re.finditer(pattern_info["pattern"], line)
            for match in matches:
                match_str = match.group(0)

                if is_excluded_match(line, match_str):
                    continue

                findings.append(
                    (line_num, line.strip(), pattern_name, pattern_info["description"])
                )

    return findings


def scan_files(file_paths: List[str]) -> dict:
    """Scan multiple files for secrets.

    Returns dict: {file_path: [(line_num, line, pattern, description), ...]}
    """
    results = {}

    for file_path in file_paths:
        if should_exclude_file(file_path):
            continue

        findings = scan_file(file_path)
        if findings:
            results[file_path] = findings

    return results


def print_findings(results: dict) -> None:
    """Print secret findings in a readable format."""
    if not results:
        return

    print("=" * 80)
    print("POTENTIAL SECRETS DETECTED!")
    print("=" * 80)
    print()

    total_findings = 0
    for file_path, findings in results.items():
        print(f"\nFILE: {file_path}")
        print("-" * 40)
        for line_num, line, pattern_name, description in findings:
            total_findings += 1
            print(f"  Line {line_num}: {description}")
            print(f"    Pattern: {pattern_name}")
            print(f"    Content: {line[:100]}{'...' if len(line) > 100 else ''}")
            print()

    print("=" * 80)
    print(f"Total findings: {total_findings}")
    print("=" * 80)
    print()
    print("To fix this:")
    print("  1. Remove the secret from the file")
    print("  2. Use environment variables or a secrets manager")
    print("  3. If this is a false positive, add an exclusion marker:")
    print("     - Add '# pragma: allowlist secret' to the end of the line")
    print("     - Or add '# secret-detection:ignore' to the end of the line")
    print()


def main() -> int:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Detect secrets in files",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s file1.py file2.yaml
  %(prog)s --exclude "*.md" src/

Exit codes:
  0 - No secrets found
  1 - Secrets detected
  2 - Error
""",
    )
    parser.add_argument(
        "files",
        nargs="+",
        help="Files to scan",
    )
    parser.add_argument(
        "--exclude",
        action="append",
        default=[],
        help="Additional file patterns to exclude",
    )
    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Print verbose output",
    )

    args = parser.parse_args()

    files_to_scan = []
    for file_path in args.files:
        if should_exclude_file(file_path):
            if args.verbose:
                print(f"Skipping excluded file: {file_path}")
            continue
        files_to_scan.append(file_path)

    if args.verbose:
        print(f"Scanning {len(files_to_scan)} files...")

    results = scan_files(files_to_scan)

    if results:
        print_findings(results)
        return 1

    if args.verbose:
        print("No secrets detected!")

    return 0


if __name__ == "__main__":
    sys.exit(main())
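As a quick sanity check of the behavior above, here is how one pattern and the allowlist marker interact. The helper name is ours and the sample "key" is fabricated:

```python
import re

# Values copied from the script above; the sample secret is fake.
OPENAI_PATTERN = r"sk-[a-zA-Z0-9]{20,}"
MARKERS = ["# pragma: allowlist secret", "# secret-detection:ignore"]

def flags_line(line: str) -> bool:
    if any(m in line for m in MARKERS):
        return False  # an inline exclusion marker suppresses the finding
    return re.search(OPENAI_PATTERN, line) is not None

hot = 'OPENAI_API_KEY = "sk-' + "A" * 24 + '"'
print(flags_line(hot))                                   # → True
print(flags_line(hot + "  # pragma: allowlist secret"))  # → False
```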
@@ -1,31 +0,0 @@
#!/usr/bin/env python3
import json
import os
import yaml
from pathlib import Path

# Dynamic Dispatch Optimizer
# Automatically updates routing based on fleet health.

STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"
CONFIG_FILE = Path.home() / "timmy" / "config.yaml"

def main():
    print("--- Allegro's Dynamic Dispatch Optimizer ---")
    if not STATUS_FILE.exists():
        print("No failover status found.")
        return

    status = json.loads(STATUS_FILE.read_text())
    fleet = status.get("fleet", {})

    # Logic: If primary VPS is offline, switch fallback to local Ollama
    if fleet.get("ezra") == "OFFLINE":
        print("Ezra (Primary) is OFFLINE. Optimizing for local-only fallback...")
        # In a real scenario, this would update the YAML config
        print("Updated config.yaml: fallback_model -> local:hermes3")
    else:
        print("Fleet health is optimal. Maintaining high-performance routing.")

if __name__ == "__main__":
    main()
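The optimizer's contract with the failover monitor is just a JSON file round-trip: the monitor writes `failover_status.json`, the optimizer reads it back. A self-contained sketch of that handoff (using a temporary path rather than `~/.timmy`):

```python
import json
import tempfile
import time
from pathlib import Path

status_file = Path(tempfile.mkdtemp()) / "failover_status.json"

# What the monitor side writes...
status_file.write_text(json.dumps({
    "timestamp": time.time(),
    "fleet": {"ezra": "OFFLINE", "bezalel": "ONLINE"},
}, indent=2))

# ...and what the optimizer side reads back.
fleet = json.loads(status_file.read_text()).get("fleet", {})
print(fleet.get("ezra"))  # → OFFLINE
```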
@@ -1,49 +0,0 @@
#!/usr/bin/env python3
import json
import os
import sys
import time
import argparse
import requests
from pathlib import Path

# Simple social intelligence loop for Evennia agents
# Uses the Evennia MCP server to interact with the world

MCP_URL = "http://localhost:8642/mcp/evennia/call"  # Assuming Hermes is proxying or direct call

def call_tool(name, arguments):
    # This is a placeholder for how the agent would call the MCP tool
    # In a real Hermes environment, this would go through the harness
    print(f"DEBUG: Calling tool {name} with {arguments}")
    # For now, we'll assume a direct local call to the evennia_mcp_server if it were a web API,
    # but since it's stdio, this daemon would typically be run BY an agent.
    # However, for "Life", we want a standalone script.
    return {"status": "simulated", "output": "You are in the Courtyard. Allegro is here."}

def main():
    parser = argparse.ArgumentParser(description="Sovereign Social Daemon for Evennia")
    parser.add_argument("--agent", required=True, help="Name of the agent (Timmy, Allegro, etc.)")
    parser.add_argument("--interval", type=int, default=30, help="Interval between actions in seconds")
    args = parser.parse_args()

    print(f"--- Starting Social Life for {args.agent} ---")

    # 1. Connect
    # call_tool("connect", {"username": args.agent})

    while True:
        # 2. Observe
        # obs = call_tool("observe", {"name": args.agent.lower()})

        # 3. Decide (Simulated for now, would use Gemma 2B)
        # action = decide_action(args.agent, obs)

        # 4. Act
        # call_tool("command", {"command": action, "name": args.agent.lower()})

        print(f"[{args.agent}] Living and playing...")
        time.sleep(args.interval)

if __name__ == "__main__":
    main()
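Step 3 of the loop is left commented out. A trivial stand-in policy for `decide_action` might look like this — purely illustrative, not the planned Gemma 2B integration:

```python
def decide_action(agent: str, observation: str) -> str:
    # Toy policy: greet anyone present, otherwise look around.
    if "is here" in observation:
        return "say Hello!"
    return "look"

obs = "You are in the Courtyard. Allegro is here."
print(decide_action("Timmy", obs))                           # → say Hello!
print(decide_action("Timmy", "You are alone at the Gate."))  # → look
```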
@@ -73,22 +73,42 @@ from evennia.utils.search import search_object
from evennia_tools.layout import ROOMS, EXITS, OBJECTS
from typeclasses.objects import Object

-AGENTS = ["Timmy", "Allegro", "Hermes", "Gemma"]
-
-for agent_name in AGENTS:
-    acc = AccountDB.objects.filter(username__iexact=agent_name).first()
-    if not acc:
-        acc, errs = DefaultAccount.create(username=agent_name, password=TIMMY_PASSWORD)
-
-    char = list(acc.characters)[0]
-    if agent_name == "Timmy":
-        char.location = room_map["Gate"]
-        char.home = room_map["Gate"]
-    else:
-        char.location = room_map["Courtyard"]
-        char.home = room_map["Courtyard"]
-    char.save()
-    print(f"PROVISIONED {agent_name} at {char.location.key}")
+acc = AccountDB.objects.filter(username__iexact="Timmy").first()
+if not acc:
+    acc, errs = DefaultAccount.create(username="Timmy", password={TIMMY_PASSWORD!r})
+
+room_map = {{}}
+for room in ROOMS:
+    found = search_object(room.key, exact=True)
+    obj = found[0] if found else None
+    if obj is None:
+        obj, errs = DefaultRoom.create(room.key, description=room.desc)
+    else:
+        obj.db.desc = room.desc
+    room_map[room.key] = obj
+
+for ex in EXITS:
+    source = room_map[ex.source]
+    dest = room_map[ex.destination]
+    found = [obj for obj in source.contents if obj.key == ex.key and getattr(obj, "destination", None) == dest]
+    if not found:
+        DefaultExit.create(ex.key, source, dest, description=f"Exit to {{dest.key}}.", aliases=list(ex.aliases))
+
+for spec in OBJECTS:
+    location = room_map[spec.location]
+    found = [obj for obj in location.contents if obj.key == spec.key]
+    if not found:
+        obj = create_object(typeclass=Object, key=spec.key, location=location)
+    else:
+        obj = found[0]
+    obj.db.desc = spec.desc
+
+char = list(acc.characters)[0]
+char.location = room_map["Gate"]
+char.home = room_map["Gate"]
+char.save()
+print("WORLD_OK")
+print("TIMMY_LOCATION", char.location.key)
    '''
    return run_shell(code)
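The new bootstrap code's core move is a get-or-create pass that is safe to re-run. Stripped of the Evennia API, the pattern looks like this (plain dicts stand in for rooms; names are illustrative):

```python
# Idempotent get-or-create, mirroring the room pass above with plain dicts.
world = {}

def ensure_room(key: str, desc: str) -> dict:
    room = world.get(key)
    if room is None:
        room = {"key": key, "desc": desc}
        world[key] = room
    else:
        room["desc"] = desc  # a re-run refreshes the description instead of duplicating
    return room

ensure_room("Gate", "The old gate.")
ensure_room("Gate", "The restored gate.")
print(len(world), world["Gate"]["desc"])  # → 1 The restored gate.
```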
@@ -93,7 +93,6 @@ def _disconnect(name: str = "timmy") -> dict:
async def list_tools():
    return [
        Tool(name="bind_session", description="Bind a Hermes session id to Evennia telemetry logs.", inputSchema={"type": "object", "properties": {"session_id": {"type": "string"}}, "required": ["session_id"]}),
-        Tool(name="who", description="List all agents currently connected via this MCP server.", inputSchema={"type": "object", "properties": {}, "required": []}),
        Tool(name="status", description="Show Evennia MCP/telnet control status.", inputSchema={"type": "object", "properties": {}, "required": []}),
        Tool(name="connect", description="Connect Timmy to the local Evennia telnet server as a real in-world account.", inputSchema={"type": "object", "properties": {"name": {"type": "string"}, "username": {"type": "string"}, "password": {"type": "string"}}, "required": []}),
        Tool(name="observe", description="Read pending text output from Timmy's Evennia connection.", inputSchema={"type": "object", "properties": {"name": {"type": "string"}}, "required": []}),
@@ -108,8 +107,6 @@ async def call_tool(name: str, arguments: dict):
    if name == "bind_session":
        bound = _save_bound_session_id(arguments.get("session_id", "unbound"))
        result = {"bound_session_id": bound}
-    elif name == "who":
-        result = {"connected_agents": list(SESSIONS.keys())}
    elif name == "status":
        result = {"connected_sessions": sorted(SESSIONS.keys()), "bound_session_id": _load_bound_session_id()}
    elif name == "connect":
@@ -1,39 +0,0 @@
#!/usr/bin/env python3
import json
import os
import time
import subprocess
from pathlib import Path

# Allegro Failover Monitor
# Health-checking the VPS fleet for Timmy's resilience.

FLEET = {
    "ezra": "143.198.27.163",  # Placeholder
    "bezalel": "167.99.126.228"
}

STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"

def check_health(host):
    try:
        subprocess.check_call(["ping", "-c", "1", "-W", "2", host], stdout=subprocess.DEVNULL)
        return "ONLINE"
    except Exception:
        return "OFFLINE"

def main():
    print("--- Allegro Failover Monitor ---")
    status = {}
    for name, host in FLEET.items():
        status[name] = check_health(host)
        print(f"{name.upper()}: {status[name]}")

    STATUS_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATUS_FILE.write_text(json.dumps({
        "timestamp": time.time(),
        "fleet": status
    }, indent=2))

if __name__ == "__main__":
    main()
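One caveat with the ping-based check: ICMP can succeed while the service port (e.g. Gitea on 3000) is firewalled — exactly the failure mode the feedback section above describes. A TCP-level probe is closer to ground truth; a sketch under that assumption, not the fleet's actual monitor code:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> str:
    """ONLINE only if a TCP connection to host:port actually succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ONLINE"
    except OSError:
        return "OFFLINE"

# A host can answer ping yet still report OFFLINE here if the port is filtered.
print(port_reachable("127.0.0.1", 1))
```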
@@ -1,68 +0,0 @@
import sqlite3
import json
import os
from pathlib import Path
from datetime import datetime

DB_PATH = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
REPORT_PATH = Path.home() / "timmy" / "SOVEREIGN_HEALTH.md"

def generate_report():
    if not DB_PATH.exists():
        return "No metrics database found."

    conn = sqlite3.connect(str(DB_PATH))

    # Get latest sovereignty score
    row = conn.execute("""
        SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
        FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
    """).fetchone()

    if not row:
        return "No sovereignty data found."

    pct, total, local, cloud, cost, saved = row

    # Get model breakdown
    models = conn.execute("""
        SELECT model, SUM(sessions), SUM(messages), is_local, SUM(est_cost_usd)
        FROM session_stats
        WHERE timestamp > ?
        GROUP BY model
        ORDER BY SUM(sessions) DESC
    """, (datetime.now().timestamp() - 86400 * 7,)).fetchall()

    report = f"""# Sovereign Health Report — {datetime.now().strftime('%Y-%m-%d')}

## ◈ Sovereignty Score: {pct:.1f}%
**Status:** {"🟢 OPTIMAL" if pct > 90 else "🟡 WARNING" if pct > 50 else "🔴 COMPROMISED"}

- **Total Sessions:** {total}
- **Local Sessions:** {local} (Zero Cost, Total Privacy)
- **Cloud Sessions:** {cloud} (Token Leakage)
- **Est. Cloud Cost:** ${cost:.2f}
- **Est. Savings:** ${saved:.2f} (Sovereign Dividend)

## ◈ Fleet Composition (Last 7 Days)
| Model | Sessions | Messages | Local? | Est. Cost |
| :--- | :--- | :--- | :--- | :--- |
"""
    for m, s, msg, l, c in models:
        local_flag = "✅" if l else "❌"
        report += f"| {m} | {s} | {msg} | {local_flag} | ${c:.2f} |\n"

    report += """
---
*Generated by the Sovereign Health Daemon. Sovereignty is a right. Privacy is a duty.*
"""

    with open(REPORT_PATH, "w") as f:
        f.write(report)

    print(f"Report generated at {REPORT_PATH}")
    return report

if __name__ == "__main__":
    generate_report()
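The status line in the report template encodes three thresholds. Pulled out for clarity (the function name is ours; the thresholds are the template's):

```python
def status_label(pct: float) -> str:
    # Thresholds from the report template: >90 optimal, >50 warning, else compromised.
    return "OPTIMAL" if pct > 90 else "WARNING" if pct > 50 else "COMPROMISED"

print(status_label(95.0))  # → OPTIMAL
print(status_label(60.0))  # → WARNING
print(status_label(10.0))  # → COMPROMISED
```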
@@ -1,28 +0,0 @@
#!/usr/bin/env python3
import os
import sys
import json
from pathlib import Path

# Sovereign Memory Explorer
# Allows Timmy to semantically query his soul and local history.

def main():
    print("--- Timmy's Sovereign Memory Explorer ---")
    query = " ".join(sys.argv[1:]) if len(sys.argv) > 1 else None

    if not query:
        print("Usage: python3 sovereign_memory_explorer.py <query>")
        return

    print(f"Searching for: '{query}'...")
    # In a real scenario, this would use the local embedding model (nomic-embed-text)
    # and a vector store (LanceDB) to find relevant fragments.

    # Simulated response
    print("\n[FOUND: SOUL.md] 'Sovereignty and service always.'")
    print("[FOUND: ADR-0001] 'We adopt the Frontier Local agenda...'")
    print("[FOUND: SESSION_20260405] 'Implemented Sovereign Health Dashboard...'")

if __name__ == "__main__":
    main()
|
|
||||||
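The explorer above only prints canned matches; its comment describes embedding-based retrieval via nomic-embed-text and LanceDB. A toy, self-contained sketch of that retrieval loop follows — the bag-of-letters `embed` and the in-memory `INDEX` are illustrative stand-ins for the real model and vector store, not the actual stack:

```python
import math

# Stand-in for nomic-embed-text: any function mapping text -> unit vector works here.
def embed(text: str) -> list:
    # Toy bag-of-letters embedding; a real setup would call the local model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha() and ch.isascii():
            vec[ord(ch) - ord('a')] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list, b: list) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# In-memory "vector store": (fragment_id, text) pairs with precomputed vectors.
FRAGMENTS = [
    ("SOUL.md", "Sovereignty and service always."),
    ("ADR-0001", "We adopt the Frontier Local agenda for all inference."),
]
INDEX = [(fid, text, embed(text)) for fid, text in FRAGMENTS]

def search(query: str, top_k: int = 1) -> list:
    """Return the top_k fragments ranked by cosine similarity to the query."""
    qv = embed(query)
    ranked = sorted(INDEX, key=lambda row: cosine(qv, row[2]), reverse=True)
    return [(fid, text) for fid, text, _ in ranked[:top_k]]
```

Swapping `embed` for a call to the local embedding model and `INDEX` for a LanceDB table keeps the same `search` shape.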
@@ -1,42 +0,0 @@
#!/usr/bin/env python3
import json
import os
import sys
import requests
from pathlib import Path

# Active Sovereign Review Gate
# Polling Gitea via Allegro's Bridge for local Timmy judgment.

GITEA_API = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN = os.environ.get("GITEA_TOKEN")  # Should be set locally

def get_pending_reviews():
    if not TOKEN:
        print("Error: GITEA_TOKEN not set.")
        return []

    # Poll for open PRs assigned to Timmy
    url = f"{GITEA_API}/repos/Timmy_Foundation/timmy-home/pulls?state=open"
    headers = {"Authorization": f"token {TOKEN}"}
    res = requests.get(url, headers=headers)
    if res.status_code == 200:
        # Note: requests responses expose the parsed body via .json(), not .data
        return [pr for pr in res.json() if any(a['username'] == 'Timmy' for a in pr.get('assignees', []))]
    return []

def main():
    print("--- Timmy's Active Sovereign Review Gate ---")
    pending = get_pending_reviews()
    if not pending:
        print("No pending reviews found for Timmy.")
        return

    for pr in pending:
        print(f"\n[PR #{pr['number']}] {pr['title']}")
        print(f"Author: {pr['user']['username']}")
        print(f"URL: {pr['html_url']}")
        # Local decision logic would go here
        print("Decision: Awaiting local voice input...")

if __name__ == "__main__":
    main()
@@ -1,146 +0,0 @@
#!/bin/bash
# Test script for Nexus Watchdog alerting functionality

set -euo pipefail

TEST_DIR="/tmp/test-nexus-alerts-$$"
export NEXUS_ALERT_DIR="$TEST_DIR"
export NEXUS_ALERT_ENABLED=true

echo "=== Nexus Watchdog Alert Test ==="
echo "Test alert directory: $TEST_DIR"

# Source the alert function from the heartbeat script
# Extract just the nexus_alert function for testing
cat > /tmp/test_alert_func.sh << 'ALEOF'
#!/bin/bash
NEXUS_ALERT_DIR="${NEXUS_ALERT_DIR:-/tmp/nexus-alerts}"
NEXUS_ALERT_ENABLED=true
HOSTNAME=$(hostname -s 2>/dev/null || echo "unknown")
SCRIPT_NAME="kimi-heartbeat-test"

nexus_alert() {
    local alert_type="$1"
    local message="$2"
    local severity="${3:-info}"
    local extra_data="${4:-{}}"

    if [ "$NEXUS_ALERT_ENABLED" != "true" ]; then
        return 0
    fi

    mkdir -p "$NEXUS_ALERT_DIR" 2>/dev/null || return 0

    local timestamp
    timestamp=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
    local nanoseconds=$(date +%N 2>/dev/null || echo "$$")
    local alert_id="${SCRIPT_NAME}_$(date +%s)_${nanoseconds}_$$"
    local alert_file="$NEXUS_ALERT_DIR/${alert_id}.json"

    cat > "$alert_file" << EOF
{
  "alert_id": "$alert_id",
  "timestamp": "$timestamp",
  "source": "$SCRIPT_NAME",
  "host": "$HOSTNAME",
  "alert_type": "$alert_type",
  "severity": "$severity",
  "message": "$message",
  "data": $extra_data
}
EOF

    if [ -f "$alert_file" ]; then
        echo "NEXUS_ALERT: $alert_type [$severity] - $message"
        return 0
    else
        echo "NEXUS_ALERT_FAILED: Could not write alert"
        return 1
    fi
}
ALEOF

source /tmp/test_alert_func.sh

# Test 1: Basic alert
echo -e "\n[TEST 1] Sending basic info alert..."
nexus_alert "test_alert" "Test message from heartbeat" "info" '{"test": true}'

# Test 2: Stale lock alert simulation
echo -e "\n[TEST 2] Sending stale lock alert..."
nexus_alert \
    "stale_lock_reclaimed" \
    "Stale lockfile deadlock cleared after 650s" \
    "warning" \
    '{"lock_age_seconds": 650, "lockfile": "/tmp/kimi-heartbeat.lock", "action": "removed"}'

# Test 3: Heartbeat resumed alert
echo -e "\n[TEST 3] Sending heartbeat resumed alert..."
nexus_alert \
    "heartbeat_resumed" \
    "Kimi heartbeat resumed after clearing stale lock" \
    "info" \
    '{"recovery": "successful", "continuing": true}'

# Check results
echo -e "\n=== Alert Files Created ==="
alert_count=$(find "$TEST_DIR" -name "*.json" 2>/dev/null | wc -l)
echo "Total alert files: $alert_count"

if [ "$alert_count" -eq 3 ]; then
    echo "✅ All 3 alerts were created successfully"
else
    echo "❌ Expected 3 alerts, found $alert_count"
    exit 1
fi

echo -e "\n=== Alert Contents ==="
for f in "$TEST_DIR"/*.json; do
    echo -e "\n--- $(basename "$f") ---"
    cat "$f" | python3 -m json.tool 2>/dev/null || cat "$f"
done

# Validate JSON structure
echo -e "\n=== JSON Validation ==="
all_valid=true
for f in "$TEST_DIR"/*.json; do
    if python3 -c "import json; json.load(open('$f'))" 2>/dev/null; then
        echo "✅ $(basename "$f") - Valid JSON"
    else
        echo "❌ $(basename "$f") - Invalid JSON"
        all_valid=false
    fi
done

# Check for required fields
echo -e "\n=== Required Fields Check ==="
for f in "$TEST_DIR"/*.json; do
    basename=$(basename "$f")
    missing=()
    python3 -c "import json; d=json.load(open('$f'))" 2>/dev/null || continue

    for field in alert_id timestamp source host alert_type severity message data; do
        if ! python3 -c "import json; d=json.load(open('$f')); exit(0 if '$field' in d else 1)" 2>/dev/null; then
            missing+=("$field")
        fi
    done

    if [ ${#missing[@]} -eq 0 ]; then
        echo "✅ $basename - All required fields present"
    else
        echo "❌ $basename - Missing fields: ${missing[*]}"
        all_valid=false
    fi
done

# Cleanup
rm -rf "$TEST_DIR" /tmp/test_alert_func.sh

echo -e "\n=== Test Summary ==="
if [ "$all_valid" = true ]; then
    echo "✅ All tests passed!"
    exit 0
else
    echo "❌ Some tests failed"
    exit 1
fi
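The alert files written by `nexus_alert` are plain JSON objects with eight required fields. A consumer-side validator can mirror the test script's checks in a few lines; `validate_alert` below is a hypothetical helper written for illustration, not part of the watchdog itself:

```python
import json

# Fields the watchdog test asserts on every alert file.
REQUIRED_FIELDS = {"alert_id", "timestamp", "source", "host",
                   "alert_type", "severity", "message", "data"}

def validate_alert(raw: str) -> list:
    """Return a list of problems; an empty list means the alert is well-formed."""
    try:
        alert = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(alert, dict):
        return ["invalid JSON: top-level value is not an object"]
    missing = sorted(REQUIRED_FIELDS - alert.keys())
    return [f"missing field: {name}" for name in missing]
```

A Nexus-side consumer could run this over `$NEXUS_ALERT_DIR/*.json` before acting on an alert, quarantining anything that returns a non-empty list.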
@@ -1,106 +0,0 @@
#!/usr/bin/env python3
"""
Test cases for secret detection script.

These tests verify that the detect_secrets.py script correctly:
1. Detects actual secrets
2. Ignores false positives
3. Respects exclusion markers
"""

import os
import sys
import tempfile
import unittest
from pathlib import Path

# Add scripts directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "scripts"))

from detect_secrets import (
    scan_file,
    scan_files,
    should_exclude_file,
    has_exclusion_marker,
    is_excluded_match,
    SECRET_PATTERNS,
)


class TestSecretDetection(unittest.TestCase):
    """Test cases for secret detection."""

    def setUp(self):
        """Set up test fixtures."""
        self.test_dir = tempfile.mkdtemp()

    def tearDown(self):
        """Clean up test fixtures."""
        import shutil
        shutil.rmtree(self.test_dir, ignore_errors=True)

    def _create_test_file(self, content: str, filename: str = "test.txt") -> str:
        """Create a test file with given content."""
        file_path = os.path.join(self.test_dir, filename)
        with open(file_path, "w") as f:
            f.write(content)
        return file_path

    def test_detect_openai_api_key(self):
        """Test detection of OpenAI API keys."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("openai" in f[2].lower() for f in findings))

    def test_detect_private_key(self):
        """Test detection of private keys."""
        content = "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA0Z3VS5JJcds3xfn/ygWyF8PbnGy0AHB7MhgwMbRvI0MBZhpF\n-----END RSA PRIVATE KEY-----"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("private" in f[2].lower() for f in findings))

    def test_detect_database_connection_string(self):
        """Test detection of database connection strings with credentials."""
        content = "DATABASE_URL=mongodb://admin:secretpassword@mongodb.example.com:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("database" in f[2].lower() for f in findings))

    def test_detect_password_in_config(self):
        """Test detection of hardcoded passwords."""
        content = "password = 'mysecretpassword123'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("password" in f[2].lower() for f in findings))

    def test_exclude_placeholder_passwords(self):
        """Test that placeholder passwords are excluded."""
        content = "password = 'changeme'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_exclude_localhost_database_url(self):
        """Test that localhost database URLs are excluded."""
        content = "DATABASE_URL=mongodb://admin:secret@localhost:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_pragma_allowlist_secret(self):
        """Test '# pragma: allowlist secret' marker."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456' # pragma: allowlist secret"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_empty_file(self):
        """Test scanning empty file."""
        file_path = self._create_test_file("")
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)


if __name__ == "__main__":
    unittest.main(verbosity=2)
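The real patterns and exclusions live in `scripts/detect_secrets.py`. As a minimal illustration of the scanning approach these tests exercise — the two regexes below are simplified stand-ins, not the actual `SECRET_PATTERNS` — a line scanner with a per-line allowlist marker looks like this:

```python
import re

# Illustrative patterns only; the real SECRET_PATTERNS is richer
# (tokens, DB URLs, placeholder-password exclusions, ...).
PATTERNS = {
    "openai_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}
ALLOWLIST_MARKER = "pragma: allowlist secret"

def scan_text(text: str) -> list:
    """Return (line_number, pattern_name) for every suspicious line."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        if ALLOWLIST_MARKER in line:
            continue  # honor the per-line exclusion marker
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

The pre-commit hook then only has to map filenames to file contents and fail the commit when any `scan_text`-style pass returns findings.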
@@ -24,52 +24,32 @@ class HealthCheckHandler(BaseHTTPRequestHandler):
         # Suppress default logging
         pass

     def do_GET(self):
         """Handle GET requests"""
         if self.path == '/health':
             self.send_health_response()
         elif self.path == '/status':
             self.send_full_status()
-        elif self.path == '/metrics':
-            self.send_sovereign_metrics()
         else:
             self.send_error(404)

-    def send_sovereign_metrics(self):
-        """Send sovereign health metrics as JSON"""
+    def send_health_response(self):
+        """Send simple health check"""
+        harness = get_harness()
+        result = harness.execute("health_check")
+
         try:
-            import sqlite3
-            db_path = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
-            if not db_path.exists():
-                data = {"error": "No database found"}
-            else:
-                conn = sqlite3.connect(str(db_path))
-                row = conn.execute("""
-                    SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
-                    FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
-                """).fetchone()
-
-                if row:
-                    data = {
-                        "sovereignty_score": row[0],
-                        "total_sessions": row[1],
-                        "local_sessions": row[2],
-                        "cloud_sessions": row[3],
-                        "est_cloud_cost": row[4],
-                        "est_saved": row[5],
-                        "timestamp": datetime.now().isoformat()
-                    }
-                else:
-                    data = {"error": "No data"}
-                conn.close()
-        except Exception as e:
-            data = {"error": str(e)}
-
-        self.send_response(200)
+            health_data = json.loads(result)
+            status_code = 200 if health_data.get("overall") == "healthy" else 503
+        except:
+            status_code = 503
+            health_data = {"error": "Health check failed"}
+
+        self.send_response(status_code)
         self.send_header('Content-Type', 'application/json')
         self.end_headers()
-        self.wfile.write(json.dumps(data).encode())
+        self.wfile.write(json.dumps(health_data).encode())

     def send_full_status(self):
         """Send full system status"""
         harness = get_harness()
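The new `send_health_response` reduces to one decision: map the harness result to an HTTP status. A standalone sketch of that mapping (assuming, as the handler does, that `harness.execute("health_check")` returns a JSON string with an `overall` field; the function name here is hypothetical):

```python
import json

def health_status_code(raw_result: str) -> int:
    """Mirror the handler's mapping: overall == 'healthy' -> 200, anything else -> 503."""
    try:
        health = json.loads(raw_result)
        return 200 if health.get("overall") == "healthy" else 503
    except (json.JSONDecodeError, AttributeError):
        # Unparseable output or a non-object payload both count as unhealthy.
        return 503
```

Keeping the mapping total (every input yields 200 or 503) is what lets load balancers poll `/health` without special-casing malformed responses.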
@@ -3,7 +3,7 @@
 # Zero LLM cost for polling — only calls kimi/kimi-code for actual work.
 #
 # Run manually: bash ~/.timmy/uniwizard/kimi-heartbeat.sh
-# Runs via launchd every 2 minutes: ai.timmy.kimi-heartbeat.plist
+# Runs via launchd every 5 minutes: ai.timmy.kimi-heartbeat.plist
 #
 # Workflow for humans:
 # 1. Create or open a Gitea issue in any tracked repo
@@ -21,14 +21,18 @@ set -euo pipefail
 # --- Config ---
 TOKEN=$(cat "$HOME/.timmy/kimi_gitea_token" | tr -d '[:space:]')
 TIMMY_TOKEN=$(cat "$HOME/.config/gitea/timmy-token" | tr -d '[:space:]')
-BASE="${GITEA_API_BASE:-https://forge.alexanderwhitestone.com/api/v1}"
+# Prefer Tailscale (private network) over public IP
+if curl -sf --connect-timeout 2 "http://100.126.61.75:3000/api/v1/version" > /dev/null 2>&1; then
+    BASE="http://100.126.61.75:3000/api/v1"
+else
+    BASE="http://143.198.27.163:3000/api/v1"
+fi
 LOG="/tmp/kimi-heartbeat.log"
 LOCKFILE="/tmp/kimi-heartbeat.lock"
-MAX_DISPATCH=10  # Increased max dispatch to 10
+MAX_DISPATCH=5  # Don't overwhelm Kimi with too many parallel tasks
 PLAN_TIMEOUT=120  # 2 minutes for planning pass
 EXEC_TIMEOUT=480  # 8 minutes for execution pass
 BODY_COMPLEXITY_THRESHOLD=500  # chars — above this triggers planning
-STALE_PROGRESS_SECONDS=3600  # reclaim kimi-in-progress after 1 hour of silence

 REPOS=(
     "Timmy_Foundation/timmy-home"
@@ -40,31 +44,6 @@ REPOS=(
 # --- Helpers ---
 log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"; }

-needs_pr_proof() {
-    local haystack="${1,,}"
-    [[ "$haystack" =~ implement|fix|refactor|feature|perf|performance|rebase|deploy|integration|module|script|pipeline|benchmark|cache|test|bug|build|port ]]
-}
-
-has_pr_proof() {
-    local haystack="${1,,}"
-    [[ "$haystack" == *"proof:"* || "$haystack" == *"pr:"* || "$haystack" == *"/pulls/"* || "$haystack" == *"commit:"* ]]
-}
-
-post_issue_comment_json() {
-    local repo="$1"
-    local issue_num="$2"
-    local token="$3"
-    local body="$4"
-    local payload
-    payload=$(python3 - "$body" <<'PY'
-import json, sys
-print(json.dumps({"body": sys.argv[1]}))
-PY
-)
-    curl -sf -X POST -H "Authorization: token $token" -H "Content-Type: application/json" \
-        -d "$payload" "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
-}
-
 # Prevent overlapping runs
 if [ -f "$LOCKFILE" ]; then
     lock_age=$(( $(date +%s) - $(stat -f %m "$LOCKFILE" 2>/dev/null || echo 0) ))
@@ -86,53 +65,30 @@ for repo in "${REPOS[@]}"; do
     response=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
        "$BASE/repos/$repo/issues?state=open&labels=assigned-kimi&limit=20" 2>/dev/null || echo "[]")

-    # Filter: skip done tasks, but reclaim stale kimi-in-progress work automatically
+    # Filter: skip issues that already have kimi-in-progress or kimi-done
     issues=$(echo "$response" | python3 -c "
-import json, sys, datetime
-STALE = int(${STALE_PROGRESS_SECONDS})
-
-def parse_ts(value):
-    if not value:
-        return None
-    try:
-        return datetime.datetime.fromisoformat(value.replace('Z', '+00:00'))
-    except Exception:
-        return None
-
+import json, sys
 try:
    data = json.loads(sys.stdin.buffer.read())
 except:
    sys.exit(0)

-now = datetime.datetime.now(datetime.timezone.utc)
 for i in data:
     labels = [l['name'] for l in i.get('labels', [])]
-    if 'kimi-done' in labels:
+    if 'kimi-in-progress' in labels or 'kimi-done' in labels:
         continue
+    # Pipe-delimited: number|title|body_length|body (truncated, newlines removed)
-    reclaim = False
-    updated_at = i.get('updated_at', '') or ''
-    if 'kimi-in-progress' in labels:
-        ts = parse_ts(updated_at)
-        age = (now - ts).total_seconds() if ts else (STALE + 1)
-        if age < STALE:
-            continue
-        reclaim = True
-
     body = (i.get('body', '') or '')
     body_len = len(body)
     body_clean = body[:1500].replace('\n', ' ').replace('|', ' ')
     title = i['title'].replace('|', ' ')
-    updated_clean = updated_at.replace('|', ' ')
-    reclaim_flag = 'reclaim' if reclaim else 'fresh'
-    print(f\"{i['number']}|{title}|{body_len}|{reclaim_flag}|{updated_clean}|{body_clean}\")
+    print(f\"{i['number']}|{title}|{body_len}|{body_clean}\")
 " 2>/dev/null)

     [ -z "$issues" ] && continue

-    while IFS='|' read -r issue_num title body_len reclaim_flag updated_at body; do
+    while IFS='|' read -r issue_num title body_len body; do
         [ -z "$issue_num" ] && continue
-        log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars, mode: ${reclaim_flag}, updated: ${updated_at})"
+        log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars)"

         # --- Get label IDs for this repo ---
         label_json=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
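The stale-reclaim filter removed in this hunk parses Gitea's ISO-8601 `updated_at` and treats unparseable timestamps as already stale. That logic is easier to reason about outside the shell quoting; the sketch below mirrors the script's `parse_ts` and staleness test, with `is_stale` as a hypothetical wrapper added for illustration:

```python
import datetime

STALE_PROGRESS_SECONDS = 3600  # the heartbeat's one-hour reclaim window

def parse_ts(value):
    """Parse Gitea's ISO-8601 'updated_at' (e.g. '2026-04-05T12:00:00Z')."""
    if not value:
        return None
    try:
        return datetime.datetime.fromisoformat(value.replace('Z', '+00:00'))
    except ValueError:
        return None

def is_stale(updated_at, now=None):
    """True when a kimi-in-progress issue has been silent past the window.

    Unparseable or missing timestamps count as stale, matching the script's
    fallback of (STALE + 1) seconds.
    """
    now = now or datetime.datetime.now(datetime.timezone.utc)
    ts = parse_ts(updated_at)
    age = (now - ts).total_seconds() if ts else (STALE_PROGRESS_SECONDS + 1)
    return age >= STALE_PROGRESS_SECONDS
```

The `Z` → `+00:00` rewrite exists because `datetime.fromisoformat` only accepts the `Z` suffix from Python 3.11 on; keeping the replace makes the check work on older interpreters too.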
@@ -142,15 +98,6 @@ for i in data:
         done_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='kimi-done']" 2>/dev/null)
         kimi_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='assigned-kimi']" 2>/dev/null)

-        if [ "$reclaim_flag" = "reclaim" ]; then
-            log "RECLAIM: $repo #$issue_num — stale kimi-in-progress since $updated_at"
-            [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
-                "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
-            curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
-                -d "{\"body\":\"🟡 **KimiClaw reclaiming stale task.**\\nPrevious kimi-in-progress state exceeded ${STALE_PROGRESS_SECONDS}s without resolution.\\nLast update: $updated_at\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
-                "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
-        fi
-
         # --- Add kimi-in-progress label ---
         if [ -n "$progress_id" ]; then
             curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
@@ -174,11 +121,32 @@
             -d "{\"body\":\"🟠 **KimiClaw picking up this task** via heartbeat.\\nBackend: kimi/kimi-code (Moonshot AI)\\nMode: **Planning first** (task is complex)\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
             "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true

-            plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.\n\nTASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.\n\nISSUE #$issue_num in $repo: $title\n\nBODY:\n$body\n\nRULES:\n- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.\n- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.\n- Each subtask line format: SUBTASK: <title> | <one-line description>\n- Be realistic about what fits in 8 minutes with no terminal access.\n- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.\n\nRespond with ONLY your decision. No preamble."
+            plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.
+
+TASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.
+
+ISSUE #$issue_num in $repo: $title
+
+BODY:
+$body
+
+RULES:
+- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.
+- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.
+- Each subtask line format: SUBTASK: <title> | <one-line description>
+- Be realistic about what fits in 8 minutes with no terminal access.
+- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.
+
+Respond with ONLY your decision. No preamble."

-            plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{\"status\":\"error\"}')
+            plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{"status":"error"}')
             plan_status=$(echo "$plan_result" | python3 -c "import json,sys; print(json.load(sys.stdin).get('status','error'))" 2>/dev/null || echo "error")
-            plan_text=$(echo "$plan_result" | python3 -c "\nimport json,sys\nd = json.load(sys.stdin)\npayloads = d.get('result',{}).get('payloads',[])\nprint(payloads[0]['text'] if payloads else '')\n" 2>/dev/null || echo "")
+            plan_text=$(echo "$plan_result" | python3 -c "
+import json,sys
+d = json.load(sys.stdin)
+payloads = d.get('result',{}).get('payloads',[])
+print(payloads[0]['text'] if payloads else '')
+" 2>/dev/null || echo "")

             if echo "$plan_text" | grep -qi "^DECOMPOSE"; then
                 # --- Create subtask issues ---
@@ -187,7 +155,7 @@
             # Post the plan as a comment
             escaped_plan=$(echo "$plan_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))" 2>/dev/null)
             curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
-                -d "{\"body\":\"📝 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
+                -d "{\"body\":\"📋 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
                 "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true

             # Extract SUBTASK lines and create child issues
@@ -277,40 +245,25 @@ print(payloads[0]['text'][:3000] if payloads else 'No response')
|
|||||||
" 2>/dev/null || echo "No response")
|
" 2>/dev/null || echo "No response")
|
||||||
|
|
||||||
if [ "$status" = "ok" ] && [ "$response_text" != "No response" ]; then
|
if [ "$status" = "ok" ] && [ "$response_text" != "No response" ]; then
|
||||||
|
log "COMPLETED: $repo #$issue_num"
|
||||||
|
|
||||||
|
# Post result as comment (escape for JSON)
|
||||||
escaped=$(echo "$response_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read())[1:-1])" 2>/dev/null)
|
escaped=$(echo "$response_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read())[1:-1])" 2>/dev/null)
|
||||||
if needs_pr_proof "$title $body" && ! has_pr_proof "$response_text"; then
|
curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
|
||||||
log "BLOCKED: $repo #$issue_num — response lacked PR/proof for code task"
|
-d "{\"body\":\"✅ **KimiClaw result:**\\n\\n$escaped\"}" \
|
||||||
post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟡 **KimiClaw produced analysis only — no PR/proof detected.**
|
"$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
|
||||||
|
|
||||||
This issue looks like implementation work, so it is NOT being marked kimi-done.
|
# Remove kimi-in-progress, add kimi-done
|
||||||
Kimi response excerpt:
|
[ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
|
||||||
|
"$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
|
||||||
$escaped
|
[ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
|
||||||
|
-d "{\"labels\":[$done_id]}" \
|
||||||
Action: removing Kimi queue labels so a code-capable agent can pick it up."
|
"$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
|
||||||
[ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
|
|
||||||
"$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
|
|
||||||
[ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
|
|
||||||
"$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
|
|
||||||
else
|
|
||||||
log "COMPLETED: $repo #$issue_num"
|
|
||||||
post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟢 **KimiClaw result:**
|
|
||||||
|
|
||||||
$escaped"
|
|
||||||
|
|
||||||
[ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
|
|
||||||
"$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
|
|
||||||
[ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
|
|
||||||
"$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
|
|
||||||
[ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
|
|
||||||
-d "{\"labels\":[$done_id]}" \
|
|
||||||
"$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
|
|
||||||
fi
|
|
||||||
else
|
else
|
||||||
log "FAILED: $repo #$issue_num — status=$status"
|
log "FAILED: $repo #$issue_num — status=$status"
|
||||||
|
|
||||||
curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
|
curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
|
||||||
-d "{\"body\":\"\ud83d\udd34 **KimiClaw failed/timed out.**\\nStatus: $status\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\\n\\nTask may be too complex for single-pass execution. Consider breaking into smaller subtasks.\"}" \
|
-d "{\"body\":\"🔴 **KimiClaw failed/timed out.**\\nStatus: $status\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\\n\\nTask may be too complex for single-pass execution. Consider breaking into smaller subtasks.\"}" \
|
||||||
"$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
|
"$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
|
||||||
|
|
||||||
# Remove kimi-in-progress on failure
|
# Remove kimi-in-progress on failure
|
||||||
|
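The label calls above always post the body Gitea's add-labels endpoint expects, `{"labels":[<id>]}`. A minimal sketch of that payload construction — `label_payload` is a hypothetical helper for illustration, not a function in this script:

```shell
#!/bin/bash
# Hypothetical helper mirroring the -d payload used above: wrap a numeric
# label id in the JSON body Gitea's add-labels endpoint expects.
set -euo pipefail

label_payload() {
    printf '{"labels":[%s]}' "$1"
}

label_payload 42   # prints {"labels":[42]}
```

Building the body in one place avoids repeating the escaped `{\"labels\":[$done_id]}` string in every curl call.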
@@ -5,12 +5,7 @@
 set -euo pipefail
 
 KIMI_TOKEN=$(cat /Users/apayne/.timmy/kimi_gitea_token | tr -d '[:space:]')
-# --- Tailscale/IP Detection (timmy-home#385) ---
-SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-source "${SCRIPT_DIR}/lib/tailscale-gitea.sh"
-BASE="$GITEA_BASE_URL"
+BASE="http://100.126.61.75:3000/api/v1"
 
 LOG="/tmp/kimi-mentions.log"
 PROCESSED="/tmp/kimi-mentions-processed.txt"
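The change above trades the sourced `GITEA_BASE_URL` for a hardcoded `BASE`. A hedged sketch of how a script could keep both behaviors, using the module only when it actually exists on disk — `resolve_base` is an illustrative helper, not code from this repo:

```shell
#!/bin/bash
# Illustrative sketch: fall back to the hardcoded BASE when the
# tailscale-gitea module is absent. resolve_base is hypothetical.
set -euo pipefail

resolve_base() {
    local module="$1"
    if [ -f "$module" ]; then
        # shellcheck source=/dev/null
        source "$module"          # module sets GITEA_BASE_URL
        echo "$GITEA_BASE_URL"
    else
        echo "http://100.126.61.75:3000/api/v1"
    fi
}

resolve_base "/nonexistent/lib/tailscale-gitea.sh"   # prints the hardcoded URL
```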
@@ -1,55 +0,0 @@
#!/bin/bash
# example-usage.sh — Example showing how to use the tailscale-gitea module
# Issue: timmy-home#385 — Standardized Tailscale IP detection module

set -euo pipefail

# --- Basic Usage ---
# Source the module to automatically set GITEA_BASE_URL
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/tailscale-gitea.sh"

# Now use GITEA_BASE_URL in your API calls
echo "Using Gitea at: $GITEA_BASE_URL"
echo "Tailscale active: $GITEA_USING_TAILSCALE"

# --- Example API Call ---
# curl -sf -H "Authorization: token $TOKEN" \
#   "$GITEA_BASE_URL/repos/myuser/myrepo/issues"

# --- Custom Configuration (Optional) ---
# You can customize behavior by setting variables BEFORE sourcing:
#
#   TAILSCALE_TIMEOUT=5   # Wait 5 seconds instead of 2
#   TAILSCALE_DEBUG=1     # Print which endpoint was selected
#   source "${SCRIPT_DIR}/tailscale-gitea.sh"

# --- Advanced: Checking Network Mode ---
if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
    echo "✓ Connected via private Tailscale network"
else
    echo "⚠ Using public internet fallback (Tailscale unavailable)"
fi

# --- Example: Polling with Retry Logic ---
poll_gitea() {
    local endpoint="${1:-$GITEA_BASE_URL}"
    local max_retries="${2:-3}"
    local retry=0

    while [[ $retry -lt $max_retries ]]; do
        if curl -sf --connect-timeout 2 "${endpoint}/version" > /dev/null 2>&1; then
            echo "Gitea is reachable"
            return 0
        fi
        retry=$((retry + 1))
        echo "Retry $retry/$max_retries..."
        sleep 1
    done

    echo "Gitea unreachable after $max_retries attempts"
    return 1
}

# Uncomment to test connectivity:
# poll_gitea "$GITEA_BASE_URL"
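The `poll_gitea` retry loop above can be exercised without a live Gitea instance by injecting the reachability check as a function. A sketch under that assumption — `poll_with_retry` and `flaky_check` are hypothetical names, not part of the repo:

```shell
#!/bin/bash
# Sketch of the same retry loop as poll_gitea, with the reachability
# check passed in by name so it runs without network access.
set -u

poll_with_retry() {
    local check_fn="$1" max_retries="${2:-3}" retry=0
    while [ "$retry" -lt "$max_retries" ]; do
        if "$check_fn"; then
            echo "reachable"
            return 0
        fi
        retry=$((retry + 1))
    done
    echo "unreachable after $max_retries attempts"
    return 1
}

# Stub check: fails twice, succeeds on the third attempt
ATTEMPTS=0
flaky_check() {
    ATTEMPTS=$((ATTEMPTS + 1))
    [ "$ATTEMPTS" -ge 3 ]
}

poll_with_retry flaky_check 5   # prints "reachable"
```

Dropping the `sleep` and `curl` from the injected version keeps the control flow identical while making the loop deterministic to test.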
@@ -1,64 +0,0 @@
#!/bin/bash
# tailscale-gitea.sh — Standardized Tailscale IP detection module for Gitea API access
# Issue: timmy-home#385 — Standardize Tailscale IP detection across auxiliary scripts
#
# Usage (source this file in your script):
#   source /path/to/tailscale-gitea.sh
#   # Now use $GITEA_BASE_URL for API calls
#
# Configuration (set before sourcing to customize):
#   TAILSCALE_IP       - Tailscale IP to try first (default: 100.126.61.75)
#   PUBLIC_IP          - Public fallback IP (default: 143.198.27.163)
#   GITEA_PORT         - Gitea API port (default: 3000)
#   TAILSCALE_TIMEOUT  - Connection timeout in seconds (default: 2)
#   GITEA_API_VERSION  - API version path (default: api/v1)
#
# Sovereignty: Private Tailscale network preferred over public internet

# --- Default Configuration ---
: "${TAILSCALE_IP:=100.126.61.75}"
: "${PUBLIC_IP:=143.198.27.163}"
: "${GITEA_PORT:=3000}"
: "${TAILSCALE_TIMEOUT:=2}"
: "${GITEA_API_VERSION:=api/v1}"

# --- Detection Function ---
_detect_gitea_endpoint() {
    local tailscale_url="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    local public_url="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"

    # Prefer Tailscale (private network) over public IP
    if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
        "${tailscale_url}/version" > /dev/null 2>&1; then
        echo "$tailscale_url"
        return 0
    else
        echo "$public_url"
        return 1
    fi
}

# --- Main Detection ---
# Set GITEA_BASE_URL for use by sourcing scripts
# Also sets GITEA_USING_TAILSCALE=true/false for scripts that need to know
if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
    "http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}/version" > /dev/null 2>&1; then
    GITEA_BASE_URL="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    GITEA_USING_TAILSCALE=true
else
    GITEA_BASE_URL="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    GITEA_USING_TAILSCALE=false
fi

# Export for child processes
export GITEA_BASE_URL
export GITEA_USING_TAILSCALE

# Optional: log which endpoint was selected (set TAILSCALE_DEBUG=1 to enable)
if [[ "${TAILSCALE_DEBUG:-0}" == "1" ]]; then
    if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
        echo "[tailscale-gitea] Using Tailscale endpoint: $GITEA_BASE_URL" >&2
    else
        echo "[tailscale-gitea] Tailscale unavailable, using public endpoint: $GITEA_BASE_URL" >&2
    fi
fi
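The module's core decision — probe the private endpoint, fall back to the public one — can be isolated and checked by stubbing the probe instead of calling curl. A minimal sketch; `select_endpoint`, the stub probes, and the example IPs are illustrative, not part of the module:

```shell
#!/bin/bash
# Sketch of the prefer-private/fall-back selection with the network
# probe injected as a function, so the logic runs without curl.
set -euo pipefail

select_endpoint() {
    local probe_fn="$1" private_url="$2" public_url="$3"
    if "$probe_fn" "$private_url"; then
        echo "$private_url"
    else
        echo "$public_url"
    fi
}

probe_up()   { return 0; }   # stub: private network reachable
probe_down() { return 1; }   # stub: private network unreachable

select_endpoint probe_up   "http://100.64.0.1:3000/api/v1" "http://198.51.100.9:3000/api/v1"
select_endpoint probe_down "http://100.64.0.1:3000/api/v1" "http://198.51.100.9:3000/api/v1"
```

With the probe injected, both branches of the fallback can be asserted deterministically, which the top-level `if curl ...` form in the module cannot offer.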