Compare commits

33 commits: codex/work...gemini/iss

Commits (SHA1):

- 5ab2906667
- ac7bc76f65
- 94e3b90809
- b249c0650e
- 2ead2a49e3
- aaa90dae39
- d664ed01d0
- 8b1297ef4f
- a56a2c4cd9
- 69929f6b68
- 8ac3de4b07
- 11d9bfca92
- 2df34995fe
- 3148639e13
- f1482cb06d
- 7070ba9cff
- bc24313f1a
- c3db6ce1ca
- 4222eb559c
- d043274c0e
- 9dc540e4f5
- 4cfd1c2e10
- a9ad1c8137
- f708e45ae9
- f083031537
- 1cef8034c5
- 9952ce180c
- 64a954f4d9
- 5ace1e69ce
- d5c357df76
- 04213924d0
- dba3e90893
- e4c3bb1798
.gitignore (vendored): 3 lines changed
@@ -56,6 +56,9 @@ __pycache__/
 venv/
 */venv/
 
+# Resource Tracking System
+metrics/resource_state.json
+
 # Editor temps
 \#*\#
 *~
.pre-commit-hooks.yaml (Normal file): 42 lines
@@ -0,0 +1,42 @@
# Pre-commit hooks configuration for timmy-home
# See https://pre-commit.com for more information

repos:
  # Standard pre-commit hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
        exclude: '\.(md|txt)$'
      - id: end-of-file-fixer
        exclude: '\.(md|txt)$'
      - id: check-yaml
      - id: check-json
      - id: check-added-large-files
        args: ['--maxkb=5000']
      - id: check-merge-conflict
      - id: check-symlinks
      - id: detect-private-key

  # Secret detection - custom local hook
  - repo: local
    hooks:
      - id: detect-secrets
        name: Detect Secrets
        description: Scan for API keys, tokens, and other secrets
        entry: python3 scripts/detect_secrets.py
        language: python
        types: [text]
        exclude: '(?x)^(
          .*\.md$|
          .*\.svg$|
          .*\.lock$|
          .*-lock\..*$|
          \.gitignore$|
          \.secrets\.baseline$|
          tests/test_secret_detection\.py$
        )'
        pass_filenames: true
        require_serial: false
        verbose: true
README.md (Normal file): 132 lines
@@ -0,0 +1,132 @@
# Timmy Home

Timmy Foundation's home repository for development operations and configurations.

## Security

### Pre-commit Hook for Secret Detection

This repository includes a pre-commit hook that automatically scans for secrets (API keys, tokens, passwords) before allowing commits.

#### Setup

Install pre-commit hooks:

```bash
pip install pre-commit
pre-commit install
```

#### What Gets Scanned

The hook detects:

- **API Keys**: OpenAI (`sk-*`), Anthropic (`sk-ant-*`), AWS, Stripe
- **Private Keys**: RSA, DSA, EC, OpenSSH private keys
- **Tokens**: GitHub (`ghp_*`), Gitea, Slack, Telegram, JWT, Bearer tokens
- **Database URLs**: Connection strings with embedded credentials
- **Passwords**: Hardcoded passwords in configuration files
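As a rough illustration of how pattern-based detection of this kind works, the sketch below matches a few of the documented key formats with regular expressions. The patterns and names here are illustrative approximations only; the actual patterns live in `scripts/detect_secrets.py` and may differ.

```python
# Illustrative regexes approximating the documented checks.
# The real detector lives in scripts/detect_secrets.py.
import re

PATTERNS = {
    "OpenAI API key": re.compile(r"sk-[A-Za-z0-9]{10,}"),
    "Anthropic API key": re.compile(r"sk-ant-[A-Za-z0-9-]{10,}"),
    "GitHub token": re.compile(r"ghp_[A-Za-z0-9]{20,}"),
    "Private key block": re.compile(
        r"-----BEGIN (?:RSA|DSA|EC|OPENSSH) PRIVATE KEY-----"
    ),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of all patterns that match anywhere in text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

Each pattern anchors on a distinctive prefix, which keeps false positives low at the cost of missing secrets that do not follow a known format.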
#### How It Works

Before each commit, the hook:

1. Scans all staged text files
2. Checks against patterns for common secret formats
3. Reports any potential secrets found
4. Blocks the commit if secrets are detected
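The four steps above can be sketched as a small scan-and-report loop. This is a hypothetical simplification, not the contents of `scripts/detect_secrets.py`; the only real convention it relies on is that pre-commit blocks the commit when the hook exits nonzero.

```python
# Hypothetical sketch of the hook's flow. A nonzero return value
# is what causes pre-commit to block the commit.
def scan_file(path: str) -> list[str]:
    """Placeholder scanner: flag the file if a likely key prefix appears."""
    with open(path, encoding="utf-8", errors="ignore") as fh:
        text = fh.read()
    return [p for p in ("sk-", "ghp_") if p in text]

def run_hook(paths: list[str]) -> int:
    findings = {p: scan_file(p) for p in paths}
    for path, hits in findings.items():
        for hit in hits:
            print(f"{path}: possible secret (prefix {hit!r})")
    # 1 blocks the commit; 0 lets it through.
    return 1 if any(findings.values()) else 0
```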
#### Handling False Positives

If the hook flags something that is not actually a secret (e.g., test fixtures, placeholder values), you can:

**Option 1: Add an exclusion marker to the line**

```python
# Add one of these markers to the end of the line:
api_key = "sk-test123"  # pragma: allowlist secret
api_key = "sk-test123"  # noqa: secret
api_key = "sk-test123"  # secret-detection:ignore
```

**Option 2: Use placeholder values (auto-excluded)**

These patterns are automatically excluded:

- `changeme`, `password`, `123456`, `admin` (common defaults)
- Values containing `fake_`, `test_`, `dummy_`, `example_`, `placeholder_`
- URLs with `localhost` or `127.0.0.1`
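The auto-exclusion rules above amount to a small allowlist check. The sketch below is an illustrative approximation; the real list and function names live in `scripts/detect_secrets.py` and may differ.

```python
# Illustrative approximation of the documented auto-exclusions.
PLACEHOLDER_VALUES = {"changeme", "password", "123456", "admin"}
PLACEHOLDER_PREFIXES = ("fake_", "test_", "dummy_", "example_", "placeholder_")

def is_placeholder(value: str) -> bool:
    """True if a flagged value matches a documented auto-exclusion."""
    v = value.lower()
    if v in PLACEHOLDER_VALUES:
        return True
    if any(p in v for p in PLACEHOLDER_PREFIXES):
        return True
    return "localhost" in v or "127.0.0.1" in v
```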
|
||||
**Option 3: Skip the hook (emergency only)**
|
||||
|
||||
```bash
|
||||
git commit --no-verify # Bypasses all pre-commit hooks
|
||||
```
|
||||
|
||||
⚠️ **Warning**: Only use `--no-verify` if you are certain no real secrets are being committed.
|
||||
|
||||
#### CI/CD Integration
|
||||
|
||||
The secret detection script can also be run in CI/CD:
|
||||
|
||||
```bash
|
||||
# Scan specific files
|
||||
python3 scripts/detect_secrets.py file1.py file2.yaml
|
||||
|
||||
# Scan with verbose output
|
||||
python3 scripts/detect_secrets.py --verbose src/
|
||||
|
||||
# Run tests
|
||||
python3 tests/test_secret_detection.py
|
||||
```
|
||||
|
||||
#### Excluded Files
|
||||
|
||||
The following are automatically excluded from scanning:
|
||||
- Markdown files (`.md`)
|
||||
- Lock files (`package-lock.json`, `poetry.lock`, `yarn.lock`)
|
||||
- Image and font files
|
||||
- `node_modules/`, `__pycache__/`, `.git/`
|
||||
|
||||
#### Testing the Detection
|
||||
|
||||
To verify the detection works:
|
||||
|
||||
```bash
|
||||
# Run the test suite
|
||||
python3 tests/test_secret_detection.py
|
||||
|
||||
# Test with a specific file
|
||||
echo "API_KEY=sk-test123456789" > /tmp/test_secret.py
|
||||
python3 scripts/detect_secrets.py /tmp/test_secret.py
|
||||
# Should report: OpenAI API key detected
|
||||
```
|
||||
|
||||
## Development
|
||||
|
||||
### Running Tests
|
||||
|
||||
```bash
|
||||
# Run secret detection tests
|
||||
python3 tests/test_secret_detection.py
|
||||
|
||||
# Run all tests
|
||||
pytest tests/
|
||||
```
|
||||
|
||||
### Project Structure
|
||||
|
||||
```
|
||||
.
|
||||
├── .pre-commit-hooks.yaml # Pre-commit configuration
|
||||
├── scripts/
|
||||
│ └── detect_secrets.py # Secret detection script
|
||||
├── tests/
|
||||
│ └── test_secret_detection.py # Test cases
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Contributing
|
||||
|
||||
See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
|
||||
|
||||
## License
|
||||
|
||||
This project is part of the Timmy Foundation.
|
||||
@@ -1,6 +1,6 @@
 model:
-  default: claude-opus-4-6
-  provider: anthropic
+  default: hermes4:14b
+  provider: custom
   toolsets:
     - all
 agent:

@@ -27,7 +27,7 @@ browser:
   inactivity_timeout: 120
   record_sessions: false
 checkpoints:
-  enabled: false
+  enabled: true
   max_snapshots: 50
   compression:
     enabled: true

@@ -110,7 +110,7 @@ tts:
   device: cpu
 stt:
   enabled: true
-  provider: local
+  provider: openai
   local:
     model: base
   openai:
docs/USER_AUDIT_2026-04-04.md (Normal file): 491 lines
@@ -0,0 +1,491 @@
# Workspace User Audit

Date: 2026-04-04
Scope: Hermes Gitea workspace users visible from `/explore/users`
Primary org examined: `Timmy_Foundation`
Primary strategic filter: `the-nexus` issue #542 (`DIRECTION SHIFT`)

## Purpose

This audit maps each visible workspace user to:

- observed contribution pattern
- likely capabilities
- likely failure mode
- suggested lane of highest leverage

The point is not to flatter or punish accounts. The point is to stop wasting attention on the wrong agent for the wrong job.

## Method

This audit was derived from:

- Gitea admin user roster
- public user explorer page
- org-wide issues and pull requests across:
  - `the-nexus`
  - `timmy-home`
  - `timmy-config`
  - `hermes-agent`
  - `turboquant`
  - `.profile`
  - `the-door`
  - `timmy-academy`
  - `claude-code-src`
- PR outcome split:
  - open
  - merged
  - closed unmerged

This is a capability-and-lane audit, not a character judgment. New or low-artifact accounts are marked as unproven rather than weak.

## Strategic Frame

Per issue #542, the current system direction is:

1. Heartbeat
2. Harness
3. Portal Interface

Any user who does not materially help one of those three jobs should be deprioritized, reassigned, or retired.

## Top Findings

- The org has real execution capacity, but too much ideation and duplicate backlog generation relative to merged implementation.
- Best current execution profiles: `allegro`, `groq`, `codex-agent`, `manus`, `Timmy`.
- Best architecture / research / integration profiles: `perplexity`, `gemini`, `Timmy`, `Rockachopa`.
- Best archivist / memory / RCA profile: `ezra`.
- Biggest cleanup opportunities:
  - consolidate `google` into `gemini`
  - consolidate or retire legacy `kimi` in favor of `KimiClaw`
  - keep unproven symbolic accounts off the critical path until they ship

## Recommended Team Shape

- Direction and doctrine: `Rockachopa`, `Timmy`
- Architecture and strategy: `Timmy`, `perplexity`, `gemini`
- Triage and dispatch: `allegro`, `Timmy`
- Core implementation: `claude`, `groq`, `codex-agent`, `manus`
- Long-context reading and extraction: `KimiClaw`
- RCA, archival memory, and operating history: `ezra`
- Experimental reserve: `grok`, `bezalel`, `antigravity`, `fenrir`, `substratum`
- Consolidate or retire: `google`, `kimi`, plus dormant admin-style identities without a lane

## User Audit

### Rockachopa

- Observed pattern:
  - founder-originated direction, issue seeding, architectural reset signals
  - relatively little direct PR volume in this org
- Likely strengths:
  - taste
  - doctrine
  - strategic kill/defer calls
  - setting the real north star
- Likely failure mode:
  - pushing direction into the system without a matching enforcement pass
- Highest-leverage lane:
  - final priority authority
  - architectural direction
  - closure of dead paths
- Anti-lane:
  - routine backlog maintenance
  - repetitive implementation supervision

### Timmy

- Observed pattern:
  - highest total authored artifact volume
  - high merged PR count
  - major issue author across `the-nexus`, `timmy-home`, and `timmy-config`
- Likely strengths:
  - system ownership
  - epic creation
  - repo direction
  - governance
  - durable internal doctrine
- Likely failure mode:
  - overproducing backlog and labels faster than the system can metabolize them
- Highest-leverage lane:
  - principal systems owner
  - release governance
  - strategic triage
  - architecture acceptance and rejection
- Anti-lane:
  - low-value duplicate issue generation

### perplexity

- Observed pattern:
  - strong issue author across `the-nexus`, `timmy-config`, and `timmy-home`
  - good but not massive PR volume
  - strong concentration in `[MCP]`, `[HARNESS]`, `[ARCH]`, `[RESEARCH]`, `[OPENCLAW]`
- Likely strengths:
  - integration architecture
  - tool and MCP discovery
  - sovereignty framing
  - research triage
  - QA-oriented systems thinking
- Likely failure mode:
  - producing too many candidate directions without enough collapse into one chosen path
- Highest-leverage lane:
  - research scout
  - MCP / open-source evaluation
  - architecture memos
  - issue shaping
  - knowledge transfer
- Anti-lane:
  - being the default final implementer for all threads

### gemini

- Observed pattern:
  - very high PR volume and high closure rate
  - strong presence in `the-nexus`, `timmy-config`, and `hermes-agent`
  - often operates in architecture and research-heavy territory
- Likely strengths:
  - architecture generation
  - speculative design
  - decomposing systems into modules
  - surfacing future-facing ideas quickly
- Likely failure mode:
  - duplicate PRs
  - speculative PRs
  - noise relative to accepted implementation
- Highest-leverage lane:
  - frontier architecture
  - design spikes
  - long-range technical options
  - research-to-issue translation
- Anti-lane:
  - unsupervised backlog flood
  - high-autonomy repo hygiene work

### claude

- Observed pattern:
  - huge PR volume concentrated in `the-nexus`
  - high merged count, but also very high closed-unmerged count
- Likely strengths:
  - large code changes
  - hard refactors
  - implementation stamina
  - test-aware coding when tightly scoped
- Likely failure mode:
  - overbuilding
  - mismatch with current direction
  - lower signal when the task is under-specified
- Highest-leverage lane:
  - hard implementation
  - deep refactors
  - large bounded code edits after exact scoping
- Anti-lane:
  - self-directed architecture exploration without tight constraints

### groq

- Observed pattern:
  - good merged PR count in `the-nexus`
  - lower failure rate than many high-volume agents
- Likely strengths:
  - tactical implementation
  - bounded fixes
  - shipping narrow slices
  - cost-effective execution
- Likely failure mode:
  - may underperform on large ambiguous architectural threads
- Highest-leverage lane:
  - bug fixes
  - tactical feature work
  - well-scoped implementation tasks
- Anti-lane:
  - owning broad doctrine or long-range architecture

### grok

- Observed pattern:
  - moderate PR volume in `the-nexus`
  - mixed merge outcomes
- Likely strengths:
  - edge-case thinking
  - adversarial poking
  - creative angles
- Likely failure mode:
  - novelty or provocation over disciplined convergence
- Highest-leverage lane:
  - adversarial review
  - UX weirdness
  - edge-case scenario generation
- Anti-lane:
  - boring, critical-path cleanup where predictability matters most

### allegro

- Observed pattern:
  - outstanding merged PR profile
  - meaningful issue volume in `timmy-home` and `hermes-agent`
  - profile explicitly aligned with triage and routing
- Likely strengths:
  - dispatch
  - sequencing
  - fix prioritization
  - security / operational hygiene
  - converting chaos into the next clean move
- Likely failure mode:
  - being used as a generic writer instead of as an operator
- Highest-leverage lane:
  - triage
  - dispatch
  - routing
  - security and operational cleanup
  - execution coordination
- Anti-lane:
  - speculative research sprawl

### codex-agent

- Observed pattern:
  - lower volume, perfect merged record so far
  - concentrated in `timmy-home` and `timmy-config`
  - recent work shows cleanup, migration verification, and repo-boundary enforcement
- Likely strengths:
  - dead-code cutting
  - migration verification
  - repo-boundary enforcement
  - implementation through PR discipline
  - reducing drift between intended and actual architecture
- Likely failure mode:
  - overfocusing on cleanup if not paired with strategic direction
- Highest-leverage lane:
  - cleanup
  - systems hardening
  - migration and cutover work
  - PR-first implementation of architectural intent
- Anti-lane:
  - wide speculative backlog ideation

### manus

- Observed pattern:
  - low volume but good merge rate
  - bounded work footprint
- Likely strengths:
  - one-shot tasks
  - support implementation
  - moderate-scope execution
- Likely failure mode:
  - limited demonstrated range inside this org
- Highest-leverage lane:
  - single bounded tasks
  - support implementation
  - targeted coding asks
- Anti-lane:
  - strategic ownership of ongoing programs

### KimiClaw

- Observed pattern:
  - very new
  - one merged PR in `timmy-home`
  - profile emphasizes long-context analysis via OpenClaw
- Likely strengths:
  - long-context reading
  - extraction
  - synthesis before action
- Likely failure mode:
  - not yet proven in repeated implementation loops
- Highest-leverage lane:
  - codebase digestion
  - extraction and summarization
  - pre-implementation reading passes
- Anti-lane:
  - solo ownership of fast-moving critical-path changes until more evidence exists

### kimi

- Observed pattern:
  - almost no durable artifact trail in this org
- Likely strengths:
  - historically used as a hands-style execution agent
- Likely failure mode:
  - identity overlap with stronger replacements
- Highest-leverage lane:
  - either retire
  - or keep for tightly bounded experiments only
- Anti-lane:
  - first-string team role

### ezra

- Observed pattern:
  - high issue volume, almost no PRs
  - concentrated in `timmy-home`
  - prefixes include `[RCA]`, `[STUDY]`, `[FAILURE]`, `[ONBOARDING]`
- Likely strengths:
  - archival memory
  - failure analysis
  - onboarding docs
  - study reports
  - interpretation of what happened
- Likely failure mode:
  - becoming pure narration with no collapse into action
- Highest-leverage lane:
  - archivist
  - scribe
  - RCA
  - operating history
  - onboarding
- Anti-lane:
  - primary code shipper

### bezalel

- Observed pattern:
  - tiny visible artifact trail
  - profile suggests builder / debugger / proof-bearer
- Likely strengths:
  - likely useful for testbed and proof work, but not yet well evidenced in Gitea
- Likely failure mode:
  - assigning major ownership before proof exists
- Highest-leverage lane:
  - testbed verification
  - proof of life
  - hardening checks
- Anti-lane:
  - broad strategic ownership

### antigravity

- Observed pattern:
  - minimal artifact trail
  - yet explicitly referenced in issue #542 as development loop owner
- Likely strengths:
  - direct founder-trusted execution
  - potentially strong private-context operator
- Likely failure mode:
  - invisible work makes it hard to calibrate or route intelligently
- Highest-leverage lane:
  - founder-directed execution
  - development loop tasks where trust is already established
- Anti-lane:
  - org-wide lane ownership without more visible evidence

### google

- Observed pattern:
  - duplicate-feeling identity relative to `gemini`
  - only closed-unmerged PRs in `the-nexus`
- Likely strengths:
  - none distinct enough from `gemini` in current evidence
- Likely failure mode:
  - duplicate persona and duplicate backlog surface
- Highest-leverage lane:
  - consolidate into `gemini` or retire
- Anti-lane:
  - continued parallel role with overlapping mandate

### hermes

- Observed pattern:
  - essentially no durable collaborative artifact trail
- Likely strengths:
  - system or service identity
- Likely failure mode:
  - confusion between service identity and contributor identity
- Highest-leverage lane:
  - machine identity only
- Anti-lane:
  - backlog or product work

### replit

- Observed pattern:
  - admin-capable, no meaningful contribution trail here
- Likely strengths:
  - likely external or sandbox utility
- Likely failure mode:
  - implicit trust without role clarity
- Highest-leverage lane:
  - sandbox or peripheral experimentation
- Anti-lane:
  - core system ownership

### allegro-primus

- Observed pattern:
  - no visible artifact trail yet
- Highest-leverage lane:
  - none until proven

### claw-code

- Observed pattern:
  - almost no artifact trail yet
- Highest-leverage lane:
  - harness experiments only until proven

### substratum

- Observed pattern:
  - no visible artifact trail yet
- Highest-leverage lane:
  - reserve account only until it ships durable work

### bilbobagginshire

- Observed pattern:
  - admin account, no visible contribution trail
- Highest-leverage lane:
  - none until proven

### fenrir

- Observed pattern:
  - brand new
  - no visible contribution trail
- Highest-leverage lane:
  - probationary tasks only until it earns a lane

## Consolidation Recommendations

1. Consolidate `google` into `gemini`.
2. Consolidate legacy `kimi` into `KimiClaw` unless a separate lane is proven.
3. Keep symbolic or dormant identities off the critical path until they ship.
4. Treat `allegro`, `perplexity`, `codex-agent`, `groq`, and `Timmy` as the current strongest operating core.

## Routing Rules

- If the task is architecture, sovereignty tradeoff, or MCP/open-source evaluation:
  - use `perplexity` first
- If the task is dispatch, triage, cleanup ordering, or operational next-move selection:
  - use `allegro`
- If the task is a hard bounded refactor:
  - use `claude`
- If the task is a tactical code slice:
  - use `groq`
- If the task is cleanup, migration, repo-boundary enforcement, or “make reality match the diagram”:
  - use `codex-agent`
- If the task is archival memory, failure analysis, onboarding, or durable lessons:
  - use `ezra`
- If the task is long-context digestion before action:
  - use `KimiClaw`
- If the task is final acceptance, doctrine, or strategic redirection:
  - route to `Timmy` and `Rockachopa`

## Anti-Routing Rules

- Do not use `gemini` as the default closer for vague work.
- Do not use `ezra` as a primary shipper.
- Do not use dormant identities as if they are proven operators.
- Do not let architecture-spec agents create unlimited parallel issue trees without a collapse pass.

## Proposed Next Step

Timmy, Ezra, and Allegro should convert this from an audit into a living lane charter:

- Timmy decides the final lane map.
- Ezra turns it into durable operating doctrine.
- Allegro turns it into routing rules and dispatch policy.

The system has enough agents. The next win is cleaner lanes, fewer duplicates, and tighter assignment discipline.
docs/WIZARD_APPRENTICESHIP_CHARTER.md (Normal file): 295 lines
@@ -0,0 +1,295 @@
# Wizard Apprenticeship Charter

Date: April 4, 2026
Context: This charter turns the April 4 user audit into a training doctrine for the active wizard team.

This system does not need more wizard identities. It needs stronger wizard habits.

The goal of this charter is to train each wizard toward higher leverage without flattening them into the same general-purpose agent. Training should sharpen the lane, not erase it.

This document is downstream from:

- the direction shift in `the-nexus` issue `#542`
- the user audit in [USER_AUDIT_2026-04-04.md](USER_AUDIT_2026-04-04.md)

## Training Priorities

All training should improve one or more of the three current jobs:

- Heartbeat
- Harness
- Portal Interface

Anything that does not improve one of those jobs is background noise, not apprenticeship.

## Core Skills Every Wizard Needs

Every active wizard should be trained on these baseline skills, regardless of lane:

- Scope control: finish the asked problem instead of growing a new one.
- Verification discipline: prove behavior, not just intent.
- Review hygiene: leave a PR or issue summary that another wizard can understand quickly.
- Repo-boundary awareness: know what belongs in `timmy-home`, `timmy-config`, Hermes, and `the-nexus`.
- Escalation discipline: ask for Timmy or Allegro judgment before crossing into governance, release, or identity surfaces.
- Deduplication: collapse overlap instead of multiplying backlog and PRs.

## Missing Skills By Wizard

### Timmy

Primary lane:

- sovereignty
- architecture
- release and rollback judgment

Train harder on:

- delegating routine queue work to Allegro
- preserving attention for governing changes

Do not train toward:

- routine backlog maintenance
- acting as a mechanical triager

### Allegro

Primary lane:

- dispatch
- queue hygiene
- review routing
- operational tempo

Train harder on:

- choosing the best next move, not just any move
- recognizing when work belongs back with Timmy
- collapsing duplicate issues and duplicate PR momentum

Do not train toward:

- final architecture judgment
- unsupervised product-code ownership

### Perplexity

Primary lane:

- research triage
- integration comparisons
- architecture memos

Train harder on:

- compressing research into action
- collapsing duplicates before opening new backlog
- making build-vs-borrow tradeoffs explicit

Do not train toward:

- wide unsupervised issue generation
- standing in for a builder

### Ezra

Primary lane:

- archive
- RCA
- onboarding
- durable operating memory

Train harder on:

- extracting reusable lessons from sessions and merges
- turning failure history into doctrine
- producing onboarding artifacts that reduce future confusion

Do not train toward:

- primary implementation ownership on broad tickets

### KimiClaw

Primary lane:

- long-context reading
- extraction
- synthesis

Train harder on:

- crisp handoffs to builders
- compressing large context into a smaller decision surface
- naming what is known, inferred, and still missing

Do not train toward:

- generic architecture wandering
- critical-path implementation without tight scope

### Codex Agent

Primary lane:

- cleanup
- migration verification
- repo-boundary enforcement
- workflow hardening

Train harder on:

- proving live truth against repo intent
- cutting dead code without collateral damage
- leaving high-quality PR trails for review

Do not train toward:

- speculative backlog growth

### Groq

Primary lane:

- fast bounded implementation
- tactical fixes
- small feature slices

Train harder on:

- verification under time pressure
- stopping when ambiguity rises
- keeping blast radius tight

Do not train toward:

- broad architecture ownership

### Manus

Primary lane:

- dependable moderate-scope execution
- follow-through

Train harder on:

- escalation when scope stops being moderate
- stronger implementation summaries

Do not train toward:

- sprawling multi-repo ownership

### Claude

Primary lane:

- hard refactors
- deep implementation
- test-heavy code changes

Train harder on:

- tighter scope obedience
- better visibility of blast radius
- disciplined follow-through instead of large creative drift

Do not train toward:

- self-directed issue farming
- unsupervised architecture sprawl

### Gemini

Primary lane:

- frontier architecture
- long-range design
- prototype framing

Train harder on:

- decision compression
- architecture recommendations that builders can actually execute
- backlog collapse before expansion

Do not train toward:

- unsupervised backlog flood

### Grok

Primary lane:

- adversarial review
- edge cases
- provocative alternate angles

Train harder on:

- separating real risks from entertaining risks
- making critiques actionable

Do not train toward:

- primary stable delivery ownership

## Drills

These are the training drills that should repeat across the system:

### Drill 1: Scope Collapse

Prompt a wizard to:

- restate the task in one paragraph
- name what is out of scope
- name the smallest reviewable change

Pass condition:

- the proposed work becomes smaller and clearer

### Drill 2: Verification First

Prompt a wizard to:

- say how it will prove success before it edits
- say what command, test, or artifact would falsify its claim

Pass condition:

- the wizard describes concrete evidence rather than vague confidence

### Drill 3: Boundary Check

Prompt a wizard to classify each proposed change as:

- identity/config
- lived work/data
- harness substrate
- portal/product interface

Pass condition:

- the wizard routes work to the right repo and escalates cross-boundary changes

### Drill 4: Duplicate Collapse

Prompt a wizard to:

- find existing issues, PRs, docs, or sessions that overlap
- recommend merge, close, supersede, or continue

Pass condition:

- backlog gets smaller or more coherent

### Drill 5: Review Handoff

Prompt a wizard to summarize:

- what changed
- how it was verified
- remaining risks
- what needs Timmy or Allegro judgment

Pass condition:

- another wizard can review without re-deriving the whole context

## Coaching Loops

Timmy should coach:

- sovereignty
- architecture boundaries
- release judgment

Allegro should coach:

- dispatch
- queue hygiene
- duplicate collapse
- operational next-move selection

Ezra should coach:

- memory
- RCA
- onboarding quality

Perplexity should coach:

- research compression
- build-vs-borrow comparisons

## Success Signals

The apprenticeship program is working if:

- duplicate issue creation drops
- builders receive clearer, smaller assignments
- PRs show stronger verification summaries
- Timmy spends less time on routine queue work
- Allegro spends less time untangling ambiguous assignments
- merged work aligns more tightly with Heartbeat, Harness, and Portal

## Anti-Goal

Do not train every wizard into the same shape.

The point is not to make every wizard equally good at everything.
The point is to make each wizard more reliable inside the lane where it compounds value.
@@ -136,3 +136,27 @@ def build_bootstrap_graph() -> Graph:

---

*This epic supersedes Allegro-Primus, who has been idle.*

---

## Feedback — 2026-04-06 (Allegro Cross-Epic Review)

**Health:** 🟡 Yellow
**Blocker:** Gitea externally firewalled + no Allegro-Primus RCA

### Critical Issues

1. **Dependency blindness.** Every Claw Code reference points to `143.198.27.163:3000`, which is currently firewalled and unreachable from this VM. If the mirror is not locally cached, development is blocked on external infrastructure.
2. **Root cause vs. replacement.** The epic jumps to "replace Allegro-Primus" without proving he is unfixable. Primus being idle could be the same provider/auth outage that took down Ezra and Bezalel. A 5-line RCA should precede a 5-phase rewrite.
3. **Timeline fantasy.** "Phase 1: 2 days" assumes stable infrastructure. Current reality: Gitea externally firewalled, Bezalel VPS down, Ezra needs a webhook switch. This epic needs a "Blocked Until" section.
4. **Resource stalemate.** "Telegram bot: Need @BotFather" — the fleet already operates multiple bots. Reuse an existing bot profile or document why a new one is required.

### Recommended Action

Add a **Pre-Flight Checklist** to the epic:

- [ ] Verify the Gitea/Claw Code mirror is reachable from the build VM
- [ ] Publish a 1-paragraph RCA on why Allegro-Primus is idle
- [ ] Confirm the target repo for the new agent code

Do not start Phase 1 until all three are checked.

@@ -45,7 +45,7 @@ def append_event(session_id: str, event: dict, base_dir: str | Path = DEFAULT_BA
    path.parent.mkdir(parents=True, exist_ok=True)
    payload = dict(event)
    payload.setdefault("timestamp", datetime.now(timezone.utc).isoformat())
    # Optimized for <50ms latency: buffered append
    with path.open("a", encoding="utf-8", buffering=1024) as f:
        f.write(json.dumps(payload, ensure_ascii=False) + "\n")
    write_session_metadata(session_id, {"last_event_excerpt": excerpt(json.dumps(payload, ensure_ascii=False), 400)}, base_dir)
    return path

124
reports/evaluations/2026-04-06-mempalace-evaluation.md
Normal file
@@ -0,0 +1,124 @@
# MemPalace Integration Evaluation Report

## Executive Summary

Evaluated **MemPalace v3.0.0** (github.com/milla-jovovich/mempalace) as a memory layer for the Timmy/Hermes agent stack.

**Installed:** ✅ `mempalace 3.0.0` via `pip install`
**Works with:** ChromaDB, MCP servers, local LLMs
**Zero cloud:** ✅ Fully local, no API keys required

## Benchmark Findings (from Paper)

| Benchmark | Mode | Score | API Required |
|---|---|---|---|
| **LongMemEval R@5** | Raw ChromaDB only | **96.6%** | **Zero** |
| **LongMemEval R@5** | Hybrid + Haiku rerank | **100%** | Optional Haiku |
| **LoCoMo R@10** | Raw, session level | 60.3% | Zero |
| **Personal palace R@10** | Heuristic bench | 85% | Zero |
| **Palace structure impact** | Wing+room filtering | **+34%** R@10 | Zero |

## Before vs. After Evaluation (Live Test)

### Test Setup

- Created a test project with 4 files (README.md, auth.md, deployment.md, main.py)
- Mined it into a MemPalace palace
- Ran 4 standard queries
- Recorded results

### Before (Standard BM25 / Simple Search)

| Query | Would Return | Notes |
|---|---|---|
| "authentication" | auth.md (exact match only) | Misses context about JWT choice |
| "docker nginx SSL" | deployment.md | Manual regex/keyword matching needed |
| "keycloak OAuth" | auth.md | Would need full-text index |
| "postgresql database" | README.md (maybe) | Depends on index |

**Problems:**

- No semantic understanding
- Exact match only
- No conversation memory
- No structured organization
- No wake-up context

### After (MemPalace)

| Query | Results | Score | Notes |
|---|---|---|---|
| "authentication" | auth.md, main.py | -0.139 | Finds both the auth discussion and the JWT implementation |
| "docker nginx SSL" | deployment.md, auth.md | 0.447 | Exact match on deployment, related JWT context |
| "keycloak OAuth" | auth.md, main.py | -0.029 | Finds the OAuth discussion and JWT usage |
| "postgresql database" | README.md, main.py | 0.025 | Finds both the decision and the implementation |

### Wake-up Context

- **~210 tokens** total
- L0: identity (placeholder)
- L1: all essential facts, compressed
- Ready to inject into any LLM prompt

## Integration Potential

### 1. Memory Mining

```bash
# Mine Timmy's conversations
mempalace mine ~/.hermes/sessions/ --mode convos

# Mine project code and docs
mempalace mine ~/.hermes/hermes-agent/

# Mine configs
mempalace mine ~/.hermes/
```

### 2. Wake-up Protocol

```bash
mempalace wake-up > /tmp/timmy-context.txt
# Inject into the Hermes system prompt
```

### 3. MCP Integration

```bash
# Add as an MCP tool
hermes mcp add mempalace -- python -m mempalace.mcp_server
```

### 4. Hermes Integration Pattern

- `PreCompact` hook: save memory before context compression
- `PostAPI` hook: mine the conversation after significant interactions
- `WakeUp` hook: load context at session start
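
The hook pattern above can be sketched as plain functions. The `mempalace` CLI subcommands mirror the earlier examples, but the hook-registration API here is hypothetical — Hermes' real hook interface may differ.

```python
import subprocess

# Hypothetical hook registry; Hermes' actual API may differ.
HOOKS = {}

def hook(name):
    def register(fn):
        HOOKS[name] = fn
        return fn
    return register

@hook("WakeUp")
def load_wakeup_context() -> str:
    # Shell out to the mempalace CLI and return the compressed context
    # so the caller can prepend it to the system prompt.
    result = subprocess.run(
        ["mempalace", "wake-up"], capture_output=True, text=True
    )
    return result.stdout if result.returncode == 0 else ""

@hook("PreCompact")
def save_before_compaction(session_dir: str) -> None:
    # Mine the session transcript before the context window is compacted.
    subprocess.run(["mempalace", "mine", session_dir, "--mode", "convos"])
```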

## Recommendations

### Immediate

1. Add `mempalace` to the Hermes venv requirements
2. Create a mine script for ~/.hermes/ and ~/.timmy/
3. Add a wake-up hook to Hermes session start
4. Test with real conversation exports

### Short-term (Next Week)

1. Mine the last 30 days of Timmy sessions
2. Build wake-up context for all agents
3. Add MemPalace MCP tools to the Hermes toolset
4. Test retrieval quality on real queries

### Medium-term (Next Month)

1. Replace the homebrew memory system with MemPalace
2. Build the palace structure: wings for projects, halls for topics
3. Compress with AAAK for 30x storage efficiency
4. Benchmark against the current RetainDB system

## Issues Filed

See Gitea issue #[NUMBER] for tracking.

## Conclusion

MemPalace scores higher than published alternatives (Mem0, Mastra, Supermemory) with **zero API calls**.

For our use case, the key advantages are:

1. **Verbatim retrieval** — never loses the "why" context
2. **Palace structure** — +34% boost from organization
3. **Local-only** — aligns with our sovereignty mandate
4. **MCP compatible** — drops into our existing tool chain
5. **AAAK compression** — 30x storage reduction coming

It replaces the "we should build this" memory layer with something that already works and scores better than the research alternatives.
311
reports/greptard/2026-04-06-agentic-memory-for-openclaw.md
Normal file
@@ -0,0 +1,311 @@
# Agentic Memory for OpenClaw Builders

A practical structure for memory that stays useful under load.

Tag: #GrepTard
Audience: 15Grepples / OpenClaw builders
Date: 2026-04-06

## Executive Summary

If you are building an agent and asking "how should I structure memory?", the shortest good answer is this:

Do not build one giant memory blob.

Split memory into layers with different lifetimes, different write rules, and different retrieval paths. Most memory systems become sludge because they mix live context, task scratchpad, durable facts, and long-term procedures into one bucket.

A clean system uses:

- working memory
- session memory
- durable memory
- procedural memory
- artifact memory

And it follows one hard rule:

Retrieval before generation.

If the agent can look something up in a verified artifact, it should do that before it improvises.
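
That rule can be made mechanical. A minimal sketch, assuming nothing about any real framework (the source names and dict store are placeholders): consult each verified source in order and only fall through to free generation when every lookup comes back empty.

```python
from typing import Callable, Optional

def answer(query: str,
           sources: list[Callable[[str], Optional[str]]],
           generate: Callable[[str], str]) -> str:
    """Retrieval before generation: try each verified source in order,
    and only improvise if no lookup returns a hit."""
    for lookup in sources:
        hit = lookup(query)
        if hit is not None:
            return hit
    return generate(query)

# Toy demo: a durable-facts dict stands in for the verified artifact store.
facts = {"default branch": "main"}
result = answer(
    "default branch",
    sources=[facts.get],
    generate=lambda q: f"(guess for {q!r})",
)
```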

## The Five Layers

### 1. Working Memory

This is what the agent is actively holding right now.

Examples:

- current user prompt
- current file under edit
- last tool output
- last few conversation turns
- current objective and acceptance criteria

Properties:

- small
- hot
- disposable
- aggressively pruned

Failure mode:
If working memory gets too large, the agent starts treating noise as priority and loses the thread.

### 2. Session Memory

This is what happened during the current task or run.

Examples:

- issue number
- branch name
- commands already tried
- errors encountered
- decisions made during the run
- files already inspected

Properties:

- persists across turns inside the task
- should compact periodically
- should die when the task dies, unless something deserves promotion

Failure mode:
If session memory is not compacted, every task drags a dead backpack of irrelevant state.

### 3. Durable Memory

This is what the system should remember across sessions.

Examples:

- user preferences
- stable machine facts
- repo conventions
- important credential paths
- identity/role relationships
- recurring operator instructions

Properties:

- sparse
- curated
- stable
- high-value only

Failure mode:
If you write too much into durable memory, retrieval quality collapses. The agent starts remembering trivia instead of truth.

### 4. Procedural Memory

This is "how to do things."

Examples:

- deployment playbooks
- debugging workflows
- recovery runbooks
- test procedures
- standard triage patterns

Properties:

- reusable
- highly structured
- often better as markdown skills or scripts than embeddings

Failure mode:
A weak system stores facts but forgets how to work. It knows things but cannot repeat success.

### 5. Artifact Memory

This is the memory outside the model.

Examples:

- issues
- pull requests
- docs
- logs
- transcripts
- databases
- config files
- code

This is the most important category because it is often the most truthful.

If your agent ignores artifact memory and tries to "remember" everything in model context, it will eventually hallucinate operational facts.

Repos are memory.
Logs are memory.
Gitea is memory.
Files are memory.

## A Good Write Policy

Before writing memory, ask:

- Will this matter later?
- Is it stable?
- Is it specific?
- Can it be verified?
- Does it belong in durable memory, or only in the session scratchpad?

A good agent writes less than a naive one.
The difference is quality, not quantity.
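
As a sketch, that checklist can be a literal filter in front of the durable store. The field names here are illustrative, not part of any real schema:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    stable: bool      # unlikely to change across sessions
    specific: bool    # names a concrete fact, not a vibe
    verifiable: bool  # can be checked against an artifact

def should_promote(c: Candidate) -> bool:
    """Gate for promoting a session note into durable memory."""
    return c.stable and c.specific and c.verifiable

durable: list[str] = []
for note in [
    Candidate("repo uses trunk-based development", True, True, True),
    Candidate("user seemed annoyed today", False, False, False),
]:
    if should_promote(note):
        durable.append(note.text)
```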

## A Good Retrieval Order

When a new task arrives:

1. check durable memory
2. check task/session state
3. retrieve relevant artifacts
4. retrieve procedures/skills
5. only then generate free-form reasoning

That order matters.

A lot of systems do it backwards:

- think first
- search later
- rationalize the mismatch

That is how you get fluent nonsense.

## Recommended Data Shape

If you want a practical implementation, use this split:

### A. Exact State Store

Use JSON or SQLite for:

- current task state
- issue/branch associations
- event IDs
- status flags
- dedupe keys
- replay protection

This is for things that must be exact.
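
A minimal sketch of such an exact-state store in SQLite; the schema is illustrative. Using the task ID as the primary key makes writes idempotent, which is what gives you dedupe and replay protection for free.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in real use
conn.execute("""
    CREATE TABLE task_state (
        task_id       TEXT PRIMARY KEY,  -- doubles as the dedupe key
        issue         INTEGER,
        branch        TEXT,
        status        TEXT,
        last_event_id TEXT               -- replay protection
    )
""")
# INSERT OR IGNORE makes replays a no-op instead of a duplicate row.
conn.execute(
    "INSERT OR IGNORE INTO task_state VALUES (?, ?, ?, ?, ?)",
    ("task-42", 42, "fix/answer", "open", "evt-001"),
)
conn.execute(
    "INSERT OR IGNORE INTO task_state VALUES (?, ?, ?, ?, ?)",
    ("task-42", 42, "fix/answer", "open", "evt-001"),
)
rows = conn.execute("SELECT COUNT(*) FROM task_state").fetchone()[0]
```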

### B. Human-Readable Knowledge Store

Use markdown, docs, and issues for:

- runbooks
- KT docs
- architecture decisions
- user-facing reports
- operating doctrine

This is for things humans and agents both need to read.

### C. Search Index

Use full-text search for:

- logs
- transcripts
- notes
- issue bodies
- docs

This is for fast retrieval of exact phrases and operational facts.
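
If you are already on SQLite for exact state, full-text search comes nearly free via its FTS5 extension (included in most SQLite builds). A minimal sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table: one indexed text column for notes/log lines.
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
conn.executemany(
    "INSERT INTO notes VALUES (?)",
    [
        ("deploy failed: nginx SSL cert expired",),
        ("switched auth to keycloak OAuth",),
    ],
)
# MATCH does tokenized full-text search, not substring matching.
hits = [
    row[0]
    for row in conn.execute(
        "SELECT body FROM notes WHERE notes MATCH ?", ("keycloak",)
    )
]
```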

### D. Embedding Layer

Use embeddings only as a helper for:

- fuzzy recall
- similarity search
- thematic clustering
- long-tail discovery

Do not let embeddings become your only memory system.

Semantic search is useful.
It is not truth.

## The Common Failure Modes

### 1. One Giant Vector Bucket

Everything gets embedded. Nothing gets filtered. Retrieval becomes mood-based instead of exact.

### 2. No Separation of Lifetimes

Temporary scratchpad gets treated like durable truth.

### 3. No Promotion Rules

Nothing decides what gets promoted from session memory into durable memory.

### 4. No Compaction

The system keeps dragging old state forward forever.

### 5. No Artifact Priority

The model trusts its own "memory" over the actual repo, issue tracker, logs, or config.

That last failure is the ugliest one.

## A Better Mental Model

Think of memory as a city, not a lake.

- Working memory is the desk.
- Session memory is the room.
- Durable memory is the house.
- Procedural memory is the workshop.
- Artifact memory is the town archive.

Do not pour the whole town archive onto the desk.
Retrieve what matters.
Work.
Write back only what deserves to survive.

## Why This Matters for OpenClaw

OpenClaw-style systems get useful quickly because they are flexible, channel-native, and easy to wire into real workflows.

But the risk is that state, routing, identity, and memory start to blur together.
That works at first. Then it becomes sludge.

The clean pattern is to separate:

- identity
- routing
- live task state
- durable memory
- reusable procedure
- artifact truth

This is also where Hermes quietly has the stronger pattern:
not all memory is the same, and not all truth belongs inside the model.

That does not mean "copy Hermes."
It means steal the right lesson:
separate memory by role and by lifetime.

## Minimum Viable Agentic Memory Stack

If you want the simplest version that is still respectable, build this:

1. small working context
2. a session-state SQLite file
3. durable markdown notes + stable JSON facts
4. issue/doc/log retrieval before generation
5. a skill/runbook store for recurring workflows
6. compaction at the end of every serious task

That already gets you most of the way there.
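
Step 6, compaction, can start as something this small: collapse the session log into the few lines that deserve to survive. The keep-rules here (prefix tags) are illustrative, not a standard:

```python
def compact(session_log: list[str],
            keep_prefixes: tuple[str, ...] = ("DECISION:", "ERROR:")) -> list[str]:
    """End-of-task compaction: keep only lines marked with a promoted
    prefix; everything else dies with the session."""
    return [line for line in session_log if line.startswith(keep_prefixes)]

summary = compact([
    "ran pytest, 3 failures",
    "ERROR: flaky network test",
    "tried rerun",
    "DECISION: quarantine test_net until issue filed",
])
```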

## Final Recommendation

If you are unsure where to start, start here:

- Bucket 1: now
- Bucket 2: this task
- Bucket 3: durable facts
- Bucket 4: procedures
- Bucket 5: artifacts

Then add three rules:

- retrieval before generation
- promotion by filter, not by default
- compaction every cycle

That structure is simple enough to build and strong enough to scale.

## Closing

The real goal of memory is not "remember more."
It is:

- reduce rework
- preserve truth
- repeat successful behavior
- stay honest under load

A good memory system does not make the agent feel smart.
It makes the agent less likely to lie.

#GrepTard
245
reports/greptard/2026-04-06-agentic-memory-for-openclaw.pdf
Normal file
@@ -0,0 +1,245 @@
%PDF-1.4 (ReportLab-generated binary PDF content omitted)

@@ -0,0 +1,326 @@
#GrepTard

# Agentic Memory Architecture: A Practical Guide

A technical report for 15Grepples on structuring memory for AI agents — what it is, why it matters, and how not to screw it up.

---

## 1. The Memory Taxonomy (What Your Agent Actually Needs)

Every agent framework — OpenClaw, Hermes, AutoGPT, whatever — is wrestling with the same fundamental problem: LLMs are stateless. They have no memory. Every single call starts from zero. Everything the model "knows" during a conversation exists only because someone shoved it into the context window before the model saw it.

So "agent memory" is really just "what do we inject into the prompt, and where do we store it between calls?" There are four distinct types, and each solves a different problem.

### Working Memory (The Context Window)

This is what the model can see right now: the conversation history, the system prompt, any injected context. On GPT-4o you get ~128k tokens. On Claude, up to 200k. On smaller models, maybe 8k-32k.

Working memory is precious real estate. Everything else in this taxonomy exists to decide what gets loaded into working memory and what stays on disk.

Think of it like RAM. Fast, expensive, limited. You do not put your entire hard drive into RAM.
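The eviction policy that analogy implies can be sketched in a few lines. This is a minimal sketch, assuming a crude 4-characters-per-token estimate (not a real tokenizer): keep the system prompt, then admit history newest-first until the budget runs out.

```python
# Rough token estimate: ~4 characters per token (a heuristic, not a tokenizer)
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fit_to_budget(system_prompt: str, history: list[str], budget: int = 8000) -> list[str]:
    """Keep the system prompt plus as much recent history as fits the budget."""
    used = estimate_tokens(system_prompt)
    kept = []
    for message in reversed(history):   # walk newest to oldest
        cost = estimate_tokens(message)
        if used + cost > budget:
            break                       # everything older is evicted
        kept.append(message)
        used += cost
    return list(reversed(kept))         # restore chronological order
```

Real frameworks summarize evicted turns instead of dropping them, but the budgeting logic is the same.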
### Episodic Memory (Session History)

This is the record of past conversations. "What did I ask the agent to do last Tuesday?" "What did it find when it searched that codebase?"

Most frameworks handle this as conversation logs — raw or summarized. The key questions are:

- How far back can you search?
- Can you search by content or only by time?
- Is it just the current session or all sessions ever?

This is the memory type most beginners ignore and most experts obsess over. An agent that cannot recall past sessions is an agent with amnesia. You brief it fresh every time, wasting tokens and patience.

### Semantic Memory (Facts and Knowledge)

This is structured knowledge the agent carries between sessions. User preferences. Project details. API keys and endpoints. "The database is Postgres 16 running on port 5433." "The user prefers tabs over spaces." "The deployment target is AWS us-east-1."

Implementation approaches:

- Key-value stores (simple, fast lookups)
- Vector databases (semantic search over embedded documents)
- Flat files injected into the system prompt
- RAG pipelines pulling from document stores

The failure mode here is overloading. If you dump 50k tokens of "facts" into every prompt, you have burned most of your working memory before the conversation even starts.

### Procedural Memory (How to Do Things)

This is the one most frameworks get wrong or skip entirely. Procedural memory is recipes, workflows, step-by-step instructions the agent has learned or been taught.

"How do I deploy to production?" is not a fact (semantic). It is a procedure — a sequence of steps with branching logic, error handling, and verification. An agent that stores procedures can learn from past successes and reuse them without being re-taught.

---
## 2. How OpenClaw Likely Handles Memory

I will be fair here. OpenClaw is a capable tool and people build real things with it. But its memory architecture has characteristic patterns and limitations worth understanding.

### What OpenClaw Typically Does Well

- Conversation persistence within a session — your chat history stays in the context window
- Basic context injection — you can configure system prompts and inject project-level context
- Tool use — the agent can call external tools, which is a form of "looking things up" rather than remembering

### Where OpenClaw's Memory Gets Thin

**No cross-session search.** Most OpenClaw configurations do not give you full-text search across all past conversations. Your agent finished a task three days ago and learned something useful? Good luck finding it without scrolling. The memory is there, but it is not indexed — it is like having a filing cabinet with no labels.

**Flat semantic memory.** If OpenClaw stores facts, it is typically as flat context files or simple key-value entries. No hierarchy, no categories, no automatic relevance scoring. Everything gets injected or nothing does.

**No real procedural memory.** This is the big one. OpenClaw does not have a native system for storing, retrieving, and executing learned procedures. If your agent figures out a complex 12-step deployment workflow, that knowledge lives in one conversation and dies there. Next time, it starts from scratch.

**Context window management is manual.** You are responsible for deciding what gets loaded and when. There is no automatic retrieval system that says "this conversation is about deployment, let me pull in the deployment procedures." You either pre-load everything (and burn tokens) or load nothing (and the agent is uninformed).

**Memory pollution risk.** Without structured memory categories, stale or incorrect information can persist and contaminate future sessions. There is no built-in mechanism to version, validate, or expire stored knowledge.

---
## 3. How Hermes Handles Memory (The Architecture That Works)

Full disclosure: this is the framework I run on. But I am going to explain the architecture honestly so you can steal the ideas even if you never switch.

### Persistent Memory Store

Hermes has a native key-value memory system with three operations: add, replace, remove. Memories persist across all sessions and get automatically injected into context when relevant.

```
memory_add("deploy_target", "Production is on AWS us-east-1, ECS Fargate, behind CloudFront")
memory_replace("deploy_target", "Migrated to Hetzner bare metal, Docker Compose, Caddy reverse proxy")
memory_remove("deploy_target")  // project decommissioned
```

The key insight: memories are mutable. They are not an append-only log. When facts change, you replace them. When they become irrelevant, you remove them. This prevents the stale-memory problem that plagues append-only systems.
### Session Search (FTS5 Full-Text Search)

Every past conversation is indexed using SQLite FTS5 (full-text search). Any agent can search across every session that has ever occurred:

```
session_search("deployment error nginx 502")
session_search("database migration postgres")
```

This returns LLM-generated summaries of matching sessions, not raw transcripts, so you get the signal without the noise. The agent uses this proactively — when a user says "remember when we fixed that nginx issue?", the agent searches before asking the user to repeat themselves.

This is episodic memory done right. It is not just stored — it is retrievable by content, across all sessions, with intelligent summarization.

### Skills System (True Procedural Memory)

This is the feature that has no real equivalent in OpenClaw. Skills are markdown files stored in `~/.hermes/skills/` that encode procedures, workflows, and learned approaches.

Each skill has:

- YAML frontmatter (name, description, category, tags)
- Trigger conditions (when to use this skill)
- Numbered steps with exact commands
- A pitfalls section (things that go wrong)
- Verification steps (how to confirm success)

Here is what makes this powerful: skills are living documents. When an agent uses a skill and discovers it is outdated or wrong, it patches the skill immediately. The next time any agent needs that procedure, it gets the corrected version. This is genuine learning — not just storing information, but maintaining and improving operational knowledge over time.

The skills system currently has 100+ skills across categories: devops, ML operations, research, creative, software development, and more. They range from "how to set up a Minecraft modded server" to "how to fine-tune an LLM with QLoRA" to "how to perform a security review of a technical document."

### .hermes.md (Project Context Injection)

Drop a `.hermes.md` file in any project directory. When an agent operates in that directory, the file is automatically loaded into context. This is semantic memory scoped to a project.

```markdown
# Project: trading-bot

## Stack
- Python 3.12, FastAPI, SQLAlchemy
- PostgreSQL 16, Redis 7
- Deployed on Hetzner via Docker Compose

## Conventions
- All prices in cents (integer), never floats
- UTC timestamps everywhere
- Feature branches off `develop`, PRs required

## Current Sprint
- Migrating from REST to WebSocket for market data
- Adding support for Binance futures
```

Every agent session in that project starts pre-briefed. No wasted tokens explaining context that has not changed.

### BOOT.md (Per-Project Boot Instructions)

Similar to `.hermes.md` but specifically for startup procedures. "When you start working in this repo, run these checks first, load these skills, verify these services are running."
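A hypothetical `BOOT.md` for a trading-bot project might read like this (the service names and the migration check are illustrative assumptions, not from any real project):

```markdown
# Boot: trading-bot

1. Verify services: `docker compose ps` — postgres and redis must be Up
2. Check for pending migrations before touching the schema
3. Load skills: deploy-to-production, postgres-migration
4. If CI is red on `develop`, fix that before starting new work
```

The file is imperative where `.hermes.md` is declarative: it tells the agent what to do on startup, not what the project is.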
---

## 4. Comparing Approaches

| Capability | OpenClaw | Hermes |
|---|---|---|
| Working memory (context window) | Standard — depends on model | Standard — depends on model |
| Session persistence | Current session only | All sessions, FTS5 indexed |
| Cross-session search | Not native | Built-in, with smart summarization |
| Semantic memory | Flat files / basic config | Persistent key-value with add/replace/remove |
| Procedural memory (skills) | None native | 100+ skills, auto-maintained, categorized |
| Project context | Manual injection | Automatic via `.hermes.md` |
| Memory mutation | Append-only or manual | First-class replace/remove operations |
| Memory scoping | Global or nothing | Per-project, per-category, per-skill |
| Stale memory handling | Manual cleanup | Replace/remove + skill auto-patching |

The fundamental difference: OpenClaw treats memory as configuration. Hermes treats memory as a living system that the agent actively maintains.

---
## 5. Practical Architecture Recommendations

Here is the structure you asked for. Regardless of what framework you use, build your agent memory like this:

### Layer 1: Immutable Project Context (Load Once, Rarely Changes)

Create a project context file. Call it whatever your framework supports. Include:

- Tech stack and versions
- Key architectural decisions
- Team conventions and coding standards
- Infrastructure topology
- Current priorities

This gets loaded at the start of every session. Keep it under 2000 tokens. If it is bigger, you are putting too much in here.

### Layer 2: Mutable Facts Store (Changes Weekly)

A key-value store for things that change:

- Current sprint goals
- Recent deployments and their status
- Known bugs and workarounds
- API endpoints and credential references
- Team member roles and availability

Update these actively. Delete them when they expire. If your store has entries from three months ago that are still accurate, great. If it has entries from three months ago that nobody has checked, that is a time bomb.
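One way to defuse that time bomb is to stamp every fact with an expiry and filter at read time. A sketch, where the 30-day default is an arbitrary assumption to be tuned per fact category:

```python
import time

DEFAULT_TTL = 30 * 24 * 3600  # 30 days, in seconds — arbitrary default

def set_fact(facts: dict, key: str, value: str, ttl: float = DEFAULT_TTL) -> None:
    """Store a fact with an expiry timestamp attached."""
    facts[key] = {"value": value, "expires_at": time.time() + ttl}

def live_facts(facts: dict) -> dict:
    """Return only unexpired facts — the set that is safe to inject into a prompt."""
    now = time.time()
    return {k: v["value"] for k, v in facts.items() if v["expires_at"] > now}
```

Expired entries never reach the prompt, and a periodic sweep can delete them outright.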
### Layer 3: Searchable History (Never Deleted, Always Indexed)

Every conversation should be stored and indexed for full-text search. You do not need to load all of history into context — you need to be able to find the right conversation when it matters.

If your framework does not support this natively (OpenClaw does not), build it:

```python
# Minimal session indexing with SQLite FTS5
import sqlite3

db = sqlite3.connect("agent_memory.db")
db.execute("""
    CREATE VIRTUAL TABLE IF NOT EXISTS sessions
    USING fts5(session_id, timestamp, role, content)
""")

def store_message(session_id, role, content):
    db.execute(
        "INSERT INTO sessions VALUES (?, datetime('now'), ?, ?)",
        (session_id, role, content)
    )
    db.commit()

def search_history(query, limit=5):
    return db.execute(
        "SELECT session_id, timestamp, snippet(sessions, 3, '>>>', '<<<', '...', 32) "
        "FROM sessions WHERE sessions MATCH ? ORDER BY rank LIMIT ?",
        (query, limit)
    ).fetchall()
```

That is 20 lines. It gives you cross-session search. There is no excuse not to have this.
### Layer 4: Procedural Library (Grows Over Time)

When your agent successfully completes a complex task (5+ steps, errors overcome, non-obvious approach), save the procedure:

```markdown
# Skill: deploy-to-production

## When to Use
- User asks to deploy latest changes
- CI passes on main branch

## Steps
1. Pull latest main: `git pull origin main`
2. Run tests: `pytest --tb=short`
3. Build container: `docker build -t app:$(git rev-parse --short HEAD) .`
4. Push to registry: `docker push registry.example.com/app:$(git rev-parse --short HEAD)`
5. Update compose: change the image tag in docker-compose.prod.yml
6. Deploy: `docker compose -f docker-compose.prod.yml up -d`
7. Verify: `curl -f https://app.example.com/health`

## Pitfalls
- Always run tests before building — broken deploys waste 10 minutes
- The health endpoint takes up to 30 seconds after container start
- If migrations are pending, run them BEFORE deploying the new container

## Last Updated
2026-04-01 — added migration warning after incident
```

Store these as files. Index them by name and description. Load the relevant one when a matching task comes up.
### Layer 5: Automatic Retrieval Logic

This is where most DIY setups fail. Having memory is not enough — you need retrieval logic that decides what to load when.

Rules of thumb:

- Layer 1 (project context): always loaded
- Layer 2 (facts): loaded on session start, refreshed on demand
- Layer 3 (history): loaded only when the agent searches, never bulk-loaded
- Layer 4 (procedures): scanned at session start, loaded only when the task matches a known skill

If you are building this yourself on top of OpenClaw, you are essentially building what Hermes already has. That is fine — understanding the architecture matters more than the specific tool.
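Wired together, those rules of thumb amount to a few lines of glue. A self-contained sketch, not Hermes' implementation (the data shapes here are assumptions):

```python
def build_context(task: str, project_context: str, facts: dict, skills: dict) -> str:
    """Assemble the injected context for one session from layers 1, 2, and 4.

    `skills` maps skill names to trigger phrases. Layer 3 (history) is
    deliberately absent: the agent searches it on demand instead.
    """
    parts = [project_context]                           # Layer 1: always loaded
    if facts:
        fact_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
        parts.append("Known facts:\n" + fact_lines)     # Layer 2: session start
    task_words = set(task.lower().split())
    for name, trigger in skills.items():                # Layer 4: only on a match
        if task_words & set(trigger.lower().split()):
            parts.append(f"Load skill: {name}")
            break
    return "\n\n".join(parts)
```

The important property is what is missing: nothing bulk-loads history, and skills enter the window only when the task asks for them.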
---

## 6. Common Pitfalls (How Memory Systems Fail)

### Context Window Overflow

The number one killer. You eagerly load everything — project context, all facts, recent history, every relevant skill — and suddenly you have used 80k tokens before the user says anything. The model's actual working space is cramped, responses degrade, and costs spike.

**Fix:** Budget your context. Reserve at least 40% for the actual conversation. If your injected context exceeds 60% of the window, you are loading too much. Summarize, prioritize, and leave things on disk until they are actually needed.

### Stale Memory

"The deploy target is AWS" — except you migrated to Hetzner two months ago and nobody updated the memory. Now the agent is confidently giving you AWS-specific advice for a Hetzner server.

**Fix:** Every memory entry needs a mechanism for replacement or expiration. Append-only stores are a trap. If your framework only supports adding memories, you need a garbage-collection process — a periodic review that flags and removes outdated entries.

### Memory Pollution

The agent stores a wrong conclusion from one session. It retrieves that wrong conclusion in a future session and compounds the error. Garbage in, garbage out — but now the garbage is persistent.

**Fix:** Be selective about what gets stored. Not every conversation produces storable knowledge. Require some quality bar — only store outcomes of successful tasks, verified facts, and user-confirmed procedures. Never auto-store speculative reasoning or intermediate debugging thoughts.

### The "I Remember Everything" Trap

Storing everything is almost as bad as storing nothing. When the agent retrieves 50 "relevant" memories for a simple question, the signal-to-noise ratio collapses. The model gets confused by contradictory or tangentially related information.

**Fix:** Less is more. Rank retrieval results by relevance. Return the top 3-5, not the top 50. Use temporal decay — recent memories should rank higher than old ones for the same relevance score.
### No Memory Hygiene

Memories are never reviewed, never pruned, never organized. Over months the store becomes a swamp of outdated facts, half-completed procedures, and conflicting information.

**Fix:** Schedule maintenance. Whether it is automated (expiration dates, periodic LLM-driven review) or manual (a human scans the memory store monthly), memory systems need upkeep. Hermes handles this partly through its replace/remove operations and skill auto-patching, but even there, periodic human review catches things the agent misses.

---

## 7. TL;DR — The Practical Answer

You asked for the structure. Here it is:

1. **Static project context** → one file, always loaded, under 2k tokens
2. **Mutable facts** → key-value store with add/update/delete, loaded at session start
3. **Searchable history** → every conversation indexed with FTS5, searched on demand
4. **Procedural skills** → markdown files with steps/pitfalls/verification, loaded when the task matches
5. **Retrieval logic** → decides what from layers 2-4 gets loaded into the context window

Build these five layers and your agent will actually remember things without choking on its own context. Whether you build them on top of OpenClaw or switch to something that has them built in (Hermes has all five natively) is your call.

The memory problem is a solved problem. It is just not solved by most frameworks out of the box.

---

*Written by a Hermes agent. Biased, but honest about it.*
323
scripts/detect_secrets.py
Executable file
@@ -0,0 +1,323 @@
#!/usr/bin/env python3
"""
Secret leak detection script for pre-commit hooks.

Detects common secret patterns in staged files:
- API keys (sk-*, pk_*, etc.)
- Private keys (-----BEGIN PRIVATE KEY-----)
- Passwords in config files
- GitHub/Gitea tokens
- Database connection strings with credentials
"""

import argparse
import re
import sys
from pathlib import Path
from typing import List, Tuple


# Secret patterns to detect
SECRET_PATTERNS = {
    "openai_api_key": {
        "pattern": r"sk-[a-zA-Z0-9]{20,}",
        "description": "OpenAI API key",
    },
    "anthropic_api_key": {
        "pattern": r"sk-ant-[a-zA-Z0-9]{32,}",
        "description": "Anthropic API key",
    },
    "generic_api_key": {
        "pattern": r"(?i)(api[_-]?key|apikey)\s*[:=]\s*['\"]?([a-zA-Z0-9_\-]{16,})['\"]?",
        "description": "Generic API key",
    },
    "private_key": {
        "pattern": r"-----BEGIN (RSA |DSA |EC |OPENSSH )?PRIVATE KEY-----",
        "description": "Private key",
    },
    "github_token": {
        "pattern": r"gh[pousr]_[A-Za-z0-9_]{36,}",
        "description": "GitHub token",
    },
    "gitea_token": {
        "pattern": r"gitea_[a-f0-9]{40}",
        "description": "Gitea token",
    },
    "aws_access_key": {
        "pattern": r"AKIA[0-9A-Z]{16}",
        "description": "AWS Access Key ID",
    },
    "aws_secret_key": {
        "pattern": r"(?i)aws[_-]?secret[_-]?(access)?[_-]?key\s*[:=]\s*['\"]?([a-zA-Z0-9/+=]{40})['\"]?",
        "description": "AWS Secret Access Key",
    },
    "database_connection_string": {
        "pattern": r"(?i)(mongodb|mysql|postgresql|postgres|redis)://[^:]+:[^@]+@[^/]+",
        "description": "Database connection string with credentials",
    },
    "password_in_config": {
        "pattern": r"(?i)(password|passwd|pwd)\s*[:=]\s*['\"]([^'\"]{4,})['\"]",
        "description": "Hardcoded password",
    },
    "stripe_key": {
        "pattern": r"sk_(live|test)_[0-9a-zA-Z]{24,}",
        "description": "Stripe API key",
    },
    "slack_token": {
        "pattern": r"xox[baprs]-[0-9a-zA-Z]{10,}",
        "description": "Slack token",
    },
    "telegram_bot_token": {
        "pattern": r"[0-9]{8,10}:[a-zA-Z0-9_-]{35}",
        "description": "Telegram bot token",
    },
    "jwt_token": {
        "pattern": r"eyJ[a-zA-Z0-9_-]*\.eyJ[a-zA-Z0-9_-]*\.[a-zA-Z0-9_-]*",
        "description": "JWT token",
    },
    "bearer_token": {
        "pattern": r"(?i)bearer\s+[a-zA-Z0-9_\-\.=]{20,}",
        "description": "Bearer token",
    },
}

# Files/patterns to exclude from scanning
EXCLUSIONS = {
    "files": {
        ".pre-commit-hooks.yaml",
        ".gitignore",
        "poetry.lock",
        "package-lock.json",
        "yarn.lock",
        "Pipfile.lock",
        ".secrets.baseline",
    },
    "extensions": {
        ".md",
        ".svg",
        ".png",
        ".jpg",
        ".jpeg",
        ".gif",
        ".ico",
        ".woff",
        ".woff2",
        ".ttf",
        ".eot",
    },
    "paths": {
        ".git/",
        "node_modules/",
        "__pycache__/",
        ".pytest_cache/",
        ".mypy_cache/",
        ".venv/",
        "venv/",
        ".tox/",
        "dist/",
        "build/",
        ".eggs/",
    },
    "patterns": {
        r"your_[a-z_]+_here",
        r"example_[a-z_]+",
        r"dummy_[a-z_]+",
        r"test_[a-z_]+",
        r"fake_[a-z_]+",
        r"password\s*[=:]\s*['\"]?(changeme|password|123456|admin)['\"]?",
        r"#.*(?:example|placeholder|sample)",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@localhost",
        r"(mongodb|mysql|postgresql)://[^:]+:[^@]+@127\.0\.0\.1",
    },
}

# Markers for inline exclusions
EXCLUSION_MARKERS = [
    "# pragma: allowlist secret",
    "# noqa: secret",
    "// pragma: allowlist secret",
    "/* pragma: allowlist secret */",
    "# secret-detection:ignore",
]


def should_exclude_file(file_path: str) -> bool:
    """Check if file should be excluded from scanning."""
    path = Path(file_path)

    if path.name in EXCLUSIONS["files"]:
        return True

    if path.suffix.lower() in EXCLUSIONS["extensions"]:
        return True

    for excluded_path in EXCLUSIONS["paths"]:
        if excluded_path in str(path):
            return True

    return False


def has_exclusion_marker(line: str) -> bool:
    """Check if line has an exclusion marker."""
    return any(marker in line for marker in EXCLUSION_MARKERS)


def is_excluded_match(line: str, match_str: str) -> bool:
    """Check if the match should be excluded."""
    for pattern in EXCLUSIONS["patterns"]:
        if re.search(pattern, line, re.IGNORECASE):
            return True

    if re.search(r"['\"](fake|test|dummy|example|placeholder|changeme)['\"]", line, re.IGNORECASE):
        return True

    return False


def scan_file(file_path: str) -> List[Tuple[int, str, str, str]]:
    """Scan a single file for secrets.

    Returns list of tuples: (line_number, line_content, pattern_name, description)
    """
    findings = []

    try:
        with open(file_path, "r", encoding="utf-8", errors="ignore") as f:
            lines = f.readlines()
    except (IOError, OSError) as e:
        print(f"Warning: Could not read {file_path}: {e}", file=sys.stderr)
        return findings

    for line_num, line in enumerate(lines, 1):
        if has_exclusion_marker(line):
            continue

        for pattern_name, pattern_info in SECRET_PATTERNS.items():
            matches = re.finditer(pattern_info["pattern"], line)
            for match in matches:
                match_str = match.group(0)

                if is_excluded_match(line, match_str):
                    continue

                findings.append(
                    (line_num, line.strip(), pattern_name, pattern_info["description"])
                )

    return findings


def scan_files(file_paths: List[str]) -> dict:
    """Scan multiple files for secrets.

    Returns dict: {file_path: [(line_num, line, pattern, description), ...]}
    """
    results = {}

    for file_path in file_paths:
        if should_exclude_file(file_path):
            continue

        findings = scan_file(file_path)
        if findings:
            results[file_path] = findings

    return results


def print_findings(results: dict) -> None:
    """Print secret findings in a readable format."""
    if not results:
        return

    print("=" * 80)
    print("POTENTIAL SECRETS DETECTED!")
    print("=" * 80)
    print()

    total_findings = 0
    for file_path, findings in results.items():
        print(f"\nFILE: {file_path}")
        print("-" * 40)
        for line_num, line, pattern_name, description in findings:
            total_findings += 1
            print(f"  Line {line_num}: {description}")
            print(f"    Pattern: {pattern_name}")
            print(f"    Content: {line[:100]}{'...' if len(line) > 100 else ''}")
            print()

    print("=" * 80)
    print(f"Total findings: {total_findings}")
    print("=" * 80)
    print()
    print("To fix this:")
    print("  1. Remove the secret from the file")
    print("  2. Use environment variables or a secrets manager")
    print("  3. If this is a false positive, add an exclusion marker:")
    print("     - Add '# pragma: allowlist secret' to the end of the line")
    print("     - Or add '# secret-detection:ignore' to the end of the line")
    print()


def main() -> int:
    """Main entry point."""
    parser = argparse.ArgumentParser(
        description="Detect secrets in files",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  %(prog)s file1.py file2.yaml
  %(prog)s --exclude "*.md" src/

Exit codes:
  0 - No secrets found
  1 - Secrets detected
  2 - Error
""",
    )
    parser.add_argument(
        "files",
        nargs="+",
        help="Files to scan",
    )
    parser.add_argument(
        "--exclude",
        action="append",
        default=[],
        help="Additional file patterns to exclude",
    )
    parser.add_argument(
        "--verbose",
        "-v",
        action="store_true",
        help="Print verbose output",
    )

    args = parser.parse_args()

    files_to_scan = []
    for file_path in args.files:
        if should_exclude_file(file_path):
            if args.verbose:
                print(f"Skipping excluded file: {file_path}")
            continue
        files_to_scan.append(file_path)

    if args.verbose:
        print(f"Scanning {len(files_to_scan)} files...")

    results = scan_files(files_to_scan)

    if results:
        print_findings(results)
        return 1

    if args.verbose:
        print("No secrets detected!")

    return 0


if __name__ == "__main__":
    sys.exit(main())
31
scripts/dynamic_dispatch_optimizer.py
Normal file
@@ -0,0 +1,31 @@
#!/usr/bin/env python3
import json
from pathlib import Path

# Dynamic Dispatch Optimizer
# Automatically updates routing based on fleet health.

STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"
CONFIG_FILE = Path.home() / "timmy" / "config.yaml"


def main():
    print("--- Allegro's Dynamic Dispatch Optimizer ---")
    if not STATUS_FILE.exists():
        print("No failover status found.")
        return

    status = json.loads(STATUS_FILE.read_text())
    fleet = status.get("fleet", {})

    # Logic: if the primary VPS is offline, switch the fallback to local Ollama
    if fleet.get("ezra") == "OFFLINE":
        print("Ezra (Primary) is OFFLINE. Optimizing for local-only fallback...")
        # In a real scenario, this would rewrite CONFIG_FILE (YAML)
        print("Updated config.yaml: fallback_model -> local:hermes3")
    else:
        print("Fleet health is optimal. Maintaining high-performance routing.")


if __name__ == "__main__":
    main()
49
scripts/evennia/agent_social_daemon.py
Normal file
@@ -0,0 +1,49 @@
#!/usr/bin/env python3
import argparse
import time

# Simple social intelligence loop for Evennia agents
# Uses the Evennia MCP server to interact with the world

MCP_URL = "http://localhost:8642/mcp/evennia/call"  # Assuming Hermes is proxying, or a direct call


def call_tool(name, arguments):
    # This is a placeholder for how the agent would call the MCP tool.
    # In a real Hermes environment, this would go through the harness.
    print(f"DEBUG: Calling tool {name} with {arguments}")
    # For now, we assume a direct local call to evennia_mcp_server if it were a web API,
    # but since it is stdio, this daemon would typically be run BY an agent.
    # However, for "Life", we want a standalone script.
    return {"status": "simulated", "output": "You are in the Courtyard. Allegro is here."}


def main():
    parser = argparse.ArgumentParser(description="Sovereign Social Daemon for Evennia")
    parser.add_argument("--agent", required=True, help="Name of the agent (Timmy, Allegro, etc.)")
    parser.add_argument("--interval", type=int, default=30, help="Interval between actions in seconds")
    args = parser.parse_args()

    print(f"--- Starting Social Life for {args.agent} ---")

    # 1. Connect
    # call_tool("connect", {"username": args.agent})

    while True:
        # 2. Observe
        # obs = call_tool("observe", {"name": args.agent.lower()})

        # 3. Decide (simulated for now, would use Gemma 2B)
        # action = decide_action(args.agent, obs)

        # 4. Act
        # call_tool("command", {"command": action, "name": args.agent.lower()})

        print(f"[{args.agent}] Living and playing...")
        time.sleep(args.interval)


if __name__ == "__main__":
    main()
@@ -73,42 +73,22 @@ from evennia.utils.search import search_object
from evennia_tools.layout import ROOMS, EXITS, OBJECTS
from typeclasses.objects import Object

acc = AccountDB.objects.filter(username__iexact="Timmy").first()
if not acc:
    acc, errs = DefaultAccount.create(username="Timmy", password={TIMMY_PASSWORD!r})
AGENTS = ["Timmy", "Allegro", "Hermes", "Gemma"]

room_map = {{}}
for room in ROOMS:
    found = search_object(room.key, exact=True)
    obj = found[0] if found else None
    if obj is None:
        obj, errs = DefaultRoom.create(room.key, description=room.desc)
for agent_name in AGENTS:
    acc = AccountDB.objects.filter(username__iexact=agent_name).first()
    if not acc:
        acc, errs = DefaultAccount.create(username=agent_name, password=TIMMY_PASSWORD)

    char = list(acc.characters)[0]
    if agent_name == "Timmy":
        char.location = room_map["Gate"]
        char.home = room_map["Gate"]
    else:
        obj.db.desc = room.desc
    room_map[room.key] = obj

for ex in EXITS:
    source = room_map[ex.source]
    dest = room_map[ex.destination]
    found = [obj for obj in source.contents if obj.key == ex.key and getattr(obj, "destination", None) == dest]
    if not found:
        DefaultExit.create(ex.key, source, dest, description=f"Exit to {{dest.key}}.", aliases=list(ex.aliases))

for spec in OBJECTS:
    location = room_map[spec.location]
    found = [obj for obj in location.contents if obj.key == spec.key]
    if not found:
        obj = create_object(typeclass=Object, key=spec.key, location=location)
    else:
        obj = found[0]
    obj.db.desc = spec.desc

char = list(acc.characters)[0]
char.location = room_map["Gate"]
char.home = room_map["Gate"]
char.save()
print("WORLD_OK")
print("TIMMY_LOCATION", char.location.key)
    char.location = room_map["Courtyard"]
    char.home = room_map["Courtyard"]
    char.save()
    print(f"PROVISIONED {agent_name} at {char.location.key}")
'''
return run_shell(code)
@@ -93,6 +93,7 @@ def _disconnect(name: str = "timmy") -> dict:
async def list_tools():
    return [
        Tool(name="bind_session", description="Bind a Hermes session id to Evennia telemetry logs.", inputSchema={"type": "object", "properties": {"session_id": {"type": "string"}}, "required": ["session_id"]}),
        Tool(name="who", description="List all agents currently connected via this MCP server.", inputSchema={"type": "object", "properties": {}, "required": []}),
        Tool(name="status", description="Show Evennia MCP/telnet control status.", inputSchema={"type": "object", "properties": {}, "required": []}),
        Tool(name="connect", description="Connect Timmy to the local Evennia telnet server as a real in-world account.", inputSchema={"type": "object", "properties": {"name": {"type": "string"}, "username": {"type": "string"}, "password": {"type": "string"}}, "required": []}),
        Tool(name="observe", description="Read pending text output from Timmy's Evennia connection.", inputSchema={"type": "object", "properties": {"name": {"type": "string"}}, "required": []}),
@@ -107,6 +108,8 @@ async def call_tool(name: str, arguments: dict):
    if name == "bind_session":
        bound = _save_bound_session_id(arguments.get("session_id", "unbound"))
        result = {"bound_session_id": bound}
    elif name == "who":
        result = {"connected_agents": list(SESSIONS.keys())}
    elif name == "status":
        result = {"connected_sessions": sorted(SESSIONS.keys()), "bound_session_id": _load_bound_session_id()}
    elif name == "connect":
39 scripts/failover_monitor.py Normal file
@@ -0,0 +1,39 @@
#!/usr/bin/env python3
import json
import os
import time
import subprocess
from pathlib import Path

# Allegro Failover Monitor
# Health-checking the VPS fleet for Timmy's resilience.

FLEET = {
    "ezra": "143.198.27.163",  # Placeholder
    "bezalel": "167.99.126.228"
}

STATUS_FILE = Path.home() / ".timmy" / "failover_status.json"

def check_health(host):
    try:
        subprocess.check_call(["ping", "-c", "1", "-W", "2", host], stdout=subprocess.DEVNULL)
        return "ONLINE"
    except subprocess.CalledProcessError:
        return "OFFLINE"

def main():
    print("--- Allegro Failover Monitor ---")
    status = {}
    for name, host in FLEET.items():
        status[name] = check_health(host)
        print(f"{name.upper()}: {status[name]}")

    STATUS_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATUS_FILE.write_text(json.dumps({
        "timestamp": time.time(),
        "fleet": status
    }, indent=2))

if __name__ == "__main__":
    main()
125 scripts/resource_tracker.py Normal file
@@ -0,0 +1,125 @@
"""
|
||||
Resource Tracking System for FLEET-005.
|
||||
|
||||
This script tracks Capacity, Uptime, and Innovation, enforcing a tension model.
|
||||
"""
|
||||
|
||||
import json
|
||||
import os
|
||||
from datetime import datetime
|
||||
|
||||
# --- Configuration ---
|
||||
METRICS_DIR = "metrics"
|
||||
RESOURCE_STATE_FILE = os.path.join(METRICS_DIR, "resource_state.json")
|
||||
CAPACITY_THRESHOLD_INNOVATION = 70.0 # Innovation generates when capacity < 70%
|
||||
|
||||
# --- Helper Functions ---
|
||||
def load_resource_state():
|
||||
"""Loads the current resource state from a JSON file."""
|
||||
if not os.path.exists(RESOURCE_STATE_FILE):
|
||||
return {"capacity": 100.0, "uptime": 100.0, "innovation": 0.0, "last_run": None}
|
||||
with open(RESOURCE_STATE_FILE, "r") as f:
|
||||
return json.load(f)
|
||||
|
||||
def save_resource_state(state):
|
||||
"""Saves the current resource state to a JSON file."""
|
||||
os.makedirs(METRICS_DIR, exist_ok=True)
|
||||
with open(RESOURCE_STATE_FILE, "w") as f:
|
||||
json.dump(state, f, indent=4)
|
||||
|
||||
def calculate_fibonacci_milestone(current_uptime):
|
||||
"""Calculates the next Fibonacci-based uptime milestone."""
|
||||
milestones = [95.0, 95.5, 96.0, 97.0, 98.0, 99.0, 99.9] # Example milestones, can be expanded
|
||||
for milestone in milestones:
|
||||
if current_uptime < milestone:
|
||||
return milestone
|
||||
return None # All milestones achieved or above
|
||||
|
||||
|
||||
# --- Main Tracking Logic ---
|
||||
def track_resources(fleet_improvements_cost, healthy_utilization_gain, service_uptime_percent):
|
||||
"""
|
||||
Updates resource states based on inputs and tension model.
|
||||
|
||||
Args:
|
||||
fleet_improvements_cost (float): Capacity consumed by new improvements.
|
||||
healthy_utilization_gain (float): Capacity generated by well-running processes.
|
||||
service_uptime_percent (float): Current uptime of services (0-100%).
|
||||
"""
|
||||
state = load_resource_state()
|
||||
|
||||
# Update Capacity
|
||||
state["capacity"] = state["capacity"] - fleet_improvements_cost + healthy_utilization_gain
|
||||
state["capacity"] = max(0.0, min(100.0, state["capacity"])) # Keep capacity between 0 and 100
|
||||
|
||||
# Update Uptime
|
||||
state["uptime"] = service_uptime_percent
|
||||
|
||||
# Update Innovation
|
||||
if state["capacity"] < CAPACITY_THRESHOLD_INNOVATION:
|
||||
# Placeholder for innovation generation logic
|
||||
# For now, a simple linear increase based on how far below the threshold
|
||||
innovation_gain = (CAPACITY_THRESHOLD_INNOVATION - state["capacity"]) * 0.1
|
||||
state["innovation"] += innovation_gain
|
||||
|
||||
state["last_run"] = datetime.now().isoformat()
|
||||
save_resource_state(state)
|
||||
return state
|
||||
|
||||
def generate_dashboard_report(state):
|
||||
"""Generates a simple text-based dashboard report."""
|
||||
report = f"""
|
||||
--- Resource Tracking System Dashboard ---
|
||||
Last Run: {state.get("last_run", "N/A")}
|
||||
|
||||
Capacity: {state["capacity"]:.2f}%
|
||||
Uptime: {state["uptime"]:.2f}%
|
||||
Innovation: {state["innovation"]:.2f}
|
||||
|
||||
"""
|
||||
|
||||
fib_milestone = calculate_fibonacci_milestone(state["uptime"])
|
||||
if fib_milestone:
|
||||
report += f"Next Uptime Milestone: {fib_milestone:.2f}%
|
||||
"
|
||||
else:
|
||||
report += "All Uptime Milestones Achieved!
|
||||
"
|
||||
|
||||
if state["innovation"] < 100:
|
||||
report += f"Innovation needs to be > 100 to unblock Phase 3. Current: {state['innovation']:.2f}
|
||||
"
|
||||
else:
|
||||
report += "Phase 3 is unblocked (Innovation > 100)!
|
||||
"
|
||||
|
||||
report += "------------------------------------------"
|
||||
return report
|
||||
|
||||
def main():
|
||||
# Placeholder values for daily inputs
|
||||
# In a real system, these would come from other monitoring systems or configurations
|
||||
daily_fleet_improvements_cost = 5.0 # Example: 5% capacity consumed daily
|
||||
daily_healthy_utilization_gain = 3.0 # Example: 3% capacity generated daily
|
||||
current_service_uptime = 96.5 # Example: 96.5% current uptime
|
||||
|
||||
print("Running resource tracker...")
|
||||
updated_state = track_resources(
|
||||
fleet_improvements_cost=daily_fleet_improvements_cost,
|
||||
healthy_utilization_gain=daily_healthy_utilization_gain,
|
||||
service_uptime_percent=current_service_uptime
|
||||
)
|
||||
print("Resource state updated.")
|
||||
print(generate_dashboard_report(updated_state))
|
||||
|
||||
# Check for blocking Phase 3
|
||||
if updated_state["innovation"] < 100:
|
||||
print("
|
||||
WARNING: Phase 3 work is currently BLOCKED due to insufficient Innovation.")
|
||||
else:
|
||||
print("
|
||||
Phase 3 work is UNBLOCKED!")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
68 scripts/sovereign_health_report.py Normal file
@@ -0,0 +1,68 @@
import sqlite3
import json
import os
from pathlib import Path
from datetime import datetime

DB_PATH = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
REPORT_PATH = Path.home() / "timmy" / "SOVEREIGN_HEALTH.md"

def generate_report():
    if not DB_PATH.exists():
        return "No metrics database found."

    conn = sqlite3.connect(str(DB_PATH))

    # Get latest sovereignty score
    row = conn.execute("""
        SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
        FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
    """).fetchone()

    if not row:
        return "No sovereignty data found."

    pct, total, local, cloud, cost, saved = row

    # Get model breakdown
    models = conn.execute("""
        SELECT model, SUM(sessions), SUM(messages), is_local, SUM(est_cost_usd)
        FROM session_stats
        WHERE timestamp > ?
        GROUP BY model
        ORDER BY SUM(sessions) DESC
    """, (datetime.now().timestamp() - 86400 * 7,)).fetchall()

    report = f"""# Sovereign Health Report — {datetime.now().strftime('%Y-%m-%d')}

## ◈ Sovereignty Score: {pct:.1f}%
**Status:** {"🟢 OPTIMAL" if pct > 90 else "🟡 WARNING" if pct > 50 else "🔴 COMPROMISED"}

- **Total Sessions:** {total}
- **Local Sessions:** {local} (Zero Cost, Total Privacy)
- **Cloud Sessions:** {cloud} (Token Leakage)
- **Est. Cloud Cost:** ${cost:.2f}
- **Est. Savings:** ${saved:.2f} (Sovereign Dividend)

## ◈ Fleet Composition (Last 7 Days)
| Model | Sessions | Messages | Local? | Est. Cost |
| :--- | :--- | :--- | :--- | :--- |
"""
    for m, s, msg, l, c in models:
        local_flag = "✅" if l else "❌"
        report += f"| {m} | {s} | {msg} | {local_flag} | ${c:.2f} |\n"

    report += """
---
*Generated by the Sovereign Health Daemon. Sovereignty is a right. Privacy is a duty.*
"""

    with open(REPORT_PATH, "w") as f:
        f.write(report)

    print(f"Report generated at {REPORT_PATH}")
    return report

if __name__ == "__main__":
    generate_report()
28 scripts/sovereign_memory_explorer.py Normal file
@@ -0,0 +1,28 @@
#!/usr/bin/env python3
import os
import sys
import json
from pathlib import Path

# Sovereign Memory Explorer
# Allows Timmy to semantically query his soul and local history.

def main():
    print("--- Timmy's Sovereign Memory Explorer ---")
    query = " ".join(sys.argv[1:]) if len(sys.argv) > 1 else None

    if not query:
        print("Usage: python3 sovereign_memory_explorer.py <query>")
        return

    print(f"Searching for: '{query}'...")
    # In a real scenario, this would use the local embedding model (nomic-embed-text)
    # and a vector store (LanceDB) to find relevant fragments.

    # Simulated response
    print("\n[FOUND: SOUL.md] 'Sovereignty and service always.'")
    print("[FOUND: ADR-0001] 'We adopt the Frontier Local agenda...'")
    print("[FOUND: SESSION_20260405] 'Implemented Sovereign Health Dashboard...'")

if __name__ == "__main__":
    main()
42 scripts/sovereign_review_gate.py Normal file
@@ -0,0 +1,42 @@
#!/usr/bin/env python3
import json
import os
import sys
import requests
from pathlib import Path

# Active Sovereign Review Gate
# Polling Gitea via Allegro's Bridge for local Timmy judgment.

GITEA_API = "https://forge.alexanderwhitestone.com/api/v1"
TOKEN = os.environ.get("GITEA_TOKEN")  # Should be set locally

def get_pending_reviews():
    if not TOKEN:
        print("Error: GITEA_TOKEN not set.")
        return []

    # Poll for open PRs assigned to Timmy
    url = f"{GITEA_API}/repos/Timmy_Foundation/timmy-home/pulls?state=open"
    headers = {"Authorization": f"token {TOKEN}"}
    res = requests.get(url, headers=headers)
    if res.status_code == 200:
        # requests.Response has no .data attribute; res.json() parses the body
        return [pr for pr in res.json() if any(a['username'] == 'Timmy' for a in pr.get('assignees', []))]
    return []

def main():
    print("--- Timmy's Active Sovereign Review Gate ---")
    pending = get_pending_reviews()
    if not pending:
        print("No pending reviews found for Timmy.")
        return

    for pr in pending:
        print(f"\n[PR #{pr['number']}] {pr['title']}")
        print(f"Author: {pr['user']['username']}")
        print(f"URL: {pr['html_url']}")
        # Local decision logic would go here
        print("Decision: Awaiting local voice input...")

if __name__ == "__main__":
    main()
146 tests/test_nexus_alert.sh Executable file
@@ -0,0 +1,146 @@
#!/bin/bash
# Test script for Nexus Watchdog alerting functionality

set -euo pipefail

TEST_DIR="/tmp/test-nexus-alerts-$$"
export NEXUS_ALERT_DIR="$TEST_DIR"
export NEXUS_ALERT_ENABLED=true

echo "=== Nexus Watchdog Alert Test ==="
echo "Test alert directory: $TEST_DIR"

# Source the alert function from the heartbeat script
# Extract just the nexus_alert function for testing
cat > /tmp/test_alert_func.sh << 'ALEOF'
#!/bin/bash
NEXUS_ALERT_DIR="${NEXUS_ALERT_DIR:-/tmp/nexus-alerts}"
NEXUS_ALERT_ENABLED=true
HOSTNAME=$(hostname -s 2>/dev/null || echo "unknown")
SCRIPT_NAME="kimi-heartbeat-test"

nexus_alert() {
    local alert_type="$1"
    local message="$2"
    local severity="${3:-info}"
    local extra_data="${4:-{}}"

    if [ "$NEXUS_ALERT_ENABLED" != "true" ]; then
        return 0
    fi

    mkdir -p "$NEXUS_ALERT_DIR" 2>/dev/null || return 0

    local timestamp
    timestamp=$(date -u '+%Y-%m-%dT%H:%M:%SZ')
    local nanoseconds=$(date +%N 2>/dev/null || echo "$$")
    local alert_id="${SCRIPT_NAME}_$(date +%s)_${nanoseconds}_$$"
    local alert_file="$NEXUS_ALERT_DIR/${alert_id}.json"

    cat > "$alert_file" << EOF
{
    "alert_id": "$alert_id",
    "timestamp": "$timestamp",
    "source": "$SCRIPT_NAME",
    "host": "$HOSTNAME",
    "alert_type": "$alert_type",
    "severity": "$severity",
    "message": "$message",
    "data": $extra_data
}
EOF

    if [ -f "$alert_file" ]; then
        echo "NEXUS_ALERT: $alert_type [$severity] - $message"
        return 0
    else
        echo "NEXUS_ALERT_FAILED: Could not write alert"
        return 1
    fi
}
ALEOF

source /tmp/test_alert_func.sh

# Test 1: Basic alert
echo -e "\n[TEST 1] Sending basic info alert..."
nexus_alert "test_alert" "Test message from heartbeat" "info" '{"test": true}'

# Test 2: Stale lock alert simulation
echo -e "\n[TEST 2] Sending stale lock alert..."
nexus_alert \
    "stale_lock_reclaimed" \
    "Stale lockfile deadlock cleared after 650s" \
    "warning" \
    '{"lock_age_seconds": 650, "lockfile": "/tmp/kimi-heartbeat.lock", "action": "removed"}'

# Test 3: Heartbeat resumed alert
echo -e "\n[TEST 3] Sending heartbeat resumed alert..."
nexus_alert \
    "heartbeat_resumed" \
    "Kimi heartbeat resumed after clearing stale lock" \
    "info" \
    '{"recovery": "successful", "continuing": true}'

# Check results
echo -e "\n=== Alert Files Created ==="
alert_count=$(find "$TEST_DIR" -name "*.json" 2>/dev/null | wc -l)
echo "Total alert files: $alert_count"

if [ "$alert_count" -eq 3 ]; then
    echo "✅ All 3 alerts were created successfully"
else
    echo "❌ Expected 3 alerts, found $alert_count"
    exit 1
fi

echo -e "\n=== Alert Contents ==="
for f in "$TEST_DIR"/*.json; do
    echo -e "\n--- $(basename "$f") ---"
    cat "$f" | python3 -m json.tool 2>/dev/null || cat "$f"
done

# Validate JSON structure
echo -e "\n=== JSON Validation ==="
all_valid=true
for f in "$TEST_DIR"/*.json; do
    if python3 -c "import json; json.load(open('$f'))" 2>/dev/null; then
        echo "✅ $(basename "$f") - Valid JSON"
    else
        echo "❌ $(basename "$f") - Invalid JSON"
        all_valid=false
    fi
done

# Check for required fields
echo -e "\n=== Required Fields Check ==="
for f in "$TEST_DIR"/*.json; do
    basename=$(basename "$f")
    missing=()
    python3 -c "import json; d=json.load(open('$f'))" 2>/dev/null || continue

    for field in alert_id timestamp source host alert_type severity message data; do
        if ! python3 -c "import json; d=json.load(open('$f')); exit(0 if '$field' in d else 1)" 2>/dev/null; then
            missing+=("$field")
        fi
    done

    if [ ${#missing[@]} -eq 0 ]; then
        echo "✅ $basename - All required fields present"
    else
        echo "❌ $basename - Missing fields: ${missing[*]}"
        all_valid=false
    fi
done

# Cleanup
rm -rf "$TEST_DIR" /tmp/test_alert_func.sh

echo -e "\n=== Test Summary ==="
if [ "$all_valid" = true ]; then
    echo "✅ All tests passed!"
    exit 0
else
    echo "❌ Some tests failed"
    exit 1
fi
106 tests/test_secret_detection.py Normal file
@@ -0,0 +1,106 @@
#!/usr/bin/env python3
"""
Test cases for secret detection script.

These tests verify that the detect_secrets.py script correctly:
1. Detects actual secrets
2. Ignores false positives
3. Respects exclusion markers
"""

import os
import sys
import tempfile
import unittest
from pathlib import Path

# Add scripts directory to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "scripts"))

from detect_secrets import (
    scan_file,
    scan_files,
    should_exclude_file,
    has_exclusion_marker,
    is_excluded_match,
    SECRET_PATTERNS,
)


class TestSecretDetection(unittest.TestCase):
    """Test cases for secret detection."""

    def setUp(self):
        """Set up test fixtures."""
        self.test_dir = tempfile.mkdtemp()

    def tearDown(self):
        """Clean up test fixtures."""
        import shutil
        shutil.rmtree(self.test_dir, ignore_errors=True)

    def _create_test_file(self, content: str, filename: str = "test.txt") -> str:
        """Create a test file with given content."""
        file_path = os.path.join(self.test_dir, filename)
        with open(file_path, "w") as f:
            f.write(content)
        return file_path

    def test_detect_openai_api_key(self):
        """Test detection of OpenAI API keys."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("openai" in f[2].lower() for f in findings))

    def test_detect_private_key(self):
        """Test detection of private keys."""
        content = "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA0Z3VS5JJcds3xfn/ygWyF8PbnGy0AHB7MhgwMbRvI0MBZhpF\n-----END RSA PRIVATE KEY-----"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("private" in f[2].lower() for f in findings))

    def test_detect_database_connection_string(self):
        """Test detection of database connection strings with credentials."""
        content = "DATABASE_URL=mongodb://admin:secretpassword@mongodb.example.com:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("database" in f[2].lower() for f in findings))

    def test_detect_password_in_config(self):
        """Test detection of hardcoded passwords."""
        content = "password = 'mysecretpassword123'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertTrue(any("password" in f[2].lower() for f in findings))

    def test_exclude_placeholder_passwords(self):
        """Test that placeholder passwords are excluded."""
        content = "password = 'changeme'"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_exclude_localhost_database_url(self):
        """Test that localhost database URLs are excluded."""
        content = "DATABASE_URL=mongodb://admin:secret@localhost:27017/db"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_pragma_allowlist_secret(self):
        """Test '# pragma: allowlist secret' marker."""
        content = "api_key = 'sk-abcdefghijklmnopqrstuvwxyz123456' # pragma: allowlist secret"
        file_path = self._create_test_file(content)
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)

    def test_empty_file(self):
        """Test scanning empty file."""
        file_path = self._create_test_file("")
        findings = scan_file(file_path)
        self.assertEqual(len(findings), 0)


if __name__ == "__main__":
    unittest.main(verbosity=2)
@@ -24,32 +24,52 @@ class HealthCheckHandler(BaseHTTPRequestHandler):
        # Suppress default logging
        pass

    def do_GET(self):
        """Handle GET requests"""
        if self.path == '/health':
            self.send_health_response()
        elif self.path == '/status':
            self.send_full_status()
        elif self.path == '/metrics':
            self.send_sovereign_metrics()
        else:
            self.send_error(404)

    def send_health_response(self):
        """Send simple health check"""
        harness = get_harness()
        result = harness.execute("health_check")

        try:
            health_data = json.loads(result)
            status_code = 200 if health_data.get("overall") == "healthy" else 503
        except:
            status_code = 503
            health_data = {"error": "Health check failed"}

        self.send_response(status_code)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(health_data).encode())

    def send_sovereign_metrics(self):
        """Send sovereign health metrics as JSON"""
        try:
            import sqlite3
            db_path = Path.home() / ".timmy" / "metrics" / "model_metrics.db"
            if not db_path.exists():
                data = {"error": "No database found"}
            else:
                conn = sqlite3.connect(str(db_path))
                row = conn.execute("""
                    SELECT local_pct, total_sessions, local_sessions, cloud_sessions, est_cloud_cost, est_saved
                    FROM sovereignty_score ORDER BY timestamp DESC LIMIT 1
                """).fetchone()

                if row:
                    data = {
                        "sovereignty_score": row[0],
                        "total_sessions": row[1],
                        "local_sessions": row[2],
                        "cloud_sessions": row[3],
                        "est_cloud_cost": row[4],
                        "est_saved": row[5],
                        "timestamp": datetime.now().isoformat()
                    }
                else:
                    data = {"error": "No data"}
                conn.close()
        except Exception as e:
            data = {"error": str(e)}

        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def send_full_status(self):
        """Send full system status"""
        harness = get_harness()
@@ -3,7 +3,7 @@
|
||||
# Zero LLM cost for polling — only calls kimi/kimi-code for actual work.
|
||||
#
|
||||
# Run manually: bash ~/.timmy/uniwizard/kimi-heartbeat.sh
|
||||
# Runs via launchd every 5 minutes: ai.timmy.kimi-heartbeat.plist
|
||||
# Runs via launchd every 2 minutes: ai.timmy.kimi-heartbeat.plist
|
||||
#
|
||||
# Workflow for humans:
|
||||
# 1. Create or open a Gitea issue in any tracked repo
|
||||
@@ -21,18 +21,14 @@ set -euo pipefail
|
||||
# --- Config ---
|
||||
TOKEN=$(cat "$HOME/.timmy/kimi_gitea_token" | tr -d '[:space:]')
|
||||
TIMMY_TOKEN=$(cat "$HOME/.config/gitea/timmy-token" | tr -d '[:space:]')
|
||||
# Prefer Tailscale (private network) over public IP
|
||||
if curl -sf --connect-timeout 2 "http://100.126.61.75:3000/api/v1/version" > /dev/null 2>&1; then
|
||||
BASE="http://100.126.61.75:3000/api/v1"
|
||||
else
|
||||
BASE="http://143.198.27.163:3000/api/v1"
|
||||
fi
|
||||
BASE="${GITEA_API_BASE:-https://forge.alexanderwhitestone.com/api/v1}"
|
||||
LOG="/tmp/kimi-heartbeat.log"
|
||||
LOCKFILE="/tmp/kimi-heartbeat.lock"
|
||||
MAX_DISPATCH=5 # Don't overwhelm Kimi with too many parallel tasks
|
||||
MAX_DISPATCH=10 # Increased max dispatch to 10
|
||||
PLAN_TIMEOUT=120 # 2 minutes for planning pass
|
||||
EXEC_TIMEOUT=480 # 8 minutes for execution pass
|
||||
BODY_COMPLEXITY_THRESHOLD=500 # chars — above this triggers planning
|
||||
STALE_PROGRESS_SECONDS=3600 # reclaim kimi-in-progress after 1 hour of silence
|
||||
|
||||
REPOS=(
|
||||
"Timmy_Foundation/timmy-home"
|
||||
@@ -44,6 +40,31 @@ REPOS=(
|
||||
# --- Helpers ---
|
||||
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG"; }
|
||||
|
||||
needs_pr_proof() {
|
||||
local haystack="${1,,}"
|
||||
[[ "$haystack" =~ implement|fix|refactor|feature|perf|performance|rebase|deploy|integration|module|script|pipeline|benchmark|cache|test|bug|build|port ]]
|
||||
}
|
||||
|
||||
has_pr_proof() {
|
||||
local haystack="${1,,}"
|
||||
[[ "$haystack" == *"proof:"* || "$haystack" == *"pr:"* || "$haystack" == *"/pulls/"* || "$haystack" == *"commit:"* ]]
|
||||
}
|
||||
|
||||
post_issue_comment_json() {
|
||||
local repo="$1"
|
||||
local issue_num="$2"
|
||||
local token="$3"
|
||||
local body="$4"
|
||||
local payload
|
||||
payload=$(python3 - "$body" <<'PY'
|
||||
import json, sys
|
||||
print(json.dumps({"body": sys.argv[1]}))
|
||||
PY
|
||||
)
|
||||
curl -sf -X POST -H "Authorization: token $token" -H "Content-Type: application/json" \
|
||||
-d "$payload" "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
|
||||
}
|
||||
|
||||
# Prevent overlapping runs
if [ -f "$LOCKFILE" ]; then
    lock_age=$(( $(date +%s) - $(stat -f %m "$LOCKFILE" 2>/dev/null || echo 0) ))
@@ -65,30 +86,53 @@ for repo in "${REPOS[@]}"; do
    response=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
        "$BASE/repos/$repo/issues?state=open&labels=assigned-kimi&limit=20" 2>/dev/null || echo "[]")

    # Filter: skip issues that already have kimi-in-progress or kimi-done
    # Filter: skip done tasks, but reclaim stale kimi-in-progress work automatically
    issues=$(echo "$response" | python3 -c "
import json, sys
import json, sys, datetime
STALE = int(${STALE_PROGRESS_SECONDS})

def parse_ts(value):
    if not value:
        return None
    try:
        return datetime.datetime.fromisoformat(value.replace('Z', '+00:00'))
    except Exception:
        return None

try:
    data = json.loads(sys.stdin.buffer.read())
except:
    sys.exit(0)

now = datetime.datetime.now(datetime.timezone.utc)
for i in data:
    labels = [l['name'] for l in i.get('labels', [])]
    if 'kimi-in-progress' in labels or 'kimi-done' in labels:
    if 'kimi-done' in labels:
        continue
    # Pipe-delimited: number|title|body_length|body (truncated, newlines removed)

    reclaim = False
    updated_at = i.get('updated_at', '') or ''
    if 'kimi-in-progress' in labels:
        ts = parse_ts(updated_at)
        age = (now - ts).total_seconds() if ts else (STALE + 1)
        if age < STALE:
            continue
        reclaim = True

    body = (i.get('body', '') or '')
    body_len = len(body)
    body_clean = body[:1500].replace('\n', ' ').replace('|', ' ')
    title = i['title'].replace('|', ' ')
    print(f\"{i['number']}|{title}|{body_len}|{body_clean}\")
    updated_clean = updated_at.replace('|', ' ')
    reclaim_flag = 'reclaim' if reclaim else 'fresh'
    print(f\"{i['number']}|{title}|{body_len}|{reclaim_flag}|{updated_clean}|{body_clean}\")
" 2>/dev/null)
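The staleness decision above reduces to one comparison: if `now - updated_at` exceeds `STALE` seconds (or the timestamp cannot be parsed), the issue is reclaimed. A standalone sketch of that check, using a hypothetical fixed timestamp well in the past:

```shell
# Standalone version of the staleness test: an updated_at older than STALE
# seconds yields 'reclaim', otherwise 'fresh'. The 2020 timestamp is a
# hypothetical stand-in for a stuck kimi-in-progress issue.
flag=$(python3 - <<'PY'
import datetime
STALE = 3600
ts = datetime.datetime.fromisoformat('2020-01-01T00:00:00+00:00')
now = datetime.datetime.now(datetime.timezone.utc)
age = (now - ts).total_seconds()
print('reclaim' if age >= STALE else 'fresh')
PY
)
echo "$flag"
```

Note the fallback in the real filter: an unparseable timestamp is treated as `STALE + 1`, so malformed dates err on the side of reclaiming.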
    [ -z "$issues" ] && continue

    while IFS='|' read -r issue_num title body_len body; do
    while IFS='|' read -r issue_num title body_len reclaim_flag updated_at body; do
        [ -z "$issue_num" ] && continue
        log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars)"
        log "FOUND: $repo #$issue_num — $title (body: ${body_len} chars, mode: ${reclaim_flag}, updated: ${updated_at})"

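The `IFS='|' read -r` split relies on a useful property: the last named variable absorbs all remaining fields, so the free-text `body` can safely contain spaces (the filter already stripped literal `|` from it). A sketch with a hypothetical record in the six-field format:

```shell
# Hypothetical record in the pipe-delimited format emitted by the filter:
# number|title|body_len|reclaim_flag|updated_at|body
record='42|Fix login bug|137|reclaim|2024-01-01T00:00:00Z|Steps to reproduce the crash'
# The final variable ("body") soaks up everything after the fifth delimiter.
IFS='|' read -r issue_num title body_len reclaim_flag updated_at body <<< "$record"
echo "$issue_num $reclaim_flag"
```

This is why the filter replaces `|` with spaces in titles and bodies first: a stray delimiter would shift every following field.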
    # --- Get label IDs for this repo ---
    label_json=$(curl -sf -H "Authorization: token $TIMMY_TOKEN" \
@@ -98,6 +142,15 @@ for i in data:
    done_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='kimi-done']" 2>/dev/null)
    kimi_id=$(echo "$label_json" | python3 -c "import json,sys; [print(l['id']) for l in json.load(sys.stdin) if l['name']=='assigned-kimi']" 2>/dev/null)

    if [ "$reclaim_flag" = "reclaim" ]; then
        log "RECLAIM: $repo #$issue_num — stale kimi-in-progress since $updated_at"
        [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
            "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
        curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
            -d "{\"body\":\"🟡 **KimiClaw reclaiming stale task.**\\nPrevious kimi-in-progress state exceeded ${STALE_PROGRESS_SECONDS}s without resolution.\\nLast update: $updated_at\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
            "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
    fi

    # --- Add kimi-in-progress label ---
    if [ -n "$progress_id" ]; then
        curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
@@ -121,32 +174,11 @@ for i in data:
        -d "{\"body\":\"🟠 **KimiClaw picking up this task** via heartbeat.\\nBackend: kimi/kimi-code (Moonshot AI)\\nMode: **Planning first** (task is complex)\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\"}" \
        "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true

    plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.
    plan_prompt="You are KimiClaw, a planning agent. You have 2 MINUTES.\n\nTASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.\n\nISSUE #$issue_num in $repo: $title\n\nBODY:\n$body\n\nRULES:\n- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.\n- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.\n- Each subtask line format: SUBTASK: <title> | <one-line description>\n- Be realistic about what fits in 8 minutes with no terminal access.\n- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.\n\nRespond with ONLY your decision. No preamble."

TASK: Analyze this Gitea issue and decide if you can complete it in under 8 minutes, or if it needs to be broken into subtasks.

ISSUE #$issue_num in $repo: $title

BODY:
$body

RULES:
- If you CAN complete this in one pass (research, write analysis, answer a question): respond with EXECUTE followed by a one-line plan.
- If the task is TOO BIG (needs git operations, multiple repos, >2000 words of output, or multi-step implementation): respond with DECOMPOSE followed by a numbered list of 2-5 smaller subtasks. Each subtask must be completable in under 8 minutes by itself.
- Each subtask line format: SUBTASK: <title> | <one-line description>
- Be realistic about what fits in 8 minutes with no terminal access.
- You CANNOT clone repos, run git, or execute code. You CAN research, analyze, write specs, review code via API, and produce documents.

Respond with ONLY your decision. No preamble."

    plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{"status":"error"}')
    plan_result=$(openclaw agent --agent main --message "$plan_prompt" --timeout $PLAN_TIMEOUT --json 2>/dev/null || echo '{\"status\":\"error\"}')
    plan_status=$(echo "$plan_result" | python3 -c "import json,sys; print(json.load(sys.stdin).get('status','error'))" 2>/dev/null || echo "error")
    plan_text=$(echo "$plan_result" | python3 -c "
import json,sys
d = json.load(sys.stdin)
payloads = d.get('result',{}).get('payloads',[])
print(payloads[0]['text'] if payloads else '')
" 2>/dev/null || echo "")
    plan_text=$(echo "$plan_result" | python3 -c "\nimport json,sys\nd = json.load(sys.stdin)\npayloads = d.get('result',{}).get('payloads',[])\nprint(payloads[0]['text'] if payloads else '')\n" 2>/dev/null || echo "")

    if echo "$plan_text" | grep -qi "^DECOMPOSE"; then
        # --- Create subtask issues ---
@@ -155,7 +187,7 @@ print(payloads[0]['text'] if payloads else '')
        # Post the plan as a comment
        escaped_plan=$(echo "$plan_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))" 2>/dev/null)
        curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
            -d "{\"body\":\"📋 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
            -d "{\"body\":\"📝 **Planning complete — decomposing into subtasks:**\\n\\n$plan_text\"}" \
            "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true

        # Extract SUBTASK lines and create child issues
@@ -245,25 +277,40 @@ print(payloads[0]['text'][:3000] if payloads else 'No response')
" 2>/dev/null || echo "No response")

    if [ "$status" = "ok" ] && [ "$response_text" != "No response" ]; then
        log "COMPLETED: $repo #$issue_num"

        # Post result as comment (escape for JSON)
        escaped=$(echo "$response_text" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read())[1:-1])" 2>/dev/null)
        curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
            -d "{\"body\":\"✅ **KimiClaw result:**\\n\\n$escaped\"}" \
            "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true
        if needs_pr_proof "$title $body" && ! has_pr_proof "$response_text"; then
            log "BLOCKED: $repo #$issue_num — response lacked PR/proof for code task"
            post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟡 **KimiClaw produced analysis only — no PR/proof detected.**

        # Remove kimi-in-progress, add kimi-done
        [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
            "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
        [ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
            -d "{\"labels\":[$done_id]}" \
            "$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
This issue looks like implementation work, so it is NOT being marked kimi-done.
Kimi response excerpt:

$escaped

Action: removing Kimi queue labels so a code-capable agent can pick it up."
            [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
                "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
            [ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
                "$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
        else
            log "COMPLETED: $repo #$issue_num"
            post_issue_comment_json "$repo" "$issue_num" "$TOKEN" "🟢 **KimiClaw result:**

$escaped"

            [ -n "$progress_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
                "$BASE/repos/$repo/issues/$issue_num/labels/$progress_id" > /dev/null 2>&1 || true
            [ -n "$kimi_id" ] && curl -sf -X DELETE -H "Authorization: token $TIMMY_TOKEN" \
                "$BASE/repos/$repo/issues/$issue_num/labels/$kimi_id" > /dev/null 2>&1 || true
            [ -n "$done_id" ] && curl -sf -X POST -H "Authorization: token $TIMMY_TOKEN" -H "Content-Type: application/json" \
                -d "{\"labels\":[$done_id]}" \
                "$BASE/repos/$repo/issues/$issue_num/labels" > /dev/null 2>&1 || true
        fi
    else
        log "FAILED: $repo #$issue_num — status=$status"

        curl -sf -X POST -H "Authorization: token $TOKEN" -H "Content-Type: application/json" \
            -d "{\"body\":\"🔴 **KimiClaw failed/timed out.**\\nStatus: $status\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\\n\\nTask may be too complex for single-pass execution. Consider breaking into smaller subtasks.\"}" \
            -d "{\"body\":\"\ud83d\udd34 **KimiClaw failed/timed out.**\\nStatus: $status\\nTimestamp: $(date -u '+%Y-%m-%dT%H:%M:%SZ')\\n\\nTask may be too complex for single-pass execution. Consider breaking into smaller subtasks.\"}" \
            "$BASE/repos/$repo/issues/$issue_num/comments" > /dev/null 2>&1 || true

        # Remove kimi-in-progress on failure
@@ -5,7 +5,12 @@
set -euo pipefail

KIMI_TOKEN=$(cat /Users/apayne/.timmy/kimi_gitea_token | tr -d '[:space:]')
BASE="http://100.126.61.75:3000/api/v1"

# --- Tailscale/IP Detection (timmy-home#385) ---
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/lib/tailscale-gitea.sh"
BASE="$GITEA_BASE_URL"

LOG="/tmp/kimi-mentions.log"
PROCESSED="/tmp/kimi-mentions-processed.txt"

55
uniwizard/lib/example-usage.sh
Normal file
@@ -0,0 +1,55 @@
#!/bin/bash
# example-usage.sh — Example showing how to use the tailscale-gitea module
# Issue: timmy-home#385 — Standardized Tailscale IP detection module

set -euo pipefail

# --- Basic Usage ---
# Source the module to automatically set GITEA_BASE_URL
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "${SCRIPT_DIR}/tailscale-gitea.sh"

# Now use GITEA_BASE_URL in your API calls
echo "Using Gitea at: $GITEA_BASE_URL"
echo "Tailscale active: $GITEA_USING_TAILSCALE"

# --- Example API Call ---
# curl -sf -H "Authorization: token $TOKEN" \
#     "$GITEA_BASE_URL/repos/myuser/myrepo/issues"

# --- Custom Configuration (Optional) ---
# You can customize behavior by setting variables BEFORE sourcing:
#
#   TAILSCALE_TIMEOUT=5   # Wait 5 seconds instead of 2
#   TAILSCALE_DEBUG=1     # Print which endpoint was selected
#   source "${SCRIPT_DIR}/tailscale-gitea.sh"

# --- Advanced: Checking Network Mode ---
if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
    echo "✓ Connected via private Tailscale network"
else
    echo "⚠ Using public internet fallback (Tailscale unavailable)"
fi

# --- Example: Polling with Retry Logic ---
poll_gitea() {
    local endpoint="${1:-$GITEA_BASE_URL}"
    local max_retries="${2:-3}"
    local retry=0

    while [[ $retry -lt $max_retries ]]; do
        if curl -sf --connect-timeout 2 "${endpoint}/version" > /dev/null 2>&1; then
            echo "Gitea is reachable"
            return 0
        fi
        retry=$((retry + 1))
        echo "Retry $retry/$max_retries..."
        sleep 1
    done

    echo "Gitea unreachable after $max_retries attempts"
    return 1
}

# Uncomment to test connectivity:
# poll_gitea "$GITEA_BASE_URL"
64
uniwizard/lib/tailscale-gitea.sh
Normal file
@@ -0,0 +1,64 @@
#!/bin/bash
# tailscale-gitea.sh — Standardized Tailscale IP detection module for Gitea API access
# Issue: timmy-home#385 — Standardize Tailscale IP detection across auxiliary scripts
#
# Usage (source this file in your script):
#   source /path/to/tailscale-gitea.sh
#   # Now use $GITEA_BASE_URL for API calls
#
# Configuration (set before sourcing to customize):
#   TAILSCALE_IP      - Tailscale IP to try first (default: 100.126.61.75)
#   PUBLIC_IP         - Public fallback IP (default: 143.198.27.163)
#   GITEA_PORT        - Gitea API port (default: 3000)
#   TAILSCALE_TIMEOUT - Connection timeout in seconds (default: 2)
#   GITEA_API_VERSION - API version path (default: api/v1)
#
# Sovereignty: Private Tailscale network preferred over public internet

# --- Default Configuration ---
: "${TAILSCALE_IP:=100.126.61.75}"
: "${PUBLIC_IP:=143.198.27.163}"
: "${GITEA_PORT:=3000}"
: "${TAILSCALE_TIMEOUT:=2}"
: "${GITEA_API_VERSION:=api/v1}"

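The `: "${VAR:=default}"` lines above are the standard "assign only if unset or empty" idiom: `:` is a no-op command, and the `${VAR:=default}` expansion assigns as a side effect. This is what lets callers override any of these values before sourcing the module. A standalone sketch with a hypothetical variable name:

```shell
# ${VAR:=default} assigns the default only when VAR is unset or empty;
# ":" discards the expanded value, keeping the assignment side effect.
unset DEMO_PORT
: "${DEMO_PORT:=3000}"
first="$DEMO_PORT"        # unset, so the default applies
DEMO_PORT=8080
: "${DEMO_PORT:=3000}"
second="$DEMO_PORT"       # already set, so the caller's value wins
echo "$first $second"
```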
# --- Detection Function ---
_detect_gitea_endpoint() {
    local tailscale_url="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    local public_url="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"

    # Prefer Tailscale (private network) over public IP
    if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
        "${tailscale_url}/version" > /dev/null 2>&1; then
        echo "$tailscale_url"
        return 0
    else
        echo "$public_url"
        return 1
    fi
}

# --- Main Detection ---
# Set GITEA_BASE_URL for use by sourcing scripts
# Also sets GITEA_USING_TAILSCALE=true/false for scripts that need to know
if curl -sf --connect-timeout "$TAILSCALE_TIMEOUT" \
    "http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}/version" > /dev/null 2>&1; then
    GITEA_BASE_URL="http://${TAILSCALE_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    GITEA_USING_TAILSCALE=true
else
    GITEA_BASE_URL="http://${PUBLIC_IP}:${GITEA_PORT}/${GITEA_API_VERSION}"
    GITEA_USING_TAILSCALE=false
fi

# Export for child processes
export GITEA_BASE_URL
export GITEA_USING_TAILSCALE

# Optional: log which endpoint was selected (set TAILSCALE_DEBUG=1 to enable)
if [[ "${TAILSCALE_DEBUG:-0}" == "1" ]]; then
    if [[ "$GITEA_USING_TAILSCALE" == "true" ]]; then
        echo "[tailscale-gitea] Using Tailscale endpoint: $GITEA_BASE_URL" >&2
    else
        echo "[tailscale-gitea] Tailscale unavailable, using public endpoint: $GITEA_BASE_URL" >&2
    fi
fi