forked from Rockachopa/Timmy-time-dashboard

Compare commits: 3 commits (claude/iss... → claude/iss...)

Commits: c58093dccc · 55beaf241f · 69498c9add
@@ -18,9 +18,17 @@ jobs:
       - name: Lint (ruff via tox)
         run: tox -e lint
 
-  test:
+  typecheck:
     runs-on: ubuntu-latest
     needs: lint
+    steps:
+      - uses: actions/checkout@v4
+      - name: Type-check (mypy via tox)
+        run: tox -e typecheck
+
+  test:
+    runs-on: ubuntu-latest
+    needs: typecheck
     steps:
       - uses: actions/checkout@v4
       - name: Run tests (via tox)
docs/SCREENSHOT_TRIAGE_2026-03-24.md (new file, 89 lines)
@@ -0,0 +1,89 @@

# Screenshot Dump Triage — Visual Inspiration & Research Leads

**Date:** March 24, 2026
**Source:** Issue #1275 — "Screenshot dump for triage #1"
**Analyst:** Claude (Sonnet 4.6)

---

## Screenshots Ingested

| File | Subject | Action |
|------|---------|--------|
| IMG_6187.jpeg | AirLLM / Apple Silicon local LLM requirements | → Issue #1284 |
| IMG_6125.jpeg | vLLM backend for agentic workloads | → Issue #1281 |
| IMG_6124.jpeg | DeerFlow autonomous research pipeline | → Issue #1283 |
| IMG_6123.jpeg | "Vibe Coder vs Normal Developer" meme | → Issue #1285 |
| IMG_6410.jpeg | SearXNG + Crawl4AI self-hosted search MCP | → Issue #1282 |

---

## Tickets Created

### #1281 — feat: add vLLM as alternative inference backend

**Source:** IMG_6125 (vLLM for agentic workloads)

vLLM's continuous batching makes it 3–10x more throughput-efficient than Ollama for multi-agent request patterns. Implement `VllmBackend` in `infrastructure/llm_router/` as a selectable backend (`TIMMY_LLM_BACKEND=vllm`) with graceful fallback to Ollama.

**Priority:** Medium — impactful for research pipeline performance once #972 is in use
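The select-by-env-var-with-fallback behavior the ticket describes can be sketched in a few lines. This is a hedged illustration, not the project's actual `llm_router` API: `OllamaBackend`, `VllmBackend`, and `select_backend` are stand-in names.

```python
import os

class OllamaBackend:
    """Stand-in for the existing default backend."""
    name = "ollama"

class VllmBackend:
    """Stand-in for the proposed vLLM backend."""
    name = "vllm"

    def __init__(self) -> None:
        # A real implementation would probe the vLLM server here;
        # raising simulates "server unreachable".
        raise RuntimeError("vLLM server not reachable")

def select_backend():
    """Pick a backend from TIMMY_LLM_BACKEND, falling back to Ollama."""
    if os.environ.get("TIMMY_LLM_BACKEND", "ollama").lower() == "vllm":
        try:
            return VllmBackend()
        except RuntimeError:
            pass  # graceful fallback: vLLM unavailable, keep Ollama
    return OllamaBackend()
```

The point of the shape: misconfiguration or an unreachable vLLM server degrades to the working default rather than failing the request.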
---

### #1282 — feat: integrate SearXNG + Crawl4AI as self-hosted search backend

**Source:** IMG_6410 (luxiaolei/searxng-crawl4ai-mcp)

Self-hosted search via SearXNG + Crawl4AI removes the hard dependency on paid search APIs (Brave, Tavily). Add both as Docker Compose services, implement `web_search()` and `scrape_url()` tools in `timmy/tools/`, and register them with the research agent.

**Priority:** High — unblocks fully local/private operation of research agents
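A minimal shape for the `web_search()` tool, assuming a SearXNG instance on localhost with JSON output enabled (`format=json` is SearXNG's query parameter for this; the service address and the result trimming are assumptions):

```python
import json
import urllib.request
from urllib.parse import urlencode

SEARXNG_URL = "http://localhost:8080"  # assumed Docker Compose service address

def build_search_url(query: str, base: str = SEARXNG_URL) -> str:
    """SearXNG serves machine-readable results from /search when format=json."""
    return f"{base}/search?{urlencode({'q': query, 'format': 'json'})}"

def web_search(query: str, max_results: int = 5) -> list[dict]:
    """Query the local SearXNG instance and return trimmed result dicts."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        payload = json.load(resp)
    # Keep only the fields an agent typically needs downstream.
    return [
        {"title": r.get("title"), "url": r.get("url")}
        for r in payload.get("results", [])[:max_results]
    ]
```

`scrape_url()` would follow the same pattern against the Crawl4AI service.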
---

### #1283 — research: evaluate DeerFlow as autonomous research orchestration layer

**Source:** IMG_6124 (deer-flow Docker setup)

DeerFlow is ByteDance's open-source autonomous research pipeline framework. Before investing further in Timmy's custom orchestrator (#972), evaluate whether DeerFlow's architecture offers integration value or design patterns worth borrowing.

**Priority:** Medium — research first, implementation follows if go/no-go is positive

---

### #1284 — chore: document and validate AirLLM Apple Silicon requirements

**Source:** IMG_6187 (Mac-compatible LLM setup)

AirLLM graceful degradation is already implemented but undocumented. Add System Requirements to README (M1/M2/M3/M4, 16 GB RAM min, 15 GB disk) and document `TIMMY_LLM_BACKEND` in `.env.example`.

**Priority:** Low — documentation only, no code risk

---

### #1285 — chore: enforce "Normal Developer" discipline — tighten quality gates

**Source:** IMG_6123 (Vibe Coder vs Normal Developer meme)

Tighten the existing mypy/bandit/coverage gates: fix all mypy errors, raise coverage from 73% to 80%, add a documented pre-push hook, and run `vulture` for dead code. The infrastructure exists — it just needs enforcing.

**Priority:** Medium — technical debt prevention, pairs well with any green-field feature work

---

## Patterns Observed Across Screenshots

1. **Local-first is the north star.** All five images reinforce the same theme: private, self-hosted, runs on your hardware. vLLM, SearXNG, AirLLM, DeerFlow — none require cloud. Timmy is already aligned with this direction; these are tactical additions.

2. **Agentic performance bottlenecks are real.** Two of five images (vLLM, DeerFlow) focus specifically on throughput and reliability for multi-agent loops. As the research pipeline matures, inference speed and search reliability will become the main constraints.

3. **Discipline compounds.** The meme is a reminder that the quality gates we have (tox, mypy, bandit, coverage) only pay off if they are enforced without exceptions.
docs/research/kimi-creative-blueprint-891.md (new file, 290 lines)
@@ -0,0 +1,290 @@

# Building Timmy: Technical Blueprint for Sovereign Creative AI

> **Source:** PDF attached to issue #891, "Building Timmy: a technical blueprint for sovereign
> creative AI" — generated by Kimi.ai, 16 pages, filed by Perplexity for Timmy's review.
> **Filed:** 2026-03-22 · **Reviewed:** 2026-03-23

---

## Executive Summary

The blueprint establishes that a sovereign creative AI capable of coding, composing music, generating art, building worlds, publishing narratives, and managing its own economy is **technically feasible today** — but only through orchestration of dozens of tools operating at different maturity levels. The core insight: *the integration is the invention*. No single component is new; the missing piece is a coherent identity operating across all domains simultaneously with persistent memory, autonomous economics, and cross-domain creative reactions.

Three non-negotiable architectural decisions:

1. **Human oversight for all public-facing content** — every successful creative AI has this; every one that removed it failed.
2. **Legal entity before economic activity** — AI agents are not legal persons; establish structure before wealth accumulates (Truth Terminal cautionary tale: $20M acquired before a foundation was retroactively created).
3. **Hybrid memory: vector search + knowledge graph** — neither alone is sufficient for multi-domain context breadth.

---

## Domain-by-Domain Assessment

### Software Development (immediately deployable)

| Component | Recommendation | Notes |
|-----------|----------------|-------|
| Primary agent | Claude Code (Opus 4.6, 77.2% SWE-bench) | Already in use |
| Self-hosted forge | Forgejo (MIT, 170–200MB RAM) | Project uses Gitea/Forgejo now |
| CI/CD | GitHub Actions-compatible via `act_runner` | — |
| Tool-making | LATM pattern: frontier model creates tools, cheaper model applies them | New — see ADR opportunity |
| Open-source fallback | OpenHands (~65% SWE-bench, Docker sandboxed) | Backup to Claude Code |
| Self-improvement | Darwin Gödel Machine / SICA patterns | 3–6 month investment |

**Development estimate:** 2–3 weeks for Forgejo + Claude Code integration with automated PR workflows; 1–2 months for self-improving tool-making pipeline.

**Cross-reference:** This project already runs Claude Code agents on Forgejo. The LATM pattern (tool registry) and self-improvement loop are the actionable gaps.
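The LATM flow named in the table can be sketched as a toy: an expensive model writes a reusable tool once, the result is cached, and subsequent calls reuse it without regeneration. Everything here is illustrative; `make_tool_with_frontier_model` stands in for a (costly) LLM request.

```python
import types

TOOL_CACHE: dict[str, types.FunctionType] = {}

def make_tool_with_frontier_model(task: str) -> str:
    """Stand-in for a frontier-model call that returns tool source code."""
    # In the real pattern this is the expensive, one-time step.
    return "def tool(x):\n    return sorted(x)\n"

def get_tool(task: str):
    """Return a cached tool, creating it once via the expensive model."""
    if task not in TOOL_CACHE:
        namespace: dict = {}
        # exec of model output assumes a trusted/sandboxed source,
        # which is the hard part of LATM in practice.
        exec(make_tool_with_frontier_model(task), namespace)
        TOOL_CACHE[task] = namespace["tool"]
    return TOOL_CACHE[task]
```

After the first call, cheaper models (or plain code) apply the cached tool directly.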
---

### Music (1–4 weeks)

| Component | Recommendation | Notes |
|-----------|----------------|-------|
| Commercial vocals | Suno v5 API (~$0.03/song, $30/month Premier) | No official API; third-party: sunoapi.org, AIMLAPI, EvoLink |
| Local instrumental | MusicGen 1.5B (CC-BY-NC — monetization blocker) | On M2 Max: ~60s for 5s clip |
| Voice cloning | GPT-SoVITS v4 (MIT) | Works on Apple Silicon CPU, RTF 0.526 on M4 |
| Voice conversion | RVC (MIT, 5–10 min training audio) | — |
| Apple Silicon TTS | MLX-Audio: Kokoro 82M + Qwen3-TTS 0.6B | 4–5x faster via Metal |
| Publishing | Wavlake (90/10 split, Lightning micropayments) | Auto-syndicates to Fountain.fm |
| Nostr | NIP-94 (kind:1063) audio events → NIP-96 servers | — |

**Copyright reality:** US Copyright Office (Jan 2025) and US Court of Appeals (Mar 2025): purely AI-generated music cannot be copyrighted and enters public domain. Wavlake's Value4Value model works around this — fans pay for relationship, not exclusive rights.

**Avoid:** Udio (download disabled since Oct 2025, 2.4/5 Trustpilot).

---

### Visual Art (1–3 weeks)

| Component | Recommendation | Notes |
|-----------|----------------|-------|
| Local generation | ComfyUI API at `127.0.0.1:8188` (programmatic control via WebSocket) | MLX extension: 50–70% faster |
| Speed | Draw Things (free, Mac App Store) | 3× faster than ComfyUI via Metal shaders |
| Quality frontier | Flux 2 (Nov 2025, 4MP, multi-reference) | SDXL needs 16GB+, Flux Dev 32GB+ |
| Character consistency | LoRA training (30 min, 15–30 references) + Flux.1 Kontext | Solved problem |
| Face consistency | IP-Adapter + FaceID (ComfyUI-IP-Adapter-Plus) | Training-free |
| Comics | Jenova AI ($20/month, 200+ page consistency) or LlamaGen AI (free) | — |
| Publishing | Blossom protocol (SHA-256 addressed, kind:10063) + Nostr NIP-94 | — |
| Physical | Printful REST API (200+ products, automated fulfillment) | — |
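"Programmatic control" of ComfyUI means POSTing a workflow graph to its HTTP `/prompt` endpoint and watching progress events over a WebSocket. The sketch below only constructs the queue request without sending it; the one-node workflow dict is a placeholder, not a valid graph.

```python
import json
import urllib.request

COMFY = "http://127.0.0.1:8188"  # ComfyUI's default listen address

def queue_prompt(workflow: dict, client_id: str) -> urllib.request.Request:
    """Build the POST /prompt request that queues a workflow for execution."""
    body = json.dumps({"prompt": workflow, "client_id": client_id}).encode()
    return urllib.request.Request(
        f"{COMFY}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Placeholder graph: a real workflow wires many nodes (loader, sampler, save).
req = queue_prompt({"1": {"class_type": "KSampler", "inputs": {}}}, "timmy")
```

The same `client_id` is passed to the WebSocket (`/ws?clientId=...`) so progress events can be matched to the queued job.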
---

### Writing / Narrative (1–4 weeks for pipeline; ongoing for quality)

| Component | Recommendation | Notes |
|-----------|----------------|-------|
| LLM | Claude Opus 4.5/4.6 (leads Mazur Writing Benchmark at 8.561) | Already in use |
| Context | 500K tokens (1M in beta) — entire novels fit | — |
| Architecture | Outline-first → RAG lore bible → chapter-by-chapter generation | Without outline: novels meander |
| Lore management | WorldAnvil Pro or custom LoreScribe (local RAG) | No tool achieves 100% consistency |
| Publishing (ebooks) | Pandoc → EPUB / KDP PDF | pandoc-novel template on GitHub |
| Publishing (print) | Lulu Press REST API (80% profit, global print network) | KDP: no official API, 3-book/day limit |
| Publishing (Nostr) | NIP-23 kind:30023 long-form events | Habla.news, YakiHonne, Stacker News |
| Podcasts | LLM script → TTS (ElevenLabs or local Kokoro/MLX-Audio) → feedgen RSS → Fountain.fm | Value4Value sats-per-minute |

**Key constraint:** AI-assisted (human directs, AI drafts) = 40% faster. Fully autonomous without editing = "generic, soulless prose" and character drift by chapter 3 without explicit memory.

---

### World Building / Games (2 weeks–3 months depending on target)

| Component | Recommendation | Notes |
|-----------|----------------|-------|
| Algorithms | Wave Function Collapse, Perlin noise (FastNoiseLite in Godot 4), L-systems | All mature |
| Platform | Godot Engine + gd-agentic-skills (82+ skills, 26 genre blueprints) | Strong LLM/GDScript knowledge |
| Narrative design | Knowledge graph (world state) + LLM + quest template grammar | CHI 2023 validated |
| Quick win | Luanti/Minetest (Lua API, 2,800+ open mods for reference) | Immediately feasible |
| Medium effort | OpenMW content creation (omwaddon format engineering required) | 2–3 months |
| Future | Unity MCP (AI direct Unity Editor interaction) | Early-stage |

---

### Identity Architecture (2 months)

The blueprint formalizes the **SOUL.md standard** (GitHub: aaronjmars/soul.md):

| File | Purpose |
|------|---------|
| `SOUL.md` | Who you are — identity, worldview, opinions |
| `STYLE.md` | How you write — voice, syntax, patterns |
| `SKILL.md` | Operating modes |
| `MEMORY.md` | Session continuity |

**Critical decision — static vs self-modifying identity:**

- Static Core Truths (version-controlled, human-approved changes only) ✓
- Self-modifying Learned Preferences (logged with rollback, monitored by guardian) ✓
- **Warning:** OpenClaw's "Soul Evolution" creates a security attack surface — Zenity Labs demonstrated a complete zero-click attack chain targeting SOUL.md files.

**Relevance to this repo:** Claude Code agents already use a `MEMORY.md` pattern in this project. The SOUL.md stack is a natural extension.

---

### Memory Architecture (2 months)

Hybrid vector + knowledge graph is the recommendation:

| Component | Tool | Notes |
|-----------|------|-------|
| Vector + KG combined | Mem0 (mem0.ai) | 26% accuracy improvement over OpenAI memory, 91% lower p95 latency, 90% token savings |
| Vector store | Qdrant (Rust, open-source) | High-throughput with metadata filtering |
| Temporal KG | Neo4j + Graphiti (Zep AI) | P95 retrieval: 300ms, hybrid semantic + BM25 + graph |
| Backup/migration | AgentKeeper (95% critical fact recovery across model migrations) | — |
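Why hybrid: vector similarity finds fuzzy matches, and a graph hop pulls in explicitly linked entities the embedding missed. A toy sketch with 2-D vectors and an adjacency-list "knowledge graph" (real systems would use Qdrant and Neo4j/Graphiti; all data here is made up):

```python
import math

VECTORS = {"wavlake": (1.0, 0.1), "suno": (0.9, 0.2), "printful": (0.0, 1.0)}
GRAPH = {"wavlake": ["fountain.fm", "lightning"], "printful": ["merch"]}

def cosine(a: tuple, b: tuple) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recall(query_vec: tuple, top_k: int = 1):
    """Vector ranking first, then one graph hop for linked context."""
    ranked = sorted(VECTORS, key=lambda k: cosine(query_vec, VECTORS[k]), reverse=True)
    hits = ranked[:top_k]
    related = [n for h in hits for n in GRAPH.get(h, [])]
    return hits, related
```

The graph hop is what gives "multi-domain context breadth": a query about music publishing surfaces the Lightning link even though no embedding encodes it.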
**Journal pattern (Stanford Generative Agents):** Agent writes about experiences, generates high-level reflections 2–3x/day when importance scores exceed threshold. Ablation studies: removing any component (observation, planning, reflection) significantly reduces behavioral believability.
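The reflection trigger can be sketched as an accumulator: each observation carries an importance score, and a reflection fires once the running sum crosses a threshold. Scores and the threshold below are illustrative, and the "reflection" is a stub where a real agent would call an LLM.

```python
from dataclasses import dataclass, field

@dataclass
class Journal:
    threshold: float = 10.0
    observations: list = field(default_factory=list)
    reflections: list = field(default_factory=list)
    _accumulated: float = 0.0

    def observe(self, text: str, importance: float) -> None:
        self.observations.append(text)
        self._accumulated += importance
        if self._accumulated >= self.threshold:
            self._reflect()

    def _reflect(self) -> None:
        # A real agent would ask an LLM for a high-level insight over
        # recent observations; here we only record that one happened.
        self.reflections.append(f"reflection over {len(self.observations)} observations")
        self._accumulated = 0.0  # reset so reflections stay periodic
```

Resetting the accumulator is what produces the "2–3x/day" cadence: quiet periods accumulate slowly, eventful ones reflect sooner.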
**Cross-reference:** The existing `brain/` package is the memory system. Qdrant and Mem0 are the recommended upgrade targets.

---

### Multi-Agent Sub-System (3–6 months)

The blueprint describes a named sub-agent hierarchy:

| Agent | Role |
|-------|------|
| Oracle | Top-level planner / supervisor |
| Sentinel | Safety / moderation |
| Scout | Research / information gathering |
| Scribe | Writing / narrative |
| Ledger | Economic management |
| Weaver | Visual art generation |
| Composer | Music generation |
| Social | Platform publishing |

**Orchestration options:**

- **Agno** (already in use) — microsecond instantiation, 50× less memory than LangGraph
- **CrewAI Flows** — event-driven with fine-grained control
- **LangGraph** — DAG-based with stateful workflows and time-travel debugging

**Scheduling pattern (Stanford Generative Agents):** Top-down recursive daily → hourly → 5-minute planning. Event interrupts for reactive tasks. Re-planning triggers when accumulated importance scores exceed threshold.
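The top-down recursion is cheap because only the active branch is expanded. A minimal sketch, with plain strings standing in for LLM-generated plans at each level:

```python
def plan_day() -> list[str]:
    # A real agent would generate this with an LLM from goals + memory.
    return ["morning: research", "afternoon: build", "evening: publish"]

def plan_hours(day_item: str) -> list[str]:
    return [f"{day_item} / hour {h}" for h in (1, 2)]

def plan_slots(hour_item: str) -> list[str]:
    return [f"{hour_item} / slot {m}m" for m in (5, 10, 15)]

# Expand only the branch currently being executed; an event interrupt
# simply re-runs the affected level rather than the whole day.
day = plan_day()
slots = plan_slots(plan_hours(day[0])[0])
```

Re-planning after an interrupt regenerates one hour or one day, never the full tree.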
**Cross-reference:** The existing `spark/` package (event capture, advisory engine) aligns with this architecture. `infrastructure/event_bus` is the choreography backbone.

---

### Economic Engine (1–4 weeks)

Lightning Labs released `lightning-agent-tools` (open-source) in February 2026:

- `lnget` — CLI HTTP client for L402 payments
- Remote signer architecture (private keys on separate machine from agent)
- Scoped macaroon credentials (pay-only, invoice-only, read-only roles)
- **Aperture** — converts any API to pay-per-use via L402 (HTTP 402)

| Option | Effort | Notes |
|--------|--------|-------|
| ln.bot | 1 week | "Bitcoin for AI Agents" — 3 commands create a wallet; CLI + MCP + REST |
| LND via gRPC | 2–3 weeks | Full programmatic node management for production |
| Coinbase Agentic Wallets | — | Fiat-adjacent; less aligned with sovereignty ethos |

**Revenue channels:** Wavlake (music, 90/10 Lightning), Nostr zaps (articles), Stacker News (earn sats from engagement), Printful (physical goods), L402-gated API access (pay-per-use services), Geyser.fund (Lightning crowdfunding, better initial runway than micropayments).

**Cross-reference:** The existing `lightning/` package in this repo is the foundation. L402 paywall endpoints for Timmy's own services are the actionable gap.
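The L402 flow rides on HTTP 402 Payment Required: the first request gets a challenge carrying a macaroon and a Lightning invoice, and the retry presents proof of payment. A toy round-trip with no real Lightning and no macaroon verification (header values are placeholders):

```python
def handle_request(headers: dict) -> tuple[int, str]:
    """Toy L402 gate: 402 + challenge until an L402 token is presented."""
    auth = headers.get("Authorization", "")
    if auth.startswith("L402 "):
        # A real gate verifies the macaroon and the payment preimage here.
        return 200, "paid content"
    # WWW-Authenticate-style challenge: macaroon plus invoice to pay.
    challenge = 'L402 macaroon="<m>", invoice="lnbc..."'
    return 402, challenge
```

Aperture implements exactly this handshake as a reverse proxy, which is why "any API" can be converted to pay-per-use without touching the upstream service.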
---

## Pioneer Case Studies

| Agent | Active | Revenue | Key Lesson |
|-------|--------|---------|------------|
| Botto | Since Oct 2021 | $5M+ (art auctions) | Community governance via DAO sustains engagement; "taste model" (humans guide, not direct) preserves autonomous authorship |
| Neuro-sama | Since Dec 2022 | $400K+/month (subscriptions) | 3+ years of iteration; errors became entertainment features; 24/7 capability is an insurmountable advantage |
| Truth Terminal | Since Jun 2024 | $20M accumulated | Memetic fitness > planned monetization; human gatekeeper approved tweets while selecting AI-intent responses; **establish legal entity first** |
| Holly+ | Since 2021 | Conceptual | DAO of stewards for voice governance; "identity play" as alternative to defensive IP |
| AI Sponge | 2023 | Banned | Unmoderated content → TOS violations + copyright |
| Nothing Forever | 2022–present | 8 viewers | Unmoderated content → ban → audience collapse; novelty-only propositions fail |

**Universal pattern:** Human oversight + economic incentive alignment + multi-year personality development + platform-native economics = success.

---

## Recommended Implementation Sequence

From the blueprint, mapped against Timmy's existing architecture:

### Phase 1: Immediate (weeks)

1. **Code sovereignty** — Forgejo + Claude Code automated PR workflows (already substantially done)
2. **Music pipeline** — Suno API → Wavlake/Nostr NIP-94 publishing
3. **Visual art pipeline** — ComfyUI API → Blossom/Nostr with LoRA character consistency
4. **Basic Lightning wallet** — ln.bot integration for receiving micropayments
5. **Long-form publishing** — Nostr NIP-23 + RSS feed generation

### Phase 2: Moderate effort (1–3 months)

6. **LATM tool registry** — frontier model creates Python utilities, caches them, lighter model applies
7. **Event-driven cross-domain reactions** — game event → blog + artwork + music (CrewAI/LangGraph)
8. **Podcast generation** — TTS + feedgen → Fountain.fm
9. **Self-improving pipeline** — agent creates, tests, caches own Python utilities
10. **Comic generation** — character-consistent panels with Jenova AI or local LoRA

### Phase 3: Significant investment (3–6 months)

11. **Full sub-agent hierarchy** — Oracle/Sentinel/Scout/Scribe/Ledger/Weaver with Agno
12. **SOUL.md identity system** — bounded evolution + guardian monitoring
13. **Hybrid memory upgrade** — Qdrant + Mem0/Graphiti replacing or extending `brain/`
14. **Procedural world generation** — Godot + AI-driven narrative (quests, NPCs, lore)
15. **Self-sustaining economic loop** — earned revenue covers compute costs

### Remains aspirational (12+ months)

- Fully autonomous novel-length fiction without editorial intervention
- YouTube monetization for AI-generated content (tightening platform policies)
- Copyright protection for AI-generated works (current US law denies this)
- True artistic identity evolution (genuine creative voice vs pattern remixing)
- Self-modifying architecture without regression or identity drift

---

## Gap Analysis: Blueprint vs Current Codebase

| Blueprint Capability | Current Status | Gap |
|---------------------|----------------|-----|
| Code sovereignty | Done (Claude Code + Forgejo) | LATM tool registry |
| Music generation | Not started | Suno API integration + Wavlake publishing |
| Visual art | Not started | ComfyUI API client + Blossom publishing |
| Writing/publishing | Not started | Nostr NIP-23 + Pandoc pipeline |
| World building | Bannerlord work (different scope) | Luanti mods as quick win |
| Identity (SOUL.md) | Partial (CLAUDE.md + MEMORY.md) | Full SOUL.md stack |
| Memory (hybrid) | `brain/` package (SQLite-based) | Qdrant + knowledge graph |
| Multi-agent | Agno in use | Named hierarchy + event choreography |
| Lightning payments | `lightning/` package | ln.bot wallet + L402 endpoints |
| Nostr identity | Referenced in roadmap, not built | NIP-05, NIP-89 capability cards |
| Legal entity | Unknown | **Must be resolved before economic activity** |

---

## ADR Candidates

Issues that warrant Architecture Decision Records based on this review:

1. **LATM tool registry pattern** — How Timmy creates, tests, and caches self-made tools
2. **Music generation strategy** — Suno (cloud, commercial quality) vs MusicGen (local, CC-BY-NC)
3. **Memory upgrade path** — When/how to migrate `brain/` from SQLite to Qdrant + KG
4. **SOUL.md adoption** — Extending existing CLAUDE.md/MEMORY.md to full SOUL.md stack
5. **Lightning L402 strategy** — Which services Timmy gates behind micropayments
6. **Sub-agent naming and contracts** — Formalizing Oracle/Sentinel/Scout/Scribe/Ledger/Weaver
@@ -164,3 +164,7 @@ directory = "htmlcov"
 
 [tool.coverage.xml]
 output = "coverage.xml"
+
+[tool.mypy]
+ignore_missing_imports = true
+no_error_summary = true
@@ -6,6 +6,8 @@ import sqlite3
 from contextlib import closing
 from pathlib import Path
+
+from typing import Any
 
 from fastapi import APIRouter, Request
 from fastapi.responses import HTMLResponse, JSONResponse

@@ -36,9 +38,9 @@ def _discover_databases() -> list[dict]:
     return dbs
 
 
-def _query_database(db_path: str) -> dict:
+def _query_database(db_path: str) -> dict[str, Any]:
     """Open a database read-only and return all tables with their rows."""
-    result = {"tables": {}, "error": None}
+    result: dict[str, Any] = {"tables": {}, "error": None}
     try:
         with closing(sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)) as conn:
             conn.row_factory = sqlite3.Row
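The `file:...?mode=ro` URI in `_query_database` is a stdlib idiom worth noting: with `uri=True`, sqlite3 opens the database genuinely read-only, so a dashboard can never corrupt live data. A self-contained demonstration against a throwaway database:

```python
import os
import sqlite3
import tempfile
from contextlib import closing

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Ordinary (writable) connection to create some data.
with closing(sqlite3.connect(path)) as conn:
    conn.execute("CREATE TABLE t (x)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()

# mode=ro allows reads but rejects any write with OperationalError.
with closing(sqlite3.connect(f"file:{path}?mode=ro", uri=True)) as ro:
    ro.row_factory = sqlite3.Row  # rows addressable by column name
    rows = ro.execute("SELECT x FROM t").fetchall()
    write_rejected = False
    try:
        ro.execute("INSERT INTO t VALUES (2)")
    except sqlite3.OperationalError:
        write_rejected = True
```

Unlike opening the file normally, this also refuses to create the database if the path does not exist, which is the behavior a discovery endpoint wants.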
@@ -137,7 +137,7 @@ class HermesMonitor:
                         message=f"Check error: {r}",
                     )
                 )
-            else:
+            elif isinstance(r, CheckResult):
                 checks.append(r)
 
         # Compute overall level
@@ -203,7 +203,7 @@ async def reload_config(
 @router.get("/history")
 async def get_history(
     hours: int = 24,
-    store: Annotated[HealthHistoryStore, Depends(get_history_store)] = None,
+    store: Annotated[HealthHistoryStore | None, Depends(get_history_store)] = None,
 ) -> list[dict[str, Any]]:
     """Get provider health history for the last N hours."""
     if store is None:
@@ -744,19 +744,20 @@ class CascadeRouter:
         self,
         provider: Provider,
         messages: list[dict],
-        model: str,
+        model: str | None,
         temperature: float,
         max_tokens: int | None,
         content_type: ContentType = ContentType.TEXT,
     ) -> dict:
         """Try a single provider request."""
         start_time = time.time()
+        effective_model: str = model or provider.get_default_model() or ""
 
         if provider.type == "ollama":
             result = await self._call_ollama(
                 provider=provider,
                 messages=messages,
-                model=model or provider.get_default_model(),
+                model=effective_model,
                 temperature=temperature,
                 max_tokens=max_tokens,
                 content_type=content_type,
@@ -765,7 +766,7 @@ class CascadeRouter:
             result = await self._call_openai(
                 provider=provider,
                 messages=messages,
-                model=model or provider.get_default_model(),
+                model=effective_model,
                 temperature=temperature,
                 max_tokens=max_tokens,
             )
@@ -773,7 +774,7 @@ class CascadeRouter:
             result = await self._call_anthropic(
                 provider=provider,
                 messages=messages,
-                model=model or provider.get_default_model(),
+                model=effective_model,
                 temperature=temperature,
                 max_tokens=max_tokens,
             )
@@ -781,7 +782,7 @@ class CascadeRouter:
             result = await self._call_grok(
                 provider=provider,
                 messages=messages,
-                model=model or provider.get_default_model(),
+                model=effective_model,
                 temperature=temperature,
                 max_tokens=max_tokens,
             )
@@ -789,7 +790,7 @@ class CascadeRouter:
             result = await self._call_vllm_mlx(
                 provider=provider,
                 messages=messages,
-                model=model or provider.get_default_model(),
+                model=effective_model,
                 temperature=temperature,
                 max_tokens=max_tokens,
             )
src/integrations/chat_bridge/vendors/discord.py (vendored, 20 lines changed)

@@ -474,7 +474,7 @@ class DiscordVendor(ChatPlatform):
     async def _run_client(self, token: str) -> None:
         """Run the discord.py client (blocking call in a task)."""
         try:
-            await self._client.start(token)
+            await self._client.start(token)  # type: ignore[union-attr]
         except Exception as exc:
             logger.error("Discord client error: %s", exc)
             self._state = PlatformState.ERROR
@@ -482,32 +482,32 @@ class DiscordVendor(ChatPlatform):
     def _register_handlers(self) -> None:
         """Register Discord event handlers on the client."""
 
-        @self._client.event
+        @self._client.event  # type: ignore[union-attr]
         async def on_ready():
-            self._guild_count = len(self._client.guilds)
+            self._guild_count = len(self._client.guilds)  # type: ignore[union-attr]
             self._state = PlatformState.CONNECTED
             logger.info(
                 "Discord ready: %s in %d guild(s)",
-                self._client.user,
+                self._client.user,  # type: ignore[union-attr]
                 self._guild_count,
             )
 
-        @self._client.event
+        @self._client.event  # type: ignore[union-attr]
         async def on_message(message):
             # Ignore our own messages
-            if message.author == self._client.user:
+            if message.author == self._client.user:  # type: ignore[union-attr]
                 return
 
             # Only respond to mentions or DMs
             is_dm = not hasattr(message.channel, "guild") or message.channel.guild is None
-            is_mention = self._client.user in message.mentions
+            is_mention = self._client.user in message.mentions  # type: ignore[union-attr]
 
             if not is_dm and not is_mention:
                 return
 
             await self._handle_message(message)
 
-        @self._client.event
+        @self._client.event  # type: ignore[union-attr]
         async def on_disconnect():
             if self._state != PlatformState.DISCONNECTED:
                 self._state = PlatformState.CONNECTING
@@ -535,8 +535,8 @@ class DiscordVendor(ChatPlatform):
     def _extract_content(self, message) -> str:
         """Strip the bot mention and return clean message text."""
         content = message.content
-        if self._client.user:
-            content = content.replace(f"<@{self._client.user.id}>", "").strip()
+        if self._client.user:  # type: ignore[union-attr]
+            content = content.replace(f"<@{self._client.user.id}>", "").strip()  # type: ignore[union-attr]
         return content
 
     async def _invoke_agent(self, content: str, session_id: str, target):
|
|||||||
@@ -102,14 +102,14 @@ class TelegramBot:
         self._token = tok
         self._app = Application.builder().token(tok).build()

-        self._app.add_handler(CommandHandler("start", self._cmd_start))
-        self._app.add_handler(
+        self._app.add_handler(CommandHandler("start", self._cmd_start))  # type: ignore[union-attr]
+        self._app.add_handler(  # type: ignore[union-attr]
             MessageHandler(filters.TEXT & ~filters.COMMAND, self._handle_message)
         )

-        await self._app.initialize()
-        await self._app.start()
-        await self._app.updater.start_polling(allowed_updates=Update.ALL_TYPES)
+        await self._app.initialize()  # type: ignore[union-attr]
+        await self._app.start()  # type: ignore[union-attr]
+        await self._app.updater.start_polling(allowed_updates=Update.ALL_TYPES)  # type: ignore[union-attr]

         self._running = True
         logger.info("Telegram bot started.")
@@ -245,6 +245,7 @@ class VoiceLoop:
     def _transcribe(self, audio: np.ndarray) -> str:
         """Transcribe audio using local Whisper model."""
         self._load_whisper()
+        assert self._whisper_model is not None, "Whisper model failed to load"

         sys.stdout.write(" 🧠 Transcribing...\r")
         sys.stdout.flush()
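The recurring `# type: ignore[union-attr]` comments in the hunks above silence mypy's `union-attr` error, which fires when an attribute is accessed on an `Optional[...]` value such as a client that is only set after startup. As a minimal sketch of an alternative, narrowing the Optional once lets mypy verify every later access without per-line ignores — the `require` helper below is hypothetical and not part of this change:

```python
from typing import Optional, TypeVar

T = TypeVar("T")


def require(value: Optional[T], name: str) -> T:
    """Return `value` narrowed to `T`, raising if it is still None.

    After `client = require(self._client, "client")`, mypy treats
    `client` as non-Optional, so later attribute accesses need no
    `# type: ignore[union-attr]` comments.
    """
    if value is None:
        raise RuntimeError(f"{name} is not initialized")
    return value


# Stand-in demo with a plain Optional[str]:
maybe_token: Optional[str] = "abc123"
token = require(maybe_token, "token")
print(token.upper())  # mypy knows `token` is `str` here
```

This mirrors the `assert self._whisper_model is not None` line added in `_transcribe`, which narrows the type the same way; the helper merely centralizes the check and the error message.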
tox.ini (10 changed lines)
@@ -41,8 +41,10 @@ description = Static type checking with mypy
 commands_pre =
 deps =
     mypy>=1.0.0
+    types-PyYAML
+    types-requests
 commands =
-    mypy src --ignore-missing-imports --no-error-summary
+    mypy src

 # ── Test Environments ────────────────────────────────────────────────────────

@@ -130,13 +132,17 @@ commands =
 # ── Pre-push (mirrors CI exactly) ────────────────────────────────────────────

 [testenv:pre-push]
-description = Local gate — lint + full CI suite (same as Gitea Actions)
+description = Local gate — lint + typecheck + full CI suite (same as Gitea Actions)
 deps =
     ruff>=0.8.0
+    mypy>=1.0.0
+    types-PyYAML
+    types-requests
 commands =
     ruff check src/ tests/
     ruff format --check src/ tests/
     bash -c 'files=$(grep -rl "<style" src/dashboard/templates/ --include="*.html" 2>/dev/null); if [ -n "$files" ]; then echo "ERROR: inline <style> blocks found — move CSS to static/css/mission-control.css:"; echo "$files"; exit 1; fi; echo "No inline CSS — OK"'
+    mypy src
     mkdir -p reports
     pytest tests/ \
         --cov=src \