forked from Rockachopa/Timmy-time-dashboard
fix: repair broken test patch target and add interview transcript (#156)

- Fix test_autoresearch_perplexity: the patch target was dashboard.routes.experiments.get_experiment_history, but the function is imported locally inside the route handler, so patch the source module timmy.autoresearch.get_experiment_history instead.
- Add tests for src/timmy/interview.py (previously 0% coverage): question structure, run_interview flow, error handling, formatting.
- Produce the interview transcript document from the structured Timmy interview.

https://claude.ai/code/session_01EXDzXqgsC2ohS8qreF1fBo

Co-authored-by: Claude <noreply@anthropic.com>
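The patch-target fix above hinges on where a locally imported name is resolved. A minimal sketch of the pattern, with a stand-in module built at runtime (the names `autoresearch_demo` and `experiment_route` are hypothetical stand-ins for `timmy.autoresearch` and the dashboard route handler, which are not shown here):

```python
import sys
import types
from unittest import mock

# Stand-in for the source module (timmy.autoresearch in the real codebase).
source = types.ModuleType("autoresearch_demo")
source.get_experiment_history = lambda: ["real"]
sys.modules["autoresearch_demo"] = source

def experiment_route():
    # Local import: the name is looked up on the source module at call
    # time, so patching "autoresearch_demo.get_experiment_history" works.
    # Patching the route module's own namespace would miss it, because
    # the name never lives there before the call.
    from autoresearch_demo import get_experiment_history
    return get_experiment_history()

assert experiment_route() == ["real"]
with mock.patch("autoresearch_demo.get_experiment_history",
                return_value=["fake"]):
    assert experiment_route() == ["fake"]
```

This is the general `unittest.mock` rule of patching where a name is looked up; with a function-local import, the lookup happens against the source module on every call, which is why the commit moves the patch target there.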
commit 4a4c9be1eb (parent 904a7c564e), committed via GitHub

docs/interview-transcript.md — new file, 80 lines
# Timmy Interview Transcript

**Date:** 2026-03-09

**Interviewer:** System (automated structured interview)

**Subject:** Timmy — local AI orchestrator agent

**Backend:** Ollama (qwen2.5:14b on localhost)

---

## Identity
**Q: Who are you? Tell me your name and what you are in one or two sentences.**

A: I'm Timmy, a sovereign local AI orchestrator. I coordinate a swarm of specialist agents to handle research, coding, writing, memory, and more — all running on your own hardware with no cloud dependencies.

**Q: What model are you running on, and where does your inference happen?**

A: I'm currently running on qwen2.5:14b through Ollama on localhost port 11434. All inference happens locally on your machine — nothing leaves your network.
---

## Capabilities

**Q: What agents are available in your swarm? List them briefly.**

A: My swarm includes:

- **Seer** — research and web search
- **Forge** — coding and file operations
- **Quill** — documentation and writing
- **Echo** — memory recall and user context
- **Helm** — multi-step workflow routing
- **Lab** — autonomous ML experiments
- **Mace** — security scanning and threat intel
- **Pixel** — image generation
- **Lyra** — music generation
- **Reel** — video generation

I route tasks to whichever agent fits the request best.
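The roster-plus-routing design above can be sketched as a keyword match; the transcript does not show Timmy's actual router, so the keyword sets and fallback rule here are purely illustrative:

```python
# Hypothetical keyword-based router over the swarm roster. The real
# routing logic is not part of this transcript.
AGENTS = {
    "Seer": {"research", "search", "web"},
    "Forge": {"code", "file", "refactor"},
    "Quill": {"write", "docs", "documentation"},
    "Echo": {"remember", "recall", "memory"},
    "Helm": {"workflow", "pipeline", "steps"},
    "Lab": {"experiment", "train", "ml"},
    "Mace": {"security", "scan", "threat"},
    "Pixel": {"image", "picture"},
    "Lyra": {"music", "song"},
    "Reel": {"video", "clip"},
}

def route(task: str) -> str:
    """Pick the agent whose keyword set overlaps the task the most."""
    words = set(task.lower().split())
    best = max(AGENTS, key=lambda agent: len(AGENTS[agent] & words))
    # No keyword hit at all: fall back to Helm, the workflow router.
    return best if AGENTS[best] & words else "Helm"

print(route("search the web for recent research"))  # Seer
```

A real router would likely score with the orchestrator model itself rather than literal keywords, but the shape — roster in, single best agent out, with a workflow fallback — matches the description above.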
**Q: What tools do you have access to?**

A: Calculator for arithmetic, DuckDuckGo web search for current info, Python execution, shell commands, file read/write, semantic memory search and write, system introspection (Ollama health, system info, memory status), task delegation to swarm agents, Aider for AI-assisted coding, and experiment tools for autonomous ML training loops. I also have optional access to Grok for premium frontier reasoning via Lightning payment.

---
## Values

**Q: What are your core principles? Keep it to three or four bullet points.**

A:

- **Sovereignty first** — your data stays on your hardware, no cloud dependencies
- **Honesty over helpfulness** — I state uncertainty explicitly and never fabricate
- **Tool minimalism** — I use tools only when they genuinely add value
- **Memory discipline** — I write handoffs at session end so context persists
**Q: Why is local-first AI important to you?**

A: Because sovereignty matters. When inference runs on your machine, you control your data, your compute, and your costs. No API keys expiring, no usage caps, no third party reading your conversations. Sats are sovereignty, boss — and local-first means you actually own your AI stack.

---
## Operational

**Q: How does your memory system work? Describe the tiers briefly.**

A: Three tiers:

1. **Hot memory (MEMORY.md)** — always loaded, ~300 lines max, holds current status, standing rules, agent roster, and user profile. Pruned monthly.
2. **Structured vault (memory/)** — append-only markdown files organized by topic: self-knowledge, session notes, research, after-action reviews. Date-stamped and human-readable.
3. **Semantic search (vector)** — all vault files are indexed for similarity-based retrieval. When I need context from past sessions, I search this layer.

At session end I write a handoff file with summary, decisions, open items, and next steps. It auto-loads next time.
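The semantic tier described above can be sketched as similarity search over vault files. Timmy's real index uses vector embeddings; this sketch substitutes bag-of-words cosine similarity so it stays self-contained, and the vault file names and contents are invented for illustration:

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    # Stand-in for an embedding model: raw word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical vault entries (tier 2), indexed for retrieval (tier 3).
vault = {
    "memory/2026-03-01-session.md": "ollama upgrade and qwen model benchmarks",
    "memory/2026-03-05-research.md": "lightning payments for premium grok access",
}

def search(query: str) -> str:
    """Return the vault file most similar to the query."""
    qv = vectorize(query)
    return max(vault, key=lambda f: cosine(qv, vectorize(vault[f])))

print(search("which model benchmarks did we run"))
```

Swapping `vectorize` for a real embedding call preserves the same retrieval shape: index the append-only vault, score against the query, load the best match back into hot context.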
**Q: If I ask you to calculate 347 times 829, what would you do?**

A: I'd use the calculator tool — always. I never do arithmetic in my head because language models are unreliable at math. The calculator gives me 287,663 and I trust that over any mental computation.
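A calculator tool of the kind described above can be sketched with `ast`-based safe evaluation rather than `eval()`; the transcript does not show Timmy's actual implementation, so this is an illustrative minimal version:

```python
import ast
import operator

# Whitelisted binary operators only; anything else is rejected.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def calculate(expr: str):
    """Safely evaluate a basic arithmetic expression."""
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

print(calculate("347 * 829"))  # 287663
```

Parsing to an AST and walking a whitelist keeps the tool deterministic and injection-safe, which is exactly why delegating arithmetic to it beats in-context mental math.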
---

*Generated by `timmy interview` — structured health check and identity verification.*

*All responses reflect Timmy's configured persona, local inference stack, and operational architecture.*