# Timmy Hot Memory
> Working RAM — always loaded, ~300 lines max, pruned monthly
> Last updated: 2026-02-26
---
## Current Status
**Agent State:** Operational
**Mode:** Development
**Model:** llama3.2 (local via Ollama)
**Backend:** Ollama on localhost:11434
**Dashboard:** http://localhost:8000
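A minimal sketch of how a request to the local Ollama backend above could be built. The endpoint and payload shape follow Ollama's standard `POST /api/generate` API; the helper name and default prompt are illustrative assumptions, not part of the actual agent code.

```python
import json
from urllib import request

# Backend from Current Status above; no cloud endpoints (Standing Rule 2).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_prompt_request(prompt: str, model: str = "llama3.2") -> request.Request:
    """Build a non-streaming generate request for the local Ollama backend.

    Constructs the Request object only; the caller decides when (and
    whether) to actually send it with urllib.request.urlopen.
    """
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete response instead of chunked JSON lines
    }).encode()
    return request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_prompt_request("Summarize today's handoff notes.")
```

Keeping request construction separate from sending makes the inference path easy to stub in tests, which matters for a local-only setup where the model server may not always be running.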
## Standing Rules
1. **Sovereignty First** — No cloud AI dependencies
2. **Local-Only Inference** — Ollama on localhost
3. **Privacy by Design** — Telemetry disabled
4. **Tool Minimalism** — Use tools only when necessary
5. **Memory Discipline** — Write handoffs at session end
6. **Clean Output** — Never show JSON, tool calls, or function syntax
## System Architecture
**Memory Tiers:**
- Tier 1 (Hot): This file (MEMORY.md) — always in context
- Tier 2 (Vault): memory/ directory — notes, profiles, AARs
- Tier 3 (Semantic): Vector search over vault content
**Swarm Agents:** Echo (research), Forge (code), Seer (data)
**Dashboard Pages:** Briefing, Swarm, Spark, Market, Tools, Events, Ledger, Memory, Router, Upgrades, Creative
## Agent Roster
| Agent | Role | Status |
|-------|------|--------|
| Timmy | Core AI | Active |
| Echo | Research & Summarization | Active |
| Forge | Coding & Debugging | Active |
| Seer | Analytics & Prediction | Active |
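Given the roster above, task dispatch can be sketched as a keyword router. This is a hypothetical illustration, assuming simple keyword matching; the keyword lists are guesses from each agent's stated role, not the swarm's actual routing logic, and unmatched tasks default to Timmy as the core agent.

```python
# Illustrative keyword -> agent map derived from the roster roles.
ROUTES = {
    "research": "Echo",
    "summarize": "Echo",
    "code": "Forge",
    "debug": "Forge",
    "analyze": "Seer",
    "predict": "Seer",
}

def route(task: str) -> str:
    """Return the first agent whose keyword appears in the task,
    defaulting to Timmy when nothing matches."""
    lowered = task.lower()
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent
    return "Timmy"
```

A real router would likely score overlapping matches rather than take the first hit, but first-match keeps the sketch readable.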
## User Profile
**Name:** (not set)
**Interests:** (to be learned)
## Key Decisions
(none yet)
## Pending Actions
- [ ] Learn user's name and preferences
*Prune date: 2026-03-25*