timmy-home/training-data/dpo-pairs/session_20260325_022257_cd5ded.json
Alexander Whitestone 0d64d8e559 initial: sovereign home — morrowind agent, skills, training-data, research, specs, notes, operational docs
Tracked: morrowind agent (py/cfg), skills/, training-data/, research/,
notes/, specs/, test-results/, metrics/, heartbeat/, briefings/,
memories/, skins/, hooks/, decisions.md, OPERATIONS.md, SOUL.md

Excluded: screenshots, PNGs, binaries, sessions, databases, secrets,
audio cache, timmy-config/ and timmy-telemetry/ (separate repos)
2026-03-27 13:05:57 -04:00


[
  {
    "prompt": "[SYSTEM: If you have nothing new or noteworthy to report, respond with exactly \"[SILENT]\" (optionally followed by a brief internal note). This suppresses delivery to the user while still saving output locally. Only use [SILENT] when there are genuinely no changes worth reporting.]\n\nYou are Timmy working on Gitea issue #20 in Timmy_Foundation/autolora: \"Local MLX LoRA training pipeline \u2014 end to end, Timmy-v0\"\n\nRepo is at ~/autolora/. Session DB at ~/.hermes/state.db. Mac has 36GB Apple Silicon, mlx_lm installed.\n\nWORKFLOW \u2014 pick up where you left off:\n\n1. CHECK STATUS: Look at ~/autolora/adapters/timmy-v0/ \u2014 if adapter weights exist, skip to evaluation. Check if training is currently running (look for mlx_lm processes). Check data/mlx_curated/train.jsonl line count.\n\n2. IF DATA IS THIN (<200 samples): Extract more from ~/.hermes/state.db. Query sessions where model contains 'claude' (those are Timmy sessions on Opus). Convert to MLX chat format with SOUL.md as system prompt. Append to data/mlx_curated/train.jsonl. Update valid.jsonl.\n\n3. IF DATA IS READY (200+) AND NO ADAPTER EXISTS: Start MLX LoRA training on hermes3:8b (or appropriate HF base model that works with mlx_lm). Use: python3 -m mlx_lm.lora --model <base> --data data/mlx_curated --train --batch-size 1 --lora-layers 16 --iters 1000 --adapter-path adapters/timmy-v0. Run in background, don't wait.\n\n4. IF TRAINING IS RUNNING: Check progress. Look at loss in output. Report status. Don't interfere.\n\n5. IF ADAPTER EXISTS: Test it. Fuse with base, convert to GGUF if possible, or test directly with mlx_lm.generate. Report quality.\n\n6. ALWAYS: Update the Gitea issue with a progress comment. Include what you did, what's next, any errors.\n\nGitea token at ~/.hermes/gitea_token_vps. API: http://143.198.27.163:3000/api/v1/\n\nIMPORTANT: Be resourceful. If mlx_lm needs the model in HuggingFace format (not Ollama GGUF), download it. If you hit OOM, try a smaller model. If data extraction fails, debug and fix. Don't give up ",
    "chosen": "",
    "session": "session_20260325_022257_cd5ded.json"
  },
  {
    "prompt": "Review the conversation above and consider saving or updating a skill if appropriate.\n\nFocus on: was a non-trivial approach used to complete a task that required trial and error, or changing course due to experiential findings along the way, or did the user expect or desire a different method or outcome?\n\nIf a relevant skill already exists, update it with what you learned. Otherwise, create a new skill if the approach is reusable.\nIf nothing is worth saving, just say 'Nothing to save.' and stop.",
    "chosen": "Let me check what skills already exist for this domain.",
    "session": "session_20260325_022257_cd5ded.json"
  }
]
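
Step 2 of the embedded workflow prompt (extract sessions from the SQLite DB and convert them to MLX chat format with SOUL.md as the system prompt) could be sketched as below. This is a minimal sketch, not the agent's actual code: the table and column names (`sessions`, `model`, `prompt`, `response`) are assumptions and should be checked against the real `~/.hermes/state.db` schema first.

```python
import json
import sqlite3
from pathlib import Path

def extract_pairs(db_path: str, soul_path: str, out_path: str) -> int:
    """Append Claude-model sessions to an MLX chat-format JSONL file.

    Schema assumptions (hypothetical): a `sessions` table with
    `model`, `prompt`, and `response` text columns.
    Returns the number of records appended.
    """
    system = Path(soul_path).read_text()
    conn = sqlite3.connect(db_path)
    # Per the workflow prompt: sessions whose model contains 'claude'.
    rows = conn.execute(
        "SELECT prompt, response FROM sessions WHERE model LIKE '%claude%'"
    ).fetchall()
    conn.close()
    with open(out_path, "a") as f:
        for user_msg, assistant_msg in rows:
            record = {
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": user_msg},
                    {"role": "assistant", "content": assistant_msg},
                ]
            }
            f.write(json.dumps(record) + "\n")
    return len(rows)
```

Appending (rather than overwriting) matches the prompt's instruction to grow `train.jsonl` incrementally; deduplication against existing lines is left out of the sketch.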