Compare commits


2 Commits

Author · SHA1 · Message · Date
Alexander Whitestone · 893520b7ba · [gemini] Pull Hermes 4 14B — inference (GGUF) + training (MLX) models (#9) · 2026-03-26 12:41:07 -04:00
Alexander Whitestone · f9155b28e3 · v1.0 rejected — NaN from wrong tokenizer, Morrowind MCP pipeline working · 2026-03-26 12:32:08 -04:00
2 changed files with 5 additions and 3 deletions

.gitignore

@@ -8,3 +8,4 @@
 *.db-wal
 *.db-shm
 __pycache__/
+.aider*


@@ -55,7 +55,8 @@ adapters:
   timmy-v1.0:
     base: hermes4-14b-4bit
     date: 2026-03-26
-    status: training
-    data: 1125 train / 126 valid (same curated set, reused)
+    status: rejected
+    data: 1125 train / 126 valid (same curated set, reused from 8B — NOT re-tokenized)
     training: { lr: 1e-6, rank: 16, iters: 800 }
-    notes: "First 14B adapter. Conservative lr for new arch."
+    eval: "Val NaN iter 100, train NaN iter 160. Dead."
+    notes: "Data was pre-truncated for Llama3 tokenizer, not Qwen3. Must re-run clean_data.py with 14B tokenizer before v1.1."
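The rejection notes point at a tokenizer mismatch: data truncated to a token budget measured with the Llama3 tokenizer can exceed that budget when re-encoded by the Qwen3 tokenizer, and overlong batches are a common trigger for NaN losses. A minimal sketch of the re-truncation step this implies — `retruncate` is a hypothetical helper, not the repo's actual `clean_data.py`, and the whitespace tokenizer below is only a stand-in for the real Qwen3 tokenizer:

```python
from typing import Callable, List

def retruncate(texts: List[str],
               encode: Callable[[str], list],
               decode: Callable[[list], str],
               max_tokens: int) -> List[str]:
    """Re-truncate every example against the *target* model's tokenizer.

    Truncating with tokenizer A and training with tokenizer B is unsafe:
    the same text can encode to more tokens under B, overflowing the
    context budget the training config assumes.
    """
    out = []
    for text in texts:
        ids = encode(text)
        if len(ids) > max_tokens:
            text = decode(ids[:max_tokens])  # drop the overflow
        out.append(text)
    return out

# Stand-in tokenizer for illustration: one token per whitespace-separated word.
def encode(s): return s.split()
def decode(ids): return " ".join(ids)

clean = retruncate(["a b c d e", "a b"], encode, decode, max_tokens=3)
```

With the stand-in tokenizer, the five-word example is cut back to three tokens while the short example passes through untouched; swapping in the real 14B tokenizer's `encode`/`decode` gives the behavior the v1.1 note calls for.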