[CONFIG] Migrate autolora data and configs into timmy-config/training/ #556
Per the direction shift (#542). The autolora repo is ~1,500 lines of custom pipeline code wrapping tools that already exist. The valuable parts are the data and configs — those belong in timmy-config.
## What moves to `timmy-config/training/`

**Data (yours, irreplaceable):**

- `data/curated_dataset.jsonl` — 26 hand-crafted exemplar conversations (crisis, pastoral, sovereignty, honesty, concision)
- `data/preference_pairs.jsonl` — DPO preference pairs
- `data/mlx_curated/` — train/valid splits
- `eval/prompts_vibes.yaml` — custom vibe-check prompts
- `eval/prompts_nexus_vibes.yaml` — Nexus-specific eval prompts

**Config (replaces custom code):**

- `training/axolotl.yaml` — replaces `train_modal.py` (239 lines → ~20-line config)
- `training/eval-tasks.yaml` — replaces `eval/run_eval.py` (300 lines → lm-evaluation-harness task definition)
- `training/Makefile` — 3 targets:
  - `make train` → calls `axolotl` or `mlx-lm lora` with the YAML
  - `make eval` → calls `lm-eval` with the task YAML against Ollama
  - `make vibes` → pipes prompts through `ollama run` and diffs the output

**Thin glue (keep as small scripts):**

- `training/ingest_trajectories.py` — quality filter for heartbeat-cycle data (174 lines, domain-specific, keep)
- `training/build_curated.py` — the exemplar generator (this is data authoring, keep)

## What gets deleted (replaced by imports)

- `train_modal.py` → `axolotl` handles Modal/cloud GPU dispatch natively
- `scripts/convert_to_mlx.py` → `mlx-lm` accepts chat format directly
- `eval/run_eval.py` → `lm-evaluation-harness` with custom YAML tasks
- `eval/run_vibes.py` → shell loop over `ollama run`
- `eval/compare.py` → `lm-eval` comparison or `diff`

## Resulting structure
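Reconstructed from the file list above (the committed tree may differ in detail), the layout would look roughly like:

```
timmy-config/training/
├── data/
│   ├── curated_dataset.jsonl
│   ├── preference_pairs.jsonl
│   └── mlx_curated/
├── eval/
│   ├── prompts_vibes.yaml
│   └── prompts_nexus_vibes.yaml
├── axolotl.yaml
├── eval-tasks.yaml
├── Makefile
├── ingest_trajectories.py
└── build_curated.py
```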
Requirements
No other dependencies. No custom training harness. No custom eval framework.
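To illustrate how small the replacement harness is, a ~20-line `axolotl.yaml` along these lines covers what `train_modal.py` did. Field names follow axolotl's config schema, but the base model, dataset `type`, and hyperparameter values below are illustrative assumptions, not the committed file:

```yaml
# Sketch only: values are placeholders, not the committed config.
base_model: mistralai/Mistral-7B-Instruct-v0.2  # assumption: whichever base model timmy uses
datasets:
  - path: data/curated_dataset.jsonl
    type: chat_template        # assumption; depends on the JSONL conversation schema
adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 2.0e-4
output_dir: ./outputs/lora
```

`make train` then reduces to invoking `axolotl` (or `mlx-lm lora`) with this file; all dispatch and checkpointing logic lives in the tool, not in our code.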
Done. training/ directory committed to timmy-config with data, configs, and Makefile. Commit: 6507cff
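For reference, the vibes loop that replaces `eval/run_vibes.py` is only a few lines of shell. A hedged sketch, with `cat` standing in for `ollama run <model>` so it runs without Ollama, and two prompts inlined instead of read from `eval/prompts_vibes.yaml`:

```shell
#!/bin/sh
# Sketch of the `make vibes` loop: run each prompt through a model CLI
# and collect one output file per prompt for diffing.
set -eu

# Stand-in command; the real Makefile would set MODEL_CMD="ollama run <model>".
MODEL_CMD="${MODEL_CMD:-cat}"

mkdir -p vibes_out

# Prompts inlined for the sketch; in practice they come from eval/prompts_vibes.yaml.
printf '%s\n' "hello world" "be concise" > prompts.txt

while IFS= read -r prompt; do
  slug=$(printf '%s' "$prompt" | tr ' ' '_')
  printf '%s' "$prompt" | $MODEL_CMD > "vibes_out/$slug.txt"
done < prompts.txt

# A diff step would then compare against a saved baseline directory, e.g.:
#   diff -r baseline/ vibes_out/
```

The diff against a baseline directory is what `eval/compare.py` used to do; plain `diff -r` covers it.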