Per direction shift (the-nexus#542). Replaces the autolora repo (1,500 lines of custom pipeline code) with config files for existing tools:

- `axolotl.yaml`: replaces `train_modal.py` (239 lines)
- `mlx-lora.yaml`: replaces the MLX training scripts
- `eval-tasks.yaml`: replaces `run_eval.py` (300 lines)
- `Makefile`: replaces `run_vibes.py`, `compare.py`, `convert_to_mlx.py`

Data migrated as-is:

- `curated_dataset.jsonl` (26 gold-standard conversations)
- `preference_pairs.jsonl` (DPO pairs)
- `prompts_vibes.yaml`, `prompts_nexus_vibes.yaml`
- v0-baseline eval results (historical record)

Thin glue kept:

- `build_curated.py` (data authoring, not infrastructure)
- `ingest_trajectories.py` (domain-specific quality filter)

Dependencies: `pip install axolotl mlx-lm lm-evaluation-harness`
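The Makefile itself isn't reproduced here. Assuming it just shells out to the three tools, its targets might look like the sketch below; the target names, paths, and task names are illustrative assumptions, not the contents of the actual file:

```make
# Hypothetical sketch only — target names, paths, and flags are assumptions,
# not the actual Makefile in this change.
train-local:  ## LoRA fine-tune on Apple Silicon via mlx-lm
	mlx_lm.lora --config training/mlx-lora.yaml

train-cloud:  ## GPU fine-tune via axolotl (the train_modal.py replacement path)
	accelerate launch -m axolotl.cli.train training/axolotl.yaml

eval:         ## run the eval suite with lm-evaluation-harness
	lm_eval --include_path eval --tasks my-eval-task
```

The point of the pattern is that each deleted script collapses to one line invoking a maintained CLI, so the repo carries configs rather than pipeline code.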
`training/mlx-lora.yaml`:

```yaml
# MLX LoRA Training Config — Apple Silicon
# Replaces: autolora/train_modal.py for local training
#
# Usage:
#   mlx_lm.lora --config training/mlx-lora.yaml
#
# Runs on Mac M-series. No cloud needed for small models.
# v0.1 was trained this way: 1,214 samples, 7.8 GB peak, 48 tok/s on M3 Max.

model: mlx-community/Hermes-3-Llama-3.1-8B-4bit
data: data/mlx_curated
train: true
iters: 1000
batch_size: 2
learning_rate: 2e-6
lora_layers: 16
steps_per_report: 10
steps_per_eval: 100
save_every: 200
adapter_path: ./output/mlx-adapters
```
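A quick sanity check on these hyperparameters, using only the numbers already in the config and its header comment (1,214 v0.1 samples): with `iters: 1000` and `batch_size: 2`, training sees about 2,000 samples, roughly 1.6 epochs over the curated set. A minimal sketch of that arithmetic:

```python
# Sanity-check the MLX LoRA config against the v0.1 dataset size
# quoted in the header comment. Nothing here is measured; all
# inputs come straight from mlx-lora.yaml.

iters = 1000         # optimizer steps (iters)
batch_size = 2       # sequences per step (batch_size)
dataset_size = 1214  # v0.1 training samples (header comment)

samples_seen = iters * batch_size
epochs = samples_seen / dataset_size

checkpoints = iters // 200  # save_every: 200 -> adapter snapshots written
evals = iters // 100        # steps_per_eval: 100 -> validation passes

print(f"{samples_seen} samples ~= {epochs:.1f} epochs, "
      f"{checkpoints} checkpoints, {evals} eval passes")
# -> 2000 samples ~= 1.6 epochs, 5 checkpoints, 10 eval passes
```

Useful to re-run when tweaking `iters` or `batch_size`: on a dataset this small, doubling either pushes past ~3 epochs, where a LoRA at this learning rate can start memorizing.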