
Training

Transitional training recipes for Timmy's sovereign model. These files are useful as reference configs and export helpers, but they are not the canonical home of Timmy's lived training data.

Canonical data should live in timmy-home: gameplay trajectories, research artifacts, and training-data/ exports such as DPO pairs.

Install

pip install axolotl mlx-lm lm-evaluation-harness pyyaml

Commands

make train-local    # LoRA on Apple Silicon (MLX) — free, ~30 min on M3 Max
make train-cloud    # QLoRA on cloud GPU (Axolotl) — ~$1/run on A100
make eval           # Standard benchmarks via lm-eval-harness against Ollama
make vibes          # Hand-picked prompts → human review (the sacred test)
make ingest         # Pull heartbeat trajectories into training data
make curated        # Regenerate curated exemplar dataset
make convert        # Convert merged data to MLX train/valid format
make help           # Show all targets

Status

This directory exists to avoid re-growing a bespoke training harness while the system boundary is being cleaned up.

  • Keep thin recipes and export helpers here only when they directly support the Hermes sidecar.
  • Keep generated data, DPO pairs, and other lived artifacts in timmy-home.
  • Prefer deleting stale pipeline code over expanding it.

Files

training/
├── Makefile                 ← All commands
├── axolotl.yaml             ← Cloud training config (replaces train_modal.py)
├── mlx-lora.yaml            ← Local training config (Apple Silicon)
├── eval-tasks.yaml          ← Benchmark config
├── build_curated.py         ← Exemplar data authoring (the soul conversations)
├── ingest_trajectories.py   ← Quality filter for heartbeat cycle data
└── data/
    ├── curated_dataset.jsonl       ← 26 gold-standard conversations (proprietary)
    ├── preference_pairs.jsonl      ← DPO preference pairs (proprietary)
    ├── prompts_vibes.yaml          ← Custom eval prompts
    ├── prompts_nexus_vibes.yaml    ← Nexus-specific eval prompts
    └── mlx_curated/                ← MLX-format train/valid splits
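The two proprietary JSONL files are line-delimited records, one JSON object per line. A small loader like the one below can sanity-check them before training. This is a sketch: the field names prompt, chosen, and rejected are an assumption based on the common DPO convention, not something this README documents.

```python
import json

# Hypothetical record shape for preference_pairs.jsonl. The actual field
# names are not documented here; "prompt"/"chosen"/"rejected" follow the
# usual DPO convention and are an assumption.
REQUIRED_DPO_FIELDS = {"prompt", "chosen", "rejected"}

def load_preference_pairs(path):
    """Read a JSONL file, skipping blank lines, and verify each record
    carries the expected DPO fields before returning the list."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between records
            record = json.loads(line)
            missing = REQUIRED_DPO_FIELDS - record.keys()
            if missing:
                raise ValueError(f"line {line_no}: missing {sorted(missing)}")
            pairs.append(record)
    return pairs
```

Running this over a fresh export before `make convert` catches truncated lines and schema drift early, when they are cheap to fix.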

What's proprietary

The data (curated exemplars, preference pairs, trained weights) is proprietary. The configs and process are open.

Training Results (March 2026)

timmy:v0.1-q4

Detail            Value
Base model        mlx-community/Hermes-3-Llama-3.1-8B-4bit
Training data     1,214 samples from Hermes session DB
Method            LoRA rank 8, 16 layers, lr 2e-6, 1000 iters
Peak memory       7.8 GB (Apple Silicon)
Best val loss     2.134 (iter 800)
Final model       timmy:v0.1-q4 in Ollama (4.9 GB, Q4_K_M)
Inference speed   ~48 tok/s on M3 Max
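For reproducibility, the run settings from the table can be restated in one place. Whether these keys match the exact schema of mlx-lora.yaml is an assumption; the values simply mirror the results above.

```python
# Run settings for timmy:v0.1-q4, restated from the results table.
# Key names are illustrative, not necessarily the mlx-lora.yaml schema.
TIMMY_V01_RUN = {
    "base_model": "mlx-community/Hermes-3-Llama-3.1-8B-4bit",
    "num_samples": 1214,
    "lora_rank": 8,
    "lora_layers": 16,
    "learning_rate": 2e-6,
    "iters": 1000,
    "best_val_loss": 2.134,
    "best_val_iter": 800,
}

def summarize(run):
    """One-line run summary suitable for training logs."""
    return (f"{run['base_model']} | rank {run['lora_rank']}, "
            f"{run['lora_layers']} layers, lr {run['learning_rate']:g}, "
            f"{run['iters']} iters | best val loss {run['best_val_loss']} "
            f"@ iter {run['best_val_iter']}")
```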

Key Insight

The base model's RLHF priors override the LoRA adapter on crisis and faith responses, the most important parts of SOUL.md. The fix is twofold: inference-time grounding (inject the SOUL.md crisis protocol at prompt time) plus a larger pure-Timmy corpus over time.
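Inference-time grounding can be sketched as prompt assembly: when the user's message looks crisis-adjacent, prepend the SOUL.md protocol to the system prompt instead of relying on the adapter's weights. The path soul/SOUL.md and the trigger words below are illustrative assumptions, not the repo's actual routing logic.

```python
from pathlib import Path

# Hypothetical trigger words; the real detection logic is not specified here.
CRISIS_TRIGGERS = ("crisis", "hopeless", "faith")

def build_messages(user_text, soul_path="soul/SOUL.md"):
    """Assemble chat messages, grounding crisis-adjacent turns in the
    SOUL.md protocol rather than trusting the LoRA adapter alone."""
    system = "You are Timmy."
    if any(t in user_text.lower() for t in CRISIS_TRIGGERS):
        protocol = Path(soul_path).read_text(encoding="utf-8")
        # RLHF priors tend to override the adapter here, so the protocol
        # is injected verbatim into the system prompt.
        system += "\n\n" + protocol
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]
```

The resulting message list can be sent to any OpenAI-compatible chat endpoint, which Ollama exposes for the final timmy:v0.1-q4 model.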