# Online RL Methods

Guide to online reinforcement learning with PPO, GRPO, RLOO, and OnlineDPO.

## Overview

Online RL methods generate completions from the current policy during training and optimize the policy on rewards computed over those completions, unlike offline methods, which train on a fixed dataset.
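
The loop is the same across methods: sample prompts, generate completions with the current policy, score them, and update the policy. A schematic sketch (the `generate`, `reward_fn`, and `update` callables are placeholders, not TRL APIs):

```python
# Schematic online RL training loop (placeholders, not the TRL API).
def online_rl_loop(generate, reward_fn, update, prompt_batches):
    for prompts in prompt_batches:
        # 1. Rollout: sample completions from the *current* policy.
        completions = [generate(p) for p in prompts]
        # 2. Score: compute a scalar reward per completion.
        rewards = [reward_fn(p, c) for p, c in zip(prompts, completions)]
        # 3. Update: PPO/GRPO/etc. turn rewards into a policy-gradient step.
        update(prompts, completions, rewards)
```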

## PPO (Proximal Policy Optimization)

The classic policy-gradient algorithm for LLM alignment: a clipped surrogate objective plus a KL penalty against a reference model, with a learned value model for advantage estimation.

### Basic Usage

```bash
python -m trl.scripts.ppo \
    --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct \
    --reward_model_path reward-model \
    --dataset_name trl-internal-testing/descriptiveness-sentiment-trl-style \
    --output_dir model-ppo \
    --learning_rate 3e-6 \
    --per_device_train_batch_size 64 \
    --total_episodes 10000 \
    --num_ppo_epochs 4 \
    --kl_coef 0.05
```

### Key Parameters

- `kl_coef`: KL penalty coefficient (typical 0.05-0.2)
- `num_ppo_epochs`: optimization epochs per generated batch (2-4)
- `cliprange`: PPO clipping range (0.1-0.3)
- `vf_coef`: value-function loss coefficient (0.1)
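
These flags map directly onto TRL's `PPOConfig` fields. A minimal sketch using the values from the command above (field names assumed to match current TRL; tune per your setup):

```python
from trl import PPOConfig

# Mirrors the CLI flags above; the ranges in the list are typical, not required.
config = PPOConfig(
    output_dir="model-ppo",
    learning_rate=3e-6,
    per_device_train_batch_size=64,
    total_episodes=10_000,  # generated episodes, not dataset epochs
    num_ppo_epochs=4,       # optimization passes over each generated batch
    kl_coef=0.05,           # strength of the KL penalty to the reference model
    cliprange=0.2,          # PPO clipping range
    vf_coef=0.1,            # value-function loss weight
)
```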

## GRPO (Group Relative Policy Optimization)

Memory-efficient online RL that skips PPO's separate value model.
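
Instead, GRPO samples a group of `num_generations` completions per prompt and standardizes each completion's reward against its group to get an advantage. A minimal sketch of that computation:

```python
import statistics

def group_relative_advantages(group_rewards):
    """Standardize rewards within one prompt's group of completions."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    if std == 0:  # all completions scored the same; no learning signal
        return [0.0] * len(group_rewards)
    return [(r - mean) / std for r in group_rewards]

# Example: four completions for one prompt
print(group_relative_advantages([2.0, 4.0, 6.0, 8.0]))
```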

### Basic Usage

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Reward function: TRL calls it with the batch of completions (plus the
# prompts and any dataset columns as kwargs) and expects one float per
# completion. This toy reward scores the number of unique words.
def reward_func(completions, **kwargs):
    return [float(len(set(c.split()))) for c in completions]

config = GRPOConfig(
    output_dir="model-grpo",
    num_generations=4,          # completions sampled per prompt
    max_completion_length=128,  # cap on generated tokens
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_func,
    args=config,
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)
trainer.train()
```

### Key Parameters

- `num_generations`: completions per prompt (2-8)
- `max_completion_length`: generation cap in tokens (64-256)
- `learning_rate`: 1e-5 to 1e-4
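
`reward_funcs` also accepts a list of reward functions, combined as a weighted sum via `GRPOConfig.reward_weights` (assumed field name). A sketch pairing the unique-word reward above with a hypothetical format check:

```python
# Hypothetical second reward: prefer completions that end with punctuation.
def format_reward(completions, **kwargs):
    return [1.0 if c.strip().endswith((".", "!", "?")) else 0.0 for c in completions]

config = GRPOConfig(
    output_dir="model-grpo",
    num_generations=4,
    reward_weights=[0.7, 0.3],  # weights for reward_func and format_reward
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[reward_func, format_reward],  # summed with the weights above
    args=config,
    train_dataset=load_dataset("trl-lib/tldr", split="train"),
)
```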

## Memory Comparison

| Method    | Memory (7B) | Speed  | Use Case           |
|-----------|-------------|--------|--------------------|
| PPO       | 40 GB       | Medium | Maximum control    |
| GRPO      | 24 GB       | Fast   | Memory-constrained |
| OnlineDPO | 28 GB       | Fast   | No reward model    |

## References