# πŸ• Direct Preference Optimization (DPO) finetuning
[Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290) is a novel approach to preference learning, which directly optimizes the policy without explicit reward modeling or reinforcement learning. It leverages a specific parameterization of the reward model that enables the extraction of the corresponding optimal policy in closed form. By using a simple classification loss, DPO aligns language models with human preferences, avoiding the complexity and instability often associated with RLHF.
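
To make the objective concrete, here is a minimal sketch of the DPO loss in PyTorch. It assumes the per-sequence log-probabilities of each response under the policy and the frozen reference (SFT) model have already been computed; the function and argument names are illustrative, not this repository's actual API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input has shape (batch,): the summed per-token log-probs of the
    chosen/rejected response under the policy or the frozen reference model."""
    # Implicit reward of a response: beta * log(pi(y|x) / pi_ref(y|x)).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Simple binary classification loss: push the chosen response's implicit
    # reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Note that the reference model stays frozen; only the policy receives gradients, through the `policy_*_logps` terms.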

As the paper's subtitle says, "Your Language Model is Secretly a Reward Model." Accordingly, the training arguments and the training process of DPO are mostly the same as those of the reward model, as shown in [step2 "Reward Model (RM) finetuning"](../step2_reward_model_finetuning/README.md). After DPO training, you will get a model that has been aligned with human preferences.

## πŸƒ How to train the model

We provide a training script for OPT-350m, which you can test by launching the command

```bash
training_scripts/opt/single_node/run_350m.sh
```

We also provide a script for Llama 2 (7B), which you can test by launching the command

```bash
training_scripts/llama2/run_llama2_7b.sh
```

## πŸƒ How to evaluate the DPO checkpoint?

The DPO checkpoint is just a language model, so it can be evaluated in the same way as in [step1 "Supervised Finetuning"](../step1_supervised_finetuning/README.md).

## πŸ’ Datasets

Because DPO treats the language model as a reward model, the dataset for DPO takes the same format as the one used for reward model finetuning: each item in the dataset includes one "chosen" and one "rejected" response for the same prompt.
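
For illustration, a single preference pair might look like the Python dict below; the field names follow the common prompt/chosen/rejected convention, and the exact schema depends on the dataset you plug in.

```python
# A hypothetical preference-pair item: one prompt, a preferred ("chosen")
# response, and a dispreferred ("rejected") response.
sample = {
    "prompt": "Human: How do I bake bread at home?\n\nAssistant:",
    "chosen": " Start with flour, water, yeast, and salt; knead, proof, and bake ...",
    "rejected": " I can't help with that.",
}
```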
