I'm doing a LoRA PEFT of GPT-2 through TRL and have noticed that my trained model assigns very low probability to the EOS token, which causes it to always generate the maximum number of tokens.
After trying a few different fixes, I ran the code without the PEFT option and just used the base model. The problem resolved immediately.
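To illustrate the symptom: once the EOS probability is suppressed, greedy decoding never stops early and always runs to the token cap. A toy sketch in plain numpy (the tiny vocabulary and the fixed fake logits are made up for illustration, not from my model):

```python
import numpy as np

EOS_ID, MAX_NEW_TOKENS = 4, 8

def generate(eos_logit):
    """Greedy decode from a toy 'model' whose EOS logit is held fixed."""
    out = []
    for _ in range(MAX_NEW_TOKENS):
        logits = np.array([1.0, 0.5, 0.0, 0.0, eos_logit])
        tok = int(np.argmax(logits))
        out.append(tok)
        if tok == EOS_ID:  # a healthy model can stop early
            break
    return out

print(len(generate(5.0)))   # EOS likely: stops immediately -> 1
print(len(generate(-5.0)))  # EOS suppressed: runs to the cap -> 8
```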
To make the comparison clear, I created a toy case with a dataset that contains the same datapoint ("Hello <|endoftext|>") repeatedly. I then overfit on this dataset with a small batch size for a few dozen iterations. To see the effect on the probability of generating the eos_token, I inserted the following code fragment into my compute_metrics method:
import numpy as np
import torch

logits, labels = eval_preds
# positions in the label tensor where the EOS token appears
eos_indices = np.where(labels == tokenizer.eos_token_id)
model_distribution = torch.softmax(torch.tensor(logits), dim=-1).numpy()
# the -1 column works because GPT-2's eos_token_id (50256) is the last vocab entry
eos_probs = model_distribution[eos_indices[0], eos_indices[1], -1]
eos_probs = [format(x * 100, '.3f') for x in eos_probs.tolist()]
print('eos probs:', eos_probs)
Basic full finetuning results in the EOS token probability converging to 1 almost immediately as the model memorizes the locations of the EOS tokens. However, if I use TRL's code for a LoRA PEFT, the printed values remain close to zero and don't increase at all.
I've seen some references online suggesting that this could be caused by LoRA not updating the model's embedding matrix, so I added the following change to the peft_config: peft_config.modules_to_save = ["wte"]. This doesn't have any effect on the results. I'm also doubtful this is the cause, as when I run the supervised finetuning I don't see any change in the embedding matrix but get the desired results anyway.
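For concreteness, here is roughly how that change sits in a peft LoraConfig. Only modules_to_save is the change under discussion; the rank, alpha, and target modules are placeholder values, not my exact settings:

```python
from peft import LoraConfig

# placeholder hyperparameters; modules_to_save is the change being tested
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
peft_config.modules_to_save = ["wte"]  # also train/save the GPT-2 token embedding
```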
Any help would be appreciated, as I would like to avoid a full finetuning but right now have no way of getting a functional model with a PEFT.
Running into this same issue myself. Without PEFT, EOS is predicted fine; with PEFT, EOS is not predicted at all, which causes the text generation pipeline to continue until max_tokens.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Reproduction
Use the following model_config (note the PEFT parameters) and training arguments:
Create dataset:
Set up custom evaluation function:
Instantiate and run SFTTrainer
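The steps above, wired together as a sketch. The dataset size, hyperparameters, and LoRA settings here are assumptions for illustration, not the exact values from my run:

```python
from datasets import Dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# toy dataset: the same datapoint repeated (size is illustrative)
train_dataset = Dataset.from_dict({"text": ["Hello <|endoftext|>"] * 64})

trainer = SFTTrainer(
    model="gpt2",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="sft-gpt2-lora", per_device_train_batch_size=4),
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
)
trainer.train()
```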
The eos_probs printed in compute_metrics will be near-zero.
Expected behavior
I would expect the above code to result in eos_probs values being nearly 1 after a few training iterations.