Several people on Reddit have been mentioning ideal loss ranges for training. Some training tools have an option to "stop training when loss reaches X". I would love to use this feature with alpaca_lora_4bit, so I wouldn't have to guess at the ideal loss, save a lot of checkpoints, etc.
Would this be feasible to implement? I may be missing some detail that would prevent it from working.
For example, I would love to be able to specify:
--stop-at-loss 1.5
The finetune scripts use transformers.Trainer, so you can use its built-in callback mechanism.
You can adjust finetune.py along these lines:
import transformers

class MyCallback(transformers.TrainerCallback):
    "A callback that stops training once the logged loss drops below a threshold"

    def on_step_end(self, args, state, control, **kwargs):
        if len(state.log_history) > 0:
            # Use .get() because the last log entry may not contain a 'loss' key
            # (for example, an eval log).
            if state.log_history[-1].get('loss', float('inf')) < 1.5:
                control.should_training_stop = True

...
trainer = transformers.Trainer(
    ...
    callbacks=[MyCallback],
)
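If you want the threshold to come from the command line, as the original request asks (--stop-at-loss 1.5), a minimal sketch could look like the following. The StopAtLossCallback name and the argparse wiring are assumptions for illustration, not part of alpaca_lora_4bit's current finetune.py:

import argparse
import transformers

class StopAtLossCallback(transformers.TrainerCallback):
    "Stops training once the most recently logged loss falls below a threshold."

    def __init__(self, threshold):
        self.threshold = threshold

    def on_step_end(self, args, state, control, **kwargs):
        if state.log_history and state.log_history[-1].get('loss', float('inf')) < self.threshold:
            control.should_training_stop = True

# Hypothetical wiring inside finetune.py's argument parsing:
parser = argparse.ArgumentParser()
parser.add_argument('--stop-at-loss', type=float, default=None,
                    help='Stop training once the logged loss drops below this value')
cli_args, _ = parser.parse_known_args()

callbacks = []
if cli_args.stop_at_loss is not None:
    callbacks.append(StopAtLossCallback(cli_args.stop_at_loss))

# Then pass callbacks=callbacks to transformers.Trainer(...) as shown above.

Note that the loss check only fires on steps where the Trainer has actually logged, so the stopping point depends on the logging_steps setting.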