Feature request: Stop when loss reaches X #142

Open
tensiondriven opened this issue Jul 23, 2023 · 1 comment

@tensiondriven
Several people on Reddit have been mentioning ideal loss ranges for training. Some training tools have an option to "stop training when loss reaches X". I would love to use this feature with alpaca_lora_4bit, so I wouldn't have to guess at the ideal loss, save a lot of checkpoints, etc.

Would this be feasible to implement? I may be missing a detail that would make it impractical.

For example, I would love to be able to specify:

  --stop-at-loss 1.5
@johnsmith0031
Owner

The finetune scripts use transformers.Trainer, so you can just use its callback feature.
You can adjust finetune.py like this:

import transformers

class MyCallback(transformers.TrainerCallback):
    "A callback that stops training once the logged training loss drops below a threshold"

    def on_step_end(self, args, state, control, **kwargs):
        # Not every log entry contains 'loss' (evaluation entries log
        # 'eval_loss' instead), so guard against a missing key.
        if len(state.log_history) > 0:
            loss = state.log_history[-1].get('loss')
            if loss is not None and loss < 1.5:
                control.should_training_stop = True

...

trainer = transformers.Trainer(
        ...
        callbacks=[MyCallback],
    )

You can also check the documentation:
https://huggingface.co/docs/transformers/main_classes/callback
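If you want the threshold to be configurable like the --stop-at-loss flag requested above, here is a minimal sketch; the argparse wiring and the StopAtLossCallback name are assumptions for illustration, not part of the current finetune.py:

import argparse
import transformers

class StopAtLossCallback(transformers.TrainerCallback):
    "Stops training once the logged training loss drops below `threshold`"

    def __init__(self, threshold):
        # threshold comes from the hypothetical --stop-at-loss flag below
        self.threshold = threshold

    def on_step_end(self, args, state, control, **kwargs):
        if len(state.log_history) > 0:
            loss = state.log_history[-1].get('loss')
            if loss is not None and loss < self.threshold:
                control.should_training_stop = True

parser = argparse.ArgumentParser()
parser.add_argument('--stop-at-loss', type=float, default=None,
                    help='stop training once loss falls below this value')
cli_args = parser.parse_args()

# Only attach the callback when the flag was given.
callbacks = [] if cli_args.stop_at_loss is None else [StopAtLossCallback(cli_args.stop_at_loss)]

trainer = transformers.Trainer(
    # ... model, datasets, and the other existing arguments ...
    callbacks=callbacks,
)

Trainer accepts both callback classes and instances in the callbacks list; passing an instance here is what lets the callback carry the user-supplied threshold.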
