
Loss should be specified as either training loss or validation loss #35494

FlogramMatt opened this issue Jan 3, 2025 · 3 comments

@FlogramMatt

System Info

Doesn't apply; I'm running on RunPod.

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

As you fine-tune a new model, it shows a loss curve and loss numbers as it runs. Is that training loss or validation loss? It should be labelled.

If it is the training loss, how can I view the validation loss? I set 'val size', so it should exist somewhere.
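For what it's worth, here is a minimal sketch of how validation loss can be surfaced when fine-tuning with the Hugging Face `Trainer` (this assumes the UI wraps `Trainer`; the model and dataset names below are placeholders chosen for illustration, not from the original report):

```python
# A minimal sketch, assuming the fine-tuning stack wraps transformers.Trainer.
# The model and dataset names are placeholders for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

raw = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

data = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="out",
    eval_strategy="steps",   # called "evaluation_strategy" in older transformers versions
    eval_steps=100,          # logs "eval_loss" (validation loss) every 100 steps
    logging_steps=100,       # the "loss" logged here is the *training* loss
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=data["train"],
    eval_dataset=data["test"],  # without an eval dataset, only training loss is reported
)
trainer.train()
```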

Expected behavior

The reported loss should be labeled more clearly as training loss or validation loss.

@FlogramMatt added the bug label on Jan 3, 2025
@Rajatavaa

Could you please elaborate, or provide a script to reproduce this?

@FlogramMatt (Author)

I did figure it out; it is the training loss.

What I mean is that when fine-tuning the model, there is a 'loss' graph in the bottom right (and the same numbers in the terminal output), and it's not obvious that this is the training loss rather than the validation loss. It would be nice if it were labelled more clearly.
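For reference, the two losses end up under different keys in the Trainer's log history, which is one way to confirm which curve is being shown (a sketch, assuming the transformers `Trainer` is used under the hood and that `trainer` is the instance from the earlier snippet):

```python
# A sketch: after trainer.train(), inspect the log history.
# Training steps log a "loss" key; evaluation runs log an "eval_loss" key.
for record in trainer.state.log_history:
    if "loss" in record:
        print(f"step {record['step']}: training loss = {record['loss']}")
    elif "eval_loss" in record:
        print(f"step {record['step']}: validation loss = {record['eval_loss']}")
```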

@Rajatavaa

What I mean is that if your code doesn't ask for a graph to be plotted, no graph will be shown.
And if you are plotting the graph yourself, you can label it correctly.
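If the curve is drawn manually, both losses can be pulled from the log history and labeled explicitly, e.g. (a sketch with matplotlib, again assuming the `trainer` object from the earlier snippet):

```python
# A sketch: plot training and validation loss with explicit labels.
import matplotlib.pyplot as plt

history = trainer.state.log_history
train = [(r["step"], r["loss"]) for r in history if "loss" in r]
val = [(r["step"], r["eval_loss"]) for r in history if "eval_loss" in r]

plt.plot(*zip(*train), label="training loss")   # logged every logging_steps
plt.plot(*zip(*val), label="validation loss")   # logged every eval_steps
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()
```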
