Custom Train Loop and Validation Loss? #2468
Unanswered
Futuramistic asked this question in Q&A
Replies: 1 comment · 1 reply
Version used: 0.x

Hello! I was wondering about two things:

1. Why would you write a custom loop? I want to extract the accuracy and loss and monitor them with TensorBoard. I am more familiar with PyTorch than with the framework itself, so writing my own loop seemed like a natural step. If you see any possible improvements to this loop, please let me know.

2. For validation, I followed the same target and target-weight generation procedure as my train pipeline. What I find odd is that when I use a validation loop similar to the training loop from question 1 (with no gradient descent, obviously), the validation loss increases with the epochs. Is this due to the custom loop or to the target generation? It is strange because the validation accuracy also increases.

Maybe it is overfitting? I thought so, but the trend is there from the first epoch onwards, and I don't think I would be overfitting that early.
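For reference, here is a minimal sketch of the kind of custom train/validation loop described above, written in plain PyTorch with TensorBoard logging. The model, data, and hyperparameters below are dummy placeholders standing in for the asker's actual setup, not their real code:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

# Dummy model and data for illustration; replace with the real pipeline.
model = nn.Linear(16, 10)
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 16), torch.randint(0, 10, (256,))),
    batch_size=32, shuffle=True)
val_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 10, (64,))),
    batch_size=32)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
writer = SummaryWriter(log_dir='runs/custom_loop')

def run_epoch(loader, train):
    """One pass over `loader`; updates weights only when `train` is True."""
    model.train(train)
    total_loss, correct, seen = 0.0, 0, 0
    with torch.set_grad_enabled(train):
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = model(inputs)
            loss = criterion(outputs, targets)
            if train:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            total_loss += loss.item() * inputs.size(0)
            correct += (outputs.argmax(dim=1) == targets).sum().item()
            seen += inputs.size(0)
    return total_loss / seen, correct / seen

for epoch in range(10):
    train_loss, train_acc = run_epoch(train_loader, train=True)
    val_loss, val_acc = run_epoch(val_loader, train=False)
    # Scalars appear in TensorBoard under these tags.
    writer.add_scalar('loss/train', train_loss, epoch)
    writer.add_scalar('loss/val', val_loss, epoch)
    writer.add_scalar('acc/train', train_acc, epoch)
    writer.add_scalar('acc/val', val_acc, epoch)
writer.close()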
Reply:

log_config = dict(
    interval=50,  # log every 50 training iterations
    hooks=[
        dict(type='TextLoggerHook'),         # plain-text logs to the console and work dir
        dict(type='TensorboardLoggerHook')   # writes scalars as TensorBoard event files
    ])
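This log_config block belongs in the experiment's config file. With TensorboardLoggerHook enabled, the runner writes the logged scalars (such as loss and accuracy) as TensorBoard event files, typically under the run's work directory, so they can be inspected with tensorboard --logdir <work_dir> without writing a custom loop.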