I've noticed that the maximum number of iterations is different for each scale. I reduced them and made them equal across all scales, but the results turned out to be noisier than with the default config.
Is there an efficient way to adjust these numbers so training runs faster without losing performance?
The value at the last scale is 200000, which I assume is the training data size n = 200000. If my dataset has fewer than 200000 samples, should I change this value to match my dataset size?