
When I use your RefineDet to train our data, I find the loss is very big. Why? #27

Open
lingtengqiu opened this issue Mar 8, 2019 · 3 comments

Comments

@lingtengqiu

For example, the training total_loss stays around 4.0. Why does this happen? Do you divide the loss by batch_size?
Please help me.
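
For context, here is a minimal sketch (not this repo's actual implementation; the function and argument names are illustrative) of how SSD/RefineDet-style MultiBox losses are commonly normalized: by the number of matched (positive) priors across the batch rather than by batch_size.

```python
# Minimal sketch of a MultiBox-style loss, assuming standard SSD/RefineDet conventions.
import torch
import torch.nn.functional as F

def multibox_loss(loc_pred, loc_target, conf_pred, conf_target, pos_mask):
    """loc_pred/loc_target: (N, num_priors, 4); conf_pred: (N, num_priors, C);
    conf_target: (N, num_priors) class indices; pos_mask: (N, num_priors) bool."""
    # Normalize by the number of matched priors, not by batch size.
    num_pos = pos_mask.sum().clamp(min=1)

    # Localization loss only over positive (matched) priors.
    loss_l = F.smooth_l1_loss(loc_pred[pos_mask], loc_target[pos_mask], reduction='sum')

    # Confidence loss over all priors here for brevity (real implementations
    # typically apply hard negative mining to pick a subset of negatives).
    loss_c = F.cross_entropy(conf_pred.view(-1, conf_pred.size(-1)),
                             conf_target.view(-1), reduction='sum')

    # Both terms are divided by the number of positives.
    return loss_l / num_pos, loss_c / num_pos
```

With this kind of normalization, a total loss in the single digits after the first epochs is plausible, so a value around 4.0 is not necessarily a sign that something is wrong.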

@yqyao
Owner

yqyao commented Mar 8, 2019

At the beginning of training, that's OK. After several epochs, the loss will decrease. @lingtengqiu

@lingtengqiu
Author

I tried. At the beginning of training the loss is 16, and at the end of the epoch the loss is 4.0.

@rw1995

rw1995 commented Apr 9, 2019

I tried. At the beginning of training the loss is 16, and at the end of the epoch the loss is 4.0.

May I ask how the data is merged? I have tried many methods, but none of them were correct. Could you explain in detail how you did it? @lingtengqiu
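
It is not clear from the thread exactly what "merging the data" refers to. Assuming it means combining several datasets into a single training set, a minimal PyTorch sketch (the toy dataset and collate function below are placeholders, not this repo's code) could look like this:

```python
# Minimal sketch: combining multiple detection datasets with ConcatDataset.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ToyDetectionDataset(Dataset):
    """Placeholder dataset returning (image_tensor, targets) pairs."""
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        image = torch.zeros(3, 320, 320)                    # dummy image
        targets = torch.tensor([[0.1, 0.1, 0.5, 0.5, 1.0]])  # [x1, y1, x2, y2, label]
        return image, targets

def detection_collate(batch):
    """Stack images; keep per-image target tensors (box counts vary per image)."""
    images = torch.stack([item[0] for item in batch], dim=0)
    targets = [item[1] for item in batch]
    return images, targets

# Combine two datasets into a single training set.
merged = ConcatDataset([ToyDetectionDataset(100), ToyDetectionDataset(200)])
loader = DataLoader(merged, batch_size=8, shuffle=True, collate_fn=detection_collate)
```

If the question is instead about converting custom annotations into the format the repo expects (e.g. VOC-style), that is a different step and would need details from the authors.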

3 participants