Hi, I've run your code and reproduced the performance reported in your paper. Very impressive work! A few things still puzzle me, though:

1. The training accuracy is actually lower than the validation accuracy, which is the opposite of what we usually see. Do you know why?
2. There is only a single basis per layer for the weights, and it stays fixed during training. Have you tried a filter-wise basis, or a learnable one?
3. Have you experimented with much lower bit-widths, such as w2a2 or w1a1?

Hope to hear from you, thanks very much!
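To make point 2 concrete, here is a minimal sketch of the difference I mean between a single layer-wise basis and a filter-wise basis. This is a hypothetical uniform symmetric quantizer for illustration only (`quantize`, the scale choices, and the 4-level grid are my assumptions, not your paper's actual quantizer):

```python
import numpy as np

def quantize(w, scale, levels=4):
    # Hypothetical uniform quantizer: snap w to `levels` evenly spaced
    # values in [-scale, +scale]. Not the paper's exact scheme.
    step = 2 * scale / (levels - 1)
    q = np.round((w + scale) / step) * step - scale
    return np.clip(q, -scale, scale)

np.random.seed(0)
# weight tensor of a conv layer: (out_channels, in_channels, kH, kW)
w = np.random.randn(8, 3, 3, 3)

# layer-wise basis: one scale shared by the whole weight tensor
w_layer = quantize(w, np.abs(w).max())

# filter-wise basis: one scale per output filter
scales = np.abs(w).reshape(8, -1).max(axis=1)  # shape (8,)
w_filter = np.stack([quantize(w[i], scales[i]) for i in range(8)])

# filter-wise scales adapt to each filter's range, so they usually
# (though not always) give a lower quantization error
err_layer = np.mean((w - w_layer) ** 2)
err_filter = np.mean((w - w_filter) ** 2)
```

My question is whether you tried something along these lines, or learning the basis jointly with the weights, and whether it helped.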