It seems that during training, the regression prediction tensor is reshaped to [B, C, H, W], whereas at test time only half of the output tensor is kept, resulting in a [B, C/2, H, W] tensor. As in here.
Is there a reason for that, or is it a bug?
I wish to calculate the validation loss, but that is not possible with the current setting. Should it be changed from
`ord_prob = F.softmax(x, dim=1)[:, 0, :, :, :]`
to
`prob = F.log_softmax(x, dim=1).view(N, C, H, W)`?
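For context, here is a minimal sketch of the ordinal regression head as I understand it, reconstructed from the two snippets above (the exact reshaping and indexing in the repo may differ; the class name and shapes here are my assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalRegressionLayer(nn.Module):
    """DORN-style ordinal regression head (a sketch, not the repo's exact code).

    Assumes x has shape [N, 2 * ord_num, H, W]: each of the ord_num ordinal
    thresholds gets a pair of binary logits.
    """

    def forward(self, x):
        N, C, H, W = x.size()
        ord_num = C // 2
        # Pair up the channels: [N, 2, ord_num, H, W]; softmax over the
        # binary dimension gives one probability per threshold.
        x = x.view(N, 2, ord_num, H, W)

        if self.training:
            # Training path: log-probabilities over both binary branches,
            # flattened back to [N, C, H, W] for the ordinal loss.
            prob = F.log_softmax(x, dim=1).view(N, C, H, W)
            return prob

        # Test path: only one of the two branches is kept, which is why the
        # output shrinks to [N, C/2, H, W].
        ord_prob = F.softmax(x, dim=1)[:, 0, :, :, :]
        ord_label = torch.sum((ord_prob > 0.5), dim=1)  # [N, H, W]
        return ord_prob, ord_label
```

In eval mode this returns probabilities and labels but not the [N, C, H, W] log-probability tensor that the loss expects, which is why the validation loss cannot be computed as-is.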
Does that mean the code should be changed to:

```python
prob = F.log_softmax(x, dim=1).view(N, C, H, W)
ord_label = torch.sum((prob > 0.5), dim=1)
return prob, ord_label
```

In other words, does that mean there is no need to distinguish between `self.training` and `not self.training`?
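One caveat with the snippet above: if `prob` holds log-probabilities, `(prob > 0.5)` can never be true, since log-probabilities are at most 0, so `ord_label` would always come out zero. A variant that drops the `self.training` branch and still supports both the loss and label decoding could look like this (a sketch under the same shape assumptions as above, not the repo's code; which of the two branches encodes "greater than threshold" depends on the repo's convention):

```python
def forward(self, x):
    N, C, H, W = x.size()
    ord_num = C // 2
    x = x.view(N, 2, ord_num, H, W)

    # Full log-probabilities, flattened to [N, C, H, W]: what the ordinal
    # loss consumes, in training and validation alike.
    log_prob = F.log_softmax(x, dim=1).view(N, C, H, W)

    # Decode labels from probabilities (not log-probabilities), so that
    # the 0.5 threshold is meaningful.
    ord_prob = F.softmax(x, dim=1)[:, 0, :, :, :]  # [N, ord_num, H, W]
    ord_label = torch.sum((ord_prob > 0.5), dim=1)  # [N, H, W]

    return log_prob, ord_label
```

With a single return signature like this, the validation loss can be computed the same way as the training loss from `log_prob`, while `ord_label` remains available for metrics.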