size mismatch for transformer.text_embed.text_embed.weight: copying a param with shape torch.Size([2546, 100]) from checkpoint, the shape in current model is torch.Size([2546, 512]) #740
Labels
bug
Something isn't working
Checks
Environment Details
Windows 10, Python 3.9.5
Steps to Reproduce
size mismatch for transformer.text_embed.text_embed.weight: copying a param with shape torch.Size([2546, 100]) from checkpoint, the shape in current model is torch.Size([2546, 512]).
size mismatch for transformer.input_embed.proj.weight: copying a param with shape torch.Size([1024, 300]) from checkpoint, the shape in current model is torch.Size([1024, 712]).
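These errors come from `load_state_dict` finding parameters whose shapes in the checkpoint differ from the freshly built model (e.g. an embedding dimension of 100 in the checkpoint vs. 512 in the current config). A minimal sketch reproducing this class of error and listing the disagreeing keys before loading (the `nn.Embedding` sizes here mirror the reported shapes, but the module and config names are illustrative, not the project's real model):

```python
import torch
import torch.nn as nn

# Hypothetical reproduction: the checkpoint was saved from a model built
# with embed_dim=100, while the current model is built with embed_dim=512.
ckpt_model = nn.Embedding(2546, 100)   # shapes as saved in the checkpoint
cur_model = nn.Embedding(2546, 512)    # shapes in the current model

state = ckpt_model.state_dict()
msg = ""
try:
    cur_model.load_state_dict(state)   # raises: size mismatch for weight
except RuntimeError as e:
    msg = str(e)

# Inspect which keys disagree before loading, instead of failing midway:
cur_state = cur_model.state_dict()
mismatched = {
    k: (tuple(v.shape), tuple(cur_state[k].shape))
    for k, v in state.items()
    if k in cur_state and v.shape != cur_state[k].shape
}
print(mismatched)  # {'weight': ((2546, 100), (2546, 512))}
```

A mismatch like this usually means the config used to build the model (embedding/hidden sizes) does not match the config the checkpoint was trained with, so the fix is typically to align the config rather than to skip the keys.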
✔️ Expected Behavior
Training should start normally.
❌ Actual Behavior
size mismatch for transformer.text_embed.text_embed.weight: copying a param with shape torch.Size([2546, 100]) from checkpoint, the shape in current model is torch.Size([2546, 512])