size mismatch for model.mapping #5
Comments
Hi, I tried to run the experiment with
after resetting gpus to [0], and it was successful. Would you give more details about the ESM2 checkpoint you have?
I have also encountered a similar issue. May I ask why you used the original model for downstream tasks in your answer? Shouldn't we use a pre-trained model like the protst_esm2.pth mentioned above?
Hi, sorry for the misleading comments before. I used the ESM2 checkpoint instead of the ProtST-enhanced one only to test whether the released code has any inconsistency with our development codebase. I'm now downloading the ProtST-enhanced ESM2 checkpoint onto my current working cluster and will get back to you once I have tested it.
Hi, I downloaded the ProtST-enhanced ESM2 checkpoint (https://protsl.s3.us-east-2.amazonaws.com/checkpoints/protst_esm2.pth; the URL is also stated in the README). I ran the following command on 1 GPU, and it was successful.
This suggests that the uploaded checkpoint and the released code should work fine. A potential problem I can think of is the running environment. I currently use
If you'd like, you can also provide me with the checkpoint you're using right now, so that I can use it to reproduce the issue and further investigate the reason behind it.
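As a side note, one quick way to see which alphabet size a given checkpoint was saved with is to inspect the stored shape of model.mapping directly. The snippet below is a minimal sketch, not from the original thread; the file path and the assumption that the weights sit under a "model" key are guesses and may need adjusting for the actual checkpoint layout.

import torch

# Load the checkpoint on CPU and locate the state dict; the top-level layout
# ("model" key vs. a bare state dict) is an assumption and may differ.
ckpt = torch.load("protst_esm2.pth", map_location="cpu")
state = ckpt["model"] if isinstance(ckpt, dict) and "model" in ckpt else ckpt

# Print the shape of every entry whose name ends with "mapping", e.g. model.mapping.
for name, tensor in state.items():
    if name.endswith("mapping"):
        print(name, tuple(tensor.shape))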
Hi, we just realized this mismatch is caused by the recent update of TorchDrug. In TorchDrug=0.2.1, the way to construct the variable mapping has changed, which leads to this size mismatch. The fastest solution would be rolling back to TorchDrug=0.2.0. Sorry for all the trouble!
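For anyone hitting the same thing, the rollback itself is just a pip pin, plus a quick check that the interpreter really picks up 0.2.0 (the commands below are the obvious ones, not copied from the thread):

pip install torchdrug==0.2.0
python -c "import torchdrug; print(torchdrug.__version__)"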
TorchDrug has been rolled back to version 0.2.0 (installed via pip), but when running
python ./script/run_downstream.py --config ./config/downstream_task/PretrainESM2/annotation_tune.yaml --checkpoint ~/scratch/protst_output/protst_esm2.pth --dataset GeneOntology --branch BP
there is still an issue, and the error message states: Unknown model
Hello again,
I tried to run the protein function annotation task after downloading the pkl file. However, I got the following error:
RuntimeError: Error(s) in loading state_dict for MultipleBinaryClassification:
size mismatch for model.mapping: copying a param with shape torch.Size([20]) from checkpoint, the shape in current model is torch.Size([33]).
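(For context, this is PyTorch's strict state-dict check: if a buffer such as mapping is registered with a different length than the one stored in the checkpoint, load_state_dict reports exactly this kind of size mismatch. Below is a minimal, self-contained sketch with a hypothetical toy module, not ProtST's actual code.)

import torch
import torch.nn as nn

# Hypothetical toy module: "mapping" stands in for the residue-mapping buffer
# whose length differs between library versions (20 vs. 33 entries).
class Toy(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.register_buffer("mapping", torch.zeros(n, dtype=torch.long))

saved = Toy(20).state_dict()   # shapes as stored in the checkpoint
model = Toy(33)                # shapes as built by the newer library version

try:
    model.load_state_dict(saved)
except RuntimeError as err:
    # Prints: size mismatch for mapping: copying a param with shape
    # torch.Size([20]) from checkpoint, the shape in current model is torch.Size([33]).
    print(err)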
[sj4@gn10 ProtST]$ python ./script/run_downstream.py --config ./config/downstream_task/PretrainESM2/annotation_tune.yaml --checkpoint /work/sj4/protst_esm2.pth --dataset GeneOntology --branch BP
The yaml file is almost the same as the one on GitHub, except that I changed the number of GPUs to gpus: [0]. Could you please check this issue?
Best wishes,