Please pin transformers to <3.0.0 as the new installs of emBERT are broken #15
Comments
I am in the process of updating it to the latest version anyway.
I saw your commit on the dependencies lately. Meanwhile, I have created a workaround in emtsv based on my findings. Could you review and pin all necessary packages (with their transitive dependencies) to a working state in order to resolve this issue and allow us to remove the workaround? We are about to create a new release of emtsv next week, and we would be glad to have this issue resolved in that release.
@dlazesz What breaks for you? I have transformers 3.4 and it works for me. I still pinned it to below 3.5, but I see no reason why I should go below 3.0.
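For reference, a pin like the one mentioned here is usually expressed as a version range in the package metadata. A minimal sketch, assuming a setuptools-based setup.py (emBERT's real setup.py and dependency list may differ); the commented-out line shows the stricter <3.0.0 pin requested in the issue title:

```python
# Hypothetical setup.py fragment; the package name and dependency list are
# illustrative, not emBERT's actual metadata.
from setuptools import setup, find_packages

setup(
    name="embert",
    packages=find_packages(),
    install_requires=[
        # Stricter pin requested in the issue title:
        # "transformers<3.0.0",
        # Range the maintainer reports working (3.4 works, pinned below 3.5):
        "transformers>=3.0,<3.5",
    ],
)
```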
I am trying to use emBERT with emtsv on Ubuntu 18.04 in the following configurations:
Now
For installed packages, see requirements_setuppy.txt attached. After this, I tried uninstalling the packages installed by setup.py and installing the ones in requirements.txt. Effectively meaning: This yields:
For installed packages, see requirements_req.txt attached. Then I tried the emtsv workaround version of the packages, uncommenting the previously commented lines in emtsv's requirements.txt. This yields:
For installed packages, see requirements_emtsv_req.txt attached. Finally, installing …, the following output is produced:
From this state, if I try to increase the version of … Could you also test your working setup with emtsv? Do you use any features of transformers introduced after 2.10.0? Thank you!
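Since the comparison hinges on which exact versions each requirements file resolves to, here is a small sketch of how one might dump the resolved versions from a given virtualenv for the packages discussed in this thread, plus tokenizers as a typical transitive dependency (assumes Python 3.8+ for importlib.metadata; on older interpreters, pip freeze serves the same purpose):

```python
# Print the versions actually installed in the current environment, to compare
# the setup.py, requirements.txt and emtsv-workaround states side by side.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("transformers", "torch", "tokenizers", "wandb"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```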
I have tested the whole thing on Ubuntu 20.04 with the same results. Meanwhile, I have found a workaround to lift the requirement for wandb. Setting … Could this be implemented inside emBERT, or should it be set externally before the imports? Do you have any clue about the error below?

```
File ".../emtsv/embert/embert/embert.py", line 35, in __init__
    self._load_model()
File ".../emtsv/embert/embert/embert.py", line 77, in _load_model
    raise ValueError(f'Could not load model {self.config["model"]}: {e}')
ValueError: Could not load model szeged_maxnp_bioes: Error(s) in loading state_dict for TokenClassifier:
    size mismatch for classifier.weight: copying a param with shape torch.Size([8, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]).
    size mismatch for classifier.bias: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([2]).
```

The most current versions of the transformers and torch packages yield these errors, so fixing them would also solve the pinning issue.
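The shape mismatch (8 output classes in the checkpoint versus 2 in the freshly built model) is what one would expect if the classifier head is rebuilt with transformers' default num_labels=2 instead of the checkpoint's 8-label tag set. A minimal sketch of the pattern, assuming a plain Hugging Face token-classification model and a hypothetical local checkpoint path (emBERT's TokenClassifier wraps the loading, so its actual fix may look different):

```python
from transformers import AutoConfig, AutoModelForTokenClassification

MODEL_DIR = "models/szeged_maxnp_bioes"  # hypothetical checkpoint path

# Loading the config without num_labels makes newer transformers build a default
# 2-label head ([2, 768]), which cannot accept the checkpoint's 8-label weights
# ([8, 768]) -> the "size mismatch for classifier.weight" error above.
config = AutoConfig.from_pretrained(MODEL_DIR, num_labels=8)  # 8 = size of the tag set
model = AutoModelForTokenClassification.from_pretrained(MODEL_DIR, config=config)
```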
I will look into it after the MSZNY deadline. I am not sure I need transformers > 3.0. Also, these errors seem a bit strange. For one, I don't even have wandb installed; why do you? Do you need it for anything? I pinned …
OK. MSZNY first.
I do not use wandb. I actually do not even know how this stuff works, but somehow it was installed in the system Python environment and was causing problems in the virtualenv. Very strange. The funny thing is that emBERT worked in the Docker version of emtsv without even asking for wandb, so you are right: it actually should work without it, even when it is installed.
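For reference, a minimal sketch of the externally-set workaround alluded to above, assuming the elided setting is wandb's WANDB_DISABLED environment variable (the comments do not name it, so treat that as an assumption):

```python
import os

# Assumption: disable the Weights & Biases integration via its environment
# variable. It has to be set before transformers (or anything that imports
# wandb) is imported, which is why the question is whether emBERT can set it
# internally or it must be done by the caller.
os.environ["WANDB_DISABLED"] = "true"

import transformers  # noqa: E402  -- import deliberately placed after the env setting
```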
Thank you! And thank you in advance for the further investigation!
Due to API breakage in the transformers package, you should either pin the package version or update emBERT to support newer transformers versions.
Personally, I recommend the first option.
Thank you!