On tensorflow and incompatible protobuf version #288
Comments
Please try to create an environment following the instructions on the conda-forge.org main website. For one, you are "mixing" the defaults and conda-forge channels in your environment. Does the environment created with the following command work?
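A minimal sketch of such a command, assuming a fresh environment built only from conda-forge with strict channel priority (the exact package spec is an assumption, not the command from the original comment):

```shell
# Fresh environment, conda-forge only, strict priority, so defaults
# cannot be mixed into the solve.
conda create -n tf-test --override-channels -c conda-forge \
    --strict-channel-priority tensorflow-gpu=2.10
```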
I guess "tensorflow thinks" it is only compatible with 3.x, but we haven't found any usability issues moving to 4.x. |
Are you hitting a bug/crash with the environment as is?
The problem is related to this part:

Since tensorflow thinks that it is only compatible with 3.x, it's very easy for any other installation steps you do through `pip` to end up downgrading `protobuf` and breaking the tensorflow installation.
This shows the same behaviour of installing 4.x protobuf, but differs in that doing …
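As an aside, the declared-requirements conflict itself can be surfaced with pip's consistency check (a generic sketch; it assumes pip can see the conda-installed packages and that tensorflow's pip metadata still pins protobuf to 3.x):

```shell
# pip check lists installed packages whose declared requirements are not
# satisfied; with protobuf 4.x present it would flag tensorflow's 3.x pin.
pip check
```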
Can you give an example of a package that would depend on tensorflow but trigger protobuf to be updated? Mixing pip and conda is challenging (see https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-pkgs.html#installing-non-conda-packages). Maybe you can give us a concrete example of how things fail.
I guess it seems that we aren't providing the stubs, so pip doesn't know that tensorflow actually exists.
Here is an example of this happening in the real world https://github.com/nengo/nengo-dl/actions/runs/3621436928/jobs/6104897544#step:5:1524. This is part of a large, somewhat complicated CI pipeline using a mix of libraries, so it isn't easy to control all parts to ensure things are only being installed in the cleanest way.
And you are sure it isn't being triggered by https://github.com/nengo/nengo-dl/blob/master/setup.py#L80?
Yes, the same thing happens with or without that requirement (and the …).
But `pip list` still reports … So there must be some kind of dependency pulling `protobuf` down.
I needed this flag since the computer I used doesn't have a CUDA GPU.
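If the flag in question is the CUDA override for conda's virtual packages (an assumption), it works like this:

```shell
# CONDA_OVERRIDE_CUDA makes conda pretend the given CUDA version is
# available, so CUDA-enabled builds can be solved on a GPU-less machine.
CONDA_OVERRIDE_CUDA="11.2" conda install -c conda-forge tensorflow-gpu=2.10
```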
Did a bit more digging. I think the issue is actually triggered by …

then you end up with …

versus if you switch the above to …

Then when you do …

So to sum up, setting …

In any case, appreciate your time looking into this!
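For reference, which channel each installed package actually came from can be checked with plain conda commands (the package names below are just the ones discussed in this thread):

```shell
# The "Channel" column in the output shows where each package was
# resolved from (conda-forge vs. main/defaults).
conda list tensorboard
conda list keras
conda list protobuf
```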
Thank you for digging into this. That "flexible" solve should have resulted in tensorboard being installed from our channel, but maybe we are out of sync. That said, it is quite "random" what the solver might find. I see that the tensorboard feedstock is at 2.11, while tensorflow (2.10 on conda-forge) requires tensorboard 2.10, which may not be fully up to date (even though nothing jumps to mind immediately).

I unfortunately do not have a quick answer for you. Maybe you can try to use strict priorities? From my experience, the issue will only get worse if you continue to "mix" channels. But that is just my opinion.
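For reference, strict channel priority can be enabled per command or globally; these are standard conda settings rather than anything specific to this feedstock:

```shell
# Per command:
conda install --strict-channel-priority -c conda-forge tensorflow-gpu=2.10

# Or globally, so every solve prefers conda-forge over defaults:
conda config --add channels conda-forge
conda config --set channel_priority strict
```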
It seems that it may also be a difference between … I can recreate the effect with: …
I notice that the following packages are picked out from main: …

It may be that it is preferring the architecture-specific package on the main channel compared to the noarch package on the conda-forge channel for keras. I'm not sure how the solver should behave here.
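One way to inspect which channel the solver would take each package from, without touching an existing environment (the package spec is just an illustration):

```shell
# --dry-run prints the planned transaction, including the source channel
# for every package, without installing anything.
conda create -n solver-check --dry-run -c conda-forge -c defaults tensorflow-gpu=2.10
```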
Unfortunately, I think that the inclusion of the … We do want to allow users to mix and match (at least, I mix and match for my use case); however, it is up to users to be careful when they mix with channels outside of conda-forge. In your case, it would mean using strict channel priorities.
The issue is that protobuf changed to a very weird version scheme, where the minor number is now the main number, and the major number for the C++ lib stayed the same (3) while the major number for Python got bumped (4). Protobuf is hard to distribute, so it's pinned quite tightly in the pip metadata; however, this is not necessary, because compatibility "just" depends on correctly recompiling the code. This is something you wouldn't ask of your average user, but conda-forge can do it, and in fact must, because we need to rebuild our whole ecosystem against a consistent protobuf version in order to be able to use shared libraries.

Since we still need to follow upstream versioning for the sake of keeping things manageable, we therefore progressed past the point where this major version bump happened (3.20 -> 4.21), and for conda-forge it was entirely uneventful.

The answer here is: don't mix channels, and certainly don't install anything with pip. If you cannot help the latter, then patch the metadata or use something like …
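If pip packages on top of the conda environment are unavoidable, one common workaround in the spirit of the advice above (not a command quoted from this thread; the package name is a placeholder) is to stop pip from touching already-installed dependencies:

```shell
# Install only the package itself; conda-managed dependencies such as
# protobuf are left alone. Any genuinely missing dependencies then have
# to be added explicitly via conda.
pip install --no-deps some-pip-only-package
```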
Solution to issue cannot be found in the documentation.
Issue
Installing `tensorflow-gpu=2.10` results in the `protobuf` dependency being installed with an incompatible version (4.x; tensorflow is only compatible with 3.x).

Note that installing `tensorflow=2.10` (rather than `tensorflow-gpu`) results in a compatible 3.x `protobuf` version being installed. Installing an older version (e.g., `tensorflow-gpu=2.8`) also results in a compatible 3.x `protobuf` version being installed.

The installation with a (seemingly) incompatible `protobuf` version actually seems to work somehow, as long as you leave everything as is. But if the `tensorflow` installation ever gets triggered again for some reason (e.g. in some later step of an installation pipeline), then it will downgrade the `protobuf` version from 4.x to 3.x, and then the tensorflow installation will be broken.

To reproduce:
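A minimal sketch of the behaviour described above (the channel setup and exact commands are assumptions, not the reporter's original steps):

```shell
# Initial install pulls in protobuf 4.x (per the report above):
conda create -n tf-gpu -c conda-forge tensorflow-gpu=2.10
conda activate tf-gpu
conda list protobuf              # reports a 4.x version

# Re-triggering the tensorflow install in a later step downgrades
# protobuf to 3.x, after which the installation is reported as broken:
conda install -c conda-forge tensorflow-gpu=2.10
conda list protobuf              # now reports a 3.x version
python -c "import tensorflow"
```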
Installed packages
Environment info