
Update conda environments and CI #814

Merged
merged 6 commits into from
Dec 21, 2024

Conversation

constantinpape
Contributor

No description provided.


constantinpape commented Dec 20, 2024

Hi @hmaarrfk ,
it looks like the installation of pytorch from conda-forge works in principle, but the pins you specified in #435 (comment) and #435 (comment) don't seem to work in the environment files.

For now, I used pytorch-cpu and pytorch-gpu, but you said that you wouldn't use those.
Could you please elaborate a bit on:

  • What is the difference between the pytorch, pytorch-gpu and pytorch-cpu packages (from conda-forge)?
  • If I just specify pytorch, how would conda know whether to install the CUDA or CPU build (unless this is specified by the pins)?
    • The reason I ask: if this works, we wouldn't need separate environment files for GPU and CPU. But to my (very limited) understanding, it isn't possible for conda/mamba to determine at runtime whether CUDA is available.
  • How would I write those pins in an environment file? (If that is actually necessary.)

Thanks!

@hmaarrfk (Contributor) left a comment

What is the difference between the pytorch, pytorch-gpu and pytorch-cpu packages (from conda-forge)?

See: conda-forge/pytorch-cpu-feedstock#71 (comment)

From "build 6" https://github.com/conda-forge/pytorch-cpu-feedstock/blob/b498f465ef7ecd80c53041f914cd82dca85a2c91/recipe/meta.yaml#L329

pytorch-gpu is just a shortcut for pytorch=*=cuda*
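If I read that correctly, the metapackages are just named shortcuts for build-string pins. A sketch of the equivalent install commands (my paraphrase of the comment above, not taken from the PR):

```shell
# The metapackage and the explicit build-string pin should select the same builds:
mamba install pytorch-gpu           # shorthand for the pin below
mamba install "pytorch=*=cuda*"     # explicit: any version, CUDA build string

# CPU counterpart:
mamba install "pytorch=*=cpu*"      # what pytorch-cpu selects
```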

If I just specify pytorch, how would conda know whether to install the CUDA or CPU build (unless this is specified by the pins)?

Type

conda info

on your GPU machine. You will see something like

__cuda=12.4=0

So conda knows you have cuda ;).
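As a side note, the same information is available programmatically. A minimal sketch that parses the `virtual_pkgs` field of `conda info --json` output (the JSON below is hand-written sample data, and the field name is an assumption based on current conda output, so the exact layout may differ between conda versions):

```python
import json

# Trimmed sample of what `conda info --json` reports on a GPU machine
# (values illustrative; a CPU-only machine simply lacks the __cuda entry).
sample = """
{"virtual_pkgs": [["__cuda", "12.4", "0"],
                  ["__glibc", "2.35", "0"],
                  ["__linux", "6.5.0", "0"]]}
"""

info = json.loads(sample)
# Map virtual package name -> version, e.g. {"__cuda": "12.4", ...}
virtual = {name: version for name, version, _build in info["virtual_pkgs"]}
print(virtual.get("__cuda"))  # -> 12.4 here; a CPU-only machine would give None
```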
I hope you don't have cuda 11.8... god forbid 11.2. We kinda lost steam building 11.8 packages and dropped it a few months back. But I think we do have a few builds of pytorch 2.5.1 for cuda 11.8:
[screenshot: available pytorch 2.5.1 builds for CUDA 11.8]

However, I think that if you are running CUDA 11.8, maybe you should update to Ubuntu 24.04? Especially for GUI software...

In short, I work very hard so that on a "modern" system, say Ubuntu 22.04 or 24.04 (the two supported LTS releases), typing

mamba install pytorch

will get the "best performing pytorch" for their system.

See conda-forge/pytorch-cpu-feedstock#102

The reason I ask: if this works, we wouldn't need separate environment files for GPU and CPU. But to my (very limited) understanding, it isn't possible for conda/mamba to determine at runtime whether CUDA is available.

Correct, this is one of the big advantages of going all in on conda-forge. Just tell users to install pytorch. It will be fine ;). With the exception of all the caveats!!!

How would I write those pins in an environment file? (If it is actually necessary)

Not necessary, but see review comments.

They could help for users that have CUDA 11.8, because for that one, pytorch may not be compatible with other packages that have higher version numbers.

So "CPU build + newer versions of other packages" will take precedence over the GPU build of pytorch, and you might want to prefer the CUDA build of pytorch over "the latest Qt".
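To make that concrete, a sketch of what such a pin could look like in an environment file (file name and contents illustrative, not taken from this PR):

```yaml
# environment_gpu.yaml (illustrative): force the CUDA build so the solver
# cannot trade it away for newer CPU-only builds pulled in by other packages.
name: example-gpu
channels:
  - conda-forge
dependencies:
  - pytorch=*=cuda*
```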

Review comments on environment_cpu.yaml and environment_gpu.yaml (outdated, resolved).
@constantinpape (Contributor, Author)

Thank you for the explanation @hmaarrfk !

In short, I work very hard so that on a "modern" system, say Ubuntu 22.04 or 24.04 (the two supported LTS releases), typing

mamba install pytorch

will get the "best performing pytorch" for their system.

That is amazing and will make installation a lot easier in the future. I also checked, and on our GPU server we get the correct output from mamba info: __cuda=12.4=0.

I will go ahead and simplify the environments here! For older versions where this does not work we will just add a note in the installation instructions.

@constantinpape (Contributor, Author)

This all seems to work as expected. I will update the installation instructions here later and then merge.

@constantinpape (Contributor, Author)

I have now also updated the installation instructions to describe the new way to install it: using only conda-forge dependencies on Mac OS / Linux, and pinned dependencies on Windows. I have also updated the docs to use conda instead of mamba (since conda now uses libmamba by default, we don't need to enforce mamba any more, which was confusing for some users).
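With these changes, the Linux / Mac OS installation described above boils down to something like this (environment name illustrative, not taken from the PR):

```shell
# conda >= 23.10 ships the libmamba solver by default, so plain conda is fine here;
# conda picks the CUDA or CPU build of pytorch based on the __cuda virtual package.
conda create -n example-env -c conda-forge pytorch
conda activate example-env
```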

cc @anwai98

@constantinpape constantinpape merged commit ad73eb3 into dev Dec 21, 2024
3 checks passed
@constantinpape constantinpape deleted the installation-updates branch December 21, 2024 17:48