Go green #344

Closed

wants to merge 5 commits into from
Conversation

h-vetinari (Member)

We've had various problems with the Linux GPU builds over the last couple of PRs (caching, pytest crashing, hanging). The last green commit on main was dfadf15 (before merging #318). This PR tries to reduce the diff to that last passing run as much as possible, while keeping necessary changes like the CMake metadata fixes & tests.

I would like to keep the pybind11 unvendor, but for now let's try to get this back to 🟢.

Changes from the last passing commit up to f072ad6

Obviously irrelevant changes have been removed from the diff below (changes in bld.bat, added test skips, an increased timeout, README changes, etc.).
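
For reference, the raw diff between those two commits can be produced locally along these lines (the version shown here was then pruned by hand):

git diff dfadf15 f072ad6 -- recipe/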

diff --git a/recipe/build.sh b/recipe/build.sh
index 57044b0..4496601 100644
--- a/recipe/build.sh
+++ b/recipe/build.sh
@@ -1,9 +1,11 @@
 #!/bin/bash
 
-echo "=== Building ${PKG_NAME} (py: ${PY_VER}) ==="
-
 set -ex
 
+echo "#########################################################################"
+echo "Building ${PKG_NAME} (py: ${PY_VER}) using BLAS implementation $blas_impl"
+echo "#########################################################################"
+
 # This is used to detect if it's in the process of building pytorch
 export IN_PYTORCH_BUILD=1
 
@@ -20,9 +22,22 @@ rm -rf pyproject.toml
 export USE_CUFILE=0
 export USE_NUMA=0
 export USE_ITT=0
+
+#################### ADJUST COMPILER AND LINKER FLAGS #####################
+# Pytorch's build system doesn't like us setting the c++ standard through CMAKE_CXX_FLAGS
+# and will issue a warning.  We need to use at least C++17 to match the abseil ABI, see
+# https://github.com/conda-forge/abseil-cpp-feedstock/issues/45, which pytorch 2.5 uses already:
+# https://github.com/pytorch/pytorch/blob/v2.5.1/CMakeLists.txt#L36-L48
+export CXXFLAGS="$(echo $CXXFLAGS | sed 's/-std=c++[0-9][0-9]//g')"
+# The below three lines expose symbols that would otherwise be hidden or
+# optimised away. They were here before, so removing them would potentially
+# break users' programs
 export CFLAGS="$(echo $CFLAGS | sed 's/-fvisibility-inlines-hidden//g')"
 export CXXFLAGS="$(echo $CXXFLAGS | sed 's/-fvisibility-inlines-hidden//g')"
 export LDFLAGS="$(echo $LDFLAGS | sed 's/-Wl,--as-needed//g')"
+# The default conda LDFLAGS include -Wl,-dead_strip_dylibs, which removes all the
+# MKL sequential, core, etc. libraries, resulting in a "Symbol not found: _mkl_blas_caxpy"
+# error on osx-64.
 export LDFLAGS="$(echo $LDFLAGS | sed 's/-Wl,-dead_strip_dylibs//g')"
 export LDFLAGS_LD="$(echo $LDFLAGS_LD | sed 's/-dead_strip_dylibs//g')"
 if [[ "$c_compiler" == "clang" ]]; then
@@ -45,6 +60,7 @@ fi
 # can be imported on system without a GPU
 LDFLAGS="${LDFLAGS//-Wl,-z,now/-Wl,-z,lazy}"
 
+################ CONFIGURE CMAKE FOR CONDA ENVIRONMENT ###################
 export CMAKE_GENERATOR=Ninja
 export CMAKE_LIBRARY_PATH=$PREFIX/lib:$PREFIX/include:$CMAKE_LIBRARY_PATH
 export CMAKE_PREFIX_PATH=$PREFIX
@@ -98,18 +114,29 @@ if [[ "${CI}" == "github_actions" ]]; then
     # reduce parallelism to avoid getting OOM-killed on
     # cirun-openstack-gpu-2xlarge, which has 32GB RAM, 8 CPUs
     export MAX_JOBS=4
-else
+elif [[ "${CI}" == "azure" ]]; then
     export MAX_JOBS=${CPU_COUNT}
-fi
-
-if [[ "$blas_impl" == "generic" ]]; then
-    # Fake openblas
-    export BLAS=OpenBLAS
-    export OpenBLAS_HOME=${PREFIX}
 else
-    export BLAS=MKL
+    # Leave a spare core for other tasks, per common practice.
+    # Reducing further can help with out-of-memory errors.
+    export MAX_JOBS=$((CPU_COUNT > 1 ? CPU_COUNT - 1 : 1))
 fi
 
+case "$blas_impl" in
+    "generic")
+        # Fake openblas
+        export BLAS=OpenBLAS
+        export OpenBLAS_HOME=${PREFIX}
+        ;;
+    "mkl")
+        export BLAS=MKL
+        ;;
+    *)
+        echo "[ERROR] Unsupported BLAS implementation '${blas_impl}'" >&2
+        exit 1
+        ;;
+esac
+
 if [[ "$PKG_NAME" == "pytorch" ]]; then
   # Trick Cmake into thinking python hasn't changed
   sed "s/3\.12/$PY_VER/g" build/CMakeCache.txt.orig > build/CMakeCache.txt
@@ -147,11 +174,9 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
     # all of them.
     export CUDAToolkit_BIN_DIR=${BUILD_PREFIX}/bin
     export CUDAToolkit_ROOT_DIR=${PREFIX}
-    if [[ "${target_platform}" != "${build_platform}" ]]; then
-        export CUDA_TOOLKIT_ROOT=${PREFIX}
-    fi
     # for CUPTI
     export CUDA_TOOLKIT_ROOT_DIR=${PREFIX}
+    export CUDAToolkit_ROOT=${PREFIX}
     case ${target_platform} in
         linux-64)
             export CUDAToolkit_TARGET_DIR=${PREFIX}/targets/x86_64-linux
@@ -163,12 +188,24 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
             echo "unknown CUDA arch, edit build.sh"
             exit 1
     esac
+
+    # Compatibility matrix for update: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
+    # Warning from pytorch v1.12.1: In the future we will require one to
+    # explicitly pass TORCH_CUDA_ARCH_LIST to cmake instead of implicitly
+    # setting it as an env variable.
+    # Doing this is nontrivial given that we're using setup.py as an entry point, but should
+    # be addressed to pre-empt upstream changing it, as it probably won't result in a failed
+    # configuration.
+    #
+    # See:
+    # https://pytorch.org/docs/stable/cpp_extension.html (Compute capabilities)
+    # https://github.com/pytorch/pytorch/blob/main/.ci/manywheel/build_cuda.sh
     case ${cuda_compiler_version} in
-        12.6)
+        12.[0-6])
             export TORCH_CUDA_ARCH_LIST="5.0;6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX"
             ;;
         *)
-            echo "unsupported cuda version. edit build.sh"
+            echo "No CUDA architecture list exists for CUDA v${cuda_compiler_version}. See build.sh for information on adding one."
             exit 1
     esac
     export TORCH_NVCC_FLAGS="-Xfatbin -compress-all"
@@ -180,6 +217,8 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
     export USE_STATIC_CUDNN=0
     export MAGMA_HOME="${PREFIX}"
     export USE_MAGMA=1
+    # turn off noisy nvcc warnings
+    export CMAKE_CUDA_FLAGS="-w --ptxas-options=-w"
 else
     if [[ "$target_platform" != *-64 ]]; then
       # Breakpad seems to not work on aarch64 or ppc64le
@@ -211,7 +250,7 @@ case ${PKG_NAME} in
     cp build/CMakeCache.txt build/CMakeCache.txt.orig
     ;;
   pytorch)
-    $PREFIX/bin/python -m pip install . --no-deps -vvv --no-clean \
+    $PREFIX/bin/python -m pip install . --no-deps -v --no-clean \
         | sed "s,${CXX},\$\{CXX\},g" \
         | sed "s,${PREFIX},\$\{PREFIX\},g"
     # Keep this in ${PREFIX}/lib so that the library can be found by
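
As an aside, the flag-stripping and job-count logic above is easy to sanity-check in isolation; the snippet below is only a sketch mirroring the sed pattern and the MAX_JOBS arithmetic from the diff:

CXXFLAGS="-O2 -std=c++14 -fvisibility-inlines-hidden"
echo "$CXXFLAGS" | sed 's/-std=c++[0-9][0-9]//g'    # -> "-O2  -fvisibility-inlines-hidden"
CPU_COUNT=8; echo $((CPU_COUNT > 1 ? CPU_COUNT - 1 : 1))    # -> 7
CPU_COUNT=1; echo $((CPU_COUNT > 1 ? CPU_COUNT - 1 : 1))    # -> 1 (never below one job)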
diff --git a/recipe/cmake_test/CMakeLists.txt b/recipe/cmake_test/CMakeLists.txt
new file mode 100644
index 0000000..7168454
--- /dev/null
+++ b/recipe/cmake_test/CMakeLists.txt
@@ -0,0 +1,4 @@
+project(cf_dummy LANGUAGES C CXX)
+cmake_minimum_required(VERSION 3.12)
+find_package(Torch CONFIG REQUIRED)
+find_package(ATen CONFIG REQUIRED)
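
This dummy project just checks that the installed CMake metadata is self-consistent, i.e. that find_package(Torch) and find_package(ATen) both succeed. Outside of conda-build, where $CMAKE_ARGS supplies the prefix hints (see the test commands added to meta.yaml below), a rough local equivalent would be (assuming an activated environment with pytorch, cmake and ninja installed):

cd recipe/cmake_test
cmake -GNinja -DCMAKE_CXX_STANDARD=17 -DCMAKE_PREFIX_PATH=$CONDA_PREFIX .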
diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index d5fc48f..ff618c9 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -1,7 +1,10 @@
 # if you wish to build release candidate number X, append the version string with ".rcX"
 {% set version = "2.5.1" %}
-{% set build = 10 %}
+{% set build = 12 %}
 
+# Use a higher build number for the CUDA variant, to ensure that it's
+# preferred by conda's solver, and preferentially installed
+# where the platform supports it.
 {% if cuda_compiler_version != "None" %}
 {% set build = build + 200 %}
 {% endif %}
@@ -64,6 +67,12 @@ source:
     - patches/0015-simplify-torch.utils.cpp_extension.include_paths-use.patch
     # point to headers that are now living in $PREFIX/include instead of $SP_DIR/torch/include
     - patches/0016-point-include-paths-to-PREFIX-include.patch
+    - patches/0017-Add-conda-prefix-to-inductor-include-paths.patch
+    - patches/0018-make-ATEN_INCLUDE_DIR-relative-to-TORCH_INSTALL_PREF.patch
+    - patches/0019-remove-DESTINATION-lib-from-CMake-install-TARGETS-di.patch                       # [win]
+    - patches/0021-avoid-deprecated-find_package-CUDA-in-caffe2-CMake-m.patch
+    - patches_submodules/fbgemm/0001-remove-DESTINATION-lib-from-CMake-install-directives.patch     # [win]
+    - patches_submodules/tensorpipe/0001-switch-away-from-find_package-CUDA.patch
 
 build:
   number: {{ build }}
@@ -167,6 +176,9 @@ requirements:
     - libuv
     - pkg-config  # [unix]
     - typing_extensions
+    - pybind11    # [win]
+    - eigen       # [win]
+    - zlib
   run:
     # GPU requirements without run_exports
     - {{ pin_compatible('cudnn') }}                       # [cuda_compiler_version != "None"]
@@ -192,6 +204,18 @@ requirements:
 # a particularity of conda-build, that output is defined in
 # the global build stage, including tests
 test:
+  requires:
+    # cmake needs a compiler to run package detection, see
+    # https://discourse.cmake.org/t/questions-about-find-package-cli-msvc/6194
+    - {{ compiler('cxx') }}
+    # for CMake config to find cuda & nvrtc
+    - {{ compiler('cuda') }}    # [cuda_compiler_version != "None"]
+    - cuda-nvrtc-dev            # [cuda_compiler_version != "None"]
+    - cmake
+    - ninja
+    - pkg-config
+  files:
+    - cmake_test/
   commands:
     # libraries; peculiar formatting to avoid linter false positives about selectors
     {% set torch_libs = [
@@ -217,6 +241,11 @@ test:
     - test -f $PREFIX/share/cmake/Torch/TorchConfig.cmake                       # [linux]
     - if not exist %LIBRARY_PREFIX%\share\cmake\Torch\TorchConfig.cmake exit 1  # [win]
 
+    # test integrity of CMake metadata
+    - cd cmake_test
+    - cmake -GNinja -DCMAKE_CXX_STANDARD=17 $CMAKE_ARGS .   # [unix]
+    - cmake -GNinja -DCMAKE_CXX_STANDARD=17 %CMAKE_ARGS% .  # [win]
+
 outputs:
   - name: libtorch
   - name: pytorch
@@ -294,11 +323,14 @@ outputs:
         - intel-openmp {{ mkl }}  # [win]
         - libabseil
         - libprotobuf
+        - eigen       # [win]
+        - pybind11    # [win]
         - sleef
         - libuv
         - pkg-config  # [unix]
         - typing_extensions
         - {{ pin_subpackage('libtorch', exact=True) }}
+        - zlib
       run:
         - llvm-openmp    # [osx]
         - intel-openmp {{ mkl }}  # [win]
@@ -314,6 +346,7 @@ outputs:
         - filelock
         - jinja2
         - networkx
+        - pybind11  # [win]
         - nomkl                 # [blas_impl != "mkl"]
         - fsspec
         # avoid that people without GPUs needlessly download ~0.5-1GB
@@ -335,6 +368,8 @@ outputs:
       requires:
         - {{ compiler('c') }}
         - {{ compiler('cxx') }}
+        # for torch.compile tests
+        - {{ compiler('cuda') }}       # [cuda_compiler_version != "None"]
         - ninja
         - boto3
         - hypothesis
@@ -367,6 +402,14 @@ outputs:
         - python -c "import torch; print(torch.__version__)"
         - python -c "import torch; assert torch.backends.mkldnn.m.is_available()"  # [x86 and cuda_compiler_version == "None"]
         - python -c "import torch; torch.tensor(1).to('cpu').numpy(); print('numpy support enabled!!!')"
+        # We have had issues with openmp .dylibs being doubly loaded in certain cases. These two tests catch the (observed) issue
+        - python -c "import torch; import numpy"
+        - python -c "import numpy; import torch"
+        # distributed support is enabled by default on linux; for mac, we enable it manually in build.sh
+        - python -c "import torch; assert torch.distributed.is_available()"        # [linux or osx]
+        - python -c "import torch; assert torch.backends.cuda.is_built()"          # [linux64 and (cuda_compiler_version != "None")]
+        - python -c "import torch; assert torch.backends.cudnn.is_available()"     # [linux64 and (cuda_compiler_version != "None")]
+        - python -c "import torch; assert torch.backends.cudnn.enabled"            # [linux64 and (cuda_compiler_version != "None")]
         # At conda-forge, we target versions of OSX that are too old for MPS support
         # But if users install a newer version of OSX, they will have MPS support
         # https://github.com/conda-forge/pytorch-cpu-feedstock/pull/123#issuecomment-1186355073
@@ -378,7 +421,6 @@ outputs:
         - if not exist %LIBRARY_LIB%\torch_python.lib exit 1  # [win]
 
         # a reasonably safe subset of tests that should run under 15 minutes
-        # disable hypothesis because it randomly yields health check errors
         {% set tests = " ".join([
             "test/test_autograd.py",
             "test/test_autograd_fallback.py",
@@ -389,8 +431,10 @@ outputs:
             "test/test_nn.py",
             "test/test_torch.py",
             "test/test_xnnpack_integration.py",
-            "-m \"not hypothesis\"",
         ]) %}
+        # tests torch.compile; avoid on aarch because it adds >4h in test runtime in emulation;
+        # they add a lot of runtime (15->60min on windows), so run them for only one python version
+        {% set tests = tests ~ " test/inductor/test_torchinductor.py" %}    # [py==312 and not aarch64]
 
         {% set skips = "(TestTorch and test_print)" %}
         # tolerance violation with openblas
@@ -438,8 +496,9 @@ outputs:
         # for potential packaging problems by running a fixed subset
         - export OMP_NUM_THREADS=4  # [unix]
        # reduced parallelism to avoid OOM; test only one python version on aarch because emulation is super-slow
-        - python -m pytest -n 2 {{ tests }} -k "not ({{ skips }})" --durations=50   # [unix and (not aarch64 or py==312)]
-        - python -m pytest -v -s {{ tests }} -k "not ({{ skips }})" --durations=50  # [win]
+        # disable hypothesis because it randomly yields health check errors
+        - python -m pytest -n 2 {{ tests }} -k "not ({{ skips }})" -m "not hypothesis" --durations=50   # [unix and (not aarch64 or py==312)]
+        - python -m pytest -v -s {{ tests }} -k "not ({{ skips }})" -m "not hypothesis" --durations=50  # [win]
 
         # regression test for https://github.com/conda-forge/pytorch-cpu-feedstock/issues/329, where we picked up
         # duplicate `.pyc` files due to newest py-ver (3.13) in the build environment not matching the one in host;
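
One detail worth spelling out in the pytest changes above: -k filters on test names, while -m deselects by registered marker, so the hypothesis exclusion belongs in -m rather than in the -k expression. Schematically:

# -k matches substrings of test names; -m matches pytest markers
python -m pytest -n 2 test/test_torch.py -k "not (TestTorch and test_print)" -m "not hypothesis" --durations=50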
Patches that were added since then (no changes to existing patches):
From 9f73a02bacf9680833ac64657fde6762d33ab200 Mon Sep 17 00:00:00 2001
From: Daniel Petry <[email protected]>
Date: Tue, 21 Jan 2025 17:45:23 -0600
Subject: [PATCH 17/21] Add conda prefix to inductor include paths

Currently inductor doesn't look in conda's includes and libs. This results in
errors when it tries to compile if system versions of dependencies (e.g.,
sleef) are being used.

Note that this is for inductor's JIT mode, not its AOT mode, for which the
end user provides a <filename>_compile_flags.json file.
---
 torch/_inductor/cpp_builder.py | 1 +
 1 file changed, 1 insertion(+)

diff --git a/torch/_inductor/cpp_builder.py b/torch/_inductor/cpp_builder.py
index 860e7fb062f..76c61375d91 100644
--- a/torch/_inductor/cpp_builder.py
+++ b/torch/_inductor/cpp_builder.py
@@ -1048,6 +1048,7 @@ def get_cpp_torch_options(
         + python_include_dirs
         + torch_include_dirs
         + omp_include_dir_paths
+        + [os.getenv('CONDA_PREFIX') + '/include']
     )
     cflags = sys_libs_cflags + omp_cflags
     ldflags = omp_ldflags
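
A hedged way to exercise the code path this patch touches (inductor's JIT mode) is a minimal torch.compile call, which goes through cpp_builder.py the first time the compiled function runs:

python - <<'EOF'
import torch

# any compiled function is lowered via torch/_inductor/cpp_builder.py on first
# call; with patch 0017, the include search now also covers $CONDA_PREFIX/include
f = torch.compile(lambda x: x * 2 + 1)
print(f(torch.ones(4)))
EOF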
From b0cfa0f728e96a3a9d6f7434e2c02d74d6daa9a9 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <[email protected]>
Date: Tue, 28 Jan 2025 14:15:34 +1100
Subject: [PATCH 18/21] make ATEN_INCLUDE_DIR relative to TORCH_INSTALL_PREFIX

we cannot set CMAKE_INSTALL_PREFIX without the pytorch build complaining, but we can
use TORCH_INSTALL_PREFIX, which is set correctly relative to our CMake files already:
https://github.com/pytorch/pytorch/blob/v2.5.1/cmake/TorchConfig.cmake.in#L47
---
 aten/src/ATen/CMakeLists.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/aten/src/ATen/CMakeLists.txt b/aten/src/ATen/CMakeLists.txt
index 6d9152a4d07..aa4dd7b05cc 100644
--- a/aten/src/ATen/CMakeLists.txt
+++ b/aten/src/ATen/CMakeLists.txt
@@ -563,7 +563,7 @@ if(USE_ROCM)
   # list(APPEND ATen_HIP_DEPENDENCY_LIBS ATEN_CUDA_FILES_GEN_LIB)
 endif()
 
-set(ATEN_INCLUDE_DIR "${CMAKE_INSTALL_PREFIX}/${AT_INSTALL_INCLUDE_DIR}")
+set(ATEN_INCLUDE_DIR "${TORCH_INSTALL_PREFIX}/${AT_INSTALL_INCLUDE_DIR}")
 configure_file(ATenConfig.cmake.in "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake")
 install(FILES "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake"
   DESTINATION "${AT_INSTALL_SHARE_DIR}/cmake/ATen")
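
After installation, the effect can be inspected in the generated config file; a sketch, assuming ATenConfig.cmake lands under $PREFIX/share/cmake/ATen (per AT_INSTALL_SHARE_DIR):

grep ATEN_INCLUDE_DIR "$PREFIX/share/cmake/ATen/ATenConfig.cmake"
# the include dir should now derive from TORCH_INSTALL_PREFIX instead of the build-time CMAKE_INSTALL_PREFIX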
From f7db4cbfb0af59027ed8bdcd0387dba6fbcb1192 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <[email protected]>
Date: Tue, 28 Jan 2025 10:58:29 +1100
Subject: [PATCH 19/21] remove `DESTINATION lib` from CMake `install(TARGETS`
 directives

Suggested-By: Silvio Traversaro <[email protected]>
---
 c10/CMakeLists.txt                      |  2 +-
 c10/cuda/CMakeLists.txt                 |  2 +-
 c10/hip/CMakeLists.txt                  |  2 +-
 c10/xpu/CMakeLists.txt                  |  2 +-
 caffe2/CMakeLists.txt                   | 18 +++++++---------
 torch/CMakeLists.txt                    |  2 +-
 torch/lib/libshm_windows/CMakeLists.txt |  2 +-
 7 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/c10/CMakeLists.txt b/c10/CMakeLists.txt
index 80e172497d5..d7f8987020d 100644
--- a/c10/CMakeLists.txt
+++ b/c10/CMakeLists.txt
@@ -162,7 +162,7 @@ if(NOT BUILD_LIBTORCHLESS)
   # Note: for now, we will put all export path into one single Caffe2Targets group
   # to deal with the cmake deployment need. Inside the Caffe2Targets set, the
   # individual libraries like libc10.so and libcaffe2.so are still self-contained.
-  install(TARGETS c10 EXPORT Caffe2Targets DESTINATION lib)
+  install(TARGETS c10 EXPORT Caffe2Targets)
 endif()
 
 install(DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
diff --git a/c10/cuda/CMakeLists.txt b/c10/cuda/CMakeLists.txt
index 3327dab4779..9336c9e8f77 100644
--- a/c10/cuda/CMakeLists.txt
+++ b/c10/cuda/CMakeLists.txt
@@ -82,7 +82,7 @@ if(NOT BUILD_LIBTORCHLESS)
 # Note: for now, we will put all export path into one single Caffe2Targets group
 # to deal with the cmake deployment need. Inside the Caffe2Targets set, the
 # individual libraries like libc10.so and libcaffe2.so are still self-contained.
-install(TARGETS c10_cuda EXPORT Caffe2Targets DESTINATION lib)
+install(TARGETS c10_cuda EXPORT Caffe2Targets)
 
 endif()
 
diff --git a/c10/hip/CMakeLists.txt b/c10/hip/CMakeLists.txt
index f153030e793..514c6d29266 100644
--- a/c10/hip/CMakeLists.txt
+++ b/c10/hip/CMakeLists.txt
@@ -55,7 +55,7 @@ if(NOT BUILD_LIBTORCHLESS)
       $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/../..>
       $<BUILD_INTERFACE:${CMAKE_BINARY_DIR}>
       $<INSTALL_INTERFACE:include>)
-  install(TARGETS c10_hip EXPORT Caffe2Targets DESTINATION lib)
+  install(TARGETS c10_hip EXPORT Caffe2Targets)
   set(C10_HIP_LIB c10_hip)
 endif()
 
diff --git a/c10/xpu/CMakeLists.txt b/c10/xpu/CMakeLists.txt
index 01f77d61713..437ade657f9 100644
--- a/c10/xpu/CMakeLists.txt
+++ b/c10/xpu/CMakeLists.txt
@@ -45,7 +45,7 @@ if(NOT BUILD_LIBTORCHLESS)
       $<BUILD_INTERFACE:${CMAKE_BINARY_DIR}>
       $<INSTALL_INTERFACE:include>
       )
-  install(TARGETS c10_xpu EXPORT Caffe2Targets DESTINATION lib)
+  install(TARGETS c10_xpu EXPORT Caffe2Targets)
   set(C10_XPU_LIB c10_xpu)
   add_subdirectory(test)
 endif()
diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index 9be7f3732f3..b51c7cc637b 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -549,7 +549,7 @@ if(USE_CUDA)
   endif()
 
   target_link_libraries(caffe2_nvrtc PRIVATE caffe2::nvrtc ${DELAY_LOAD_FLAGS})
-  install(TARGETS caffe2_nvrtc DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS caffe2_nvrtc)
   if(USE_NCCL)
     list(APPEND Caffe2_GPU_SRCS
       ${TORCH_SRC_DIR}/csrc/cuda/nccl.cpp)
@@ -609,7 +609,7 @@ if(USE_ROCM)
   target_link_libraries(caffe2_nvrtc ${PYTORCH_HIP_LIBRARIES} ${ROCM_HIPRTC_LIB})
   target_include_directories(caffe2_nvrtc PRIVATE ${CMAKE_BINARY_DIR})
   target_compile_definitions(caffe2_nvrtc PRIVATE USE_ROCM __HIP_PLATFORM_AMD__)
-  install(TARGETS caffe2_nvrtc DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS caffe2_nvrtc)
 endif()
 
 if(NOT NO_API AND NOT BUILD_LITE_INTERPRETER)
@@ -995,7 +995,7 @@ elseif(USE_CUDA)
           CUDA::culibos ${CMAKE_DL_LIBS})
     endif()
     set_source_files_properties(${CMAKE_CURRENT_SOURCE_DIR}/../aten/src/ATen/native/cuda/LinearAlgebraStubs.cpp PROPERTIES COMPILE_FLAGS "-DBUILD_LAZY_CUDA_LINALG")
-    install(TARGETS torch_cuda_linalg DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+    install(TARGETS torch_cuda_linalg)
   endif()
 
   if(USE_PRECOMPILED_HEADERS)
@@ -1467,17 +1467,17 @@ endif()
 
 caffe2_interface_library(torch torch_library)
 
-install(TARGETS torch_cpu torch_cpu_library EXPORT Caffe2Targets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+install(TARGETS torch_cpu torch_cpu_library EXPORT Caffe2Targets)
 
 if(USE_CUDA)
-  install(TARGETS torch_cuda torch_cuda_library EXPORT Caffe2Targets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS torch_cuda torch_cuda_library EXPORT Caffe2Targets)
 elseif(USE_ROCM)
-  install(TARGETS torch_hip torch_hip_library EXPORT Caffe2Targets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS torch_hip torch_hip_library EXPORT Caffe2Targets)
 elseif(USE_XPU)
-  install(TARGETS torch_xpu torch_xpu_library EXPORT Caffe2Targets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS torch_xpu torch_xpu_library EXPORT Caffe2Targets)
 endif()
 
-install(TARGETS torch torch_library EXPORT Caffe2Targets DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+install(TARGETS torch torch_library EXPORT Caffe2Targets)
 
 target_link_libraries(torch PUBLIC torch_cpu_library)
 
@@ -1616,7 +1616,7 @@ if(BUILD_SHARED_LIBS)
       target_link_libraries(torch_global_deps torch::nvtoolsext)
     endif()
   endif()
-  install(TARGETS torch_global_deps DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+  install(TARGETS torch_global_deps)
 endif()
 
 # ---[ Caffe2 HIP sources.
diff --git a/torch/CMakeLists.txt b/torch/CMakeLists.txt
index c74b45431c9..80fb5e7734e 100644
--- a/torch/CMakeLists.txt
+++ b/torch/CMakeLists.txt
@@ -447,7 +447,7 @@ if(NOT TORCH_PYTHON_LINK_FLAGS STREQUAL "")
     set_target_properties(torch_python PROPERTIES LINK_FLAGS ${TORCH_PYTHON_LINK_FLAGS})
 endif()
 
-install(TARGETS torch_python DESTINATION "${TORCH_INSTALL_LIB_DIR}")
+install(TARGETS torch_python)
 
 # Generate torch/version.py from the appropriate CMake cache variables.
 if(${CMAKE_BUILD_TYPE} STREQUAL "Debug")
diff --git a/torch/lib/libshm_windows/CMakeLists.txt b/torch/lib/libshm_windows/CMakeLists.txt
index df2a1064938..5fa15e6be31 100644
--- a/torch/lib/libshm_windows/CMakeLists.txt
+++ b/torch/lib/libshm_windows/CMakeLists.txt
@@ -19,7 +19,7 @@ target_include_directories(shm PRIVATE
 target_link_libraries(shm torch c10)
 
 
-install(TARGETS shm DESTINATION "${LIBSHM_INSTALL_LIB_SUBDIR}")
+install(TARGETS shm)
 install(FILES libshm.h DESTINATION "include")
 
 if(MSVC AND BUILD_SHARED_LIBS)
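
The rationale for dropping the explicit destinations: since CMake 3.14, install(TARGETS) without DESTINATION falls back to per-target-type defaults, which in particular routes Windows DLLs to bin/ instead of lib/ (hence the win-only selector on patch 0019 in meta.yaml). Schematically:

# with `install(TARGETS foo)` and no DESTINATION (CMake >= 3.14):
#   RUNTIME  (.dll, executables) -> <prefix>/bin
#   LIBRARY  (.so / .dylib)      -> <prefix>/lib
#   ARCHIVE  (.lib / .a)         -> <prefix>/lib
cmake --install build --prefix "$PREFIX"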
From 1780879024ea952f8591aa175a9787f93e697368 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <[email protected]>
Date: Thu, 30 Jan 2025 08:33:44 +1100
Subject: [PATCH 21/21] avoid deprecated `find_package(CUDA)` in caffe2 CMake
metadata

vendor the no-longer-available function torch_cuda_get_nvcc_gencode_flag from CMake
---
caffe2/CMakeLists.txt      |  14 ++--
cmake/Summary.cmake        |  10 +--
cmake/TorchConfig.cmake.in |   2 +-
cmake/public/cuda.cmake    |  48 +++----------
cmake/public/utils.cmake   | 127 ++++++++++++++++++++++++++++
setup.py                   |   2 +-
6 files changed, 153 insertions(+), 50 deletions(-)

diff --git a/caffe2/CMakeLists.txt b/caffe2/CMakeLists.txt
index b51c7cc637b..6e107b5b02a 100644
--- a/caffe2/CMakeLists.txt
+++ b/caffe2/CMakeLists.txt
@@ -906,25 +906,25 @@ if(USE_ROCM)
        "$<$<COMPILE_LANGUAGE:CXX>:ATen/core/ATen_pch.h>")
  endif()
elseif(USE_CUDA)
-  set(CUDA_LINK_LIBRARIES_KEYWORD PRIVATE)
+  set(CUDAToolkit_LINK_LIBRARIES_KEYWORD PRIVATE)
  list(APPEND Caffe2_GPU_SRCS ${GENERATED_CXX_TORCH_CUDA})
-  if(CUDA_SEPARABLE_COMPILATION)
+  if(CUDAToolkit_SEPARABLE_COMPILATION)
    # Separate compilation fails when kernels using `thrust::sort_by_key`
    # are linked with the rest of CUDA code. Workaround by linking them separately.
    add_library(torch_cuda ${Caffe2_GPU_SRCS} ${Caffe2_GPU_CU_SRCS})
-    set_property(TARGET torch_cuda PROPERTY CUDA_SEPARABLE_COMPILATION ON)
+    set_property(TARGET torch_cuda PROPERTY CUDAToolkit_SEPARABLE_COMPILATION ON)

    add_library(torch_cuda_w_sort_by_key OBJECT
        ${Caffe2_GPU_SRCS_W_SORT_BY_KEY}
        ${Caffe2_GPU_CU_SRCS_W_SORT_BY_KEY})
-    set_property(TARGET torch_cuda_w_sort_by_key PROPERTY CUDA_SEPARABLE_COMPILATION OFF)
+    set_property(TARGET torch_cuda_w_sort_by_key PROPERTY CUDAToolkit_SEPARABLE_COMPILATION OFF)
    target_link_libraries(torch_cuda PRIVATE torch_cuda_w_sort_by_key)
  else()
    add_library(torch_cuda
        ${Caffe2_GPU_SRCS} ${Caffe2_GPU_SRCS_W_SORT_BY_KEY}
        ${Caffe2_GPU_CU_SRCS} ${Caffe2_GPU_CU_SRCS_W_SORT_BY_KEY})
  endif()
-  set(CUDA_LINK_LIBRARIES_KEYWORD)
+  set(CUDAToolkit_LINK_LIBRARIES_KEYWORD)
  torch_compile_options(torch_cuda)  # see cmake/public/utils.cmake
  target_compile_definitions(torch_cuda PRIVATE USE_CUDA)

@@ -973,12 +973,12 @@ elseif(USE_CUDA)
        torch_cuda
    )
    if($ENV{ATEN_STATIC_CUDA})
-      if(CUDA_VERSION_MAJOR LESS_EQUAL 11)
+      if(CUDAToolkit_VERSION_MAJOR LESS_EQUAL 11)
        target_link_libraries(torch_cuda_linalg PRIVATE
            CUDA::cusolver_static
            ${CUDAToolkit_LIBRARY_DIR}/liblapack_static.a     # needed for libcusolver_static
        )
-      elseif(CUDA_VERSION_MAJOR GREATER_EQUAL 12)
+      elseif(CUDAToolkit_VERSION_MAJOR GREATER_EQUAL 12)
        target_link_libraries(torch_cuda_linalg PRIVATE
            CUDA::cusolver_static
            ${CUDAToolkit_LIBRARY_DIR}/libcusolver_lapack_static.a     # needed for libcusolver_static
diff --git a/cmake/Summary.cmake b/cmake/Summary.cmake
index d51c451589c..154f04a89dd 100644
--- a/cmake/Summary.cmake
+++ b/cmake/Summary.cmake
@@ -76,7 +76,7 @@ function(caffe2_print_configuration_summary)
    message(STATUS "    USE_CUSPARSELT      : ${USE_CUSPARSELT}")
    message(STATUS "    USE_CUDSS           : ${USE_CUDSS}")
    message(STATUS "    USE_CUFILE          : ${USE_CUFILE}")
-    message(STATUS "    CUDA version        : ${CUDA_VERSION}")
+    message(STATUS "    CUDA version        : ${CUDAToolkit_VERSION}")
    message(STATUS "    USE_FLASH_ATTENTION : ${USE_FLASH_ATTENTION}")
    message(STATUS "    USE_MEM_EFF_ATTENTION : ${USE_MEM_EFF_ATTENTION}")
    if(${USE_CUDNN})
@@ -88,7 +88,7 @@ function(caffe2_print_configuration_summary)
    if(${USE_CUFILE})
      message(STATUS "    cufile library    : ${CUDA_cuFile_LIBRARY}")
    endif()
-    message(STATUS "    CUDA root directory : ${CUDA_TOOLKIT_ROOT_DIR}")
+    message(STATUS "    CUDA root directory : ${CUDAToolkit_ROOT}")
    message(STATUS "    CUDA library        : ${CUDA_cuda_driver_LIBRARY}")
    message(STATUS "    cudart library      : ${CUDA_cudart_LIBRARY}")
    message(STATUS "    cublas library      : ${CUDA_cublas_LIBRARY}")
@@ -108,12 +108,12 @@ function(caffe2_print_configuration_summary)
      message(STATUS "    cuDSS library       : ${__tmp}")
    endif()
    message(STATUS "    nvrtc               : ${CUDA_nvrtc_LIBRARY}")
-    message(STATUS "    CUDA include path   : ${CUDA_INCLUDE_DIRS}")
-    message(STATUS "    NVCC executable     : ${CUDA_NVCC_EXECUTABLE}")
+    message(STATUS "    CUDA include path   : ${CUDAToolkit_INCLUDE_DIRS}")
+    message(STATUS "    NVCC executable     : ${CUDAToolkit_NVCC_EXECUTABLE}")
    message(STATUS "    CUDA compiler       : ${CMAKE_CUDA_COMPILER}")
    message(STATUS "    CUDA flags          : ${CMAKE_CUDA_FLAGS}")
    message(STATUS "    CUDA host compiler  : ${CMAKE_CUDA_HOST_COMPILER}")
-    message(STATUS "    CUDA --device-c     : ${CUDA_SEPARABLE_COMPILATION}")
+    message(STATUS "    CUDA --device-c     : ${CUDAToolkit_SEPARABLE_COMPILATION}")
    message(STATUS "    USE_TENSORRT        : ${USE_TENSORRT}")
    if(${USE_TENSORRT})
      message(STATUS "      TensorRT runtime library: ${TENSORRT_LIBRARY}")
diff --git a/cmake/TorchConfig.cmake.in b/cmake/TorchConfig.cmake.in
index cba4d929855..da904fc6a18 100644
--- a/cmake/TorchConfig.cmake.in
+++ b/cmake/TorchConfig.cmake.in
@@ -125,7 +125,7 @@ if(@USE_CUDA@)
    find_library(CAFFE2_NVRTC_LIBRARY caffe2_nvrtc PATHS "${TORCH_INSTALL_PREFIX}/lib")
    list(APPEND TORCH_CUDA_LIBRARIES ${CAFFE2_NVRTC_LIBRARY})
  else()
-    set(TORCH_CUDA_LIBRARIES ${CUDA_NVRTC_LIB})
+    set(TORCH_CUDA_LIBRARIES CUDA::nvrtc)
  endif()
  if(TARGET torch::nvtoolsext)
    list(APPEND TORCH_CUDA_LIBRARIES torch::nvtoolsext)
diff --git a/cmake/public/cuda.cmake b/cmake/public/cuda.cmake
index 152fbdbe6dd..0d1aeffc59f 100644
--- a/cmake/public/cuda.cmake
+++ b/cmake/public/cuda.cmake
@@ -26,8 +26,8 @@ if(NOT MSVC)
endif()

# Find CUDA.
-find_package(CUDA)
-if(NOT CUDA_FOUND)
+find_package(CUDAToolkit)
+if(NOT CUDAToolkit_FOUND)
  message(WARNING
    "Caffe2: CUDA cannot be found. Depending on whether you are building "
    "Caffe2 or a Caffe2 dependent library, the next warning / error will "
@@ -36,8 +36,6 @@ if(NOT CUDA_FOUND)
  return()
endif()

-# Enable CUDA language support
-set(CUDAToolkit_ROOT "${CUDA_TOOLKIT_ROOT_DIR}")
# Pass clang as host compiler, which according to the docs
# Must be done before CUDA language is enabled, see
# https://cmake.org/cmake/help/v3.15/variable/CMAKE_CUDA_HOST_COMPILER.html
@@ -56,24 +54,18 @@ if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.12.0)
  cmake_policy(SET CMP0074 NEW)
endif()

-find_package(CUDAToolkit REQUIRED)
+find_package(CUDAToolkit REQUIRED COMPONENTS cudart nvrtc REQUIRED)

cmake_policy(POP)

-if(NOT CMAKE_CUDA_COMPILER_VERSION VERSION_EQUAL CUDAToolkit_VERSION)
-  message(FATAL_ERROR "Found two conflicting CUDA versions:\n"
-                      "V${CMAKE_CUDA_COMPILER_VERSION} in '${CUDA_INCLUDE_DIRS}' and\n"
-                      "V${CUDAToolkit_VERSION} in '${CUDAToolkit_INCLUDE_DIRS}'")
-endif()
-
-message(STATUS "Caffe2: CUDA detected: " ${CUDA_VERSION})
-message(STATUS "Caffe2: CUDA nvcc is: " ${CUDA_NVCC_EXECUTABLE})
-message(STATUS "Caffe2: CUDA toolkit directory: " ${CUDA_TOOLKIT_ROOT_DIR})
-if(CUDA_VERSION VERSION_LESS 11.0)
+message(STATUS "Caffe2: CUDA detected: " ${CUDAToolkit_VERSION})
+message(STATUS "Caffe2: CUDA nvcc is: " ${CUDAToolkit_NVCC_EXECUTABLE})
+message(STATUS "Caffe2: CUDA toolkit directory: " ${CUDAToolkit_ROOT})
+if(CUDAToolkit_VERSION VERSION_LESS 11.0)
  message(FATAL_ERROR "PyTorch requires CUDA 11.0 or above.")
endif()

-if(CUDA_FOUND)
+if(CUDAToolkit_FOUND)
  # Sometimes, we may mismatch nvcc with the CUDA headers we are
  # compiling with, e.g., if a ccache nvcc is fed to us by CUDA_NVCC_EXECUTABLE
  # but the PATH is not consistent with CUDA_HOME.  It's better safe
@@ -97,8 +89,8 @@ if(CUDA_FOUND)
    )
  if(NOT CMAKE_CROSSCOMPILING)
    try_run(run_result compile_result ${PROJECT_RANDOM_BINARY_DIR} ${file}
-      CMAKE_FLAGS "-DINCLUDE_DIRECTORIES=${CUDA_INCLUDE_DIRS}"
-      LINK_LIBRARIES ${CUDA_LIBRARIES}
+      CMAKE_FLAGS "-DINCLUDE_DIRECTORIES=${CUDAToolkit_INCLUDE_DIRS}"
+      LINK_LIBRARIES ${CUDAToolkit_LIBRARIES}
      RUN_OUTPUT_VARIABLE cuda_version_from_header
      COMPILE_OUTPUT_VARIABLE output_var
      )
@@ -106,30 +98,14 @@ if(CUDA_FOUND)
      message(FATAL_ERROR "Caffe2: Couldn't determine version from header: " ${output_var})
    endif()
    message(STATUS "Caffe2: Header version is: " ${cuda_version_from_header})
-    if(NOT cuda_version_from_header STREQUAL ${CUDA_VERSION_STRING})
-      # Force CUDA to be processed for again next time
-      # TODO: I'm not sure if this counts as an implementation detail of
-      # FindCUDA
-      set(${cuda_version_from_findcuda} ${CUDA_VERSION_STRING})
-      unset(CUDA_TOOLKIT_ROOT_DIR_INTERNAL CACHE)
-      # Not strictly necessary, but for good luck.
-      unset(CUDA_VERSION CACHE)
-      # Error out
-      message(FATAL_ERROR "FindCUDA says CUDA version is ${cuda_version_from_findcuda} (usually determined by nvcc), "
-        "but the CUDA headers say the version is ${cuda_version_from_header}.  This often occurs "
-        "when you set both CUDA_HOME and CUDA_NVCC_EXECUTABLE to "
-        "non-standard locations, without also setting PATH to point to the correct nvcc.  "
-        "Perhaps, try re-running this command again with PATH=${CUDA_TOOLKIT_ROOT_DIR}/bin:$PATH.  "
-        "See above log messages for more diagnostics, and see https://github.com/pytorch/pytorch/issues/8092 for more details.")
-    endif()
  endif()
endif()

# ---[ CUDA libraries wrapper

# find libnvrtc.so
-set(CUDA_NVRTC_LIB "${CUDA_nvrtc_LIBRARY}" CACHE FILEPATH "")
-if(CUDA_NVRTC_LIB AND NOT CUDA_NVRTC_SHORTHASH)
+get_target_property(CUDA_NVRTC_LIB CUDA::nvrtc INTERFACE_LINK_LIBRARIES)
+if(NOT CUDA_NVRTC_SHORTHASH)
  find_package(Python COMPONENTS Interpreter)
  execute_process(
    COMMAND Python::Interpreter -c
diff --git a/cmake/public/utils.cmake b/cmake/public/utils.cmake
index c6647eb457c..accebfd3457 100644
--- a/cmake/public/utils.cmake
+++ b/cmake/public/utils.cmake
@@ -306,6 +306,133 @@ macro(torch_hip_get_arch_list store_var)
  string(REPLACE " " ";" ${store_var} "${_TMP}")
endmacro()

+# torch_cuda_get_nvcc_gencode_flag is part of find_package(CUDA), but not find_package(CUDAToolkit);
+# vendor it from https://github.com/Kitware/CMake/blob/master/Modules/FindCUDA/select_compute_arch.cmake
+# but disable CUDA_DETECT_INSTALLED_GPUS
+################################################################################################
+# Function for selecting GPU arch flags for nvcc based on CUDA architectures from parameter list
+# Usage:
+#   SELECT_NVCC_ARCH_FLAGS(out_variable [list of CUDA compute archs])
+function(CUDA_SELECT_NVCC_ARCH_FLAGS out_variable)
+  set(CUDA_ARCH_LIST "${ARGN}")
+
+  if("X${CUDA_ARCH_LIST}" STREQUAL "X" )
+    set(CUDA_ARCH_LIST "Auto")
+  endif()
+
+  set(cuda_arch_bin)
+  set(cuda_arch_ptx)
+
+  if("${CUDA_ARCH_LIST}" STREQUAL "All")
+    set(CUDA_ARCH_LIST ${CUDA_KNOWN_GPU_ARCHITECTURES})
+  elseif("${CUDA_ARCH_LIST}" STREQUAL "Common")
+    set(CUDA_ARCH_LIST ${CUDA_COMMON_GPU_ARCHITECTURES})
+  elseif("${CUDA_ARCH_LIST}" STREQUAL "Auto")
+    # disabled, replaced by common architectures
+    # CUDA_DETECT_INSTALLED_GPUS(CUDA_ARCH_LIST)
+    # message(STATUS "Autodetected CUDA architecture(s): ${CUDA_ARCH_LIST}")
+    set(CUDA_ARCH_LIST ${CUDA_COMMON_GPU_ARCHITECTURES})
+  endif()
+
+  # Now process the list and look for names
+  string(REGEX REPLACE "[ \t]+" ";" CUDA_ARCH_LIST "${CUDA_ARCH_LIST}")
+  list(REMOVE_DUPLICATES CUDA_ARCH_LIST)
+  foreach(arch_name ${CUDA_ARCH_LIST})
+    set(arch_bin)
+    set(arch_ptx)
+    set(add_ptx FALSE)
+    # Check to see if we are compiling PTX
+    if(arch_name MATCHES "(.*)\\+PTX$")
+      set(add_ptx TRUE)
+      set(arch_name ${CMAKE_MATCH_1})
+    endif()
+    if(arch_name MATCHES "^([0-9]\\.[0-9](\\([0-9]\\.[0-9]\\))?)$")
+      set(arch_bin ${CMAKE_MATCH_1})
+      set(arch_ptx ${arch_bin})
+    else()
+      # Look for it in our list of known architectures
+      if(${arch_name} STREQUAL "Fermi")
+        set(arch_bin 2.0 "2.1(2.0)")
+      elseif(${arch_name} STREQUAL "Kepler+Tegra")
+        set(arch_bin 3.2)
+      elseif(${arch_name} STREQUAL "Kepler+Tesla")
+        set(arch_bin 3.7)
+      elseif(${arch_name} STREQUAL "Kepler")
+        set(arch_bin 3.0 3.5)
+        set(arch_ptx 3.5)
+      elseif(${arch_name} STREQUAL "Maxwell+Tegra")
+        set(arch_bin 5.3)
+      elseif(${arch_name} STREQUAL "Maxwell")
+        set(arch_bin 5.0 5.2)
+        set(arch_ptx 5.2)
+      elseif(${arch_name} STREQUAL "Pascal")
+        set(arch_bin 6.0 6.1)
+        set(arch_ptx 6.1)
+      elseif(${arch_name} STREQUAL "Volta")
+        set(arch_bin 7.0 7.0)
+        set(arch_ptx 7.0)
+      elseif(${arch_name} STREQUAL "Turing")
+        set(arch_bin 7.5)
+        set(arch_ptx 7.5)
+      elseif(${arch_name} STREQUAL "Ampere")
+        set(arch_bin 8.0)
+        set(arch_ptx 8.0)
+      else()
+        message(SEND_ERROR "Unknown CUDA Architecture Name ${arch_name} in CUDA_SELECT_NVCC_ARCH_FLAGS")
+      endif()
+    endif()
+    if(NOT arch_bin)
+      message(SEND_ERROR "arch_bin wasn't set for some reason")
+    endif()
+    list(APPEND cuda_arch_bin ${arch_bin})
+    if(add_ptx)
+      if (NOT arch_ptx)
+        set(arch_ptx ${arch_bin})
+      endif()
+      list(APPEND cuda_arch_ptx ${arch_ptx})
+    endif()
+  endforeach()
+
+  # remove dots and convert to lists
+  string(REGEX REPLACE "\\." "" cuda_arch_bin "${cuda_arch_bin}")
+  string(REGEX REPLACE "\\." "" cuda_arch_ptx "${cuda_arch_ptx}")
+  string(REGEX MATCHALL "[0-9()]+" cuda_arch_bin "${cuda_arch_bin}")
+  string(REGEX MATCHALL "[0-9]+"   cuda_arch_ptx "${cuda_arch_ptx}")
+
+  if(cuda_arch_bin)
+    list(REMOVE_DUPLICATES cuda_arch_bin)
+  endif()
+  if(cuda_arch_ptx)
+    list(REMOVE_DUPLICATES cuda_arch_ptx)
+  endif()
+
+  set(nvcc_flags "")
+  set(nvcc_archs_readable "")
+
+  # Tell NVCC to add binaries for the specified GPUs
+  foreach(arch ${cuda_arch_bin})
+    if(arch MATCHES "([0-9]+)\\(([0-9]+)\\)")
+      # User explicitly specified ARCH for the concrete CODE
+      list(APPEND nvcc_flags -gencode arch=compute_${CMAKE_MATCH_2},code=sm_${CMAKE_MATCH_1})
+      list(APPEND nvcc_archs_readable sm_${CMAKE_MATCH_1})
+    else()
+      # User didn't explicitly specify ARCH for the concrete CODE, we assume ARCH=CODE
+      list(APPEND nvcc_flags -gencode arch=compute_${arch},code=sm_${arch})
+      list(APPEND nvcc_archs_readable sm_${arch})
+    endif()
+  endforeach()
+
+  # Tell NVCC to add PTX intermediate code for the specified architectures
+  foreach(arch ${cuda_arch_ptx})
+    list(APPEND nvcc_flags -gencode arch=compute_${arch},code=compute_${arch})
+    list(APPEND nvcc_archs_readable compute_${arch})
+  endforeach()
+
+  string(REPLACE ";" " " nvcc_archs_readable "${nvcc_archs_readable}")
+  set(${out_variable}          ${nvcc_flags}          PARENT_SCOPE)
+  set(${out_variable}_readable ${nvcc_archs_readable} PARENT_SCOPE)
+endfunction()
+
##############################################################################
# Get the NVCC arch flags specified by TORCH_CUDA_ARCH_LIST and CUDA_ARCH_NAME.
# Usage:
diff --git a/setup.py b/setup.py
index b0e01e0d1ee..dc21f91d69e 100644
--- a/setup.py
+++ b/setup.py
@@ -627,7 +627,7 @@ class build_ext(setuptools.command.build_ext.build_ext):
        else:
            report("-- Not using cuDNN")
        if cmake_cache_vars["USE_CUDA"]:
-            report("-- Detected CUDA at " + cmake_cache_vars["CUDA_TOOLKIT_ROOT_DIR"])
+            report(f"-- Detected CUDA at {cmake_cache_vars['CMAKE_CUDA_COMPILER_TOOLKIT_ROOT']}")
        else:
            report("-- Not using CUDA")
        if cmake_cache_vars["USE_XPU"]:
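
To make the vendored arch-flag logic concrete: each plain entry in TORCH_CUDA_ARCH_LIST becomes one -gencode pair, and a +PTX suffix additionally emits a compute_XX (PTX) target for forward compatibility. A small sketch of the expected expansion:

TORCH_CUDA_ARCH_LIST="8.0;9.0+PTX"
# CUDA_SELECT_NVCC_ARCH_FLAGS would yield for this list:
#   -gencode arch=compute_80,code=sm_80
#   -gencode arch=compute_90,code=sm_90
#   -gencode arch=compute_90,code=compute_90   # PTX fallback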
From a9879bdd5ea793c5301a4b86f163a07e1f28f321 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <[email protected]>
Date: Tue, 28 Jan 2025 13:32:28 +1100
Subject: [PATCH] remove `DESTINATION lib` from CMake install directives

Suggested-By: Silvio Traversaro <[email protected]>
---
 CMakeLists.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/third_party/fbgemm/CMakeLists.txt b/third_party/fbgemm/CMakeLists.txt
index 134523e7..86fb8fad 100644
--- a/third_party/fbgemm/CMakeLists.txt
+++ b/third_party/fbgemm/CMakeLists.txt
@@ -370,8 +370,8 @@ if(MSVC)
       FILES $<TARGET_PDB_FILE:fbgemm> $<TARGET_PDB_FILE:asmjit>
       DESTINATION ${CMAKE_INSTALL_LIBDIR} OPTIONAL)
   endif()
-  install(TARGETS fbgemm DESTINATION ${CMAKE_INSTALL_LIBDIR})
-  install(TARGETS asmjit DESTINATION ${CMAKE_INSTALL_LIBDIR})
+  install(TARGETS fbgemm)
+  install(TARGETS asmjit)
 endif()
 
 #Make project importable from the build directory
From 9a1de62dd1b3d816d6fb87c2041f4005ab5c683d Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <[email protected]>
Date: Sun, 2 Feb 2025 08:54:01 +1100
Subject: [PATCH] switch away from find_package(CUDA)

---
 tensorpipe/CMakeLists.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/third_party/tensorpipe/tensorpipe/CMakeLists.txt b/third_party/tensorpipe/tensorpipe/CMakeLists.txt
index efcffc2..1c3b2ca 100644
--- a/third_party/tensorpipe/tensorpipe/CMakeLists.txt
+++ b/third_party/tensorpipe/tensorpipe/CMakeLists.txt
@@ -234,7 +234,7 @@ if(TP_USE_CUDA)
   # TP_INCLUDE_DIRS is list of include path to be used
   set(TP_CUDA_INCLUDE_DIRS)
 
-  find_package(CUDA REQUIRED)
+  find_package(CUDAToolkit REQUIRED)
   list(APPEND TP_CUDA_LINK_LIBRARIES ${CUDA_LIBRARIES})
   list(APPEND TP_CUDA_INCLUDE_DIRS ${CUDA_INCLUDE_DIRS})

conda-forge-admin (Contributor) commented Feb 5, 2025

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe/meta.yaml:

  • ℹ️ The recipe is not parsable by parser conda-souschef (grayskull). This parser is not currently used by conda-forge, but may be in the future. We are collecting information to see which recipes are compatible with grayskull.
  • ℹ️ The recipe is not parsable by parser conda-recipe-manager. The recipe can only be automatically migrated to the new v1 format if it is parseable by conda-recipe-manager.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/13167092914. Examine the logs at this URL for more detail.
