
Fix CMake metadata for CUDA-enabled libtorch #339

Merged · 25 commits · Feb 3, 2025

Commits (changes from all commits):
b6808c2
Revert "skip test failures with CUDA due to non-unique temporaries"
h-vetinari Jan 30, 2025
f52e86c
make temporary library names unique in failing tests
h-vetinari Jan 30, 2025
c2b6ce4
collect USE_* variables in `bld.bat`
h-vetinari Jan 30, 2025
8622491
clean up CUDA option handling in `bld.bat`
h-vetinari Jan 30, 2025
093816a
first attempt at patching `find_package(CUDA)`
h-vetinari Jan 30, 2025
847f7b1
delete an unnecessary check
h-vetinari Jan 30, 2025
9a36bd4
vendor CMake's cuda_select_nvcc_arch_flags
h-vetinari Jan 30, 2025
32527dc
fix a casing error in CMake
h-vetinari Jan 30, 2025
2a0827b
add zlib
h-vetinari Jan 30, 2025
1ee54fb
add cuda-nvrtc; CMake files require to find it now
h-vetinari Jan 31, 2025
f35c9aa
clean up an old CMake variable in setup.py
h-vetinari Jan 31, 2025
0d86709
disable CUDA_DETECT_INSTALLED_GPUS in vendored CMake function
h-vetinari Jan 31, 2025
fc3fa85
set CUDAToolkit_ROOT so that CMake cache gets populated correctly
h-vetinari Feb 1, 2025
03cb0fe
use computed variable for looking in `CMakeCache.txt`
h-vetinari Feb 1, 2025
9d80394
bump build number
h-vetinari Feb 1, 2025
e2c551d
keep setting CUDA_TOOLKIT_ROOT_DIR
h-vetinari Feb 1, 2025
0de45db
patch find_package(CUDA) in tensorpipe submodule
h-vetinari Feb 1, 2025
94c000b
don't blow up logs with nvcc warnings
h-vetinari Feb 1, 2025
e284ed0
reduce verbosity of pip install
h-vetinari Feb 1, 2025
1c23e13
skip a test that may fail on MKL
h-vetinari Feb 1, 2025
138456c
fix patch for unique `mylib` in torchinductor tests
h-vetinari Feb 2, 2025
bdb9df5
reinstate logs for building libtorch on windows
h-vetinari Feb 2, 2025
44782b3
also switch off very noisy ptxas warnings
h-vetinari Feb 2, 2025
5ee95f4
Reapply "skip test failures with CUDA due to non-unique temporaries"
h-vetinari Feb 2, 2025
162a7eb
Revert "reduce verbosity of pip install"
h-vetinari Feb 2, 2025
59 changes: 28 additions & 31 deletions recipe/bld.bat
Original file line number Diff line number Diff line change
@@ -27,12 +27,9 @@ if "%blas_impl%" == "generic" (
SET BLAS=MKL
)

@REM TODO(baszalmstra): Figure out if we need these flags
SET "USE_NUMA=0"
SET "USE_ITT=0"

if "%PKG_NAME%" == "pytorch" (
set "PIP_ACTION=install"
set "PIP_VERBOSITY=-v"
@REM We build libtorch for a specific python version.
@REM This ensures its only build once. However, when that version changes
@REM we need to make sure to update that here.
@@ -62,51 +59,58 @@ if "%PKG_NAME%" == "pytorch" (
@REM For the main script we just build a wheel for so that the C++/CUDA
@REM parts are built. Then they are reused in each python version.
set "PIP_ACTION=wheel"
set "PIP_VERBOSITY=-vvv"
)

if not "%cuda_compiler_version%" == "None" (
set USE_CUDA=1
set "BUILD_CUSTOM_PROTOBUF=OFF"
set "USE_LITE_PROTO=ON"

@REM set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v%desired_cuda%
@REM set CUDA_BIN_PATH=%CUDA_PATH%\bin
@REM TODO(baszalmstra): Figure out if we need these flags
SET "USE_ITT=0"
SET "USE_NUMA=0"

set TORCH_CUDA_ARCH_LIST=5.0;6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX
@REM TODO(baszalmstra): There are linker errors because of mixing Intel OpenMP (iomp) and Microsoft OpenMP (vcomp)
set "USE_OPENMP=OFF"

set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
@REM Use our Pybind11, Eigen, sleef
set USE_SYSTEM_EIGEN_INSTALL=1
set USE_SYSTEM_PYBIND11=1
set USE_SYSTEM_SLEEF=1

if not "%cuda_compiler_version%" == "None" (
set USE_CUDA=1
set USE_STATIC_CUDNN=0
set MAGMA_HOME=%PREFIX%

@REM NCCL is not available on windows
set USE_NCCL=0
set USE_STATIC_NCCL=0

set MAGMA_HOME=%LIBRARY_PREFIX%
@REM set CUDA_PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v%desired_cuda%
@REM set CUDA_BIN_PATH=%CUDA_PATH%\bin

set "PATH=%CUDA_BIN_PATH%;%PATH%"
set "TORCH_CUDA_ARCH_LIST=5.0;6.0;6.1;7.0;7.5;8.0;8.6;8.9;9.0+PTX"
set "TORCH_NVCC_FLAGS=-Xfatbin -compress-all"

set MAGMA_HOME=%LIBRARY_PREFIX%
set "PATH=%CUDA_BIN_PATH%;%PATH%"
set CUDNN_INCLUDE_DIR=%LIBRARY_PREFIX%\include

@REM turn off very noisy nvcc warnings
set "CUDAFLAGS=-w --ptxas-options=-w"
Contributor:

Suggested change
set "CUDAFLAGS=-w --ptxas-options=-w"
set "CMAKE_CUDA_FLAGS=-w --ptxas-options=-w"

I think it works. The new flags are passed in the libtorch build, and according to the diff that's the only change in the CUDA invocations. I'll know for sure once libtorch recompiles and it starts building pytorch.
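For context, a minimal sketch of the two spellings under discussion (variable names from the diff; that CMake seeds `CMAKE_CUDA_FLAGS` from the `CUDAFLAGS` environment variable on first configure is my reading of CMake's behaviour, not verified in this build):

```shell
# Both variables carry the same nvcc/ptxas silencing flags.
# CUDAFLAGS is read by CMake's CUDA language initialization on the first
# configure; setting CMAKE_CUDA_FLAGS directly is the suggested alternative
# (assumption: the build forwards it to CMake unchanged).
NVCC_QUIET_FLAGS="-w --ptxas-options=-w"
export CUDAFLAGS="${NVCC_QUIET_FLAGS}"
export CMAKE_CUDA_FLAGS="${NVCC_QUIET_FLAGS}"
echo "CUDAFLAGS=${CUDAFLAGS}"
```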

) else (
set USE_CUDA=0
@REM MKLDNN is an Apache-2.0 licensed library for DNNs and is used
@REM for CPU builds. Not to be confused with MKL.
set "USE_MKLDNN=1"

@REM On windows, env vars are case-insensitive and setup.py
@REM passes all env vars starting with CUDA_*, CMAKE_* to
@REM to cmake
set "cuda_compiler_version="
set "cuda_compiler="
set "CUDA_VERSION="

@REM MKLDNN is an Apache-2.0 licensed library for DNNs and is used
@REM for CPU builds. Not to be confused with MKL.
set "USE_MKLDNN=1"
)

set DISTUTILS_USE_SDK=1

@REM Use our Pybind11, Eigen
set USE_SYSTEM_PYBIND11=1
set USE_SYSTEM_EIGEN_INSTALL=1

set CMAKE_INCLUDE_PATH=%LIBRARY_PREFIX%\include
set LIB=%LIBRARY_PREFIX%\lib;%LIB%

@@ -126,17 +130,10 @@ set "INSTALL_TEST=0"
set "BUILD_TEST=0"

set "libuv_ROOT=%LIBRARY_PREFIX%"
set "USE_SYSTEM_SLEEF=ON"

@REM uncomment to debug cmake build
@REM set "CMAKE_VERBOSE_MAKEFILE=1"

set "BUILD_CUSTOM_PROTOBUF=OFF"
set "USE_LITE_PROTO=ON"

@REM TODO(baszalmstra): There are linker errors because of mixing Intel OpenMP (iomp) and Microsoft OpenMP (vcomp)
set "USE_OPENMP=OFF"

@REM The activation script for cuda-nvcc doesnt add the CUDA_CFLAGS on windows.
@REM Therefore we do this manually here. See:
@REM https://github.com/conda-forge/cuda-nvcc-feedstock/issues/47
@@ -165,7 +162,7 @@ if EXIST build (
if %ERRORLEVEL% neq 0 exit 1
)

%PYTHON% -m pip %PIP_ACTION% . --no-build-isolation --no-deps -vvv --no-clean
%PYTHON% -m pip %PIP_ACTION% . --no-build-isolation --no-deps %PIP_VERBOSITY% --no-clean
if %ERRORLEVEL% neq 0 exit 1

@REM Here we split the build into two parts.
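The CPU branch of `bld.bat` above clears the CUDA-related variables before invoking the build. As a hedged sketch of the idea (behaviour as described in the diff comments, shown here in shell syntax with stand-in values):

```shell
# On Windows, env vars are case-insensitive and setup.py forwards anything
# named CUDA_* or CMAKE_* to CMake, so a CPU-only build must clear even the
# lowercase conda-build variables to keep them out of the CMake cache.
cuda_compiler_version="12.6"   # stand-in values for illustration
CUDA_VERSION="12.6"
USE_CUDA=0
if [ "${USE_CUDA}" = "0" ]; then
  # mirror of: set "cuda_compiler_version=" / set "CUDA_VERSION=" in bld.bat
  unset cuda_compiler_version cuda_compiler CUDA_VERSION
fi
echo "CUDA_VERSION is now: ${CUDA_VERSION:-<unset>}"
```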
6 changes: 3 additions & 3 deletions recipe/build.sh
@@ -176,11 +176,9 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
# all of them.
export CUDAToolkit_BIN_DIR=${BUILD_PREFIX}/bin
export CUDAToolkit_ROOT_DIR=${PREFIX}
if [[ "${target_platform}" != "${build_platform}" ]]; then
export CUDA_TOOLKIT_ROOT=${PREFIX}
fi
# for CUPTI
export CUDA_TOOLKIT_ROOT_DIR=${PREFIX}
export CUDAToolkit_ROOT=${PREFIX}
case ${target_platform} in
linux-64)
export CUDAToolkit_TARGET_DIR=${PREFIX}/targets/x86_64-linux
@@ -221,6 +219,8 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
export USE_STATIC_CUDNN=0
export MAGMA_HOME="${PREFIX}"
export USE_MAGMA=1
# turn off noisy nvcc warnings
export CUDAFLAGS="-w --ptxas-options=-w"
Member Author, commenting on lines +222 to +223:

@conda-forge/cuda, we get tens of thousands of lines of ptxas advice à la

ptxas /tmp/tmpxft_00006928_00000000-8_SparseSemiStructuredOps.compute_86.ptx, line 55289; info    : Advisory: Modifier '.sp::ordered_metadata' should be used on instruction 'mma' instead of modifier '.sp' as it is expected to have substantially reduced performance on some future architectures
ptxas /tmp/tmpxft_00006928_00000000-8_SparseSemiStructuredOps.compute_86.ptx, line 55293; info    : Advisory: Modifier '.sp::ordered_metadata' should be used on instruction 'mma' instead of modifier '.sp' as it is expected to have substantially reduced performance on some future architectures
ptxas /tmp/tmpxft_00006928_00000000-8_SparseSemiStructuredOps.compute_86.ptx, line 55297; info    : Advisory: Modifier '.sp::ordered_metadata' should be used on instruction 'mma' instead of modifier '.sp' as it is expected to have substantially reduced performance on some future architectures
ptxas /tmp/tmpxft_00006928_00000000-8_SparseSemiStructuredOps.compute_86.ptx, line 55301; info    : Advisory: Modifier '.sp::ordered_metadata' should be used on instruction 'mma' instead of modifier '.sp' as it is expected to have substantially reduced performance on some future architectures

This is really something that (if at all) pytorch should take care of; we shouldn't spam the logs here, making them harder to navigate and longer to download.

However, I haven't had success in turning this off despite already passing -w --ptxas-options=-w. Could you tell me what I'm missing please?

Member:

How about -Xptxas="-w"?

Member Author:

Sure, I can try. I went with the first (more canonical-looking) option from the docs, but AFAICT they should be equivalent? I was also wondering if perhaps for some reason -w doesn't affect info : Advisory:, which is not "technically" a warning?

Member Author:

How about -Xptxas="-w"?

@leofang, that also didn't work (see #326).
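Since neither `-w` nor `-Xptxas=-w` silenced the advisories in this thread, one illustrative fallback (an assumption on my part, not something this PR adopts) would be to filter the build-log stream instead of silencing ptxas itself:

```shell
# Drop ptxas "info : Advisory:" lines from a build log while keeping real
# output; the sample lines mimic the log excerpt quoted above.
printf '%s\n' \
  'ptxas /tmp/x.ptx, line 55289; info    : Advisory: Modifier .sp::ordered_metadata ...' \
  '[1234/5678] Building CUDA object caffe2/CMakeFiles/foo.cu.o' \
  | grep -v 'info    : Advisory:'
```

This keeps only the genuine build lines; the trade-off is that it hides the advisories entirely rather than fixing their source.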

else
if [[ "$target_platform" != *-64 ]]; then
# Breakpad seems to not work on aarch64 or ppc64le
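To summarize the overlapping toolkit-location variables touched in the `build.sh` hunks above (a sketch with a placeholder prefix; which CMake module consumes each name is my reading of the diff comments, not verified here):

```shell
# PREFIX stands in for the conda environment root; BUILD_PREFIX may differ
# when cross-compiling (assumption).
PREFIX="/opt/conda"
export CUDAToolkit_ROOT="${PREFIX}"        # hint for modern FindCUDAToolkit
export CUDAToolkit_ROOT_DIR="${PREFIX}"
export CUDA_TOOLKIT_ROOT_DIR="${PREFIX}"   # legacy FindCUDA; also used for CUPTI
export CUDAToolkit_BIN_DIR="${BUILD_PREFIX:-${PREFIX}}/bin"
echo "${CUDAToolkit_ROOT}"
```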
14 changes: 11 additions & 3 deletions recipe/meta.yaml
@@ -1,6 +1,6 @@
# if you wish to build release candidate number X, append the version string with ".rcX"
{% set version = "2.5.1" %}
{% set build = 11 %}
{% set build = 12 %}

# Use a higher build number for the CUDA variant, to ensure that it's
# preferred by conda's solver, and it's preferentially
@@ -69,8 +69,11 @@ source:
- patches/0016-point-include-paths-to-PREFIX-include.patch
- patches/0017-Add-conda-prefix-to-inductor-include-paths.patch
- patches/0018-make-ATEN_INCLUDE_DIR-relative-to-TORCH_INSTALL_PREF.patch
- patches/0019-remove-DESTINATION-lib-from-CMake-install-TARGETS-di.patch # [win]
- patches_submodules/0001-remove-DESTINATION-lib-from-CMake-install-directives.patch # [win]
- patches/0019-remove-DESTINATION-lib-from-CMake-install-TARGETS-di.patch # [win]
- patches/0020-make-library-name-in-test_mutable_custom_op_fixed_la.patch
- patches/0021-avoid-deprecated-find_package-CUDA-in-caffe2-CMake-m.patch
- patches_submodules/fbgemm/0001-remove-DESTINATION-lib-from-CMake-install-directives.patch # [win]
- patches_submodules/tensorpipe/0001-switch-away-from-find_package-CUDA.patch

build:
number: {{ build }}
@@ -179,6 +182,7 @@ requirements:
- typing_extensions
- pybind11
- eigen
- zlib
run:
# GPU requirements without run_exports
- {{ pin_compatible('cudnn') }} # [cuda_compiler_version != "None"]
@@ -208,7 +212,9 @@ test:
# cmake needs a compiler to run package detection, see
# https://discourse.cmake.org/t/questions-about-find-package-cli-msvc/6194
- {{ compiler('cxx') }}
# for CMake config to find cuda & nvrtc
- {{ compiler('cuda') }} # [cuda_compiler_version != "None"]
- cuda-nvrtc-dev # [cuda_compiler_version != "None"]
- cmake
- ninja
- pkg-config
@@ -494,6 +500,7 @@ outputs:
{% set skips = skips ~ " or (GPUTests and test_scatter_reduce2)" %} # [linux and cuda_compiler_version != "None"]
# MKL problems
{% set skips = skips ~ " or (TestLinalgCPU and test_inverse_errors_large_cpu)" %} # [linux and blas_impl == "mkl" and cuda_compiler_version != "None"]
{% set skips = skips ~ " or test_reentrant_parent_error_on_cpu_cuda" %}  # [linux and blas_impl == "mkl" and cuda_compiler_version != "None"]
# non-MKL problems
{% set skips = skips ~ " or test_cross_entropy_loss_2d_out_of_bounds_class_index_cuda" %} # [linux and blas_impl != "mkl" and cuda_compiler_version != "None"]
{% set skips = skips ~ " or test_cublas_config_nondeterministic_alert_cuda " %} # [linux and blas_impl != "mkl" and cuda_compiler_version != "None"]
@@ -559,6 +566,7 @@
license_file:
- LICENSE
- NOTICE
- third_party/CMake/Copyright.txt
summary: PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
description: |
PyTorch is a Python package that provides two high-level features:
@@ -1,7 +1,7 @@
From f3a0f9aab6dce56eea590b946f60256014b61bf7 Mon Sep 17 00:00:00 2001
From: Mark Harfouche <mark.harfouche@gmail.com>
Date: Sun, 1 Sep 2024 17:35:40 -0400
Subject: [PATCH 01/19] Force usage of python 3 and error without numpy
Subject: [PATCH 01/21] Force usage of python 3 and error without numpy

---
cmake/Dependencies.cmake | 6 +++---
2 changes: 1 addition & 1 deletion recipe/patches/0002-Help-find-numpy.patch
@@ -1,7 +1,7 @@
From 21c30036b5b86f403c0cf4426165d9a6a50edb1a Mon Sep 17 00:00:00 2001
From: Mark Harfouche <mark.harfouche@gmail.com>
Date: Tue, 1 Oct 2024 00:28:40 -0400
Subject: [PATCH 02/19] Help find numpy
Subject: [PATCH 02/21] Help find numpy

---
tools/setup_helpers/cmake.py | 6 ++++++
@@ -1,7 +1,7 @@
From d1826af525db41eda5020a1404f5d5521d67a5dc Mon Sep 17 00:00:00 2001
From: Jeongseok Lee <jeongseok@meta.com>
Date: Sat, 19 Oct 2024 04:26:01 +0000
Subject: [PATCH 03/19] Add USE_SYSTEM_NVTX option (#138287)
Subject: [PATCH 03/21] Add USE_SYSTEM_NVTX option (#138287)

## Summary

2 changes: 1 addition & 1 deletion recipe/patches/0004-Update-sympy-version.patch
@@ -1,7 +1,7 @@
From e3219c5fe8834753b0cf9e92be4d1ef1e874f370 Mon Sep 17 00:00:00 2001
From: Jeongseok Lee <jeongseok@meta.com>
Date: Thu, 17 Oct 2024 15:04:05 -0700
Subject: [PATCH 04/19] Update sympy version
Subject: [PATCH 04/21] Update sympy version

---
setup.py | 2 +-
2 changes: 1 addition & 1 deletion recipe/patches/0005-Fix-duplicate-linker-script.patch
@@ -1,7 +1,7 @@
From 08a1f44fbc81324aa98d720dfb7b87a261923ac2 Mon Sep 17 00:00:00 2001
From: Jeongseok Lee <jeongseok@meta.com>
Date: Sun, 3 Nov 2024 01:12:36 -0700
Subject: [PATCH 05/19] Fix duplicate linker script
Subject: [PATCH 05/21] Fix duplicate linker script

---
setup.py | 4 +++-
@@ -1,7 +1,7 @@
From 15df314a41c69a31c0443254d5552aa1b39d708d Mon Sep 17 00:00:00 2001
From: William Wen <williamwen@meta.com>
Date: Fri, 13 Sep 2024 13:02:33 -0700
Subject: [PATCH 06/19] fix 3.13 pickle error in serialization.py (#136034)
Subject: [PATCH 06/21] fix 3.13 pickle error in serialization.py (#136034)

Error encountered when adding dynamo 3.13 support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136034
@@ -1,7 +1,7 @@
From 655f694854c3eafdd631235b60bc6c1b279218ed Mon Sep 17 00:00:00 2001
From: Mark Harfouche <mark.harfouche@gmail.com>
Date: Thu, 3 Oct 2024 22:49:56 -0400
Subject: [PATCH 07/19] Allow users to overwrite ld with environment variables
Subject: [PATCH 07/21] Allow users to overwrite ld with environment variables

This should help in the case of cross compilation.

@@ -1,7 +1,7 @@
From f03bf82d9da9cccb2cf4d4833c1a6349622dc37d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Micha=C5=82=20G=C3=B3rny?= <mgorny@gentoo.org>
Date: Wed, 27 Nov 2024 13:47:23 +0100
Subject: [PATCH 08/19] Allow overriding CUDA-related paths
Subject: [PATCH 08/21] Allow overriding CUDA-related paths

---
cmake/Modules/FindCUDAToolkit.cmake | 2 +-
@@ -1,7 +1,7 @@
From 4b1faf6ba142953ce2730766db44f8d98d161ef0 Mon Sep 17 00:00:00 2001
From: Haifeng Jin <haifeng-jin@users.noreply.github.com>
Date: Tue, 1 Oct 2024 07:53:24 +0000
Subject: [PATCH 09/19] Fix test/test_linalg.py for NumPy 2 (#136800)
Subject: [PATCH 09/21] Fix test/test_linalg.py for NumPy 2 (#136800)

Related to #107302.

@@ -1,7 +1,7 @@
From 032b9be9ca7f9ae174e75554cecc82600ea3ef54 Mon Sep 17 00:00:00 2001
From: Haifeng Jin <haifeng-jin@users.noreply.github.com>
Date: Sat, 12 Oct 2024 02:40:17 +0000
Subject: [PATCH 10/19] Fixes NumPy 2 test failures in test_torch.py (#137740)
Subject: [PATCH 10/21] Fixes NumPy 2 test failures in test_torch.py (#137740)

Related to #107302

@@ -1,7 +1,7 @@
From 56f1528fa072023fb2724d5abf8790f2f6cc3aaa Mon Sep 17 00:00:00 2001
From: Isuru Fernando <ifernando@quansight.com>
Date: Wed, 18 Dec 2024 03:59:00 +0000
Subject: [PATCH 11/19] Use BLAS_USE_CBLAS_DOT for OpenBLAS builds
Subject: [PATCH 11/21] Use BLAS_USE_CBLAS_DOT for OpenBLAS builds

There are two calling conventions for *dotu functions

2 changes: 1 addition & 1 deletion recipe/patches/0012-fix-issue-142484.patch
@@ -1,7 +1,7 @@
From beba58d724cc1bd7ca73660b0a5ad9e61ae0c562 Mon Sep 17 00:00:00 2001
From: "Zheng, Zhaoqiong" <zhaoqiong.zheng@intel.com>
Date: Fri, 27 Dec 2024 13:49:36 +0800
Subject: [PATCH 12/19] fix issue 142484
Subject: [PATCH 12/21] fix issue 142484

From https://github.com/pytorch/pytorch/pull/143894
---
2 changes: 1 addition & 1 deletion recipe/patches/0013-Fix-FindOpenBLAS.patch
@@ -1,7 +1,7 @@
From 816a248a4425a97350959e412666e6db9012a52e Mon Sep 17 00:00:00 2001
From: Bas Zalmstra <bas@prefix.dev>
Date: Thu, 16 May 2024 10:46:49 +0200
Subject: [PATCH 13/19] Fix FindOpenBLAS
Subject: [PATCH 13/21] Fix FindOpenBLAS

---
cmake/Modules/FindOpenBLAS.cmake | 15 +++++++++------
@@ -1,7 +1,7 @@
From db896f927403f55a18f931b18a6469cb4e37d322 Mon Sep 17 00:00:00 2001
From: atalman <atalman@fb.com>
Date: Tue, 12 Nov 2024 12:28:10 +0000
Subject: [PATCH 14/19] CD Enable Python 3.13 on windows (#138095)
Subject: [PATCH 14/21] CD Enable Python 3.13 on windows (#138095)

Adding CD windows. Part of: https://github.com/pytorch/pytorch/issues/130249
Builder PR landed with smoke test: https://github.com/pytorch/builder/pull/2035
@@ -1,7 +1,7 @@
From 33790dfbf966e7d8ea4ff6798d2ff92474d84079 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <h.vetinari@gmx.com>
Date: Thu, 23 Jan 2025 22:46:58 +1100
Subject: [PATCH 15/19] simplify torch.utils.cpp_extension.include_paths; use
Subject: [PATCH 15/21] simplify torch.utils.cpp_extension.include_paths; use
it in cpp_builder

The /TH headers have not existed since pytorch 1.11
@@ -1,7 +1,7 @@
From 799f6fa59dac93dabbbcf72d46f4e1334e3d65d9 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <h.vetinari@gmx.com>
Date: Thu, 23 Jan 2025 22:58:14 +1100
Subject: [PATCH 16/19] point include paths to $PREFIX/include
Subject: [PATCH 16/21] point include paths to $PREFIX/include

---
torch/utils/cpp_extension.py | 9 +++++++++
@@ -1,7 +1,7 @@
From 9f73a02bacf9680833ac64657fde6762d33ab200 Mon Sep 17 00:00:00 2001
From: Daniel Petry <dpetry@anaconda.com>
Date: Tue, 21 Jan 2025 17:45:23 -0600
Subject: [PATCH 17/19] Add conda prefix to inductor include paths
Subject: [PATCH 17/21] Add conda prefix to inductor include paths

Currently inductor doesn't look in conda's includes and libs. This results in
errors when it tries to compile, if system versions are being used of
@@ -1,7 +1,7 @@
From b0cfa0f728e96a3a9d6f7434e2c02d74d6daa9a9 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <h.vetinari@gmx.com>
Date: Tue, 28 Jan 2025 14:15:34 +1100
Subject: [PATCH 18/19] make ATEN_INCLUDE_DIR relative to TORCH_INSTALL_PREFIX
Subject: [PATCH 18/21] make ATEN_INCLUDE_DIR relative to TORCH_INSTALL_PREFIX

we cannot set CMAKE_INSTALL_PREFIX without the pytorch build complaining, but we can
use TORCH_INSTALL_PREFIX, which is set correctly relative to our CMake files already:
@@ -1,7 +1,7 @@
From f7db4cbfb0af59027ed8bdcd0387dba6fbcb1192 Mon Sep 17 00:00:00 2001
From: "H. Vetinari" <h.vetinari@gmx.com>
Date: Tue, 28 Jan 2025 10:58:29 +1100
Subject: [PATCH 19/19] remove `DESTINATION lib` from CMake `install(TARGETS`
Subject: [PATCH 19/21] remove `DESTINATION lib` from CMake `install(TARGETS`
directives

Suggested-By: Silvio Traversaro <silvio@traversaro.it>