Fix CMake metadata for CUDA-enabled libtorch #339
Changes from all commits: b6808c2, f52e86c, c2b6ce4, 8622491, 093816a, 847f7b1, 9a36bd4, 32527dc, 2a0827b, 1ee54fb, f35c9aa, 0d86709, fc3fa85, 03cb0fe, 9d80394, e2c551d, 0de45db, 94c000b, e284ed0, 1c23e13, 138456c, bdb9df5, 44782b3, 5ee95f4, 162a7eb
@@ -176,11 +176,9 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
     # all of them.
     export CUDAToolkit_BIN_DIR=${BUILD_PREFIX}/bin
     export CUDAToolkit_ROOT_DIR=${PREFIX}
-    if [[ "${target_platform}" != "${build_platform}" ]]; then
-        export CUDA_TOOLKIT_ROOT=${PREFIX}
-    fi
     # for CUPTI
     export CUDA_TOOLKIT_ROOT_DIR=${PREFIX}
+    export CUDAToolkit_ROOT=${PREFIX}
     case ${target_platform} in
         linux-64)
             export CUDAToolkit_TARGET_DIR=${PREFIX}/targets/x86_64-linux
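For background on the variables being consolidated here: CMake's modern FindCUDAToolkit module honours CUDAToolkit_ROOT, either as a CMake variable or as an environment variable, while CUDA_TOOLKIT_ROOT_DIR is kept, per the in-line comment, for the CUPTI lookup. The sketch below is not part of the recipe; it is a hypothetical, self-contained probe (temporary directory, placeholder project name) one could run inside the build environment to confirm that these hints make CMake resolve the toolkit from ${PREFIX}.

```bash
#!/usr/bin/env bash
# Hypothetical probe, not part of the recipe. Assumes CMake >= 3.17
# (which provides FindCUDAToolkit) and a conda environment in ${PREFIX}
# that ships the CUDA toolkit.
set -euo pipefail

probe="$(mktemp -d)"
cat > "${probe}/CMakeLists.txt" <<'EOF'
cmake_minimum_required(VERSION 3.17)
project(cuda_probe LANGUAGES CXX)
# FindCUDAToolkit honours CUDAToolkit_ROOT (CMake or environment variable).
find_package(CUDAToolkit REQUIRED)
message(STATUS "CUDAToolkit_BIN_DIR    = ${CUDAToolkit_BIN_DIR}")
message(STATUS "CUDAToolkit_TARGET_DIR = ${CUDAToolkit_TARGET_DIR}")
EOF

# Same hints the recipe exports above.
export CUDAToolkit_ROOT="${PREFIX}"        # read by FindCUDAToolkit
export CUDA_TOOLKIT_ROOT_DIR="${PREFIX}"   # kept for CUPTI, per the recipe comment

# Expect the printed directories to live under ${PREFIX}.
cmake -S "${probe}" -B "${probe}/build"
```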
@@ -221,6 +219,8 @@ elif [[ ${cuda_compiler_version} != "None" ]]; then
     export USE_STATIC_CUDNN=0
     export MAGMA_HOME="${PREFIX}"
     export USE_MAGMA=1
+    # turn off noisy nvcc warnings
+    export CUDAFLAGS="-w --ptxas-options=-w"
 else
     if [[ "$target_platform" != *-64 ]]; then
         # Breakpad seems to not work on aarch64 or ppc64le

Comment on lines +222 to +223 (the added CUDAFLAGS lines):

@conda-forge/cuda, we get 10'000s of lines of ptxas advice à la […]

This is really something that (if at all) pytorch should take care of, and we shouldn't spam the logs here, making them harder to navigate and longer to download. However, I haven't had success in turning this off despite already passing […]

How about […]?

Sure, I can try. I went with the first (more canonical-looking) option from the docs, but AFAICT they should be equivalent? I was also wondering if perhaps for some reason […]

Raised an issue: conda-forge/cuda-nvcc-feedstock#60
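For reference on the flags discussed in the thread above: `--ptxas-options=<opts>` and `-Xptxas <opts>` are documented nvcc spellings for forwarding options to ptxas, and nvcc's plain `-w` on its own does not appear to silence the ptxas advisory output, which seems to be why both are passed. In a CMake-driven build, the CUDAFLAGS environment variable is used to initialize CMAKE_CUDA_FLAGS, which is how the exported value reaches the nvcc command lines. A minimal sketch (kernel.cu is a placeholder source file, not something from this feedstock):

```bash
# Sketch only; kernel.cu is a placeholder source.
# Both invocations forward -w to ptxas; --ptxas-options and -Xptxas are
# alternate spellings of the same nvcc option.
nvcc -w --ptxas-options=-w -c kernel.cu -o kernel.o
nvcc -w -Xptxas -w         -c kernel.cu -o kernel.o

# In a CMake build, the same flags can arrive via the environment:
# CUDAFLAGS seeds CMAKE_CUDA_FLAGS when the CUDA language is enabled.
CUDAFLAGS="-w --ptxas-options=-w" cmake -S . -B build
```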
I think it works. The new flags are passed in the "libtorch" build and, according to the diff, that's the only change in the CUDA invocations. I'll know for sure when libtorch recompiles and it starts building pytorch.
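A hypothetical way to reproduce that check, assuming two build logs saved locally under the placeholder names old.log and new.log: extract the nvcc invocations from each and diff them, expecting the new warning flags to be the only difference.

```bash
# Placeholder log names; the real CI artifacts are named differently.
grep 'nvcc' old.log | sort > old_nvcc.txt
grep 'nvcc' new.log | sort > new_nvcc.txt

# Expect the diff to show only the added "-w --ptxas-options=-w" flags.
diff -u old_nvcc.txt new_nvcc.txt
```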