Recipe overhaul: more tests & documentation, various clean-ups #298
Conversation
Upstream keeps all magma-related routines in a separate libtorch_cuda_linalg library that is loaded dynamically whenever linalg functions are used. Given the library is relatively small, splitting it makes it possible to provide "magma" and "nomagma" variants that can be alternated between. Fixes conda-forge#275 Co-authored-by: Isuru Fernando <ifernando@quansight.com>
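As a quick way to see which variant of the split library a given environment ended up with, something like the following works (a hedged sketch; the library path assumes a standard conda layout on Linux):

```bash
# If the installed libtorch_cuda_linalg.so links against libmagma, it is the
# "magma" variant; otherwise it is the "nomagma" one.
ldd "${CONDA_PREFIX}/lib/libtorch_cuda_linalg.so" | grep -i magma \
  || echo "no magma dependency -> nomagma variant"
```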
…nda-forge-pinning 2024.12.04.13.54.14
Try to speed up magma/nomagma builds a bit. Rather than rebuilding the package 3 times (possibly switching magma → nomagma → magma again), build it twice at the very beginning and store the built files for later reuse in subpackage builds. While at it, replace the `pip wheel` calls with `setup.py build` to avoid unnecessarily zipping up and then unpacking the whole thing. In the end, we are only grabbing a handful of files for the `libtorch*` packages, and they are in a predictable location in the build directory. `pip install` is still used for the final `pytorch` builds.
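A minimal sketch of the build-twice-then-reuse pattern described in the commit message; the variable names and layout here are assumptions, not the actual `build_common.sh`:

```bash
# Build both variants up front and stash the build trees for subpackage reuse.
for variant in magma nomagma; do
  if [[ ${variant} == magma ]]; then
    export USE_MAGMA=1   # PyTorch build-system flag toggling the magma backend
  else
    export USE_MAGMA=0
  fi
  python setup.py build                      # build in-tree; no wheel zip/unzip
  cp -a build "${SRC_DIR}/build-${variant}"  # reused later by libtorch* outputs
done
# The final pytorch outputs still run `pip install` on the prepared tree.
```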
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR and found it was in an excellent condition. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/12202823840. Examine the logs at this URL for more detail.
Since this is still WIP, I started only a single job on linux for now (x64+MKL+CUDA).
Thanks. It seems to have failed only because it's adding new outputs. Do you want me to file the admin request for allowing them?
That would be great!
Filed as conda-forge/admin-requests#1209.
The test currently refused to even start, since not all dependencies were satisfied.
Put all the rules in a single file. In the end, build_common.sh has pytorch-conditional code at the very end anyway, and keeping the code split like this only makes it harder to notice mistakes.
recipe/README.md (outdated)

> that are independent of selected Python version and are therefore shared
> by all Python versions.
>
> 2. `libtorch-cuda-linalg` that provides the shared `libtorch_cuda_linalg.so`
is this the preferred one?
as in, is it preferred that users make use of linalg?
I ask because we try very hard to ensure that `mamba install pytorch` installs the best hardware-optimized one.
Not sure I understand the question. `libtorch_cuda_linalg.so` is required for some operations. It is always pulled in by `libtorch` itself, i.e. the end result is that some version of the library is always installed.
As for magma vs. nomagma, I think we ought to prefer magma. #275 (comment) suggests that cusolver will be faster for some workflows, but the magma build supports both, and according to https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library, it has a heuristic to choose the faster backend for a given operation.
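For reference, that heuristic can be queried and overridden at runtime via the documented API (this snippet assumes a CUDA-enabled build):

```bash
python -c "
import torch
# With no argument, returns the current setting (initially 'default',
# i.e. let PyTorch pick magma or cusolver per operation).
print(torch.backends.cuda.preferred_linalg_library())
torch.backends.cuda.preferred_linalg_library('cusolver')  # force cusolver
torch.backends.cuda.preferred_linalg_library('magma')     # force magma
"
```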
So who benefits from making this optional?
Sorry if this is obvious.
This is #298, i.e. people who want to avoid installing the large magma dependency (~250M in package). Given that `libtorch-cuda-linalg` is 200k (nomagma) / 245k (magma), I figured it's worth it.
The no-magma variant reduces the on-disk install size of a pytorch-gpu install from 7 GB to 5 GB (as #275 says). So this is definitely worth doing. Given that cusolver is now on average fairly close in performance to magma, and that the PyTorch wheels don't include magma, I'd argue that the default should be nomagma (I'd certainly almost always want that for my use cases). However, the choice of default is less important than having both options available.
Somehow missed the last ping here. Thanks for the info; you certainly know pytorch much better than I do.
Why do you think that linking libmagma statically will help?
Presumably those 10 pytorch functions do not make use of the entirety of the 2GB libmagma. If they were compiled against a static build, the impact would be much smaller.
One option that's different from upstream is conda-forge/libmagma-feedstock#22
That's even better though! :)
Including this bit from that (merged) PR thread:
Me:
> Did you measure the difference between the two approaches? Would be interested to know how they compare

Isuru:
> 2GB -> 620 MB

xref: conda-forge/libmagma-feedstock#22 (comment)
IOW, a nearly 70% reduction in size. Perhaps it is worth rechecking the analysis given this change? The biggest thing might be something else now.
It looks even better on my machine after @isuruf's compression fix:
```
$ mamba create -n pytorch-gpu-19dec2024 pytorch-gpu
$ cd ~/mambaforge/envs/pytorch-gpu-19dec2024/
$ du -hsx * | sort -rh | head -3
4,1G  lib
1,6G  targets
69M   include
$ du -hsx lib/* | sort -rh | head -10
1,4G  lib/libtorch_cuda.so
432M  lib/libcudnn_engines_precompiled.so.9.3.0
296M  lib/python3.13
293M  lib/libmagma.so
276M  lib/libtorch_cpu.so
252M  lib/libnccl.so.2.23.4
233M  lib/libcudnn_adv.so.9.3.0
104M  lib/libcudnn_ops.so.9.3.0
91M   lib/libmagma_sparse.so
68M   lib/libmkl_core.so.2
$ du -hsx targets/x86_64-linux/lib/* | sort -rh | head -10
469M  targets/x86_64-linux/lib/libcublasLt.so.12.6.4.1
280M  targets/x86_64-linux/lib/libcusparse.so.12.5.4.2
266M  targets/x86_64-linux/lib/libcufft.so.11.3.0.4
146M  targets/x86_64-linux/lib/libcusolver.so.11.7.1.2
104M  targets/x86_64-linux/lib/libcublas.so.12.6.4.1
92M   targets/x86_64-linux/lib/libcurand.so.10.3.7.77
86M   targets/x86_64-linux/lib/libcusolverMg.so.11.7.1.2
56M   targets/x86_64-linux/lib/libnvrtc.so.12.6.85
49M   targets/x86_64-linux/lib/libnvJitLink.so.12.6.85
5,1M  targets/x86_64-linux/lib/libnvrtc-builtins.so.12.6.85
```
So 385 MB left for `libmagma.so` + `libmagma_sparse.so`.
I'll also note that there are two linux-64/`libmagma` packages, and the CUDA 12 one seems to be a lot smaller than the CUDA 11 one. There are no interpretable build strings, but comparing package hashes with the logs from conda-forge/libmagma-feedstock#22 shows that CUDA 11 is the larger one. The analysis above uses the `h7847c38_1` package.
Well, I'm not opposed to reverting the magma parts. Just let me know if I should do it.
> Well, I'm not opposed to reverting the magma parts. Just let me know if I should do it.
For completeness here: Isuru moved this decision conversation to gh-275, which seems like the right thing to do for visibility.
Hi! This is the friendly automated conda-forge-linting service. I was trying to look for recipes to lint for you, but it appears we have a merge conflict. Please try to merge or rebase with the base branch to resolve this conflict. Please ping the 'conda-forge/core' team (using the …)
I've added an explanation in the README, as well as a fix to install …
recipe/meta.yaml (outdated)

```diff
 package:
-  name: libtorch
+  name: libtorch-split
```
Let's not do this unless absolutely necessary. I was okay doing this because of the libmagma builds, but now that they are gone, let's not do this split build and keep the top level as `libtorch`.
Do you prefer me reverting it as more commits, or rebasing the whole thing?
Whichever is easier for you
Ok, I went for more commits, because I wanted to retain full history and attribution, and collapsing the code back seems relatively clean anyway. I'm going to finish testing in a few minutes and push.
@conda-forge/pytorch-cpu, I think this PR is finally done. 😅 Are there any remaining comments/opinions/requests?
```
- test -f $PREFIX/lib/libtorch_python${SHLIB_EXT}  # [unix]

# a reasonably safe subset of tests that should run under 15 minutes
# disable hypothesis because it randomly yields health check errors
```
Have you considered their smoke test? It includes testing of some key features such as torch.compile and is useful for developing this feedstock, because it can expose problems pretty quickly. There's an implementation here.
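For illustration, a minimal analogue of such a smoke test (not the upstream script; assumes PyTorch ≥ 2.0 and a working compiler toolchain for inductor) could look like:

```bash
python - <<'EOF'
import torch

# Compile a trivial function and check it against eager execution.
f = torch.compile(lambda t: torch.relu(t) + 1)
x = torch.randn(8, 8)
assert torch.allclose(f(x), torch.relu(x) + 1)
print("torch.compile smoke test OK")
EOF
```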
AFAIU, the testing here is more comprehensive than the smoke test. But that shouldn't stop us from adding the smoke test in the next PR if it's considered useful (and potentially covers some aspects not covered by the choice of modules here)
Side note, if we're bringing across changes from Anaconda's recipe to here, would you prefer to have each change in a separate PR or all in one PR?
@danpetry: Depends a bit on the quantity of changes, and how well they're separated in terms of commits. But you can start with everything you have in mind, and we can carve out smaller PRs if necessary.
Hmm, that's a lot of test failures for the … But I don't have a GPU here.
turns out it had nothing to do with the order of `-m` & `-k`. This reverts commit a59b008.
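(For the record, pytest's `-m` filters by marker and `-k` by name expression; their relative order on the command line is irrelevant. The marker and test names below are illustrative only.)

```bash
python -m pytest test/test_torch.py -m "not slow" -k "not linalg"
python -m pytest test/test_torch.py -k "not linalg" -m "not slow"  # equivalent
```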
To be honest, I'm a bit worried about how much CI time we're spending on failing tests here. Perhaps we could cancel all test runs except for the one that has failed before? I'm also wondering if we shouldn't just either ignore failures (i.e. restore …)
Pointed out by @isuruf in conda-forge#275 (comment)
Just ran the complete build for … I've also prepared one more change requested by @isuruf in mgorny#3; not merging it yet, since I don't want to restart CI.
Yeah, let's just skip them on GPUs. Especially now that it turns out that the x64 + CUDA build can fail catastrophically: …
On second thought @mgorny, this might actually be due to one of the following commits of yours, which were part of mgorny#2, but mostly tangential to this PR: AFAICT, we haven't had an x64+generic+CUDA job since 98f8a9c that ran and passed the test suite (modulo the misapplied skips). Both times that the python test suite ran on that job (97cb097 & 4fe6a1e), things failed catastrophically. I'm going to roll back those commits (& 0a76034) for now; we can do more focused debugging on the enablement in a separate PR.
(force-pushed from 0a76034 to cc93e41)
If these are green, I would be strongly in favor of just merging this as-is and cleaning up separately. The main concern I have is losing momentum on nitpicks. If the test suite doesn't pass, we should disable the tests and work on a separate PR to address the failing tests, so as to allow the following other projects to move forward:
(In no order of preference from me) I've also updated the top-level description to be more in line with the new goals of the PR.
The so-far-crashing generic + CUDA job has passed this time. Can we merge it now? I don't think there's a point in waiting for the remaining jobs to run. Also, should I open a separate PR to redo the extra deps that have caused trouble here? Possibly one by one, to see which one was responsible.
@mgorny did you have any other pending commits you wanted to get in? I think we should just wait since the builds have "started"; the problems often occur if the builds are "stuck".
No, not at the moment. All checks passed!
Checklist

- Reset the build number to `0` (if the version changed)
- Re-rendered with the latest `conda-smithy` (use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)

Old description until Dec 23, 2024
From the commit message:
Also:
In this PR's implementation, we've chosen to prioritize the magma build for those with GPUs. We have done so by using both `track_features` on nomagma and an increased build number for the magma build, which should help different solvers naturally find the magma build.

Fixes #275
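As a hedged illustration of that mechanism (the exact build strings below are an assumption), the solver picks the magma builds by default, while nomagma remains opt-in:

```bash
mamba install pytorch-gpu                          # resolves to the magma variant
mamba install "libtorch-cuda-linalg=*=*nomagma*"   # explicitly opt into nomagma
```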
Due to the desire to decrease the maintenance burden, and the fact that we were able to reduce the compiled size of the libmagma package, we decided to abandon the original goal of making magma optional.
Instead, this PR has changed its focus to a refactor of the original recipe, adding tests and documentation.
@isuruf helped me a lot with this, particularly with refactoring the builds, so both variants are built in one run.