[ready for review] torch.btrifact for tensors with greater than 3 dimensions #14964
Conversation
Why specifically for btrifact and not for btrifact_with_info? - btrifact_with_info explicitly reports whether the LU factorization succeeded or failed. If we raised this error there as well, that information would no longer be returned and would be lost.
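For context, a minimal sketch (my own illustration, not code from the PR) of the distinction being discussed; the input shapes are arbitrary:

```python
import torch

A = torch.randn(5, 3, 3)

# btrifact raises if a factorization in the batch fails ...
A_LU, pivots = torch.btrifact(A)

# ... while btrifact_with_info instead returns a per-matrix `infos` tensor,
# so callers can inspect which factorizations failed (nonzero entries).
A_LU, pivots, infos = torch.btrifact_with_info(A)
failed = (infos != 0).nonzero()
```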
The CUDA test failures are due to MAGMA not being able to capture errors in the non-pivoting mode.
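For reference, a small sketch of the non-pivoting mode mentioned here (my own illustration; it assumes, per the docs note added in this PR, that `pivot=False` is only available on CUDA, where the factorization is backed by MAGMA):

```python
import torch

if torch.cuda.is_available():
    A = torch.randn(4, 3, 3, device="cuda")
    # Non-pivoting batched LU; MAGMA does not reliably report failures
    # (e.g. singular inputs) in this mode, which is what causes the
    # CUDA test failures mentioned above.
    A_LU, pivots = torch.btrifact(A, pivot=False)
```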
Can we delete the old TH btrifact implementations please? Thank you! :)
@vishwakftw just checking, this is still blocked by the MAGMA problem, right?
Yes, this is still blocked. MAGMA v2.5.0 solves these issues.
@pytorchbot rebase this please
I saw Soumith updated the MAGMA version. Do you need anything else done for this PR? I see some tests are still failing.
There is one issue which is sadly a no-fix: MAGMA 2.5.0 doesn't seem to build for CUDA 8, and it is because of this that the CUDA 8 tests are failing. The ROCm failures look weird to me (we might consider disabling those tests for ROCm alone). Soumith told me that it would be possible for him to update the MAGMA binaries for CUDA 9.2, though.
Can we skip these tests on CUDA 8 and CUDA 9.2 for now? It'll take me a few days to update MAGMA 2.5.0 for CUDA 9.2, and CUDA 8 is not happening (MAGMA 2.5 is non-trivially non-compilable on CUDA 8).
Sure, I'll push a commit to skip the test based on CUDA version.
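A minimal sketch of what such a skip could look like, assuming the condition is keyed off `torch.version.cuda` (the test name and exact version strings here are illustrative; the PyTorch test suite has its own skip helpers):

```python
import unittest
import torch

CUDA_VERSION = torch.version.cuda  # e.g. "9.0"; None for CPU-only builds

class TestBtrifact(unittest.TestCase):
    @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
    # Skip on CUDA 8.0 and 9.2, where MAGMA 2.5.0 binaries are not available.
    @unittest.skipIf(CUDA_VERSION in ("8.0", "9.2"),
                     "MAGMA 2.5.0 is not available for this CUDA version")
    def test_btrifact_batched_cuda(self):
        A = torch.randn(2, 3, 4, 4, device="cuda")
        A_LU, pivots = torch.btrifact(A)
        self.assertEqual(A_LU.shape, A.shape)
```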
@pytorchbot rebase this please
Builds have failed due to 3aeb780. I'll fix them by the afternoon.
@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary:

Motivation:
- Earlier, `torch.btrifact` could not handle tensors with greater than 3 dimensions. This is because of the check:
> AT_CHECK(THTensor_(nDimension)(a) == 3, "expected 3D tensor, got size: ", a->sizes());

What is in this PR?:
- Move `btrifact` to ATen
- Remove relation to TH/THC
- Handle tensors with more than three dimensions
- Tests
- Docs modifications: added a note about the non-pivoting variant

[blocked due to old magma-cuda binaries]

Pull Request resolved: pytorch/pytorch#14964
Differential Revision: D14405106
Pulled By: soumith
fbshipit-source-id: f051f5d6aaa45f85836a2867176c065733563184
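As a usage illustration of the change described in the summary (shapes chosen arbitrarily), inputs with more than one batch dimension are accepted after this PR; note that `torch.btrifact` was later renamed to `torch.lu` in newer PyTorch releases:

```python
import torch

# A 2 x 4 batch of independent 3x3 matrices, i.e. a 4-dimensional input,
# which the old TH implementation rejected ("expected 3D tensor").
A = torch.randn(2, 4, 3, 3)
A_LU, pivots = torch.btrifact(A)   # LU factors and pivot indices per matrix
print(A_LU.shape, pivots.shape)    # torch.Size([2, 4, 3, 3]) torch.Size([2, 4, 3])
```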
* upstream/master: (87 commits)
  - Make Variable::set_data non-const; cosmetic fixes.
  - remove warning for upsample code (pytorch#17921)
  - Optimize TileOp (pytorch#17290)
  - Optimize channel_stats_op (pytorch#16243)
  - enable shape inference for elementwise operators (pytorch#17885)
  - Remove remaining test jit expects redux (pytorch#17924)
  - Handle Scalars Better (pytorch#17875)
  - Fixed a formatting issue in doc comments (pytorch#17505)
  - Add nbytes, itemsize, element_size to at::Tensor. (pytorch#17810)
  - Fix lint in test_distributions.py
  - Fix lint in test_jit.py
  - Fix lint errors in test_autograd
  - Added a few extra python bindings to help with walking the IR graph from Python (pytorch#17822)
  - kthvalue consistency with sort in the presence of NaN (pytorch#17824)
  - Fix minor grammatical mistakes in torch/nn/modules/loss.py (pytorch#17892)
  - Remove (almost all) TensorOptions from native_functions.yaml (pytorch#17385)
  - Restore full Windows tests (pytorch#17102)
  - Prevent VS2017 from emitting ambiguous symbol errors (second time)
  - Fix windows test hang (pytorch#17778)
  - torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
  - ...