
[ready for review] torch.btrifact for tensors with greater than 3 dimensions #14964

Closed
32 commits

Conversation

vishwakftw (Contributor) commented Dec 9, 2018

Motivation:

  • Earlier, torch.btrifact could not handle tensors with more than 3 dimensions, because of this check:

AT_CHECK(THTensor_(nDimension)(a) == 3, "expected 3D tensor, got size: ", a->sizes());
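The generalization can be illustrated with a stdlib-only sketch (the helper below is hypothetical, not the actual ATen code): to support arbitrary leading batch dimensions, a batched op typically flattens every dimension except the trailing matrix dimensions into a single batch axis, runs the existing 3-D kernel over that flat batch, and restores the original shape afterwards.

```python
# Hypothetical sketch of batch-dimension flattening, assuming the usual
# (..., m, n) layout; this is not the PR's actual implementation.
from functools import reduce
import operator

def flatten_batch_dims(shape):
    """Split (..., m, n) into a flat batch count, the matrix dims,
    and the original batch dims (for restoring the shape later)."""
    *batch, m, n = shape
    flat = reduce(operator.mul, batch, 1)  # product of all leading dims
    return flat, (m, n), tuple(batch)

# A (2, 5, 4, 4) input becomes 10 independent 4x4 factorizations.
flat, mat, batch = flatten_batch_dims((2, 5, 4, 4))
```

With this reshaping trick, the kernel itself only ever sees a single batch dimension, which is why lifting the `== 3` check is enough to support arbitrarily many.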

What is in this PR?:

  • Move btrifact to ATen
  • Remove the dependence on TH/THC.
  • Handle tensors with more than three dimensions
  • Tests
  • Docs modifications: added a note about the non-pivoting variant.

[blocked due to old magma-cuda binaries]

Why specifically for btrifact and not for btrifact_with_info?
- btrifact_with_info explicitly reports whether the LU factorization succeeded or failed. If we raised an error instead, that information would never be returned to the caller, and hence would be lost.
vishwakftw (Contributor, Author)

The CUDA test failures are due to MAGMA not being able to capture errors in the non-pivoting mode.

@vishwakftw vishwakftw changed the title [WIP] torch.btrifact for tensors with greater than 3 dimensions [ready for review] torch.btrifact for tensors with greater than 3 dimensions Dec 16, 2018
ezyang (Contributor) commented Dec 20, 2018

Can we delete the old TH btrifact implementations please? Thank you! :)

zou3519 (Contributor) commented Jan 29, 2019

@vishwakftw just checking, this is still blocked by the magma problem, right?

vishwakftw (Contributor, Author)

Yes, this is still blocked. MAGMA v2.5.0 solves these issues.

Review comment on torch/_torch_docs.py (outdated, resolved)
vishwakftw (Contributor, Author)

@pytorchbot rebase this please

vishwakftw (Contributor, Author)

@pytorchbot rebase this please

vishwakftw (Contributor, Author)

@pytorchbot rebase this please

ezyang (Contributor) commented Mar 6, 2019

I saw Soumith updated the MAGMA version. Do you need anything else done for this PR? I see some tests are still failing.

vishwakftw (Contributor, Author) commented Mar 6, 2019

There is one issue which is sadly a no-fix: MAGMA 2.5.0 doesn't seem to build for CUDA 8, which is why the CUDA 8 tests are failing. The ROCm failures look weird to me (we might consider disabling those tests for ROCm alone).

Soumith told me that it would be possible for him to update the MAGMA binaries for CUDA 9.2 though.

soumith (Member) commented Mar 6, 2019

Can we skip these tests on CUDA 8 and CUDA 9.2 for now? It'll take me a few days to update MAGMA 2.5.0 for CUDA 9.2, and CUDA 8 is not happening (MAGMA 2.5 is non-trivially non-compilable on CUDA 8).

vishwakftw (Contributor, Author)

Sure, I’ll push a commit to skip the test based on CUDA version.
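A minimal sketch of what such a version-gated skip can look like (names like `CUDA_VERSION` and `TestBatchedLU` are illustrative assumptions, not the PR's actual test code):

```python
# Hedged sketch: skip a test when the detected CUDA toolkit is one of the
# versions whose MAGMA binaries are known to be unavailable or broken.
import unittest

CUDA_VERSION = "8.0"               # assumption: detected at runtime in a real suite
BROKEN_MAGMA_CUDA = {"8.0", "9.2"}  # versions without working MAGMA 2.5.0 builds

class TestBatchedLU(unittest.TestCase):
    @unittest.skipIf(CUDA_VERSION in BROKEN_MAGMA_CUDA,
                     "MAGMA 2.5.0 binaries unavailable for this CUDA version")
    def test_btrifact_high_dim(self):
        pass  # would exercise tensors with > 3 dimensions here
```

The test still appears in the suite's output as skipped (with the reason string), rather than silently disappearing, which keeps the gap visible until the binaries are updated.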

@vishwakftw vishwakftw force-pushed the btrifact-more-than-3-dims branch 2 times, most recently from cd9c594 to 04ff21f on March 7, 2019
@vishwakftw vishwakftw force-pushed the btrifact-more-than-3-dims branch from 04ff21f to 006902f on March 9, 2019
soumith (Member) commented Mar 10, 2019

@pytorchbot rebase this please

vishwakftw (Contributor, Author)

Builds have failed due to 3aeb780. I'll fix them by the afternoon.

facebook-github-bot (Contributor) left a comment

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@vishwakftw vishwakftw deleted the btrifact-more-than-3-dims branch March 12, 2019 10:03
zdevito pushed a commit to zdevito/ATen that referenced this pull request Mar 12, 2019
Summary: (same motivation and changes as the PR description above)
Pull Request resolved: pytorch/pytorch#14964

Differential Revision: D14405106

Pulled By: soumith

fbshipit-source-id: f051f5d6aaa45f85836a2867176c065733563184
petrex pushed a commit to petrex/pytorch that referenced this pull request Mar 14, 2019
* upstream/master: (87 commits), including:
  torch.btrifact for tensors with greater than 3 dimensions (pytorch#14964)
7 participants