
fix indexing for more than 65535 elems in non-indexed first dim #23123

Closed

Conversation

@ngimel (Collaborator) commented Jul 20, 2019:

Fixes #22843, also adds test from #23102
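For background on the bug: CUDA caps gridDim.y and gridDim.z at 65535 blocks, so a kernel that assigns one block per row of the non-indexed first dimension silently leaves rows past 65535 unprocessed, which matches the zero gradients reported in #22843. The usual remedy is a grid-stride loop over that dimension. The sketch below is a plain-Python model of that pattern, not the actual PyTorch kernel code; `visited_rows` and its parameters are hypothetical names for illustration.

```python
CUDA_MAX_GRID_Y = 65535  # hardware limit on gridDim.y (and gridDim.z)

def visited_rows(num_rows, grid_y, stride_loop):
    """Count how many rows a launch touches.

    Each block index y along gridDim.y starts at row y. Without a
    stride loop it handles only that one row; with a grid-stride
    loop it also handles y + grid_y, y + 2*grid_y, ... < num_rows.
    """
    visited = set()
    for y in range(grid_y):        # block index along gridDim.y
        row = y
        while row < num_rows:
            visited.add(row)
            if not stride_loop:
                break              # one row per block, tail rows skipped
            row += grid_y          # grid-stride to the next chunk of rows
    return len(visited)

rows = 70000                       # > 65535 elems in the non-indexed first dim
grid_y = min(rows, CUDA_MAX_GRID_Y)

print(visited_rows(rows, grid_y, stride_loop=False))  # 65535: tail untouched
print(visited_rows(rows, grid_y, stride_loop=True))   # 70000: all rows covered
```

With the stride loop, every row index below `num_rows` is reached by exactly one block, regardless of how far the dimension exceeds the grid limit.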

@pytorchbot added the module: autograd, module: cuda, and module: operators labels on Jul 20, 2019
@ngimel ngimel requested a review from ezyang July 20, 2019 03:51
@facebook-github-bot (Contributor) commented:

@soumith is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@soumith merged this pull request in 4e5f700.

zdevito pushed a commit to zdevito/ATen that referenced this pull request on Jul 20, 2019:
Summary:
Fixes pytorch/pytorch#22843, also adds test from pytorch/pytorch#23102
Pull Request resolved: pytorch/pytorch#23123

Differential Revision: D16402422

Pulled By: soumith

fbshipit-source-id: aa7a79159ed947be03ce3725ec8abcf5246a60bf
@z-a-f z-a-f mentioned this pull request Jul 20, 2019
Labels: Merged, module: autograd, module: cuda, open source

Successfully merging this pull request may close these issues.

Zero gradients beyond a certain buffer size on CUDA
6 participants