Add deterministic, pure-Python roi_align implementation #7587
Conversation
See https://dev-discuss.pytorch.org/t/a-pure-python-implementation-of-roi-align-that-looks-just-like-its-cuda-kernel/1266 for discussion and motivation. Signed-off-by: Edward Z. Yang <ezyang@meta.com>
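For context, the pooled value in roi_align is an average of bilinearly interpolated samples taken inside each bin. A minimal plain-Python sketch of that interpolation step, mirroring the out-of-bounds and clamping logic of the CUDA kernel the linked post discusses (`bilinear_interpolate` and the list-of-lists "plane" are illustrative stand-ins, not torchvision API):

```python
def bilinear_interpolate(plane, y, x):
    """Sample a 2-D list `plane` at fractional coords (y, x); zero outside."""
    height, width = len(plane), len(plane[0])
    # Points more than one pixel outside the feature map contribute zero.
    if y < -1.0 or y > height or x < -1.0 or x > width:
        return 0.0
    y, x = max(y, 0.0), max(x, 0.0)
    y_low, x_low = int(y), int(x)
    # Clamp the high neighbor at the border, as the CUDA kernel does.
    if y_low >= height - 1:
        y_high = y_low = height - 1
        y = float(y_low)
    else:
        y_high = y_low + 1
    if x_low >= width - 1:
        x_high = x_low = width - 1
        x = float(x_low)
    else:
        x_high = x_low + 1
    ly, lx = y - y_low, x - x_low
    hy, hx = 1.0 - ly, 1.0 - lx
    # Weighted sum of the four surrounding pixels.
    return (hy * hx * plane[y_low][x_low] + hy * lx * plane[y_low][x_high]
            + ly * hx * plane[y_high][x_low] + ly * lx * plane[y_high][x_high])
```

Sampling at the center of a 2x2 plane returns the mean of its four values, and far out-of-bounds points return zero.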
Thanks @ezyang. The PR looks good; I just have minor questions and comments below. There's nothing critical, so I'll approve now.
I haven't checked the correctness of the new implementation too closely; I'm mostly relying on our tests for that. Out of precaution I parametrized test_forward() over 50 seeds locally and they all passed.
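The seed sweep described above can be approximated with a plain loop (a sketch: `forward_twice_matches` is a hypothetical stand-in for test_forward(), and the arithmetic op is a toy in place of roi_align; determinism here means two runs on identical seeded inputs agree bit-for-bit):

```python
import random

def forward_twice_matches(seed):
    """Generate seeded inputs, run the op twice, and compare exactly."""
    rng = random.Random(seed)
    inputs = [rng.random() for _ in range(16)]
    run1 = [x * 0.5 + 1.0 for x in inputs]  # toy "forward" in place of roi_align
    run2 = [x * 0.5 + 1.0 for x in inputs]
    return run1 == run2

# Parametrize over 50 seeds, as the review describes.
assert all(forward_twice_matches(seed) for seed in range(50))
```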
torchvision/ops/roi_align.py
```python
# https://dev-discuss.pytorch.org/t/a-pure-python-implementation-of-roi-align-that-looks-just-like-its-cuda-kernel/1266
@torch._dynamo.allow_in_graph
def _roi_align(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio, aligned):
    from functorch.dim import dims
```
Any reason to lazy import? Isn't the functorch namespace always available now?
It is supposed to always be available, but @zou3519 mentioned to me that some xplat builds don't have a working torchdims build. In any case, there is not much harm in making it lazy like this, so I went ahead and did it this way.
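The lazy-import pattern under discussion can be sketched in isolation (`optional_import` is an illustrative helper, not torchvision code): the dependency is resolved at call time rather than module-import time, so environments without it can still import the enclosing module.

```python
import importlib

def optional_import(name):
    """Resolve a dependency at call time instead of module-import time.

    Callers that never invoke the code path needing `name` never pay for
    (or trip over) the import on builds where it is unavailable.
    """
    try:
        return importlib.import_module(name)
    except ImportError:
        return None
```

With this pattern, `optional_import("functorch.dim")` would simply return None on a build without working torchdims instead of failing at import.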
A better reason that I remembered just now is that import functorch.dim monkey-patches torch, and we really do not want to monkey-patch torch by default.
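To illustrate why an import with side effects is deferred: a module executed at import time can silently rebind attributes on another module, changing behavior for every caller. A toy simulation with a fake namespace standing in for torch (the names here are illustrative only):

```python
import types

# A stand-in "torch" namespace; the real import of functorch.dim rebinds
# attributes on the actual torch module as a side effect of importing.
fake_torch = types.SimpleNamespace(Tensor=type("Tensor", (), {}))

def side_effecting_import():
    # What an import-time monkey-patch amounts to: rebinding an attribute
    # on another module, behind the backs of all existing callers.
    fake_torch.Tensor.__getitem__ = lambda self, idx: ("patched", idx)

assert getattr(fake_torch.Tensor, "__getitem__", None) is None  # before
side_effecting_import()
t = fake_torch.Tensor()
assert t[3] == ("patched", 3)  # indexing behavior changed globally
```

Making the import lazy confines this side effect to the code path that actually needs first-class dims.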
For safety purposes, I hand-inserted all of the expands necessary to remove the first-class dims implementation. We should still keep the first-class dims version around for documentation purposes though.
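The trade-off being described can be sketched with plain nested lists standing in for tensors (the torch version would use unsqueeze/expand): where first-class dims let a pointwise product over named dims broadcast implicitly, the expand-based rewrite spells out each replication by hand.

```python
def expand_outer_product(xs, ys):
    """Outer product via explicit 'expands': replicate xs along a new last
    dim and ys along a new first dim, then multiply pointwise."""
    n, m = len(xs), len(ys)
    xs_expanded = [[xs[i]] * m for i in range(n)]  # (n,) -> (n, m)
    ys_expanded = [list(ys) for _ in range(n)]     # (m,) -> (n, m)
    return [[xs_expanded[i][j] * ys_expanded[i][j] for j in range(m)]
            for i in range(n)]
```

The explicit version is more verbose but depends on nothing beyond ordinary indexing, which is the safety argument for hand-inserting the expands.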
Hey @ezyang! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Summary: Signed-off-by: Edward Z. Yang <ezyang@meta.com> Reviewed By: vmoens Differential Revision: D45903818 fbshipit-source-id: 4d80da89da5e149f64c5fde2d31bf9c490232b91