port AA tests #7927
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/7927
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Merge Blocking SEV: there is 1 active merge blocking SEV.
❌ 1 New Failure as of commit a4de6f0 with merge base d0e16b7.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -232,7 +232,7 @@ def _check_transform_v1_compatibility(transform, input, *, rtol, atol):
     """If the transform defines the ``_v1_transform_cls`` attribute, checks if the transform has a public, static
     ``get_params`` method that is the v1 equivalent, the output is close to v1, is scriptable, and the scripted version
     can be called without error."""
-    if type(input) is not torch.Tensor or isinstance(input, PIL.Image.Image):
+    if not (type(input) is torch.Tensor or isinstance(input, PIL.Image.Image)):
This was spotted by @NicolasHug. We only want to test the v1 compatibility for pure tensors and PIL images. Fixing this led to two more changes that I'll flag below.
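To make the behavioral difference concrete, here is a minimal sketch of the old and new guard using stand-in classes (the real code uses `torch.Tensor` and `PIL.Image.Image`; the classes below are hypothetical stand-ins so the logic can be demonstrated in isolation). The only inputs the two conditions disagree on are PIL images, which the old guard wrongly skipped:

```python
# Hypothetical stand-ins for torch.Tensor and PIL.Image.Image.
class Tensor: ...
class TensorSubclass(Tensor): ...   # plays the role of a tv_tensors subclass
class Image: ...                    # plays the role of PIL.Image.Image

def old_condition(input):
    # Buggy guard from before the fix: True means "skip the v1 check".
    return type(input) is not Tensor or isinstance(input, Image)

def new_condition(input):
    # Fixed guard: skip only if input is neither a pure Tensor nor an Image.
    return not (type(input) is Tensor or isinstance(input, Image))

# Pure tensors: both guards run the v1 check.
assert old_condition(Tensor()) is False and new_condition(Tensor()) is False
# PIL images: the old guard wrongly skipped them; the new one checks them.
assert old_condition(Image()) is True and new_condition(Image()) is False
# Tensor subclasses: both guards skip, as intended.
assert old_condition(TensorSubclass()) is True and new_condition(TensorSubclass()) is True
```

That PIL-image difference is exactly what triggers the two follow-up changes flagged below.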
input = make_input(device=device)
check_transform(
    transforms.RandomErasing(p=1), input, check_v1_compatibility=not isinstance(input, PIL.Image.Image)
)
`RandomErasing` in v1 does not support PIL images.
# here and only check if we can script the v2 transform and subsequently call the result.
check_transform(transform, input, check_v1_compatibility=False)
if type(input) is torch.Tensor and dtype is torch.uint8:
The `and dtype is torch.uint8` is specific to AA, since some ops did not support uint8 in v1 (scripting and eager). This is no longer an issue in v2.
Co-authored-by: Nicolas Hug <contact@nicolas-hug.com>
LGTM when green, thanks Philip
Hey @pmeier! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Summary: Co-authored-by: Nicolas Hug <contact@nicolas-hug.com> Reviewed By: matteobettini Differential Revision: D49600791 fbshipit-source-id: abf058e28a949717be7ad343e5417fca098d4078
Follow up to #7839 (comment).
Some questions that are up for discussion:
Since we have changed the random sampling in v2, we cannot easily compare the AA transforms to their v1 equivalents. It is doable, but leads to complicated monkeypatching / mocking, e.g. vision/test/test_transforms_v2_consistency.py, lines 729 to 743 in fb115c2.
IMO, it was a good thing that we implemented this thorough check while we were actively working on implementing the AA transforms for v2, to avoid accidental breakages. However, now that they are ported and checked, we can probably drop it. We do not guarantee BC for randomly sampled parameters, and thus these tests offer little value. Worse, they might break in the future if we change something about the random sampling.
The checks for the v1 transforms call the transform multiple times: vision/test/test_transforms.py, lines 1570 to 1571 in fb115c2, and vision/test/test_transforms_tensor.py, lines 590 to 592 in fb115c2. I presume this is to get better coverage of the randomly sampled ops. However, this also leads to quite long test times, e.g.
Since we have a frozen RNG seed, different parametrizations will not increase coverage of the sampled ops. One option could be to reproducibly set the seed based on the parametrization, i.e. `seed = hash(("interpolation", InterpolationMode.NEAREST))`. Since we run multiple parametrizations for each transform, that should give us good coverage without the need to run the transform multiple times for each parametrization. Thoughts?
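A minimal sketch of how such per-parametrization seeding could look (`seed_from_parametrization` is a hypothetical helper, not actual vision test code). One caveat with the built-in `hash()`: it is salted per process for strings (`PYTHONHASHSEED`), so a stable digest such as `zlib.crc32` keeps the seed identical across test runs:

```python
import zlib

def seed_from_parametrization(*params):
    # repr() gives a deterministic string for simple parametrization values;
    # crc32 turns it into a stable 32-bit seed, unlike the salted built-in hash().
    key = repr(params).encode()
    return zlib.crc32(key)

s1 = seed_from_parametrization("interpolation", "NEAREST")
s2 = seed_from_parametrization("interpolation", "BILINEAR")
assert s1 == seed_from_parametrization("interpolation", "NEAREST")  # stable across calls
assert s1 != s2  # different parametrizations get different RNG streams
assert 0 <= s1 < 2**32  # fits the range torch.manual_seed accepts
```

A test would then call `torch.manual_seed(seed_from_parametrization(...))` before invoking the transform, so each parametrization reproducibly exercises a different slice of the sampled ops.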
cc @vfdev-5