port RandomHorizontalFlip to prototype API #5563
Conversation
Hi @federicopozzi33! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with `CLA signed`. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
💊 CI failures summary and remediations

As of commit 52dc452 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
I have some doubts.

The first one is about testing: how are the random transforms expected to be tested? I would mock/patch the random draw so that the outcome is deterministic.

The second one is about the existing `HorizontalFlip` transform: it does the same thing `RandomHorizontalFlip` does when the flip is applied. So, the two transforms should probably be merged.
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
@federicopozzi33 Very good points. I think we should definitely look for a better testing strategy for the prototype area. I also agree that the two transforms overlap. @pmeier We shouldn't have both a `HorizontalFlip` and a `RandomHorizontalFlip`.
To add to Vasilis' comments:

Mocking randomness can become really weird really fast. Since our transformations support the `p` parameter, we can instead test deterministically with `p=1` and `p=0`.
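For context, a minimal sketch of that strategy (assuming the prototype transform is exposed as `torchvision.prototype.transforms.RandomHorizontalFlip`; the exact module path may differ):

```python
import torch
from torchvision.prototype import transforms  # assumed module path

image = torch.rand(3, 4, 4)  # CHW image tensor

# p=1.0 always flips and p=0.0 never flips, so both branches are
# deterministic and no mocking of the RNG is needed.
assert torch.equal(transforms.RandomHorizontalFlip(p=1.0)(image), image.flip(-1))
assert torch.equal(transforms.RandomHorizontalFlip(p=0.0)(image), image)
```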
I agree with the sentiment that we should support segmentation masks here. They haven't got any love beyond a few examples yet, so there will probably be more transforms where we could add this support. That being said, I think we need to decide if we want to have a separate kernel for them or not. Right now we have

torchvision/prototype/transforms/functional/_geometry.py, lines 62 to 65 at d8654bb

which is a very thin wrapper. Given that on the "tensor level" images and segmentation masks are basically the same and the distinction only comes on the "feature level", I think we should remove the kernel here and handle this on the transform as proposed by @federicopozzi33.
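To illustrate the "tensor level" point (a sketch, not the library's actual dispatch code): horizontal flipping is the same operation for both feature types, so a single kernel covers them and the distinction can live in the transform.

```python
import torch

def horizontal_flip(inpt: torch.Tensor) -> torch.Tensor:
    # Flipping the last (width) dimension is identical for images and
    # segmentation masks; the distinction only matters on the feature level.
    return inpt.flip(-1)

image = torch.rand(3, 4, 4)          # CHW float image
mask = torch.randint(0, 21, (4, 4))  # integer segmentation mask
flipped_image = horizontal_flip(image)
flipped_mask = horizontal_flip(mask)
```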
Add unit tests for RandomHorizontalFlip
I agree with you. I did as you proposed.
I added some unit tests covering the cases where the transform is always performed (p=1.0). One test case is still missing, the fallback case, because its behavior sounds a little strange to me.

Anyway, before adding quite similar tests for the missing case (p=0), I would appreciate some feedback/advice:

```python
class TestRandomHorizontalFlip:
    def input_tensor(self):
        pass

    def expected_tensor(self):
        pass

    @pytest.mark.parametrize("input, expected", [(self.input_tensor(), self.expected_tensor()), ...])
    def test_p_1(self, input, expected):
        pass
```

Another option might be to use pytest fixtures.
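A sketch of what that fixture option could look like (the transform import and name are assumptions about the prototype API): `@pytest.mark.parametrize` is evaluated when the class body is created, where `self` does not exist yet, whereas fixtures are resolved per test.

```python
import pytest
import torch
from torchvision.prototype import transforms  # assumed module path

class TestRandomHorizontalFlip:
    @pytest.fixture
    def input_tensor(self):
        return torch.tensor([[[0, 1], [0, 1]], [[1, 0], [1, 0]]], dtype=torch.float32)

    @pytest.fixture
    def expected_tensor(self):
        return torch.tensor([[[1, 0], [1, 0]], [[0, 1], [0, 1]]], dtype=torch.float32)

    def test_p_1(self, input_tensor, expected_tensor):
        # With p=1.0 the flip is guaranteed, so the output is deterministic.
        actual = transforms.RandomHorizontalFlip(p=1.0)(input_tensor)
        torch.testing.assert_close(actual, expected_tensor)
```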
Yeah, I agree with you.
The code looks good to me. I'll leave it to @pmeier to do the final review and approve.
Just one quick nit below:
Hey @federicopozzi33, I added some more comments.
test/test_prototype_transforms.py (outdated):

```python
def input_tensor(self, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    return torch.tensor([[[0, 1], [0, 1]], [[1, 0], [1, 0]]], dtype=dtype)

def expected_tensor(self, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    return torch.tensor([[[1, 0], [1, 0]], [[0, 1], [0, 1]]], dtype=dtype)
```
I'm not sure we need specific values here. I think we should be good to have random image inputs and use `torch.flip`. I'm aware that this is the only call in the kernel

torchvision/transforms/functional_tensor.py, lines 126 to 129 at a8bde78:

```python
def hflip(img: Tensor) -> Tensor:
    _assert_image_tensor(img)
    return img.flip(-1)
```

but this is also what I would use to produce an expected output if I didn't know the internals of the kernel. cc @NicolasHug for an opinion
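In code, the suggestion could look like this (a sketch; the transform path and name are assumptions):

```python
import torch
from torchvision.prototype import transforms  # assumed module path

# Random input; torch.flip produces the expected output without relying on
# the kernel's internals.
image = torch.rand(3, 8, 8)
expected = torch.flip(image, dims=(-1,))
torch.testing.assert_close(transforms.RandomHorizontalFlip(p=1.0)(image), expected)
```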
I used specific values to have full control over the creation of the expected result. I preferred not to use `torch.flip`, so as not to "mock" the internals of the transformation.
> cc @NicolasHug for an opinion

No strong opinion on my side for this specific transform. In some cases it's valuable to have a simple implementation of the transform that we're testing: it helps to understand what's going on with simpler code, and we can call it on arbitrary input. In the case of `flip`, the transform is very elementary and doesn't need much explaining anyway.

If we're confident that this hard-coded input covers all of what we might want to test against, then that's fine.
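For illustration, a minimal sketch of such a simple reference implementation (a hypothetical helper, not code from this PR):

```python
import torch

def reference_hflip(img: torch.Tensor) -> torch.Tensor:
    # Deliberately naive: reverse the width dimension via explicit indexing,
    # so the test oracle shares no code with the kernel under test.
    return img[..., torch.arange(img.shape[-1] - 1, -1, -1)]

img = torch.rand(3, 5, 5)
torch.testing.assert_close(reference_hflip(img), img.flip(-1))
```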
Thanks @federicopozzi33 for the continued work on this. I have some more comments on the tests, but otherwise we are good to go!
The code on the transforms LGTM. Once @pmeier's comments on the tests are addressed we are good to merge.
One minor suggestion. Otherwise LGTM! Thanks a lot for the patience @federicopozzi33!
Hey @pmeier! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Summary:
* refactor: port RandomHorizontalFlip to prototype API (#5523)
* refactor: merge HorizontalFlip and RandomHorizontalFlip
  Add unit tests for RandomHorizontalFlip
* test: RandomHorizontalFlip with p=0
* refactor: remove type annotations from tests
* refactor: improve tests
* Update test/test_prototype_transforms.py

Reviewed By: vmoens

Differential Revision: D34878966

fbshipit-source-id: 1cd15f7dd7685a0c759e2c27b0e06a1884aeb90e

Co-authored-by: Federico Pozzi <federico.pozzi@argo.vision>
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
Closes: #5523