port RandomHorizontalFlip to prototype API #5563

Merged

Conversation

federicopozzi33
Contributor

Closes: #5523

@facebook-github-bot

Hi @federicopozzi33!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot

facebook-github-bot commented Mar 7, 2022

💊 CI failures summary and remediations

As of commit 52dc452 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@federicopozzi33 federicopozzi33 changed the title from "refactor: port RandomHorizontalFlip to prototype API (#5523)" to "port RandomHorizontalFlip to prototype API" on Mar 7, 2022
@federicopozzi33
Contributor Author

federicopozzi33 commented Mar 7, 2022

I have some doubts:

The first one is about testing.
I saw only smoke tests for the other transformations, and no specific tests for random transformations in test_prototype_transforms.py.

How are the random transforms expected to be tested?

I would mock/patch torch.rand to control randomness. Another solution could be to add two tests, with p=0 and p=1, and check the expected result.
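
A minimal sketch of the p=0/p=1 approach could look like this (assuming the ported transforms.RandomHorizontalFlip and the prototype features.Image wrapper; the test values are placeholders):

import torch

from torchvision.prototype import features, transforms


def test_p_1():
    # With p=1.0 the flip is always applied.
    image = features.Image(torch.tensor([[[0.0, 1.0]]]))
    expected = torch.tensor([[[1.0, 0.0]]])
    assert torch.equal(transforms.RandomHorizontalFlip(p=1.0)(image), expected)


def test_p_0():
    # With p=0.0 the flip is never applied; the input passes through unchanged.
    image = features.Image(torch.tensor([[[0.0, 1.0]]]))
    assert torch.equal(transforms.RandomHorizontalFlip(p=0.0)(image), image)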

The second one is about the HorizontalFlip transform.

The _transform method does not handle inputs of type features.SegmentationMask.
IMO, the horizontal flip is performed the same way for features.Image and features.SegmentationMask.

So, the _transform method could be:

class HorizontalFlip(Transform):
    def _transform(self, input: Any, params: Dict[str, Any]) -> Any:
        if isinstance(input, features.Image):
            output = F.horizontal_flip_image_tensor(input)
            return features.Image.new_like(input, output)
        elif isinstance(input, features.SegmentationMask):
            # Same kernel: a mask flips exactly like an image at the tensor level.
            output = F.horizontal_flip_image_tensor(input)
            return features.SegmentationMask.new_like(input, output)
        ...

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@datumbox
Contributor

datumbox commented Mar 7, 2022

@federicopozzi33 Very good points. I think we should definitely look for a better testing strategy for the prototype area. I also agree that SegmentationMask should be easily supported. Perhaps we should incorporate your proposal in this PR.

@pmeier We shouldn't have both a HorizontalFlip and a RandomHorizontalFlip. I believe this was previously discussed but missed during ticket creation (worth checking the existing tickets to avoid similar misses). I think the right approach here is to merge @federicopozzi33's RandomHorizontalFlip implementation into the existing HorizontalFlip and just rename the class.

@pmeier
Collaborator

pmeier commented Mar 8, 2022

To add to Vasilis' comments:

I would mock/patch torch.rand to control randomness. Another solution could be to add two tests, with p=0 and p=1, and check the expected result.

Mocking randomness can become really weird really fast. Since our transformations support the p parameter, I would go for the tests you proposed. For good measure we could throw in a smoke test for p not in {0, 1}. Not sure if it is worth it though.

The _transform method does not handle inputs of type features.SegmentationMask.
IMO, the horizontal flip is performed the same way for features.Image and features.SegmentationMask.

I agree with the sentiment that we should support segmentation masks here. They haven't got any love beyond a few examples yet, so there will probably be more transforms where we could add this support.

That being said, I think we need to decide if we want to have a separate kernel for them or not. Right now we have

def resize_segmentation_mask(
    segmentation_mask: torch.Tensor, size: List[int], max_size: Optional[int] = None
) -> torch.Tensor:
    return resize_image_tensor(segmentation_mask, size=size, interpolation=InterpolationMode.NEAREST, max_size=max_size)

which is a very thin wrapper. Given that on the "tensor level" images and segmentation masks are basically the same and the distinction only comes on the "feature level", I think we should remove the kernel here and handle this on the transform as proposed by @federicopozzi33.
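
For concreteness, combining this with the merge suggested above, the result could look roughly like the following sketch (the forward-level p sampling and the base-class plumbing are assumptions, not necessarily the final PR code; _transform would dispatch on the feature types as in the snippet quoted earlier):

from typing import Any

import torch

from torchvision.prototype.transforms import Transform


class RandomHorizontalFlip(Transform):
    def __init__(self, p: float = 0.5) -> None:
        super().__init__()
        self.p = p

    def forward(self, *inputs: Any) -> Any:
        # Sample once per call so that an image and its mask flip together,
        # then fall through to the usual per-input _transform dispatch.
        if torch.rand(1) >= self.p:
            return inputs if len(inputs) > 1 else inputs[0]
        return super().forward(*inputs)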

Add unit tests for RandomHorizontalFlip
@federicopozzi33
Contributor Author

federicopozzi33 commented Mar 8, 2022

@federicopozzi33 Very good points. I think we should definitely look for a better testing strategy for the prototype area. I also agree that SegmentationMask should be easily supported. Perhaps we should incorporate your proposal in this PR.

@pmeier We shouldn't have both a HorizontalFlip and a RandomHorizontalFlip. I believe this was previously discussed but missed during ticket creation (worth checking the existing tickets to avoid similar misses). I think the right approach here is to merge @federicopozzi33's RandomHorizontalFlip implementation into the existing HorizontalFlip and just rename the class.

I agree with you. I did as you proposed.

To add to Vasilis' comments:

I would mock/patch torch.rand to control randomness. Another solution could be to add two tests, with p=0 and p=1, and check the expected result.

Mocking randomness can become really weird really fast. Since our transformations support the p parameter, I would go for the tests you proposed. For good measure we could throw in a smoke test for p not in {0, 1}. Not sure if it is worth it though.

I added some unit tests covering all the cases where the transform is always applied (p=1.0).
IMO, we should create separate test files for each transform, or at least a specific test class, as I did.
I chose to create some very specific helper methods to generate simple data tensors; this should make the tests more readable.

One test case is missing, the fallback case, because the following behavior seems a little strange to me:

actual = transforms.RandomHorizontalFlip(p=1.0)(1.0)
assert actual == 1.0

Anyway, before adding very similar tests for the missing case (p=0), I would appreciate some feedback/advice.
I think the tests can be refactored/improved a bit more, at least to make them more compact. I would have used the parametrize decorator from pytest, but it seems that I can't call instance methods to build the argument sets, like this:

class TestRandomHorizontalFlip:
    def input_tensor(self):
        pass

    def expected_tensor(self):
        pass

    # Fails: `self` does not exist when the decorator arguments are
    # evaluated at class-definition time.
    @pytest.mark.parametrize("input, expected", [(self.input_tensor(), self.expected_tensor()), ...])
    def test_p_1(self, input, expected):
        pass

Another option might be to use fixtures.
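
For example, the helpers could be moved to module level (or turned into fixtures); module-level functions already exist when the decorator arguments are evaluated, so they can build the argument sets directly. A sketch (helper values are illustrative):

import pytest
import torch


def input_tensor(dtype=torch.float32):
    return torch.tensor([[[0, 1], [0, 1]], [[1, 0], [1, 0]]], dtype=dtype)


def expected_tensor(dtype=torch.float32):
    return torch.tensor([[[1, 0], [1, 0]], [[0, 1], [0, 1]]], dtype=dtype)


class TestRandomHorizontalFlip:
    # The decorator can call module-level helpers, unlike instance methods.
    @pytest.mark.parametrize("input, expected", [(input_tensor(), expected_tensor())])
    def test_p_1(self, input, expected):
        pass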

The _transform method does not handle inputs of type features.SegmentationMask.
IMO, the horizontal flip is performed the same way for features.Image and features.SegmentationMask.

I agree with the sentiment that we should support segmentation masks here. They haven't got any love beyond a few examples yet, so there will probably be more transforms where we could add this support.

That being said, I think we need to decide if we want to have a separate kernel for them or not. Right now we have

def resize_segmentation_mask(
    segmentation_mask: torch.Tensor, size: List[int], max_size: Optional[int] = None
) -> torch.Tensor:
    return resize_image_tensor(segmentation_mask, size=size, interpolation=InterpolationMode.NEAREST, max_size=max_size)

which is a very thin wrapper. Given that on the "tensor level" images and segmentation masks are basically the same and the distinction only comes on the "feature level", I think we should remove the kernel here and handle this on the transform as proposed by @federicopozzi33.

Yeah, I agree with you.

Contributor

@datumbox datumbox left a comment

The code looks good to me. I'll leave it to @pmeier to do the final review and approve.

Just one quick nit below:

test/test_prototype_transforms.py (resolved)
@federicopozzi33
Contributor Author

federicopozzi33 commented Mar 10, 2022

@datumbox, @pmeier

Let me add tests for p=0, then you can do the final review and approve it!

@pmeier pmeier self-requested a review March 10, 2022 14:11
Collaborator

@pmeier pmeier left a comment

Hey @federicopozzi33, I added some more comments.

torchvision/prototype/transforms/_geometry.py (outdated, resolved)
torchvision/prototype/transforms/_geometry.py (outdated, resolved)
test/test_prototype_transforms.py (outdated, resolved)
test/test_prototype_transforms.py (outdated, resolved)
Comment on lines 158 to 162
def input_tensor(self, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    return torch.tensor([[[0, 1], [0, 1]], [[1, 0], [1, 0]]], dtype=dtype)

def expected_tensor(self, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    return torch.tensor([[[1, 0], [1, 0]], [[0, 1], [0, 1]]], dtype=dtype)
Collaborator

I'm not sure we need specific values here. I think we should be good to have random image inputs and use torch.flip. I'm aware that this is the only call in the kernel

def hflip(img: Tensor) -> Tensor:
    _assert_image_tensor(img)
    return img.flip(-1)

but this is also what I would use to produce an expected output if I didn't know the internals of the kernel. cc @NicolasHug for an opinion
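
Concretely, such a test could look like this (a sketch; the transform and feature wrapper names are the ones used elsewhere in this PR):

import torch

from torchvision.prototype import features, transforms


def test_random_input():
    # Any random image works: the expected output is derived with torch.flip,
    # which is what one would reach for without knowing the kernel internals.
    image = features.Image(torch.rand(3, 8, 8))
    expected = torch.flip(image, dims=(-1,))

    actual = transforms.RandomHorizontalFlip(p=1.0)(image)

    assert torch.equal(actual, expected)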

Contributor Author

@federicopozzi33 federicopozzi33 Mar 10, 2022

I used specific values to have full control over the creation of the expected result. I preferred not to use torch.flip, so as not to "mock" the internals of the transformation.

Member

cc @NicolasHug for an opinion

No strong opinion on my side for this specific transform. In some cases it's valuable to have a simple reference implementation of the transform we're testing: it helps in understanding what's going on with simpler code, and we can call it on arbitrary input. In the case of flip, the transform is very elementary and doesn't need much explaining anyway.

If we're confident that this hard-coded input covers all of what we might want to test against, then that's fine.

Collaborator

@pmeier pmeier left a comment

Thanks @federicopozzi33 for the continued work on this. I have some more comments on the tests, but otherwise we are good to go!

test/test_prototype_transforms.py (outdated, resolved)
test/test_prototype_transforms.py (outdated, resolved)
test/test_prototype_transforms.py (outdated, resolved)
test/test_prototype_transforms.py (outdated, resolved)
Contributor

@datumbox datumbox left a comment

The code on the transforms LGTM. Once @pmeier's comments on the tests are addressed we are good to merge.

@federicopozzi33 federicopozzi33 requested a review from pmeier March 14, 2022 20:47
Collaborator

@pmeier pmeier left a comment

One minor suggestion. Otherwise LGTM! Thanks a lot for the patience @federicopozzi33!

test/test_prototype_transforms.py (outdated, resolved)
@pmeier pmeier merged commit 7be2f55 into pytorch:main Mar 14, 2022
@github-actions

Hey @pmeier!

You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py

facebook-github-bot pushed a commit that referenced this pull request Mar 15, 2022
Summary:
* refactor: port RandomHorizontalFlip to prototype API (#5523)

* refactor: merge HorizontalFlip and RandomHorizontalFlip

Add unit tests for RandomHorizontalFlip

* test: RandomHorizontalFlip with p=0

* refactor: remove type annotations from tests

* refactor: improve tests

* Update test/test_prototype_transforms.py

Reviewed By: vmoens

Differential Revision: D34878966

fbshipit-source-id: 1cd15f7dd7685a0c759e2c27b0e06a1884aeb90e

Co-authored-by: Federico Pozzi <federico.pozzi@argo.vision>
Co-authored-by: Philip Meier <github.pmeier@posteo.de>
@pmeier pmeier mentioned this pull request Apr 14, 2022