
port FixedSizeCrop from detection references to prototype transforms #6417

Merged

13 commits merged into pytorch:main on Aug 19, 2022

Conversation

pmeier (Collaborator) commented Aug 15, 2022

import torch
from torchvision.prototype import features, transforms


# Load and decode a small 100x100 RGB test image.
image = features.EncodedImage.from_path("test/assets/fakedata/logos/rgb_pytorch.png").decode()

# A single box in center-x, center-y, width, height (CXCYWH) format.
bounding_boxes = features.BoundingBox(
    [[60, 30, 15, 15]],
    format=features.BoundingBoxFormat.CXCYWH,
    image_size=(100, 100),
    dtype=torch.float,
)

# A boolean instance mask roughly covering the box above.
segmentation_masks = torch.zeros((1, 100, 100), dtype=torch.bool)
segmentation_masks[..., 24:36, 55:66] = True
segmentation_masks = features.SegmentationMask(segmentation_masks)

[image: original sample with bounding box and segmentation mask overlaid]

# Bundle the annotations into a detection-style target dict.
target = dict(
    boxes=bounding_boxes,
    masks=segmentation_masks,
)
sample = image, target

# Crop size smaller than the image: each call crops a random 40x40 window.
transform = transforms.FixedSizeCrop(size=(40, 40))

torch.manual_seed(0)

for i in range(5):
    transformed_image, transformed_target = transform(sample)
    transformed_bounding_boxes = transformed_target["boxes"]
    transformed_segmentation_masks = transformed_target["masks"]

[images: the five transformed samples, labeled 0 through 4]

# Crop size larger than the image: the sample is padded up to 150x150.
transform = transforms.FixedSizeCrop(size=(150, 150))

[image: transformed sample 5, padded to 150x150]
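The two cases above (crop when the image is larger than `size`, pad when it is smaller) can be sketched in plain PyTorch. This is a hypothetical re-implementation for illustration only, not the actual `FixedSizeCrop` code; the function name `fixed_size_crop_sketch` and zero-padding on the right/bottom are assumptions.

```python
import torch


def fixed_size_crop_sketch(image: torch.Tensor, size) -> torch.Tensor:
    # Hypothetical sketch: crop a random window of at most `size`,
    # then zero-pad on the right/bottom up to exactly `size`.
    crop_h, crop_w = size
    _, h, w = image.shape
    new_h, new_w = min(crop_h, h), min(crop_w, w)
    top = int(torch.randint(0, h - new_h + 1, ()))
    left = int(torch.randint(0, w - new_w + 1, ()))
    cropped = image[:, top : top + new_h, left : left + new_w]
    out = image.new_zeros((image.shape[0], crop_h, crop_w))
    out[:, :new_h, :new_w] = cropped
    return out


img = torch.rand(3, 100, 100)
print(fixed_size_crop_sketch(img, (40, 40)).shape)    # torch.Size([3, 40, 40])
print(fixed_size_crop_sketch(img, (150, 150)).shape)  # torch.Size([3, 150, 150])
```

The real transform additionally applies the same crop/pad parameters to the boxes and masks in the target, which is what makes the joint-transform version non-trivial.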

pmeier marked this pull request as ready for review August 17, 2022 14:24
datumbox (Contributor) previously approved these changes Aug 17, 2022:

LGTM, thanks!

height=params["height"],
width=params["width"],
)
if isinstance(inpt, (features.Label, features.SegmentationMask)):
Contributor:

Why do we handle masks here like that?

@vfdev-5 do you need this on your PR?

Collaborator:

Yes, we need to apply that on one-hot encoded masks. Added that in my PR.
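The exchange above is about why per-instance masks (and labels) must be filtered together with the boxes. The following is a hypothetical illustration of that coupling, not the PR's actual code: after a crop, some boxes can become degenerate (zero area), and the corresponding masks and labels have to be dropped in lockstep so the three tensors stay aligned.

```python
import torch

# Two instances; the second box is degenerate after a hypothetical crop (XYXY format).
boxes = torch.tensor([[10.0, 10.0, 30.0, 30.0], [0.0, 0.0, 0.0, 0.0]])
masks = torch.zeros(2, 40, 40, dtype=torch.bool)  # one mask per instance
labels = torch.tensor([1, 2])

# Keep only boxes with positive width and height, and apply the same
# validity mask to the per-instance masks and labels.
valid = (boxes[:, 2] > boxes[:, 0]) & (boxes[:, 3] > boxes[:, 1])
boxes, masks, labels = boxes[valid], masks[valid], labels[valid]
print(boxes.shape, masks.shape, labels.shape)
```

For one-hot encoded masks the same indexing applies along the instance dimension, which is presumably why `SegmentationMask` needs the special handling flagged in the review.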

datumbox dismissed their stale review August 17, 2022 14:39: found a potential issue

datumbox (Contributor) left a comment:

LGTM, thanks!

pmeier merged commit 9c3e2bf into pytorch:main Aug 19, 2022
pmeier deleted the fixed-size-crop branch August 19, 2022 07:38
facebook-github-bot pushed a commit that referenced this pull request Aug 25, 2022
…transforms (#6417)

Summary:
* port `FixedSizeCrop` from detection references to prototype transforms

* mypy

* [skip ci] call invalid boxes and corresponding masks and labels

* cherry-pick missing functions from #6401

* fix feature wrapping

* add test

* mypy

* add input type restrictions

* add test for _get_params

* fix input checks

Reviewed By: datumbox

Differential Revision: D39013661

fbshipit-source-id: cd0d4275c1b2b496745cd1e3af5f35eb5b33fda3
4 participants