Add --use-v2 support to classification references #7724
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/7724
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit bb4608c. The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
references/classification/presets.py (Outdated)

    ):
        transforms, _ = get_modules(use_v2)
Not from this PR, but we have:
- `transforms`, the module
- `trans`, the list
- `self.transforms`, the `Compose` attribute

Maybe we should do a bit of clean-up. I'm leaving it out for now but can do it in this PR if that's OK.
I ended up making the cleanup in this PR:
- `transforms` the module -> renamed to `module`
- `trans` the list -> renamed to `transforms`
- `self.transforms` the `Compose` attribute -> kept as `self.transforms`

I also removed the `autoaugment` module: as you suggested below, it's now just `module`.
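For context, here is a minimal sketch of how the preset could read after that renaming. The class name matches the existing `ClassificationPresetTrain`, but the exact transforms, defaults, and options are illustrative assumptions rather than the final diff (the namespace choice presumably goes through the `get_module(use_v2)` helper discussed in this thread; it is inlined here to keep the sketch self-contained):

    import torch
    import torchvision.transforms


    class ClassificationPresetTrain:
        def __init__(self, *, crop_size, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225), use_v2=False):
            # "module" is the transforms namespace (v1 or v2), chosen lazily.
            if use_v2:
                import torchvision.transforms.v2 as module
            else:
                module = torchvision.transforms

            # "transforms" is now the plain list being assembled...
            transforms = [
                module.RandomResizedCrop(crop_size, antialias=True),
                module.RandomHorizontalFlip(),
                module.PILToTensor(),
                module.ConvertImageDtype(torch.float),
                module.Normalize(mean=mean, std=std),
            ]

            # ...and "self.transforms" stays the composed pipeline.
            self.transforms = module.Compose(transforms)

        def __call__(self, img):
            return self.transforms(img)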
references/classification/presets.py (Outdated)

    import torchvision.transforms
    import torchvision.transforms.autoaugment

    return torchvision.transforms, torchvision.transforms.autoaugment
Also falls into the cleanup category, but maybe we can do that here: v1 also exposes all the AA transforms under `torchvision.transforms`, via

    from .autoaugment import *

Meaning, there is no need for two modules here.
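A minimal sketch of that suggestion, assuming the helper ends up as a single-module `get_module(use_v2)` (name taken from the discussion above, not from the final diff); since v1 re-exports the AA transforms, one return value is enough:

    def get_module(use_v2):
        # Lazy imports, so the v2 namespace (and its warning) is only touched
        # when v2 is actually requested.
        if use_v2:
            import torchvision.transforms.v2
            return torchvision.transforms.v2
        else:
            import torchvision.transforms
            return torchvision.transforms

Callers can then write, e.g., `get_module(use_v2).AutoAugment()` without needing a separate autoaugment module.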
references/classification/presets.py (Outdated)

    # We need lazy import to avoid the V2 warning in case just V1 is used
    import torchvision
Just importing the top namespace shouldn't emit a warning, or does it? Plus, since we are not using `torchvision` directly below, but rather importing other namespaces, do we need this line at all?
I'll move it out and check whether we actually need it.
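If it helps, one way to check is to record, in a fresh interpreter, which warnings each import actually emits; this is just a hypothetical verification snippet, not part of the PR:

    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        import torchvision             # top-level namespace
        import torchvision.transforms  # v1 transforms
    print([str(w.message) for w in caught])  # whatever these imports emit

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        import torchvision.transforms.v2  # the import suspected of warning
    print([str(w.message) for w in caught])

Since imports are cached, the snippet only gives a meaningful answer the first time each module is imported in a given process.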
Stamping, since none of my previous comments is blocking.
Thanks Philip, I addressed the comments and I'll merge now and move on to the detection part. I'll still need to properly handle the case of the presets for classification (the ones used by …).
Hey @NicolasHug! You merged this PR, but no labels were added. The list of valid labels is available at https://github.com/pytorch/vision/blob/main/.github/process_commit.py
Reviewed By: matteobettini
Differential Revision: D48642323
fbshipit-source-id: 649230cd31a4c30eb2e483d5c75b8a0f94f66680
I didn't add the `--backend datapoint` option here. It will be needed for detection, but for classification we don't need it. If we want it, I'll add it at the same time as the detection references (it'll be easier to tackle both at once).

Note that using `--use-v2` adds our deprecation warnings (as expected). I feel it's best to leave them rather than suppress them: after all, those references are meant to be copy/pasted/adapted.
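For reference, a rough sketch of how the flag might be wired into the training script. The flag name comes from the PR title, but the argument parser and the preset parameters below are simplified assumptions rather than the actual `train.py`:

    import argparse

    import presets  # references/classification/presets.py


    def get_args_parser():
        parser = argparse.ArgumentParser(description="PyTorch classification reference training (sketch)")
        parser.add_argument("--use-v2", action="store_true", help="use the torchvision.transforms.v2 transforms")
        # ... the real parser defines many more options ...
        return parser


    if __name__ == "__main__":
        args = get_args_parser().parse_args()
        # Thread the flag through to the presets; the v2 warnings that --use-v2
        # triggers are intentionally left visible rather than suppressed.
        train_transform = presets.ClassificationPresetTrain(crop_size=224, use_v2=args.use_v2)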