Redo torchscript example #7889
Merged
@@ -0,0 +1,134 @@
"""
===================
Torchscript support
===================

.. note::
    Try on `colab <https://colab.research.google.com/github/pytorch/vision/blob/gh-pages/main/_generated_ipynb_notebooks/plot_torchscript_support.ipynb>`_
    or :ref:`go to the end <sphx_glr_download_auto_examples_transforms_plot_torchscript_support.py>` to download the full example code.

This example illustrates `torchscript
<https://pytorch.org/docs/stable/jit.html>`_ support of the torchvision
:ref:`transforms <transforms>` on Tensor images.
"""

# %%
from pathlib import Path

import matplotlib.pyplot as plt

import torch
import torch.nn as nn

import torchvision.transforms as v1
from torchvision.io import read_image

plt.rcParams["savefig.bbox"] = 'tight'
torch.manual_seed(1)

# If you're trying to run this on colab, you can download the assets and the
# helpers from https://github.com/pytorch/vision/tree/main/gallery/
from helpers import plot
ASSETS_PATH = Path('../assets')


# %%
# Most transforms support torchscript. For composing transforms, we use
# :class:`torch.nn.Sequential` instead of
# :class:`~torchvision.transforms.v2.Compose`:

dog1 = read_image(str(ASSETS_PATH / 'dog1.jpg'))
dog2 = read_image(str(ASSETS_PATH / 'dog2.jpg'))

transforms = torch.nn.Sequential(
    v1.RandomCrop(224),
    v1.RandomHorizontalFlip(p=0.3),
)

scripted_transforms = torch.jit.script(transforms)

plot([dog1, scripted_transforms(dog1), dog2, scripted_transforms(dog2)])
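The composition-and-scripting pattern above can be checked without the image assets. The sketch below uses stand-in modules (`ReLU`/`Flatten` are hypothetical substitutes for the torchvision transforms, chosen only so the snippet runs with plain PyTorch):

```python
import torch
import torch.nn as nn

# An nn.Sequential of scriptable modules can be scripted as a single unit,
# which is why the tutorial uses it instead of Compose.
pipeline = nn.Sequential(
    nn.ReLU(),
    nn.Flatten(),
)
scripted_pipeline = torch.jit.script(pipeline)

x = torch.randn(2, 3, 4)
# The scripted pipeline produces the same output as the eager one.
assert torch.equal(scripted_pipeline(x), pipeline(x))
assert scripted_pipeline(x).shape == (2, 12)
```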
# %%
# .. warning::
#
#     Above we have used transforms from the ``torchvision.transforms``
#     namespace, i.e. the "v1" transforms. The v2 transforms from the
#     ``torchvision.transforms.v2`` namespace are the :ref:`recommended
#     <v1_or_v2>` way to use transforms in your code.
#
#     The v2 transforms also support torchscript, but if you call
#     ``torch.jit.script()`` on a v2 **class** transform, you'll actually end up
#     with its (scripted) v1 equivalent. This may lead to slightly different
#     results between the scripted and eager executions due to implementation
#     differences between v1 and v2.
#
#     If you really need torchscript support for the v2 transforms, **we
#     recommend scripting the functionals** from the
#     ``torchvision.transforms.v2.functional`` namespace to avoid surprises.
#
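Scripting a functional means scripting a plain function, which is the pattern the recommendation above relies on. A minimal sketch with a hand-rolled center crop standing in for a `torchvision.transforms.v2.functional` call (the `center_crop` helper below is hypothetical, not part of torchvision):

```python
import torch

# Hypothetical stand-in for a v2 functional: a plain tensor function.
# Scripting a function like this involves no class-to-v1 fallback.
def center_crop(img: torch.Tensor, size: int) -> torch.Tensor:
    h, w = img.shape[-2], img.shape[-1]
    top = (h - size) // 2
    left = (w - size) // 2
    return img.narrow(-2, top, size).narrow(-1, left, size)

scripted_crop = torch.jit.script(center_crop)

img = torch.rand(3, 256, 256)
# Scripted and eager calls agree exactly.
assert scripted_crop(img, 224).shape == (3, 224, 224)
assert torch.equal(scripted_crop(img, 224), center_crop(img, 224))
```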
# Below we now show how to combine image transformations and a model forward
# pass, while using ``torch.jit.script`` to obtain a single scripted module.
#
# Let's define a ``Predictor`` module that transforms the input tensor and then
# applies an ImageNet model on it.

from torchvision.models import resnet18, ResNet18_Weights


class Predictor(nn.Module):

    def __init__(self):
        super().__init__()
        weights = ResNet18_Weights.DEFAULT
        self.resnet18 = resnet18(weights=weights, progress=False).eval()
        self.transforms = weights.transforms(antialias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            x = self.transforms(x)
            y_pred = self.resnet18(x)
            return y_pred.argmax(dim=1)


# %%
# Now, let's define scripted and non-scripted instances of ``Predictor`` and
# apply them to multiple tensor images of the same size.

device = "cuda" if torch.cuda.is_available() else "cpu"

predictor = Predictor().to(device)
scripted_predictor = torch.jit.script(predictor).to(device)

batch = torch.stack([dog1, dog2]).to(device)

res = predictor(batch)
res_scripted = scripted_predictor(batch)

# %%
# We can verify that the predictions of the scripted and non-scripted models
# are the same:

import json

with open(ASSETS_PATH / 'imagenet_class_index.json') as labels_file:
    labels = json.load(labels_file)

for i, (pred, pred_scripted) in enumerate(zip(res, res_scripted)):
    assert pred == pred_scripted
    print(f"Prediction for Dog {i + 1}: {labels[str(pred.item())]}")

# %%
# Since the model is scripted, it can easily be dumped on disk and re-used.

import tempfile

with tempfile.NamedTemporaryFile() as f:
    scripted_predictor.save(f.name)

    dumped_scripted_predictor = torch.jit.load(f.name)
    res_scripted_dumped = dumped_scripted_predictor(batch)
assert (res_scripted_dumped == res_scripted).all()

# %%
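The temporary-file round-trip above also works with in-memory buffers, since `torch.jit.save` and `torch.jit.load` accept file-like objects. A minimal sketch with a toy module (the `Doubler` module is hypothetical, standing in for the `Predictor` above):

```python
import io

import torch
import torch.nn as nn


class Doubler(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2


scripted = torch.jit.script(Doubler())

# Serialize to an in-memory buffer instead of a file on disk.
buffer = io.BytesIO()
torch.jit.save(scripted, buffer)
buffer.seek(0)

# Reload and check the round-trip reproduces the original outputs.
reloaded = torch.jit.load(buffer)
x = torch.arange(4.0)
assert torch.equal(reloaded(x), scripted(x))
```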
@@ -172,6 +172,7 @@
# Re-using the transforms and definitions from above.
out_img, out_target = transforms(img, target)

# sphinx_gallery_thumbnail_number = 4

plot([(img, target["boxes"]), (out_img, out_target["boxes"])])
print(f"{out_target['this_is_ignored']}")

Reviewer comment on the `# sphinx_gallery_thumbnail_number = 4` line: drive-by
Author comment:

This was a quick re-write, and I didn't give it the same attention as the other examples. After 1+ week of writing docs I don't have much mental space left to dedicate to this. In particular, I didn't change anything in the text below, so if you have comments about it, since everything below is pre-existing, I suggest following up in other PRs if needed.