fix: Update file and reference naming for new API
gs-olive committed Jul 25, 2023
1 parent 9a91fb8 commit 36823e8
Showing 5 changed files with 30 additions and 24 deletions.
6 changes: 3 additions & 3 deletions docsrc/index.rst
@@ -70,9 +70,9 @@ Tutorials

tutorials/serving_torch_tensorrt_with_triton
tutorials/notebooks
-tutorials/_rendered_examples/dynamo/dynamo_compile_resnet_example
-tutorials/_rendered_examples/dynamo/dynamo_compile_transformers_example
-tutorials/_rendered_examples/dynamo/dynamo_compile_advanced_usage
+tutorials/_rendered_examples/dynamo/torch_compile_resnet_example
+tutorials/_rendered_examples/dynamo/torch_compile_transformers_example
+tutorials/_rendered_examples/dynamo/torch_compile_advanced_usage

Python API Documentation
------------------------
6 changes: 3 additions & 3 deletions examples/dynamo/README.rst
@@ -1,11 +1,11 @@
-.. _dynamo_compile:
+.. _torch_compile:

Dynamo / ``torch.compile``
----------------------------

Torch-TensorRT provides a backend for the new ``torch.compile`` API released in PyTorch 2.0. In the following examples we describe
a number of ways you can leverage this backend to accelerate inference.

-* :ref:`dynamo_compile_resnet`: Compiling a ResNet model using the Dynamo Compile Frontend for ``torch_tensorrt.compile``
+* :ref:`torch_compile_resnet`: Compiling a ResNet model using the Torch Compile Frontend for ``torch_tensorrt.compile``
* :ref:`torch_compile_transformer`: Compiling a Transformer model using ``torch.compile``
-* :ref:`dynamo_compile_advanced_usage`: Advanced usage including making a custom backend to use directly with the ``torch.compile`` API
+* :ref:`torch_compile_advanced_usage`: Advanced usage including making a custom backend to use directly with the ``torch.compile`` API
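Taken together, the examples listed above center on the ``torch.compile`` entry point with a Torch-TensorRT backend. A minimal sketch of that call shape follows; the built-in ``"eager"`` backend is used here as a stand-in so the snippet runs without TensorRT installed, whereas the examples pass ``backend="torch_tensorrt"``:

```python
import torch

# A small stand-in model; the examples use ResNet and transformer models
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 4)
).eval()
inputs = [torch.randn(2, 8)]

# backend="eager" is a built-in pass-through backend, used so this sketch
# runs anywhere; with torch_tensorrt installed, backend="torch_tensorrt"
# routes the captured graph through TensorRT instead
optimized_model = torch.compile(model, backend="eager")

with torch.no_grad():
    out = optimized_model(*inputs)
```

The first call to ``optimized_model`` triggers graph capture and backend compilation; subsequent calls with compatible input shapes reuse the compiled artifact.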
@@ -1,7 +1,7 @@
"""
-.. _dynamo_compile_advanced_usage:
+.. _torch_compile_advanced_usage:
-Dynamo Compile Advanced Usage
+Torch Compile Advanced Usage
======================================================
This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
@@ -11,6 +11,7 @@
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

import torch
+import torch_tensorrt

# %%

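The advanced-usage script referenced here builds a custom backend for the ``torch.compile`` API. As an illustrative sketch of that mechanism (the backend below is a hypothetical pass-through, not the Torch-TensorRT implementation): a backend is any callable that receives a traced ``torch.fx.GraphModule`` plus example inputs and returns a callable to execute in its place:

```python
import torch

def inspecting_backend(gm: torch.fx.GraphModule, example_inputs):
    # A torch.compile backend receives the captured FX graph and example
    # inputs, and must return a callable. Returning gm.forward runs the
    # graph unmodified; Torch-TensorRT's real backend instead lowers
    # supported subgraphs to TensorRT engines at this point.
    print(f"Captured graph with {len(list(gm.graph.nodes))} nodes")
    return gm.forward

model = torch.nn.Linear(4, 2).eval()
compiled = torch.compile(model, backend=inspecting_backend)

x = torch.randn(3, 4)
with torch.no_grad():
    result = compiled(x)
```

Because the backend only intercepts graph execution, the compiled module remains numerically identical to eager mode in this pass-through sketch.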
@@ -1,10 +1,10 @@
"""
-.. _dynamo_compile_resnet:
+.. _torch_compile_resnet:
-Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
+Compiling ResNet using the Torch-TensorRT Dynamo Backend
==========================================================
-This interactive script is intended as a sample of the `torch_tensorrt.compile` workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""

# %%
# Imports and Model Definition
@@ -57,8 +57,8 @@
)

# %%
-# Equivalently, we could have run the above via the convenience frontend, as so:
-# `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`
+# Equivalently, we could have run the above via the torch.compile frontend, as so:
+# `optimized_model = torch.compile(model, backend="torch_tensorrt", options={"enabled_precisions": enabled_precisions, ...}); optimized_model(*inputs)`

# %%
# Inference
@@ -4,7 +4,7 @@
Compiling a Transformer using torch.compile and TensorRT
==============================================================
-This interactive script is intended as a sample of the `torch_tensorrt.compile` workflow with `torch.compile` on a transformer-based model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a transformer-based model."""

# %%
# Imports and Model Definition
@@ -45,24 +45,29 @@
torch_executed_ops = {}

# %%
-# Compilation with `torch_tensorrt.compile`
+# Compilation with `torch.compile`
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

+# Define backend compilation keyword arguments
+compilation_kwargs = {
+    "enabled_precisions": enabled_precisions,
+    "debug": debug,
+    "workspace_size": workspace_size,
+    "min_block_size": min_block_size,
+    "torch_executed_ops": torch_executed_ops,
+}

+# Build and compile the model with torch.compile, using Torch-TensorRT backend
-optimized_model = torch_tensorrt.compile(
+optimized_model = torch.compile(
     model,
-    ir="torch_compile",
-    inputs=inputs,
-    enabled_precisions=enabled_precisions,
-    debug=debug,
-    workspace_size=workspace_size,
-    min_block_size=min_block_size,
-    torch_executed_ops=torch_executed_ops,
+    backend="torch_tensorrt",
+    options=compilation_kwargs,
 )
+optimized_model(*inputs)

# %%
# Equivalently, we could have run the above via the convenience frontend, as so:
-# `torch_tensorrt.compile(model, ir="dynamo_compile", inputs=inputs, ...)`
+# `torch_tensorrt.compile(model, ir="torch_compile", inputs=inputs, **compilation_kwargs)`

# %%
# Inference
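The Inference sections of these scripts run the optimized model on the sample inputs. A common follow-up step, sketched here with the built-in ``"eager"`` backend as a stand-in so it runs without TensorRT, is to compare the compiled outputs against eager PyTorch:

```python
import torch

model = torch.nn.Linear(8, 8).eval()
inputs = [torch.randn(4, 8)]

# Stand-in for backend="torch_tensorrt"; "eager" executes the captured
# graph with ordinary PyTorch kernels
optimized_model = torch.compile(model, backend="eager")

with torch.no_grad():
    eager_out = model(*inputs)
    compiled_out = optimized_model(*inputs)

# With a real TensorRT backend (especially with reduced precisions
# enabled), small numerical differences are expected, so a
# tolerance-based comparison is the appropriate check
max_diff = (eager_out - compiled_out).abs().max().item()
```

With the pass-through stand-in backend the difference is zero; against a TensorRT engine one would assert ``max_diff`` stays below a task-appropriate tolerance rather than exact equality.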
