[Cherry-pick] Deprecate torchscript frontend #3376

Merged
narendasan merged 1 commit into release/2.6 from deprecate_ts_2.6 on Jan 31, 2025

Conversation

narendasan
Collaborator

Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified
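
For context on the migration these deprecation messages point to: the warnings added in this PR steer users from the TorchScript frontend toward the Dynamo frontend, while keeping TorchScript as a deployment format via post-compilation tracing. A rough sketch of what that looks like in user code is below; the model, shapes, and file name are placeholders, and exact keyword arguments may vary by release.

```python
import torch
import torch_tensorrt

# Placeholder model and example input; any module supported by the Dynamo path works.
model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU()).eval().cuda()
inputs = [torch.randn(4, 16).cuda()]

# Deprecated path (now emits a DeprecationWarning): the TorchScript frontend.
# trt_ts_module = torch_tensorrt.compile(model, ir="ts", inputs=inputs)

# Recommended path per the new deprecation message: the Dynamo frontend.
trt_module = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)

# TorchScript remains a supported deployment format via post-compilation tracing,
# as described in the saving_models guide linked from the deprecation message.
traced = torch.jit.trace(trt_module, inputs)
torch.jit.save(traced, "trt_model.ts")
```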

@github-actions github-actions bot added the component: api [Python] and component: api [C++] labels Jan 31, 2025
@github-actions github-actions bot requested a review from peri044 January 31, 2025 19:50
@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/cpp/include/torch_tensorrt/ptq.h b/tmp/changes.txt
index ae8aa07..a2f8234 100644
--- a/home/runner/work/TensorRT/TensorRT/cpp/include/torch_tensorrt/ptq.h
+++ b/tmp/changes.txt
@@ -59,7 +59,7 @@ class Int8Calibrator : Algorithm {
   * calibration cache
   * @param use_cache : bool - Whether to use the cache (if it exists)
   */
-   Int8Calibrator(DataLoaderUniquePtr dataloader, const std::string& cache_file_path, bool use_cache)
+  Int8Calibrator(DataLoaderUniquePtr dataloader, const std::string& cache_file_path, bool use_cache)
      : dataloader_(dataloader.get()), cache_file_path_(cache_file_path), use_cache_(use_cache) {
    for (auto batch : *dataloader_) {
      batched_data_.push_back(batch.data);
@@ -343,7 +343,8 @@ TORCH_TENSORRT_PTQ_DEPRECATION inline Int8Calibrator<Algorithm, DataLoader> make
 * @return Int8CacheCalibrator<Algorithm>
 */
template <typename Algorithm = nvinfer1::IInt8EntropyCalibrator2>
-TORCH_TENSORRT_PTQ_DEPRECATION inline Int8CacheCalibrator<Algorithm> make_int8_cache_calibrator(const std::string& cache_file_path) {
+TORCH_TENSORRT_PTQ_DEPRECATION inline Int8CacheCalibrator<Algorithm> make_int8_cache_calibrator(
+    const std::string& cache_file_path) {
  return Int8CacheCalibrator<Algorithm>(cache_file_path);
}

diff --git a/home/runner/work/TensorRT/TensorRT/cpp/include/torch_tensorrt/macros.h b/tmp/changes.txt
index 5fce518..bdc25f6 100644
--- a/home/runner/work/TensorRT/TensorRT/cpp/include/torch_tensorrt/macros.h
+++ b/tmp/changes.txt
@@ -30,7 +30,9 @@
  STR(TORCH_TENSORRT_MAJOR_VERSION) \
  "." STR(TORCH_TENSORRT_MINOR_VERSION) "." STR(TORCH_TENSORRT_PATCH_VERSION)

-#define TORCH_TENSORRT_PTQ_DEPRECATION [[deprecated("Int8 PTQ Calibrator has been deprecated by TensorRT, please plan on porting to a NVIDIA Model Optimizer Toolkit based workflow. See: https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html for more details")]]
+#define TORCH_TENSORRT_PTQ_DEPRECATION \
+  [[deprecated(                        \
+      "Int8 PTQ Calibrator has been deprecated by TensorRT, please plan on porting to a NVIDIA Model Optimizer Toolkit based workflow. See: https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html for more details")]]
// Setup namespace aliases for ease of use
namespace torch_tensorrt {
namespace torchscript {}
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-31 19:50:20.576160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/0e30a6276601af7e5fc4d5166e2e3d37/torch_compile_advanced_usage.py	2025-01-31 19:50:41.152248+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-31 19:50:20.576160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/2a9ac10f2667047a7f398d1593b7ca33/torch_export_gpt2.py	2025-01-31 19:50:41.171830+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-31 19:50:20.577160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-31 19:50:41.212485+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-31 19:50:20.576160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/3d4d74f6636d986f33167154f6553961/torch_export_cudagraphs.py	2025-01-31 19:50:41.223331+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-31 19:50:20.579160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/7b7004dc2ea6f839be532665e16e0426/torch_export_llama2.py	2025-01-31 19:50:41.258447+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-31 19:50:20.581160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-31 19:50:41.336474+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-31 19:50:20.581160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/dfa60e8f9850fd7761f3e7da81304d32/torch_compile_transformers_example.py	2025-01-31 19:50:41.342496+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-31 19:50:20.581160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/d6e1bb6ec5f884994554d9d12e37a0f6/torch_compile_resnet_example.py	2025-01-31 19:50:41.347776+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-31 19:50:20.581160+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-31 19:50:41.382135+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-31 19:50:21.027168+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/418941399c146271a7b7728ba3059960/dynamo_compile_resnet_example.py	2025-01-31 19:50:41.390556+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_resnet:

Compiling ResNet using the Torch-TensorRT Dyanmo Frontend
==========================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-31 19:50:21.028168+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e1ef5a42560a98a132f56a79d0b66f79/dynamo_compile_advanced_usage.py	2025-01-31 19:50:41.406058+00:00
@@ -2,11 +2,12 @@
.. _dynamo_compile_advanced_usage:

Dynamo Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.dynamo.compile` works, and how it integrates with the new `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-31 19:50:21.028168+00:00
+++ /home/runner/work/TensorRT/TensorRT/docs/v1.4.0/_downloads/e550c5f53cc43e11aa6da8cfb79b54df/dynamo_compile_transformers_example.py	2025-01-31 19:50:41.431367+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling a Transformer using torch.compile and TensorRT
==============================================================

-This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model."""
+This interactive script is intended as a sample of the `torch_tensorrt.dynamo.compile` workflow on a transformer-based model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2025-01-31 19:50:41.568970+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_advanced_usage:

Torch Compile Advanced Usage
======================================================

-This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API."""
+This interactive script is intended as an overview of the process by which `torch_tensorrt.compile(..., ir="torch_compile", ...)` works, and how it integrates with the `torch.compile` API.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_resnet_example.py	2025-01-31 19:50:41.626180+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_resnet:

Compiling ResNet with dynamic shapes using the `torch.compile` backend
==========================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a ResNet model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_gpt2.py	2025-01-31 19:50:41.634355+00:00
@@ -2,11 +2,12 @@
.. _torch_export_gpt2:

Compiling GPT2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular GPT2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_transformers_example.py	2025-01-31 19:50:41.636329+00:00
@@ -2,11 +2,12 @@
.. _torch_compile_transformer:

Compiling BERT using the `torch.compile` backend
==============================================================

-This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model."""
+This interactive script is intended as a sample of the Torch-TensorRT workflow with `torch.compile` on a BERT model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_cudagraphs.py	2025-01-31 19:50:41.642891+00:00
@@ -2,11 +2,12 @@
.. _torch_export_cudagraphs:

Torch Export with Cudagraphs
======================================================

-This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well."""
+This interactive script is intended as an overview of the process by which the Torch-TensorRT Cudagraphs integration can be used in the `ir="dynamo"` path. The functionality works similarly in the `torch.compile` path as well.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-31 19:50:21.058168+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_export_llama2.py	2025-01-31 19:50:41.669999+00:00
@@ -2,11 +2,12 @@
.. _torch_export_llama2:

Compiling Llama2 using the dynamo backend
==========================================================

-This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model."""
+This script illustrates Torch-TensorRT workflow with dynamo backend on popular Llama2 model.
+"""

# %%
# Imports and Model Definition
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
import torch
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-31 19:50:21.067168+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_Input.py	2025-01-31 19:50:42.159875+00:00
@@ -259,11 +259,11 @@
        else:
            return False

    @staticmethod
    def _parse_tensor_domain(
-        domain: Optional[Tuple[float, float]]
+        domain: Optional[Tuple[float, float]],
    ) -> Tuple[float, float]:
        """
        Produce a tuple of integers which specifies a tensor domain in the interval format: [lo, hi)

        Args:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-31 19:50:21.069168+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/_TRTBuilderMonitor.py	2025-01-31 19:50:42.400111+00:00
@@ -51,17 +51,17 @@

    def _redraw(self, *, blank_lines: int = 0) -> None:
        if self._render:

            def clear_line() -> None:
-                print("\x1B[2K", end="")
+                print("\x1b[2K", end="")

            def move_to_start_of_line() -> None:
-                print("\x1B[0G", end="")
+                print("\x1b[0G", end="")

            def move_cursor_up(lines: int) -> None:
-                print("\x1B[{}A".format(lines), end="")
+                print("\x1b[{}A".format(lines), end="")

            def progress_bar(steps: int, num_steps: int) -> str:
                INNER_WIDTH = 10
                completed_bar_chars = int(INNER_WIDTH * steps / float(num_steps))
                return "[{}{}]".format(
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-31 19:50:21.067168+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/_enums.py	2025-01-31 19:50:42.614275+00:00
@@ -1198,11 +1198,11 @@
            "Provided unsupported source type for EngineCapability conversion"
        )

    @classmethod
    def try_from(
-        c: Union[trt.EngineCapability, EngineCapability]
+        c: Union[trt.EngineCapability, EngineCapability],
    ) -> Optional[EngineCapability]:
        """Create a Torch-TensorRT engine capability enum from a TensorRT engine capability enum.

        Takes a device type enum from tensorrt and create a ``torch_tensorrt.EngineCapability``.
        If the source is not supported or the engine capability level is not supported in Torch-TensorRT,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-31 19:50:21.070168+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/impl/activation/ops.py	2025-01-31 19:50:42.810294+00:00
@@ -245,11 +245,11 @@
    beta: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.HARD_SIGMOID

    def hard_sigmoid_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def hard_sigmoid_fn(x: float) -> float:
            return max(0, min(1, alpha * x + beta))

        return hard_sigmoid_fn(dyn_range[0]), hard_sigmoid_fn(dyn_range[1])
@@ -308,11 +308,11 @@
    alpha: float,
) -> TRTTensor:
    operation_type = trt.ActivationType.THRESHOLDED_RELU

    def thresholded_relu_dyn_range_fn(
-        dyn_range: Tuple[float, float]
+        dyn_range: Tuple[float, float],
    ) -> Tuple[float, float]:
        def thresholded_relu_fn(x: float) -> float:
            return x if x > alpha else 0

        return thresholded_relu_fn(dyn_range[0]), thresholded_relu_fn(dyn_range[1])
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-31 19:50:21.074168+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/utils.py	2025-01-31 19:50:44.231147+00:00
@@ -463,11 +463,11 @@
    else:
        return torch.device(device)


def to_torch_tensorrt_device(
-    device: Optional[Union[Device, torch.device, str]]
+    device: Optional[Union[Device, torch.device, str]],
) -> Device:
    """Cast a device-type to torch_tensorrt.Device

    Returns the corresponding torch_tensorrt.Device
    """
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-31 19:50:21.079169+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/test/converters/acc_op/test_where.py	2025-01-31 19:50:45.210083+00:00
@@ -99,11 +99,11 @@
                self.y = torch.ones(y_shape)

            def forward(self, condition):
                return torch.where(condition, self.x, self.y)

-        inputs = [(torch.randn(condition_shape) > 0)]
+        inputs = [torch.randn(condition_shape) > 0]
        self.run_test(
            Where(x_shape, y_shape),
            inputs,
            expected_ops={acc_ops.where},
            test_implicit_batch_dim=False,
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-31 19:50:21.083169+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/fx/tracer/acc_tracer/acc_tracer.py	2025-01-31 19:50:46.241591+00:00
@@ -515,11 +515,11 @@
    dim0 = cast(int, transpose_node.args[1])
    dim1 = cast(int, transpose_node.args[2])
    changed = False

    def _calculate_dim(
-        transpose_dim: Union[torch.fx.Node, int]
+        transpose_dim: Union[torch.fx.Node, int],
    ) -> Union[torch.fx.Node, int]:
        nonlocal transpose_input_node
        nonlocal changed
        if isinstance(transpose_dim, torch.fx.Node):
            # Transpose dim is sub node
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/ptq.py	2025-01-31 19:50:21.084169+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/ptq.py	2025-01-31 19:50:46.634350+00:00
@@ -91,11 +91,11 @@

    def __new__(cls, *args: Any, **kwargs: Any) -> Self:
        warnings.warn(
            "Int8 PTQ Calibrator has been deprecated by TensorRT, please plan on porting to a NVIDIA Model Optimizer Toolkit based workflow. See: https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html for more details",
            DeprecationWarning,
-            stacklevel=2
+            stacklevel=2,
        )
        dataloader = args[0]
        algo_type = kwargs.get("algo_type", CalibrationAlgo.ENTROPY_CALIBRATION_2)
        cache_file = kwargs.get("cache_file", None)
        use_cache = kwargs.get("use_cache", False)
@@ -183,11 +183,11 @@

    def __new__(cls, *args: Any, **kwargs: Any) -> Self:
        warnings.warn(
            "Int8 PTQ Calibrator has been deprecated by TensorRT, please plan on porting to a NVIDIA Model Optimizer Toolkit based workflow. See: https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_ptq.html for more details",
            DeprecationWarning,
-            stacklevel=2
+            stacklevel=2,
        )
        cache_file = args[0]
        algo_type = kwargs.get("algo_type", CalibrationAlgo.ENTROPY_CALIBRATION_2)

        if os.path.isfile(cache_file):
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2025-01-31 19:50:21.084169+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2025-01-31 19:50:46.805755+00:00
@@ -104,11 +104,11 @@
    """

    warnings.warn(
        'The torchscript frontend for Torch-TensorRT has been deprecated, please plan on porting to the dynamo frontend (torch_tensorrt.compile(..., ir="dynamo"). Torchscript will continue to be a supported deployment format via post compilation torchscript tracing, see: https://pytorch.org/TensorRT/user_guide/saving_models.html for more details',
        DeprecationWarning,
-        stacklevel=2
+        stacklevel=2,
    )

    input_list = list(inputs) if inputs is not None else []
    enabled_precisions_set = (
        enabled_precisions if enabled_precisions is not None else set()
@@ -248,11 +248,11 @@
        bytes: Serialized TensorRT engine, can either be saved to a file or deserialized via TensorRT APIs
    """
    warnings.warn(
        'The torchscript frontend for Torch-TensorRT has been deprecated, please plan on porting to the dynamo frontend (torch_tensorrt.convert_method_to_trt_engine(..., ir="dynamo"). Torchscript will continue to be a supported deployment format via post compilation torchscript tracing, see: https://pytorch.org/TensorRT/user_guide/saving_models.html for more details',
        DeprecationWarning,
-        stacklevel=2
+        stacklevel=2,
    )

    input_list = list(inputs) if inputs is not None else []
    enabled_precisions_set = (
        enabled_precisions if enabled_precisions is not None else {torch.float}
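
The hunks above for py/torch_tensorrt/ts/ptq.py and py/torch_tensorrt/ts/_compiler.py show the deprecation pattern this PR relies on: `warnings.warn(..., DeprecationWarning, stacklevel=2)`, where `stacklevel=2` attributes the warning to the calling code rather than to Torch-TensorRT internals. A minimal, standalone illustration of that behavior follows; `compile_ts` and its message are stand-ins, not the actual Torch-TensorRT API.

```python
import warnings

def compile_ts(module):
    # Stand-in for a deprecated entry point: stacklevel=2 makes the warning
    # report the *caller's* file and line, not this warnings.warn line.
    warnings.warn(
        "The torchscript frontend for Torch-TensorRT has been deprecated, "
        'please plan on porting to the dynamo frontend (ir="dynamo")',
        DeprecationWarning,
        stacklevel=2,
    )
    return module  # placeholder body; the real function would compile the module

# DeprecationWarning is filtered out in many default configurations, so record
# warnings explicitly here just to show where the warning is attributed.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", DeprecationWarning)
    compile_ts(object())
    for w in caught:
        print(f"{w.category.__name__} at {w.filename}:{w.lineno}: {w.message}")
```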

@narendasan narendasan merged commit c7d610a into release/2.6 Jan 31, 2025
67 of 68 checks passed
@narendasan narendasan deleted the deprecate_ts_2.6 branch January 31, 2025 21:12