
Improve Python tooling #2126

Merged: 1 commit merged into main from mypy on Aug 8, 2023
Conversation

@narendasan narendasan (Collaborator) commented Jul 20, 2023

Description

Installing a number of standard tools to make the codebase more maintainable long term (a brief illustrative sketch of what each tool enforces follows the list below).

Adds:

  • Mypy: Static type checking
  • Ruff: Flake8-compatible linting
  • isort: import sorting
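
For illustration only (this snippet is not part of the PR), a minimal sketch of the kind of Python the combined tooling expects: an import block that isort keeps sorted and grouped, no unused names for Ruff to flag, and explicit annotations for mypy to check. The function and its types are hypothetical:

from typing import List, Optional, Tuple


def flatten_shapes(
    shapes: List[Tuple[int, ...]], limit: Optional[int] = None
) -> List[int]:
    # mypy checks that these annotations match how the function is used,
    # Ruff (Flake8 rules) flags issues such as unused imports or undefined
    # names, and isort keeps the import block above sorted and grouped.
    flat: List[int] = [dim for shape in shapes for dim in shape]
    return flat if limit is None else flat[:limit]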

Type of change

Please delete options that are not relevant and/or add your own.

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • Some types have been slightly adjusted to make sense in a static typing context

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@narendasan narendasan requested review from peri044 and gs-olive July 20, 2023 19:30
@github-actions github-actions bot added labels: component: api [Python], component: api [C++], component: build system, component: conversion, component: converters, component: core, component: evaluators, component: lowering, component: partitioning, component: runtime, component: tests, documentation (Jul 20, 2023)
@narendasan narendasan changed the base branch from main to pep517_builds July 20, 2023 20:31
@github-actions github-actions bot left a comment

There are some changes that do not conform to C++ style guidelines:

diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/evaluators/aten.cpp b/tmp/changes.txt
index 8258a13..8381754 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/evaluators/aten.cpp
+++ b/tmp/changes.txt
@@ -103,7 +103,12 @@ DEFINE_ARITHMATIC_TWO_INPUT_EVALUATOR(
        "aten::pow.float_int(float a, int b) -> (float)",
    }));

-DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(and, "aten::__and__", a&& b, bool, std::set<std::string>({"aten::__and__(int a, int b) -> (bool)", "aten::__and__.bool(bool a, bool b) -> (bool)"}));
+DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(
+    and,
+    "aten::__and__",
+    a&& b,
+    bool,
+    std::set<std::string>({"aten::__and__(int a, int b) -> (bool)", "aten::__and__.bool(bool a, bool b) -> (bool)"}));
DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(or, "aten::__or__", a || b, bool, {"aten::__or__(int a, int b) -> (bool)"});
DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(
    xor,
ERROR: Some files do not conform to style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- tests/py/dynamo/backend/test_backend_compiler.py	2023-07-20 20:30:59.066389 +0000
+++ tests/py/dynamo/backend/test_backend_compiler.py	2023-07-20 20:31:18.329140 +0000
@@ -72,15 +72,11 @@
        inputs = [
            torch.randint(-40, 40, (16, 7, 5), dtype=torch.int).cuda(),
            torch.randint(1, 40, (16, 7, 5), dtype=torch.int).cuda(),
        ]

-        (
-            unexpected_ops_seen,
-            _,
-            partitioned_graphs,
-        ) = lower_graph_testing(
+        (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
            fx_graph,
            inputs,
            unexpected_ops=unexpected_ops,
            min_block_size=1,
            torch_executed_ops={"torch.ops.aten.add.Tensor"},
--- tests/py/dynamo/backend/test_partitioning.py	2023-07-20 20:30:59.066389 +0000
+++ tests/py/dynamo/backend/test_partitioning.py	2023-07-20 20:31:18.355272 +0000
@@ -91,15 +91,11 @@
                (5,),
            ),
        ]

        fx_graph = torch.fx.symbolic_trace(PartiallySupportedMultiOp())
-        (
-            unexpected_ops_seen,
-            _,
-            partitioned_graphs,
-        ) = lower_graph_testing(
+        (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
            fx_graph,
            inputs,
            unexpected_ops=unexpected_ops,
            min_block_size=2,
            torch_executed_ops={"torch.ops.aten.add.Tensor"},

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- py/torch_tensorrt/dynamo/runtime/__init__.py	2023-07-20 20:40:52.942688 +0000
+++ py/torch_tensorrt/dynamo/runtime/__init__.py	2023-07-20 20:41:09.494994 +0000
@@ -1,2 +1,2 @@
-from ._PythonTorchTensorRTModule import PythonTorchTensorRTModule # noqa: F401
+from ._PythonTorchTensorRTModule import PythonTorchTensorRTModule  # noqa: F401
from ._TorchTensorRTModule import TorchTensorRTModule  # noqa: F401
--- tests/py/dynamo/backend/test_backend_compiler.py	2023-07-20 20:40:52.958688 +0000
+++ tests/py/dynamo/backend/test_backend_compiler.py	2023-07-20 20:41:12.851884 +0000
@@ -72,15 +72,11 @@
        inputs = [
            torch.randint(-40, 40, (16, 7, 5), dtype=torch.int).cuda(),
            torch.randint(1, 40, (16, 7, 5), dtype=torch.int).cuda(),
        ]

-        (
-            unexpected_ops_seen,
-            _,
-            partitioned_graphs,
-        ) = lower_graph_testing(
+        (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
            fx_graph,
            inputs,
            unexpected_ops=unexpected_ops,
            min_block_size=1,
            torch_executed_ops={"torch.ops.aten.add.Tensor"},
--- tests/py/dynamo/backend/test_partitioning.py	2023-07-20 20:40:52.958688 +0000
+++ tests/py/dynamo/backend/test_partitioning.py	2023-07-20 20:41:12.944283 +0000
@@ -91,15 +91,11 @@
                (5,),
            ),
        ]

        fx_graph = torch.fx.symbolic_trace(PartiallySupportedMultiOp())
-        (
-            unexpected_ops_seen,
-            _,
-            partitioned_graphs,
-        ) = lower_graph_testing(
+        (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
            fx_graph,
            inputs,
            unexpected_ops=unexpected_ops,
            min_block_size=2,
            torch_executed_ops={"torch.ops.aten.add.Tensor"},

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2023-07-27 19:39:59.343464+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2023-07-27 19:42:21.824739+00:00
@@ -302,12 +302,16 @@
        output_binding_names (List[str]): List of names of TensorRT bindings in order that should be returned from the encompassing PyTorch module
        device (Union(torch_tensorrt.Device, torch.device, dict)): Target device to run engine on. Must be compatible with engine provided. Default: Current active device
    Returns:
        torch.jit.ScriptModule: New TorchScript module with engine embedded
    """
-    input_binding_name_list = input_binding_names if input_binding_names is not None else []
-    output_binding_name_list = output_binding_names if output_binding_names is not None else []
+    input_binding_name_list = (
+        input_binding_names if input_binding_names is not None else []
+    )
+    output_binding_name_list = (
+        output_binding_names if output_binding_names is not None else []
+    )
    cpp_mod = _C.embed_engine_in_new_module(
        serialized_engine,
        _parse_device(device),
        input_binding_name_list,
        output_binding_name_list,

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@@ -53,7 +49,9 @@ def gelu(

layer = network.add_plugin_v2([input_val], plugin)

-def gelu_dyn_range_fn(dyn_range):
+def gelu_dyn_range_fn(
@gs-olive gs-olive (Collaborator) commented Aug 2, 2023

This would likely fail in the FakeTensor context we use in the torch.compile path. I believe the correct type signature for dyn_range should be Tuple[float, float], as mentioned here. Then, the fix would be to change the return to be:

import math

...

        return (
            dyn_range[0] * 0.5 * (1.0 + math.erf(dyn_range[0] / math.sqrt(2.0)))
        ), (dyn_range[1] * 0.5 * (1.0 + math.erf(dyn_range[1] / math.sqrt(2.0))))

The above avoids the unnecessary cast to torch.Tensor. Additionally, the correct return type would then also be Tuple[float, float].

Note: I changed the second return argument from using dyn_range[0] to dyn_range[1]. Not sure if this is a bug in the original implementation.

@narendasan narendasan (Collaborator, Author)

Is it supposed to be an op on a scalar or on a tensor?

Collaborator

I think this nested function gelu_dyn_range_fn is supposed to be a function on a Tuple of two scalars.

Comment on lines 55 to 57
        return (
            dyn_range[0] * 0.5 * (1.0 + torch.erf(dyn_range[0] / math.sqrt(2.0)))
        ), (dyn_range[1] * 0.5 * (1.0 + torch.erf(dyn_range[0] / math.sqrt(2.0))))
Collaborator

See above comment
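
Putting the thread's suggestion together, a minimal sketch of the corrected helper might look like the following; it is shown standalone rather than nested inside the gelu converter, and is an illustration based on the review comments rather than code from this PR:

import math
from typing import Tuple


def gelu_dyn_range_fn(dyn_range: Tuple[float, float]) -> Tuple[float, float]:
    # Apply the GELU formula to both ends of the dynamic range using plain
    # Python floats, so no cast to torch.Tensor is needed.
    low, high = dyn_range
    return (
        low * 0.5 * (1.0 + math.erf(low / math.sqrt(2.0))),
        high * 0.5 * (1.0 + math.erf(high / math.sqrt(2.0))),
    )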

@github-actions github-actions bot added labels: component: dynamo, component: torch_compile (Aug 7, 2023)
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@peri044 peri044 (Collaborator) left a comment

LGTM pending CI

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2023-08-07 18:32:08.732415+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2023-08-07 18:34:44.880582+00:00
@@ -12,10 +12,11 @@

import torch
import torch_tensorrt

# %%
+

# We begin by defining a model
class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/tools/opset_coverage.py	2023-08-07 18:32:08.744415+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/tools/opset_coverage.py	2023-08-07 18:34:45.808594+00:00
@@ -108,11 +108,10 @@
def opset_coverage(
    opset: List[Tuple[str, str]],
    converter_registry: Optional[ConverterRegistry] = None,
    decomposition_registry: Optional[Dict[OpOverload, Callable[..., Any]]] = None,
) -> OpsetCoverage:
-
    opset_schemas = dict(opset)
    opset_targets = set(opset_schemas.keys())

    support_status = {}

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2023-08-07 18:32:08.756415+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py	2023-08-07 18:34:49.220113+00:00
@@ -302,12 +302,16 @@
        output_binding_names (List[str]): List of names of TensorRT bindings in order that should be returned from the encompassing PyTorch module
        device (Union(torch_tensorrt.Device, torch.device, dict)): Target device to run engine on. Must be compatible with engine provided. Default: Current active device
    Returns:
        torch.jit.ScriptModule: New TorchScript module with engine embedded
    """
-    input_binding_name_list = input_binding_names if input_binding_names is not None else []
-    output_binding_name_list = output_binding_names if output_binding_names is not None else []
+    input_binding_name_list = (
+        input_binding_names if input_binding_names is not None else []
+    )
+    output_binding_name_list = (
+        output_binding_names if output_binding_names is not None else []
+    )
    cpp_mod = _C.embed_engine_in_new_module(
        serialized_engine,
        _parse_device(device),
        input_binding_name_list,
        output_binding_name_list,

@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2023-08-07 19:05:24.508037+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py	2023-08-07 19:07:53.410940+00:00
@@ -12,10 +12,11 @@

import torch
import torch_tensorrt

# %%
+

# We begin by defining a model
class Model(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@narendasan narendasan removed the WIP label (Work is in progress, pull request should not be merged yet) on Aug 7, 2023

Commit message:
We are adding:
- mypy: Type checking
- ruff: Linting and common bug detection
- isort: import order cleanup

These tools will run as part of the pre-commit system and are required
for all code in //py and //examples. They are suggested for the rest of the codebase.

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/ts): Make torch_tensorrt.ts.TorchScriptInput
mypy compliant

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/ts): Making compile mypy compliant

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): Make Device conform to mypy

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): Making Input mypy compliant

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/ts): Make compile_spec conform to mypy

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): more _Input.py mypy stuff

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): conforming logging to mypy

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): _compile.py conforms to mypy

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/dynamo/backend): Backend is mypy conforming

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): substitutions mypy compliant

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/dynamo/lowering): mypy compliance

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/dynamo): Tracer mypy compliance

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): ptq mypy compliance

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt): __init__.py mypy conforming

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore(//py/torch_tensorrt/dynamo/conversion): mypy conforming

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: mypy compliance with 3.11 syntax

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: Buildifier on build files

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: linting for the remaining files

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: All code is now flake8 compliant

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: adding isort to pre-commit

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

chore: ready to start review

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>
@github-actions github-actions bot left a comment

Code conforms to C++ style guidelines

@github-actions github-actions bot left a comment

Code conforms to Python style guidelines

@narendasan narendasan merged commit b3089bf into main Aug 8, 2023
@narendasan narendasan deleted the mypy branch August 8, 2023 22:06
@narendasan narendasan restored the mypy branch August 9, 2023 04:11
Labels
cla signed, component: api [C++], component: api [Python], component: build system, component: conversion, component: converters, component: core, component: dynamo, component: evaluators, component: fx, component: lowering, component: partitioning, component: runtime, component: tests, component: torch_compile, documentation, fx