Improve Python tooling #2126
Conversation
There are some changes that do not conform to C++ style guidelines:
diff --git a/home/runner/work/TensorRT/TensorRT/core/conversion/evaluators/aten.cpp b/tmp/changes.txt
index 8258a13..8381754 100644
--- a/home/runner/work/TensorRT/TensorRT/core/conversion/evaluators/aten.cpp
+++ b/tmp/changes.txt
@@ -103,7 +103,12 @@ DEFINE_ARITHMATIC_TWO_INPUT_EVALUATOR(
"aten::pow.float_int(float a, int b) -> (float)",
}));
-DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(and, "aten::__and__", a&& b, bool, std::set<std::string>({"aten::__and__(int a, int b) -> (bool)", "aten::__and__.bool(bool a, bool b) -> (bool)"}));
+DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(
+ and,
+ "aten::__and__",
+ a&& b,
+ bool,
+ std::set<std::string>({"aten::__and__(int a, int b) -> (bool)", "aten::__and__.bool(bool a, bool b) -> (bool)"}));
DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(or, "aten::__or__", a || b, bool, {"aten::__or__(int a, int b) -> (bool)"});
DEFINE_TWO_INPUT_SIMPLE_EVALUATOR(
xor,
ERROR: Some files do not conform to style guidelines
There are some changes that do not conform to Python style guidelines:
--- tests/py/dynamo/backend/test_backend_compiler.py 2023-07-20 20:30:59.066389 +0000
+++ tests/py/dynamo/backend/test_backend_compiler.py 2023-07-20 20:31:18.329140 +0000
@@ -72,15 +72,11 @@
inputs = [
torch.randint(-40, 40, (16, 7, 5), dtype=torch.int).cuda(),
torch.randint(1, 40, (16, 7, 5), dtype=torch.int).cuda(),
]
- (
- unexpected_ops_seen,
- _,
- partitioned_graphs,
- ) = lower_graph_testing(
+ (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
fx_graph,
inputs,
unexpected_ops=unexpected_ops,
min_block_size=1,
torch_executed_ops={"torch.ops.aten.add.Tensor"},
--- tests/py/dynamo/backend/test_partitioning.py 2023-07-20 20:30:59.066389 +0000
+++ tests/py/dynamo/backend/test_partitioning.py 2023-07-20 20:31:18.355272 +0000
@@ -91,15 +91,11 @@
(5,),
),
]
fx_graph = torch.fx.symbolic_trace(PartiallySupportedMultiOp())
- (
- unexpected_ops_seen,
- _,
- partitioned_graphs,
- ) = lower_graph_testing(
+ (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
fx_graph,
inputs,
unexpected_ops=unexpected_ops,
min_block_size=2,
torch_executed_ops={"torch.ops.aten.add.Tensor"},
There are some changes that do not conform to Python style guidelines:
--- py/torch_tensorrt/dynamo/runtime/__init__.py 2023-07-20 20:40:52.942688 +0000
+++ py/torch_tensorrt/dynamo/runtime/__init__.py 2023-07-20 20:41:09.494994 +0000
@@ -1,2 +1,2 @@
-from ._PythonTorchTensorRTModule import PythonTorchTensorRTModule # noqa: F401
+from ._PythonTorchTensorRTModule import PythonTorchTensorRTModule # noqa: F401
from ._TorchTensorRTModule import TorchTensorRTModule # noqa: F401
--- tests/py/dynamo/backend/test_backend_compiler.py 2023-07-20 20:40:52.958688 +0000
+++ tests/py/dynamo/backend/test_backend_compiler.py 2023-07-20 20:41:12.851884 +0000
@@ -72,15 +72,11 @@
inputs = [
torch.randint(-40, 40, (16, 7, 5), dtype=torch.int).cuda(),
torch.randint(1, 40, (16, 7, 5), dtype=torch.int).cuda(),
]
- (
- unexpected_ops_seen,
- _,
- partitioned_graphs,
- ) = lower_graph_testing(
+ (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
fx_graph,
inputs,
unexpected_ops=unexpected_ops,
min_block_size=1,
torch_executed_ops={"torch.ops.aten.add.Tensor"},
--- tests/py/dynamo/backend/test_partitioning.py 2023-07-20 20:40:52.958688 +0000
+++ tests/py/dynamo/backend/test_partitioning.py 2023-07-20 20:41:12.944283 +0000
@@ -91,15 +91,11 @@
(5,),
),
]
fx_graph = torch.fx.symbolic_trace(PartiallySupportedMultiOp())
- (
- unexpected_ops_seen,
- _,
- partitioned_graphs,
- ) = lower_graph_testing(
+ (unexpected_ops_seen, _, partitioned_graphs,) = lower_graph_testing(
fx_graph,
inputs,
unexpected_ops=unexpected_ops,
min_block_size=2,
torch_executed_ops={"torch.ops.aten.add.Tensor"},
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py 2023-07-27 19:39:59.343464+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py 2023-07-27 19:42:21.824739+00:00
@@ -302,12 +302,16 @@
output_binding_names (List[str]): List of names of TensorRT bindings in order that should be returned from the encompassing PyTorch module
device (Union(torch_tensorrt.Device, torch.device, dict)): Target device to run engine on. Must be compatible with engine provided. Default: Current active device
Returns:
torch.jit.ScriptModule: New TorchScript module with engine embedded
"""
- input_binding_name_list = input_binding_names if input_binding_names is not None else []
- output_binding_name_list = output_binding_names if output_binding_names is not None else []
+ input_binding_name_list = (
+ input_binding_names if input_binding_names is not None else []
+ )
+ output_binding_name_list = (
+ output_binding_names if output_binding_names is not None else []
+ )
cpp_mod = _C.embed_engine_in_new_module(
serialized_engine,
_parse_device(device),
input_binding_name_list,
output_binding_name_list,
Code conforms to C++ style guidelines
@@ -53,7 +49,9 @@ def gelu(

     layer = network.add_plugin_v2([input_val], plugin)

-    def gelu_dyn_range_fn(dyn_range):
+    def gelu_dyn_range_fn(
This would likely fail in the `FakeTensor` context we use in the `torch.compile` path. I believe the correct type signature for `dyn_range` should be `Tuple[float, float]`, as mentioned here. Then, the fix would be to change the return to be:

import math
...
return (
    dyn_range[0] * 0.5 * (1.0 + math.erf(dyn_range[0] / math.sqrt(2.0)))
), (dyn_range[1] * 0.5 * (1.0 + math.erf(dyn_range[1] / math.sqrt(2.0))))

The above avoids the unnecessary cast to `torch.Tensor`. Additionally, the correct return type would then also be `Tuple[float, float]`.
Note: I changed the second return argument from using `dyn_range[0]` to `dyn_range[1]`. Not sure if this is a bug in the original implementation.
Is it supposed to be an op on a scalar or on a tensor?
I think this nested function `gelu_dyn_range_fn` is supposed to be a function on a Tuple of two scalars.
return (
    dyn_range[0] * 0.5 * (1.0 + torch.erf(dyn_range[0] / math.sqrt(2.0)))
), (dyn_range[1] * 0.5 * (1.0 + torch.erf(dyn_range[0] / math.sqrt(2.0))))
See above comment
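Putting the suggestions in this thread together, here is a minimal sketch of what the corrected helper could look like, assuming `dyn_range` is a `Tuple[float, float]` and omitting the surrounding TensorRT gelu converter. This is an illustration of the proposed fix, not the code merged in this PR:

```python
import math
from typing import Tuple


def gelu_dyn_range_fn(dyn_range: Tuple[float, float]) -> Tuple[float, float]:
    """Map a (min, max) dynamic range through GELU using plain Python floats."""

    def gelu(x: float) -> float:
        # Exact GELU via math.erf on a float; no torch.Tensor is created,
        # so this should remain safe under FakeTensor tracing in torch.compile.
        return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    return gelu(dyn_range[0]), gelu(dyn_range[1])
```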
Code conforms to C++ style guidelines
LGTM pending CI
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py 2023-08-07 18:32:08.732415+00:00
+++ /home/runner/work/TensorRT/TensorRT/examples/dynamo/torch_compile_advanced_usage.py 2023-08-07 18:34:44.880582+00:00
@@ -12,10 +12,11 @@
import torch
import torch_tensorrt
# %%
+
# We begin by defining a model
class Model(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/tools/opset_coverage.py 2023-08-07 18:32:08.744415+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/tools/opset_coverage.py 2023-08-07 18:34:45.808594+00:00
@@ -108,11 +108,10 @@
def opset_coverage(
opset: List[Tuple[str, str]],
converter_registry: Optional[ConverterRegistry] = None,
decomposition_registry: Optional[Dict[OpOverload, Callable[..., Any]]] = None,
) -> OpsetCoverage:
-
opset_schemas = dict(opset)
opset_targets = set(opset_schemas.keys())
support_status = {}
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py 2023-08-07 18:32:08.756415+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/ts/_compiler.py 2023-08-07 18:34:49.220113+00:00
@@ -302,12 +302,16 @@
output_binding_names (List[str]): List of names of TensorRT bindings in order that should be returned from the encompassing PyTorch module
device (Union(torch_tensorrt.Device, torch.device, dict)): Target device to run engine on. Must be compatible with engine provided. Default: Current active device
Returns:
torch.jit.ScriptModule: New TorchScript module with engine embedded
"""
- input_binding_name_list = input_binding_names if input_binding_names is not None else []
- output_binding_name_list = output_binding_names if output_binding_names is not None else []
+ input_binding_name_list = (
+ input_binding_names if input_binding_names is not None else []
+ )
+ output_binding_name_list = (
+ output_binding_names if output_binding_names is not None else []
+ )
cpp_mod = _C.embed_engine_in_new_module(
serialized_engine,
_parse_device(device),
input_binding_name_list,
output_binding_name_list,
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Code conforms to C++ style guidelines
We are adding:
- mypy: Type checking
- ruff: Linting and common bug detection
- isort: Import order cleanup

These tools will run as part of the pre-commit system and are required for all code in //py and //examples. They are suggested for the rest.

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

Follow-up commits (each carrying the same sign-offs):
chore(//py/torch_tensorrt/ts): Make torch_tensorrt.ts.TorchScriptInput mypy compliant
chore(//py/torch_tensorrt/ts): Making compile mypy compliant
chore(//py/torch_tensorrt): Make Device conform to mypy
chore(//py/torch_tensorrt): Making Input mypy compliant
chore(//py/torch_tensorrt/ts): Make compile_spec conform to mypy
chore(//py/torch_tensorrt): more _Input.py mypy fixes
chore(//py/torch_tensorrt): conforming logging to mypy
chore(//py/torch_tensorrt): _compile.py conforms to mypy
chore(//py/torch_tensorrt/dynamo/backend): Backend is mypy conforming
chore(//py/torch_tensorrt): substitutions mypy compliant
chore(//py/torch_tensorrt/dynamo/lowering): mypy compliance
chore(//py/torch_tensorrt/dynamo): Tracer mypy compliance
chore(//py/torch_tensorrt): ptq mypy compliance
chore(//py/torch_tensorrt): __init__.py mypy conforming
chore(//py/torch_tensorrt/dynamo/conversion): mypy conforming
chore: mypy compliance with 3.11 syntax
chore: Buildifier on build files
chore: linting for the remaining files
chore: All code is now flake8 compliant
chore: adding isort to pre-commit
chore: ready to start review
Code conforms to C++ style guidelines
Code conforms to Python style guidelines
Description
Installing a number of standard tools to make the codebase more maintainable long term.
Adds:
- mypy: Type checking
- ruff: Linting and common bug detection
- isort: Import order cleanup
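As a rough illustration of the kind of change these tools drive, here is a hypothetical, self-contained function (not code from this PR) using the mypy-friendly Optional handling seen in the `_compiler.py` diffs above, formatted the way the style checker expects:

```python
from typing import List, Optional


def describe_bindings(
    input_binding_names: Optional[List[str]] = None,
    output_binding_names: Optional[List[str]] = None,
) -> str:
    """Summarize how many input/output engine bindings were supplied."""
    # Normalize Optional arguments to empty lists before use, mirroring the
    # input_binding_name_list / output_binding_name_list pattern above.
    input_binding_name_list = (
        input_binding_names if input_binding_names is not None else []
    )
    output_binding_name_list = (
        output_binding_names if output_binding_names is not None else []
    )
    return (
        f"{len(input_binding_name_list)} inputs, "
        f"{len(output_binding_name_list)} outputs"
    )
```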