feat: support activation dynamo converters #2254
Conversation
Force-pushed from a9c002b to 4e0288a.
    network: TRTNetwork,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
) -> Union[TRTTensor, Sequence[TRTTensor]]:
-    return impl.normalization.layer_norm(
+    return impl.actv.sigmoid(
use the full name (activation)
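For reference, a minimal sketch of the converter body with the full sub-module name the reviewer asks for. The registration decorator is omitted, the wrapper name is illustrative, and the argument order of impl.activation.sigmoid is assumed from the surrounding converters:

from typing import Dict, Sequence, Tuple, Union

from torch.fx.node import Argument, Target
from torch_tensorrt.dynamo._SourceIR import SourceIR
from torch_tensorrt.dynamo.conversion import impl
from torch_tensorrt.fx.types import TRTNetwork, TRTTensor


def aten_ops_sigmoid(
    network: TRTNetwork,
    target: Target,
    args: Tuple[Argument, ...],
    kwargs: Dict[str, Argument],
    name: str,
) -> Union[TRTTensor, Sequence[TRTTensor]]:
    # refer to the activation package by its full name instead of an abbreviation
    return impl.activation.sigmoid(network, target, SourceIR.ATEN, name, args[0])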
Done, but got error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/__init__.py", line 86, in <module>
from torch_tensorrt._compile import * # noqa: F403
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/_compile.py", line 13, in <module>
from torch_tensorrt.dynamo.compile import compile as dynamo_compile
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/__init__.py", line 13, in <module>
from .compile import compile # noqa: F403
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/compile.py", line 13, in <module>
from torch_tensorrt.dynamo import CompilationSettings, partitioning
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/partitioning/__init__.py", line 1, in <module>
from ._adjacency_partitioner import partition as fast_partition
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/partitioning/_adjacency_partitioner.py", line 16, in <module>
from torch_tensorrt.dynamo.conversion.converter_registry import (
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/__init__.py", line 2, in <module>
from .aten_ops_converters import * # noqa: F403
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/aten_ops_converters.py", line 8, in <module>
from torch_tensorrt.dynamo.conversion import impl
File "/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/impl/__init__.py", line 3, in <module>
from . import (
ImportError: cannot import name 'activation' from partially initialized module 'torch_tensorrt.dynamo.conversion.impl' (most likely due to a circular import) (/opt/circleci/.pyenv/versions/3.10.9/lib/python3.10/site-packages/torch_tensorrt/dynamo/conversion/impl/__init__.py)
Exited with code exit status 1
CircleCI received exit code 1
Did you come across a similar error by any chance?
I've seen this error before. The issue seems to be here:
from torch_tensorrt.dynamo.conversion.impl.activation.base import convert_activation
It is trying to import a function from the module currently being initialized. I think it can be fixed by replacing it with
from .base import convert_activation
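For reference, a minimal sketch of the suggested swap inside the activation package's __init__ (file path assumed); the relative form resolves within the sub-package instead of re-entering the still-initializing impl package:

# py/torch_tensorrt/dynamo/conversion/impl/activation/__init__.py  (path assumed)

# Suspected culprit: the absolute import goes back through
# torch_tensorrt.dynamo.conversion.impl, which is still mid-initialization
# when this file is executed.
# from torch_tensorrt.dynamo.conversion.impl.activation.base import convert_activation

# Suggested alternative: resolve .base relative to this sub-package.
from .base import convert_activation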
Thanks George! I modified it as you suggested, but it doesn't seem to be the problem. I got the same error.
Thanks for testing this - I actually tried this branch on my own machine and I'm not seeing this error with either the from torch_tensorrt.dynamo.conversion.impl.activation.base import convert_activation form or the from .base import convert_activation form. Do you see the error locally?
Thanks for your help! No, I don't. I typically push only after all tests pass on my local machine. That's weird...
No problem! Definitely strange - you could try a rebase onto main to resolve the merge conflict and see if anything changes with that.
I rebased, but it still doesn't work (I used from .base import convert_activation). I don't think the problem is with from torch_tensorrt.dynamo.conversion.impl.activation.base import convert_activation, because the unary folder uses the similar from torch_tensorrt.dynamo.conversion.impl.unary.base import convert_unary and it works. 😵
plugin = get_trt_plugin(plugin_name, field_collection, plugin_version)
layer = network.add_plugin_v2([input_val], plugin)
This needs to be handled in lowering instead of the plugin @peri044
Force-pushed from f83615a to e878a8c.
plugin = get_trt_plugin(plugin_name, field_collection, plugin_version)
layer = network.add_plugin_v2([input_val], plugin)
Should use a lowering pass and run it natively instead of in a plugin
There is currently an aten.gelu decomposition enabled on main, so this can potentially be removed.
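For illustration, a small sketch (names and wording assumed) that checks PyTorch's decomposition table for aten.gelu, which is what allows the lowering path to express gelu in terms of simpler aten ops instead of going through a TensorRT plugin:

import torch
from torch._decomp import get_decompositions

# look up the registered decomposition for aten.gelu
decomps = get_decompositions([torch.ops.aten.gelu])
print(torch.ops.aten.gelu.default in decomps)  # True when a decomposition is registered

# apply the registered decomposition directly and compare against the eager op
x = torch.randn(4)
gelu_decomp = decomps[torch.ops.aten.gelu.default]
print(torch.allclose(gelu_decomp(x), torch.nn.functional.gelu(x), atol=1e-6))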
Left a few comments on switching to functional implementations of operators. Additionally, please rebase onto main to resolve merge conflicts.
operation_type = trt.ActivationType.SELU

def selu_dyn_range_fn(dyn_range):
    return (torch.nn.SELU(dyn_range[0]), torch.nn.SELU(dyn_range[1]))
Switch to torch.nn.functional.selu to use the functional implementation.
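A minimal sketch of what the functional form could look like, assuming the dynamic-range bounds are plain Python floats (hence the torch.tensor wrapping, since the functional API expects tensors):

import torch
import torch.nn.functional as F

def selu_dyn_range_fn(dyn_range):
    # apply SELU directly to each bound instead of constructing nn.SELU modules
    low = F.selu(torch.tensor(dyn_range[0])).item()
    high = F.selu(torch.tensor(dyn_range[1])).item()
    return (low, high)

# example usage with a symmetric input range
print(selu_dyn_range_fn((-1.0, 1.0)))

The same substitution applies to the softsign, softplus, and elu helpers below (torch.nn.functional.softsign, torch.nn.functional.softplus, torch.nn.functional.elu).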
operation_type = trt.ActivationType.SOFTSIGN

def softsign_dyn_range_fn(dyn_range):
    return (torch.nn.Softsign(dyn_range[0]), torch.nn.Softsign(dyn_range[1]))
Switch to torch.nn.functional.softsign to use the functional implementation.
torch.nn.Softplus(dyn_range[0], beta),
torch.nn.Softplus(dyn_range[1], beta),
Similarly here for torch.nn.functional.softplus
def scaled_tanh_dyn_range_fn(dyn_range):
    def scaled_tanh_fn(x):
        return alpha * torch.nn.Tanh(beta * x)
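The same module-vs-function issue shows up here: torch.nn.Tanh(beta * x) constructs a module rather than applying tanh. A minimal sketch of a functional version, with alpha and beta (normally captured from the enclosing converter) given example values so the snippet runs on its own:

import math

# alpha and beta would be the converter's scale parameters; example values here
alpha, beta = 1.0, 2.0

def scaled_tanh_dyn_range_fn(dyn_range):
    def scaled_tanh_fn(x):
        # math.tanh applies directly to the float bound
        return alpha * math.tanh(beta * x)
    return scaled_tanh_fn(dyn_range[0]), scaled_tanh_fn(dyn_range[1])

print(scaled_tanh_dyn_range_fn((-1.0, 1.0)))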
torch.nn.ELU(dyn_range[0], alpha),
torch.nn.ELU(dyn_range[1], alpha),
Similarly here: torch.nn.functional.elu
@gs-olive Thanks! I modified and rebased.
Force-pushed from b8d189a to 60169bb.
The cyclic import issue may be caused by the fact that a new directory/module was created but not added to setup.py. Could you try adding "torch_tensorrt.dynamo.conversion.impl.activation" here (Lines 396 to 397 in e49ef6d):
"torch_tensorrt.dynamo.conversion.impl.unary",
"torch_tensorrt.dynamo.lowering",
as well as "torch_tensorrt.dynamo.conversion.impl.activation": "py/torch_tensorrt/dynamo/conversion/impl/activation" here (Lines 422 to 423 in e49ef6d):
"torch_tensorrt.dynamo.conversion.impl.unary": "py/torch_tensorrt/dynamo/conversion/impl/unary",
"torch_tensorrt.dynamo.lowering": "py/torch_tensorrt/dynamo/lowering",
from torch_tensorrt.dynamo._SourceIR import SourceIR
from torch_tensorrt.fx.types import TRTNetwork, TRTTensor

from .base import convert_activation
This can be changed back to from torch_tensorrt.dynamo.conversion.impl.activation.base import convert_activation
Force-pushed from 60169bb to a4722d9.
@gs-olive Oh, I didn't even know about this before. I guess that's the problem! Thanks! Updated the code!
@zewenli98 - sure, no problem! I think since the …
Commits: lint test file; fix bugs: circular import; delete gelu; change function calls from nn.Module to nn.functional
Force-pushed from a4722d9 to 27b2dcf.
Looks good to me!
Description
Support activation dynamo converters, including relu, sigmoid, tanh, leaky_relu, elu, selu, softplus, clip, and hardsigmoid.
Fixes #2201
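For illustration, a minimal sketch of exercising a couple of the new converters through the dynamo frontend (the model and input shapes are made up, and torch_tensorrt.compile with ir="dynamo" is assumed as the entry point):

import torch
import torch_tensorrt

class ActivationModel(torch.nn.Module):
    def forward(self, x):
        # chains two of the activations covered by the new converters
        return torch.nn.functional.elu(torch.sigmoid(x))

model = ActivationModel().eval().cuda()
inputs = [torch.randn(1, 3, 224, 224, device="cuda")]

# compile through the dynamo path so the aten activation converters are used
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
print(trt_model(*inputs).shape)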
Type of change
Checklist: