🐛 [Bug] Unable to compile the model using torch tensorrt #1565
Comments
Can you enable debug logging and provide the full log?

Please find the error log below; I hope this helps.
@peri044 did you get a chance to look into this?
I looked into the issue and I think the error could be related to the behavior of this model when scripted/traced. Since the model is passed in as an `nn.Module`, it is scripted before compilation (see `TensorRT/py/torch_tensorrt/_compile.py`, lines 119 to 127 at commit `8adcacc`).
When I run `scripted_model = torch.jit.script(model)` and then call the scripted model on a tensor of shape `(1, 3, 720, 1080)`, TorchScript throws an error, as it seems to expect a list of `(C, H, W)` images as input. Additionally, the output type of this model appears to be a Python dictionary, which may also be contributing to the issue. I will update with any further findings/workarounds.
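The calling convention described above can be sketched with a toy module (`ToyDetector` below is a hypothetical stand-in, not the actual torchvision Faster R-CNN code): it accepts a list of `(C, H, W)` tensors and returns a dict, and the scripted version rejects a single batched 4-D tensor of the kind one would pass to a classification model.

```python
import torch
from typing import Dict, List

# Hypothetical toy module mimicking the calling convention of
# torchvision detection models: a list of (C, H, W) images in,
# a dict of results out.
class ToyDetector(torch.nn.Module):
    def forward(self, images: List[torch.Tensor]) -> Dict[str, torch.Tensor]:
        # Pretend each image yields one "score".
        scores = torch.stack([img.mean() for img in images])
        return {"scores": scores}

model = torch.jit.script(ToyDetector())

# Correct call: a list of (C, H, W) tensors.
out = model([torch.rand(3, 720, 1080)])
print(sorted(out.keys()))  # ['scores']

# Wrong call: a single batched (N, C, H, W) tensor --
# TorchScript rejects the argument type.
try:
    model(torch.rand(1, 3, 720, 1080))
except RuntimeError as e:
    print("TorchScript raised:", type(e).__name__)
```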
@peri044 @gs-olive, I can see that the compile function scripts the model and compiles it according to the precision. However, I'm using a built-in PyTorch model here.
Thanks for the update. Upon further investigation, it seems that the dictionary output is not the root cause of the issue. The error occurs on line 172 of `TensorRT/core/lowering/lowering.cpp` (lines 171 to 172 at commit `835abf0`).
The Torch lowering code throws an error, shown here, because the model code itself sets class attributes from within the `forward` function, as shown in this snippet from the MobileNet V3 model code. I will update with any workarounds that make the compilation functional.
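A minimal sketch of the problematic pattern (illustrative names, not the real MobileNet V3 code), alongside a purely functional rewrite that avoids attribute writes in `forward`, so the scripted graph contains no attribute-mutation nodes for the lowering pass to reject:

```python
import torch

# Hypothetical reproduction of the pattern the lowering pass rejects:
# forward() writes to a class attribute. "last_mean" is an illustrative
# name, not taken from the real model code.
class Mutating(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.last_mean = torch.zeros(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.last_mean = x.mean().reshape(1)  # attribute write inside forward
        return x * 2.0

# Workaround sketch: keep forward() free of attribute writes.
class Functional(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

x = torch.rand(4)
assert torch.equal(Mutating()(x), Functional()(x))
scripted = torch.jit.script(Functional())
print(torch.equal(scripted(x), x * 2.0))  # True
```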
As an update on this issue, we are investigating the FX path for this model, and are addressing some failures with the model currently (see pytorch/pytorch#96151).
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
Hello - I have verified that this model is successfully compiling with our `torch.compile` path:

```python
import torch
import torch_tensorrt
...
optimized_model = torch.compile(detectron, backend="tensorrt", options={...})
optimized_model(*inputs)
```

The current version of
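The `torch.compile(..., backend=...)` mechanism used above can be illustrated with a trivial stand-in backend (the real `"tensorrt"` backend requires `torch_tensorrt` and a GPU; `passthrough_backend` below is a hypothetical example): a backend receives the captured FX graph and returns a callable.

```python
import torch

# Sketch of how a torch.compile backend plugs in. A real backend
# (e.g. Torch-TensorRT's) would convert the captured graph to an
# optimized engine here; this one just runs the graph as-is.
def passthrough_backend(gm: torch.fx.GraphModule, example_inputs):
    return gm.forward

model = torch.nn.Linear(8, 4)
optimized = torch.compile(model, backend=passthrough_backend)

x = torch.rand(2, 8)
print(torch.allclose(optimized(x), model(x)))  # True
```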
Bug Description
Hi team, I have built an object detection model using the torchvision Faster R-CNN model. I need to deploy this model on the NVIDIA Triton Inference Server, so I'm trying to compile the model using torch_tensorrt, but it's failing.
@narendasan @gs-olive
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The PyTorch model should be compiled using the torch_tensorrt library.

Environment
OS: Ubuntu 20.04
Python: 3.10.8
TensorRT version: 8.5.2.2
Installation method (conda, pip, libtorch, source): conda

Additional context
**Please find the error message below.**