
🐛 [Bug] Encountered bug when using TRTModuleNext in Dynamo #2053

Closed
gs-olive opened this issue Jun 22, 2023 · 0 comments · Fixed by #2054
Labels: bug (Something isn't working)

gs-olive (Collaborator)
Bug Description

When compiling the fasterrcnn_mobilenet_v3_large_320_fpn model with torch_tensorrt.dynamo.compile and the use_experimental_rt=True argument, the following error is encountered:

  Outputs: [
    id: 0
      name: output0
      shape: [2, 960, 13, 14]
      dtype: Float
    id: 1
      name: output1
      shape: [1, 160, 1, 1]
      dtype: Float
    id: 2
      name: output2
      shape: [1, 960, 1, 1]
      dtype: Float
    id: 3
      name: output3
      shape: [2, 960, 1, 1]
      dtype: Float
  }

...

DEBUG: [Torch-TensorRT - Debug Build] - Output Name: output0 Shape: [2, 960, 13, 14]
DEBUG: [Torch-TensorRT - Debug Build] - Output Name: output0 Shape: [2, 960, 13, 14]
DEBUG: [Torch-TensorRT - Debug Build] - Output Name: output0 Shape: [2, 960, 13, 14]
DEBUG: [Torch-TensorRT - Debug Build] - Output Name: output3 Shape: [2, 960, 1, 1]
ERROR: [Torch-TensorRT - Debug Build] - 3: [executionContext.cpp::enqueueV3::2666] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueV3::2666, condition: mContext.profileObliviousBindings.at(profileObliviousIndex) || getPtrOrNull(mOutputAllocators, profileObliviousIndex)

Note the mismatch between the output names logged by the runtime (output0 repeated three times) and the outputs reported by the engine itself (output0 through output3).
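As a sanity check (not part of the original report), one way to see which output names the deserialized engine itself exposes is to enumerate its I/O tensors with the TensorRT Python API. This is a minimal sketch, assuming TensorRT >= 8.5 and that serialized_engine is a hypothetical handle to the engine bytes produced during compilation:

import tensorrt as trt

trt_logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(trt_logger)
# serialized_engine is assumed to hold the engine bytes for the failing subgraph
engine = runtime.deserialize_cuda_engine(serialized_engine)

# Print every output tensor the engine reports, for comparison with the
# names logged by TRTModuleNext above
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
        print(name, engine.get_tensor_shape(name), engine.get_tensor_dtype(name))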

To Reproduce

Steps to reproduce the behavior:

import torch
import torchvision
import torch_tensorrt

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn().eval().cuda()
inp = torch.rand((3, 300, 400)).cuda()
inp2 = torch.rand((3, 500, 400)).cuda()
model_acc = torch_tensorrt.dynamo.compile(model, [inp, inp2], pass_through_build_failures=True, use_experimental_rt=True)
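The DEBUG-level runtime lines quoted above can be surfaced by wrapping the compile call in the torch_tensorrt debug-logging context manager (an assumption about how the logs were captured; the original report does not say):

# Assumed way to enable DEBUG-level Torch-TensorRT logs around compilation
with torch_tensorrt.logging.debug():
    model_acc = torch_tensorrt.dynamo.compile(
        model, [inp, inp2], pass_through_build_failures=True, use_experimental_rt=True
    )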

Expected behavior

Model should compile without errors.

Environment

  • Torch-TensorRT Version (e.g. 1.0.0): 075a028
  • PyTorch Version (e.g. 1.0): 2.1.0.dev20230606+cu118

Additional context

Related: #1565, #1995

@gs-olive added the bug (Something isn't working) label on Jun 22, 2023
@gs-olive self-assigned this on Jun 22, 2023