🐛 [Bug] Encountered bug when using Torch-TensorRT #2328
Labels:
- bug (Something isn't working)
- component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths)
Bug Description
This might not be a bug; maybe it's a feature request, I'm not sure.
I wanted to compile torch.einsum with torch_tensorrt, but I get back an error.
I was reading this tutorial about compiling transformers:
https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_transformers_example.html
Based on this I created a small example module containing einsum, and I get this error:
torch.einsum either isn't a supported op yet, or, if it is, I think it's buggy.
It's not listed under supported ops here:
https://github.com/pytorch/TensorRT/blob/8ebb5991f8bc46fea6179593b882d5c160bc1a53/docs/_sources/indices/supported_ops.rst.txt
TensorRT supports it according to this: (IEinsumLayer)
https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-861/operators/index.html
So I don't see why it wouldn't be supported in torch-tensorrt.
I see some issues/PRs that relate to einsum, but I don't know if they apply here. The closest issue I found is
#277
but it was closed due to inactivity.
Other issues/PRs:
#1385
#1985
#1420
#1005
To Reproduce
Steps to reproduce the behavior:
Only compile option 3 works, but I don't know what the difference is between these 3 options; can somebody clear that up? Options 1 and 2 seem equivalent to me, but what about option 3?
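The original reproduction code isn't shown above, so the sketch below is a hypothetical minimal reproduction: the module definition, the input shapes, and the three compile variants (`torch.compile` with the `torch_tensorrt` backend, and `torch_tensorrt.compile` via the `dynamo` and `torch_compile` frontends) are my assumptions, not the reporter's actual code:

```python
import torch
import torch.nn as nn


class EinsumModule(nn.Module):
    """Minimal module whose forward is a single einsum (a batched matmul)."""

    def forward(self, x, y):
        return torch.einsum("bij,bjk->bik", x, y)


model = EinsumModule().eval()
x = torch.randn(2, 3, 4)
y = torch.randn(2, 4, 5)
out = model(x, y)  # eager baseline, shape (2, 3, 5)

# The three compile variants (all assumed; running them needs a CUDA
# device and a torch_tensorrt install):
#
# Option 1: torch.compile with the Torch-TensorRT backend
#   compiled = torch.compile(model.cuda(), backend="torch_tensorrt")
#
# Option 2: torch_tensorrt.compile through the dynamo frontend
#   import torch_tensorrt
#   compiled = torch_tensorrt.compile(model.cuda(), ir="dynamo",
#                                     inputs=[x.cuda(), y.cuda()])
#
# Option 3: torch_tensorrt.compile through the torch_compile frontend
#   compiled = torch_tensorrt.compile(model.cuda(), ir="torch_compile",
#                                     inputs=[x.cuda(), y.cuda()])
```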
Expected behavior
I expect all 3 options to work, but only the 3rd compile option succeeds.
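As a possible stopgap (my suggestion, not from the report): when the einsum equation is a standard contraction, it can often be rewritten with ops that Torch-TensorRT already converts, such as torch.matmul, so the module compiles without einsum support. A sketch, assuming the same `"bij,bjk->bik"` equation as above:

```python
import torch
import torch.nn as nn


class EinsumFree(nn.Module):
    """Computes the same result as torch.einsum("bij,bjk->bik", x, y),
    rewritten as a batched matmul, which avoids the einsum converter."""

    def forward(self, x, y):
        return torch.matmul(x, y)


x = torch.randn(2, 3, 4)
y = torch.randn(2, 4, 5)
ref = torch.einsum("bij,bjk->bik", x, y)
out = EinsumFree()(x, y)
# The rewrite is numerically equivalent to the einsum version.
assert torch.allclose(out, ref, atol=1e-6)
```

This only helps for equations with a direct matmul/permute equivalent; a general einsum still needs a proper converter.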
Environment
I'm using this docker image:
nvcr.io/nvidia/pytorch:23.08-py3
https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-23-08.html