fix: Address .numpy() issue on fake tensors #1949

Conversation
```diff
@@ -52,6 +52,7 @@ def aot_torch_tensorrt_aten_backend(
 )

+@fake_tensor_unsupported
```
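For context, a minimal sketch of how this decorator is applied to a Dynamo backend (the backend function here is hypothetical; `fake_tensor_unsupported` comes from `torch._dynamo.backends.common`):

```python
import torch
from torch._dynamo.backends.common import fake_tensor_unsupported

@fake_tensor_unsupported
def my_backend(gm: torch.fx.GraphModule, sample_inputs):
    # Hypothetical backend: with the decorator applied, Dynamo swaps the
    # FakeTensor example inputs for real (zero-filled) tensors before calling
    # this function, so value-dependent code such as .numpy() can run.
    return gm.forward
```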
This is the ultimate backend everything goes through, right? So does that mean we can't work on fake tensors? Is this different than symbolic shapes?
Also, could we just turn a "fake_tensor" into an "ITensor" immediately? It sounds like they are similar.
Yes, this is the backend everything goes through. My understanding is that FakeTensors are for use at compile time and are distinct from SymInts, which are the symbolic shape representations.
The challenge with fake tensors right now is that any tensors instantiated during the call are fake, which means the constant tensors we need to provide to TensorRT, as in the code snippet below, are "fake" and thus contain no values to be parsed. A long-term solution could be to support fake tensors fully, but this change temporarily resolves the TRT/Torch compatibility issue.
TensorRT/py/torch_tensorrt/fx/converters/converter_utils.py, lines 247 to 255 at c7e79b2:
```python
if isinstance(value, int):
    value = torch.IntTensor([value])
if isinstance(value, float):
    value = torch.Tensor([value])
if dtype:
    value = value.to(dtype)
constant = network.add_constant(value.shape, to_numpy(value))
```
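To illustrate the failure mode, here is a minimal repro sketch (not code from this PR; it assumes `torch._subclasses.fake_tensor.FakeTensorMode`, under which factory calls produce FakeTensors):

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    value = torch.ones(1)  # created under fake mode, so this is a FakeTensor
    try:
        value.numpy()      # FakeTensors carry no data, so this raises
    except Exception as exc:
        print(f".numpy() failed on a fake tensor: {exc}")
```

This is exactly what `to_numpy` runs into above: the constant exists only as shape/dtype metadata, so there are no values to hand to `network.add_constant`.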
Opened #1951 with a feature proposal and additional discussion.
- Add `fake_tensor_unsupported` decorator to helper backend
- Refactor `conversion` implementation to use compilation settings object as well, to reduce code duplication and encourage reuse
- Improve debugger messages by pre-formatting support string
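As a rough sketch of the settings-object pattern described above (the class and field names below are illustrative assumptions, not the exact API added by this PR):

```python
from dataclasses import dataclass

import torch

@dataclass(frozen=True)
class CompilationSettings:
    # Illustrative fields only; the real settings object may differ.
    debug: bool = False
    precision: torch.dtype = torch.float32
    workspace_size: int = 0

def convert_module(gm: torch.fx.GraphModule, settings: CompilationSettings):
    # A single settings object keeps helper backends and the conversion path
    # on one shared configuration type instead of duplicating keyword
    # arguments at every call site.
    if settings.debug:
        print(f"Compiling with {settings}")
    return gm  # sketch only: a real implementation would convert to TensorRT here
```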
Force-pushed from 6d65cbf to 618615b.
Description

- Add `fake_tensor_unsupported` decorator to helper backend
- Refactor `conversion` implementation to use compilation settings object as well, to reduce code duplication and encourage reuse

Type of change
Checklist: