Generally the input must have a floating-point type (or kINT8 as a quantized float), except for the following operations:
kSIGN accepts a floating-point or Int32 tensor.
kNOT requires a Bool tensor.
To Reproduce
Steps to reproduce the behavior:
Attempt to compile a model containing an aten::abs op with integer inputs.
Expected behavior
This can be supported with an element-wise implementation of the op in cases where the UnaryLayer does not support the input type: abs(x) = max(x, x * -1)
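The proposed fallback rests on a simple identity that uses only operations element-wise layers support for integer tensors. The helper below is a hypothetical pure-Python illustration of that identity, not the actual converter code:

```python
def elementwise_abs(values):
    """Element-wise abs via the identity abs(x) = max(x, x * -1).

    Mirrors the proposed fallback: negate by multiplying with -1,
    then take the element-wise max of the input and its negation.
    """
    return [max(v, v * -1) for v in values]

print(elementwise_abs([-3, 0, 7]))  # [3, 0, 7]
```

In the converter, the same decomposition would be expressed with element-wise product and max layers rather than Python built-ins.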
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
Torch-TensorRT Version (e.g. 1.0.0):
PyTorch Version (e.g. 1.0):
CPU Architecture:
OS (e.g., Linux):
How you installed PyTorch (conda, pip, libtorch, source):
Build command you used (if compiling from source):
Are you using local sources or building from archives:
Python version:
CUDA version:
GPU models and configuration:
Any other relevant information:
Additional context
Adds support for aten::abs with integer input. The previous implementation relied on the UnaryLayer kABS implementation, which does not support integers.
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_network_definition.html#a77831224c9a72ad02587a56ded93c672
```
Generally the input must have a floating-point type (or kINT8 as a quantized float), except for the following operations:
kSIGN accepts a floating-point or Int32 tensor.
kNOT requires a Bool tensor.
```
Fixes pytorch#1231
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update
- [ ] My code follows the style guidelines of this project (You can use the linters)
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas and hacks
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests to verify my fix or my feature
- [ ] New and existing unit tests pass locally with my changes
- [ ] I have added the relevant labels to my PR so that relevant reviewers are notified
Signed-off-by: Michael Feliz <michael.feliz@getcruise.com>
Bug Description
The current implementation of the aten::abs converter relies on the UnaryLayer kABS implementation, which does not support integers.
TensorRT/core/conversion/converters/impl/unary.cpp, line 16 at 84ffb67
https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_network_definition.html#a77831224c9a72ad02587a56ded93c672