
🐛 [Bug] aten::abs converter does not support int32 #1231

Closed
mfeliz-cruise opened this issue Aug 4, 2022 · 0 comments
Assignees
Labels
bug Something isn't working component: converters Issues re: Specific op converters

Comments

@mfeliz-cruise (Contributor)

Bug Description

The current implementation of the aten::abs converter relies on the UnaryLayer kABS operation, which does not support integer inputs:

```
auto unary = ctx->net->addUnary(*in, nvinfer1::UnaryOperation::trt_type);
```

https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_network_definition.html#a77831224c9a72ad02587a56ded93c672

```
Generally the input must have a floating-point type (or kINT8 as a quantized float), except for the following operations:

kSIGN accepts a floating-point or Int32 tensor.
kNOT requires a Bool tensor.
```

To Reproduce

Steps to reproduce the behavior:

  1. Attempt to compile a model with an aten::abs op with integer inputs.

Expected behavior

This can be supported with an element-wise implementation of the op in cases where the UnaryLayer does not support the input type: abs(x) = max(x, x*-1).

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@mfeliz-cruise mfeliz-cruise added the bug Something isn't working label Aug 4, 2022
mfeliz-cruise added a commit to mfeliz-cruise/Torch-TensorRT that referenced this issue Aug 4, 2022
Adds support for aten::abs with integer input. Previous implementation relied on the UnaryLayer kABS implementation which does not support integers.

https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_network_definition.html#a77831224c9a72ad02587a56ded93c672
```
Generally the input must have a floating-point type (or kINT8 as a quantized float), except for the following operations:

kSIGN accepts a floating-point or Int32 tensor.
kNOT requires a Bool tensor.
```

Fixes # (pytorch#1231)

Please delete options that are not relevant and/or add your own.

- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- This change requires a documentation update

- [ ] My code follows the style guidelines of this project (You can use the linters)
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas and hacks
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests to verify my fix or my feature
- [ ] New and existing unit tests pass locally with my changes
- [ ] I have added the relevant labels to my PR so that relevant reviewers are notified

Signed-off-by: Michael Feliz <michael.feliz@getcruise.com>
@narendasan narendasan added the component: converters Issues re: Specific op converters label Aug 4, 2022