Update usage of PyTorch's custom op API #2193
Conversation
Changes look good - thanks for this update. Added a few comments/questions. Additionally, please run the Python linting on this PR. You can do so by running the following from the root directory of the repository:
pip install -r requirements-dev.txt
pre-commit install
Subsequent commits will be automatically linted.
This PR is currently blocked pending a Torch nightly upgrade on the repository.
Hi @zou3519 - we recently updated the stack to use the latest nightly distribution of PyTorch. Could you run linting and rebase to the latest main? Thank you
Will do!
Force-pushed from 9615b1c to 813742e
Fixes #2328
Force-pushed from 813742e to b890dec
Looks good! Just pending the Python linting fixes mentioned in the comments below.
-@custom_op(
-    qualname="tensorrt::einsum",
-    manual_schema="(str equation, Tensor[] tensors) -> Tensor",
+library.custom_op(
Remove one space above this call; add one more space below line 15
-    manual_schema="(Tensor x, int[1] kernel_size, int[1] stride, int[1] padding, int[1] dilation, bool ceil_mode) -> Tensor",
+library.custom_op(
+    "tensorrt::maxpool1d",
+    "(Tensor x, int[1] kernel_size, int[1] stride, int[1] padding, int[1] dilation, bool ceil_mode) -> Tensor"
Add comma at end of this line; add another space below line 26
Thanks; I'm not actually sure how to run the linter. Do I need to undo the commit and then redo it for the pre-commit hook to pick it up?
Yes, you would need to somehow change those files for the pre-commit hook to track them. Any small change to the file (like an additional newline at the end) will cause the whole file to be linted, which also works.
Force-pushed from b890dec to 6e34e4c
Perfect, that did the trick (newline at the end)
Great! I think
Force-pushed from 6e34e4c to 6ba9466
Hi, I maintain the custom ops story in PyTorch. This PR updates the usage of PyTorch's private custom op API to a newer API. This API is still private but closer to what we want it to be. Test Plan: - wait for CI
Force-pushed from 6ba9466 to 3c65d4f
@gs-olive I think the linting passed, but I'm not sure the test failures are significant. Thoughts?
@zou3519 - I reran the tests and will merge once they pass - I don't believe the failures are related to the content of this PR
Looks good to me! Now passing CI
Description
Hi, I maintain the custom ops story in PyTorch. This PR updates the usage of PyTorch's private custom op API to a newer API. This API is still private but closer to what we want it to be.
Type of change
API migration.