E2e implementation for aten.cat, aten.gather, aten.bmm #312
Merged
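For context on the ops this PR covers, `aten.gather` has the least obvious semantics of the three. A minimal pure-Python sketch of its 2-D, dim=1 behavior, per the documented `torch.gather` contract `out[i][j] = input[i][index[i][j]]` (illustration only, not the actual torch-mlir lowering):

```python
def gather_dim1(inp, index):
    """Toy 2-D gather along dim=1: out[i][j] = inp[i][index[i][j]].

    Pure-Python illustration of the documented torch.gather semantics
    for dim=1; not the torch-mlir implementation.
    """
    return [[row[j] for j in idx_row] for row, idx_row in zip(inp, index)]

inp = [[1, 2], [3, 4]]
index = [[0, 0], [1, 0]]
assert gather_dim1(inp, index) == [[1, 1], [4, 3]]
```

`aten.cat` and `aten.bmm` follow the usual concatenation and batched-matmul semantics and need no such walkthrough.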
Conversation
Force-pushed from 6860ddf to 4903eb5
silvasean requested changes on Sep 20, 2021
external/torch-mlir/lib/Dialect/Torch/Transforms/ReduceOpVariants.cpp (two review threads, outdated, resolved)
silvasean reviewed on Sep 20, 2021
Force-pushed from 4903eb5 to 71a9120
Needs an upstream fix (https://reviews.llvm.org/D110176) to make the CI pass.
silvasean approved these changes on Sep 22, 2021
nit: recommend removing {Non,}ValueTensorType::getFromShaped entirely.
Force-pushed from 872fed6 to 636f4fa
Also contains the following changes:
- Remove the derefineOp canonicalizer because it's not safe.
- Support optional tensors and lists of tensors in reduceOpVariants. This only works for some specially detected, easy-to-handle cases: for a list, the case where it comes from a `ListConstruct`; for an optional, the case where it is constructed by a `DerefineOp`.
- Remove the `inferReturnTypes` for `FromBuiltinTensorOp` because it's not safe to deduce types from the input. For example, a built-in tensor of i8 could be converted to si8 or ui8. It's better to let the user specify the return type explicitly.
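The `inferReturnTypes` point can be illustrated outside MLIR: a signless 8-bit pattern decodes to different values under signed vs. unsigned interpretation, so the Torch element type cannot be recovered from a builtin i8 tensor alone. A small Python sketch (illustration only):

```python
def as_ui8(byte):
    # Unsigned interpretation of an 8-bit pattern: range 0..255.
    return byte & 0xFF

def as_si8(byte):
    # Signed (two's complement) interpretation: range -128..127.
    b = byte & 0xFF
    return b - 256 if b >= 128 else b

# The same bit pattern 0xFF is 255 as ui8 but -1 as si8, so a signless
# builtin i8 value does not determine whether the Torch type was si8 or ui8.
assert as_ui8(0xFF) == 255
assert as_si8(0xFF) == -1
assert as_ui8(0x7F) == as_si8(0x7F) == 127
```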
Force-pushed from 636f4fa to e9c5fee
qedawkins pushed a commit to nod-ai/torch-mlir that referenced this pull request on Oct 3, 2022:
- Use ubuntu focal for ppc64le.
- Rebuild prereq docker.
- Rebuild prereq docker.
- Limit ppc64le build to use two threads only.
- Rebuild prereq docker.
Also contains the following changes:
- Remove the derefineOp canonicalizer because it's not safe.
- Support for optional tensors and lists of tensors in reduceOpVariants. This only works for some specially detected, easy-to-handle cases. For a list, it covers the case where the list comes from a `ListConstruct`. For an optional, it covers the case where the optional is constructed from a `DerefineOp`.
- Remove the `inferReturnTypes` for `FromBuiltinTensorOp` because it's not safe to deduce types from the input. For example, a built-in tensor of i8 could be converted to si8 or ui8. It's better to let the user specify the return type explicitly. Also, delete the `getFromShaped` from `{Non,}ValueTensorType`.
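The "specially detected cases" described above amount to walking an operand back through the op that produced it. A toy Python sketch of that pattern on a made-up IR (the `Node` class and op names here are hypothetical stand-ins, not the actual torch-mlir classes):

```python
class Node:
    """Hypothetical IR node: an op name plus its operand nodes."""
    def __init__(self, op, operands=(), payload=None):
        self.op = op                  # e.g. "Derefine", "ListConstruct", "Tensor"
        self.operands = list(operands)
        self.payload = payload

def underlying_tensors(node):
    """Return the tensor operands if `node` matches a handled pattern.

    Mirrors the PR description: an optional built by a Derefine of a
    tensor, or a list built by a ListConstruct. Returns None for any
    other producer (the 'not handled' case).
    """
    if node.op == "Derefine" and len(node.operands) == 1:
        return underlying_tensors(node.operands[0])
    if node.op == "ListConstruct":
        return node.operands
    if node.op == "Tensor":
        return [node]
    return None

t0 = Node("Tensor", payload="t0")
lst = Node("ListConstruct", [t0, Node("Tensor", payload="t1")])
opt = Node("Derefine", [t0])
assert [n.payload for n in underlying_tensors(lst)] == ["t0", "t1"]
assert underlying_tensors(opt)[0].payload == "t0"
assert underlying_tensors(Node("Unknown")) is None
```

The design choice this illustrates is conservatism: anything that does not match one of the recognized producers is left alone rather than guessed at.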