Decompose aten.fmod into aten.mul,sub,div etc. #3689
Conversation
LGTM
@srinathava Since this commit adds the decomposition for the AtenFmodTensorOp, the existing lowering for the same should be removed from here:
torch-mlir/lib/Conversion/TorchToLinalg/Uncategorized.cpp
Lines 1285 to 1307 in 0474082
```cpp
if (auto fmod = dyn_cast<AtenFmodTensorOp>(op)) {
  Type newResultType =
      cast<RankedTensorType>(converter->convertType(fmod.getType()))
          .getElementType();
  Value self = convertScalarToDtype(b, loc, payloadArgs[0], newResultType);
  Value other = convertScalarToDtype(b, loc, payloadArgs[1], newResultType);
  Value result;
  if (isa<mlir::FloatType>(newResultType)) {
    Value n = b.create<arith::DivFOp>(loc, self, other);
    n = b.create<math::TruncOp>(loc, n);
    Value n_y = b.create<arith::MulFOp>(loc, n, other);
    result = b.create<arith::SubFOp>(loc, self, n_y);
  } else if (isa<mlir::IntegerType>(newResultType)) {
    Value n = b.create<arith::DivSIOp>(loc, self, other);
    Value n_y = b.create<arith::MulIOp>(loc, n, other);
    result = b.create<arith::SubIOp>(loc, self, n_y);
  } else {
    fmod.emitError("Unsupported type encountered for AtenFmodTensorOp.");
  }
  return result;
}
```
@vivekkhandelwal1, thanks for pointing that out. I'll send out a follow-up PR for the cleanup shortly.
Follow-up cleanup for [this PR](#3689), which introduced a decomposition for `aten.fmod.Tensor`. This means that the lowering for this operator in linalg is no longer needed. Thanks to @vivekkhandelwal1 for pointing this out.

Co-authored-by: Srinath Avadhanula <srinath.avadhanula@getcruise.com>
As titled, create a new decomposition for `aten.fmod.Tensor` to `aten.div`, `aten.trunc`, `aten.mul` and `aten.sub`. Note that we only use `aten.trunc` for floating-point operations; this further gets decomposed to `aten.where` etc. by other existing decompositions. This decomposition now makes TOSA pass for a simple model with `aten.fmod`, while it makes `stablehlo` fail. For now, we disallow this decomposition for `stablehlo`.