Remove some unneeded code from division morphing (#53464)
* Remove GTF_UNSIGNED check from the condition

It is not necessary: GTF_UNSIGNED does not have anything
to do with the operands being unsigned.
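
To make that concrete, here is an illustrative sketch (not code from this
change): whether a division or remainder operates on unsigned values is
encoded in the oper itself, while GTF_UNSIGNED means something different.

    // Illustrative sketch only, not part of this change.
    // Unsignedness of a div/mod is a property of the oper:
    bool isUnsignedDivMod = tree->OperIs(GT_UDIV, GT_UMOD);

    // GTF_UNSIGNED carries a different meaning; on a GT_CAST node, for
    // example, it indicates the cast's source operand is treated as unsigned.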

Some positive diffs in runtime tests for win-x86 and
one regression in System.Net.WebSockets.ManagedWebSocket.ApplyMask.
The regression is because, on x86, we always generate two "div"s for a
long UMOD with a constant divisor, even for powers of two.
Something to improve for sure.
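
For context, the regressed pattern can be sketched roughly like this (a
hypothetical repro with an assumed power-of-two divisor, not the actual
ManagedWebSocket code):

    #include <cstdint>

    // For unsigned x, "x % (1 << k)" equals "x & ((1 << k) - 1)", so a
    // single "and" would suffice here instead of two "div"s on 32-bit x86.
    uint64_t ModByPowerOfTwo(uint64_t x)
    {
        return x % 8; // could be lowered to (x & 7)
    }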

Naturally, no diffs for win-x64, linux-x64 or linux-arm.

* Don't fold casts from constants in UMOD morphing

It used to be that "ldc.i4.1 conv.i8" sequences survived
importation, and since UMOD morphing is sensitive to
constant divisors, morph tried to fold them. This is
no longer the case, so stop doing that.
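
For illustration, the divisor shapes involved look roughly like this
(hand-drawn, not actual JIT dumps):

    // Before: "ldc.i4.1 conv.i8" could reach morph as a cast of a constant:
    //     GT_UMOD
    //       op2: GT_CAST (long <- int)
    //              GT_CNS_INT 1
    //
    // Now the importer folds the cast, so morph sees the divisor directly:
    //     GT_UMOD
    //       op2: GT_CNS_NATIVELONG 1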

Of course, morph can be called from anywhere at any
point, but if some code is creating casts from constants,
the proper place to fix is that code.

No diffs for win-x86 or win-x64 or linux-arm.

* Some code modernization

Use modern helpers and move comments around.
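
For instance (taken from this commit's own before/after):

    op2->gtOper == GT_CNS_NATIVELONG   ->   op2->OperIs(GT_CNS_NATIVELONG)
    op1->TypeGet() == TYP_LONG         ->   op1->TypeIs(TYP_LONG)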
SingleAccretion authored Jun 29, 2021
1 parent 9fb6e81 commit 3ad32a5
Showing 1 changed file with 10 additions and 33 deletions.
43 changes: 10 additions & 33 deletions src/coreclr/jit/morph.cpp
@@ -11252,11 +11252,6 @@ GenTree* Compiler::fgMorphSmpOp(GenTree* tree, MorphAddrContext* mac)
             }
 #endif
 #endif // !TARGET_64BIT
-
-            if (op2->gtOper == GT_CAST && op2->AsOp()->gtOp1->IsCnsIntOrI())
-            {
-                op2 = gtFoldExprConst(op2);
-            }
             break;

         case GT_UDIV:
@@ -11324,43 +11319,30 @@ GenTree* Compiler::fgMorphSmpOp(GenTree* tree, MorphAddrContext* mac)
             // Note for TARGET_ARMARCH we don't have a remainder instruction, so we don't do this optimization
             //
 #else  // TARGET_XARCH
-            /* If this is an unsigned long mod with op2 which is a cast to long from a
-               constant int, then don't morph to a call to the helper. This can be done
-               faster inline using idiv.
-             */
+            // If this is an unsigned long mod with a constant divisor,
+            // then don't morph to a helper call - it can be done faster inline using idiv.

             noway_assert(op2);
-            if ((typ == TYP_LONG) && opts.OptEnabled(CLFLG_CONSTANTFOLD) &&
-                ((tree->gtFlags & GTF_UNSIGNED) == (op1->gtFlags & GTF_UNSIGNED)) &&
-                ((tree->gtFlags & GTF_UNSIGNED) == (op2->gtFlags & GTF_UNSIGNED)))
+            if ((typ == TYP_LONG) && opts.OptEnabled(CLFLG_CONSTANTFOLD))
             {
-                if (op2->gtOper == GT_CAST && op2->AsCast()->CastOp()->gtOper == GT_CNS_INT &&
-                    op2->AsCast()->CastOp()->AsIntCon()->gtIconVal >= 2 &&
-                    op2->AsCast()->CastOp()->AsIntCon()->gtIconVal <= 0x3fffffff &&
-                    (tree->gtFlags & GTF_UNSIGNED) == (op2->AsCast()->CastOp()->gtFlags & GTF_UNSIGNED))
-                {
-                    tree->AsOp()->gtOp2 = op2 = fgMorphCast(op2);
-                    noway_assert(op2->gtOper == GT_CNS_NATIVELONG);
-                }
-
-                if (op2->gtOper == GT_CNS_NATIVELONG && op2->AsIntConCommon()->LngValue() >= 2 &&
+                if (op2->OperIs(GT_CNS_NATIVELONG) && op2->AsIntConCommon()->LngValue() >= 2 &&
                     op2->AsIntConCommon()->LngValue() <= 0x3fffffff)
                 {
                     tree->AsOp()->gtOp1 = op1 = fgMorphTree(op1);
-                    noway_assert(op1->TypeGet() == TYP_LONG);
+                    noway_assert(op1->TypeIs(TYP_LONG));

-                    // Update flags for op1 morph
+                    // Update flags for op1 morph.
                     tree->gtFlags &= ~GTF_ALL_EFFECT;

-                    tree->gtFlags |= (op1->gtFlags & GTF_ALL_EFFECT); // Only update with op1 as op2 is a constant
+                    // Only update with op1 as op2 is a constant.
+                    tree->gtFlags |= (op1->gtFlags & GTF_ALL_EFFECT);

-                    // If op1 is a constant, then do constant folding of the division operator
-                    if (op1->gtOper == GT_CNS_NATIVELONG)
+                    // If op1 is a constant, then do constant folding of the division operator.
+                    if (op1->OperIs(GT_CNS_NATIVELONG))
                     {
                         tree = gtFoldExpr(tree);
                     }

                     // We may fail to fold
                     if (!tree->OperIsConst())
                     {
                         tree->AsOp()->CheckDivideByConstOptimized(this);
@@ -11414,11 +11396,6 @@ GenTree* Compiler::fgMorphSmpOp(GenTree* tree, MorphAddrContext* mac)
 #endif
 #endif // !TARGET_64BIT
-
-            if (op2->gtOper == GT_CAST && op2->AsOp()->gtOp1->IsCnsIntOrI())
-            {
-                op2 = gtFoldExprConst(op2);
-            }

 #ifdef TARGET_ARM64
             // For ARM64 we don't have a remainder instruction,
             // The architecture manual suggests the following transformation to
