Some additional zero-extension related optimizations in simplify-rtx.
This patch implements some additional zero-extension and sign-extension
related optimizations in simplify-rtx.cc.  The original motivation comes
from PR rtl-optimization/71775, where in comment #2 Andrew Pinski sees:

Failed to match this instruction:
(set (reg:DI 88 [ _1 ])
    (sign_extend:DI (subreg:SI (ctz:DI (reg/v:DI 86 [ x ])) 0)))

On many platforms the result of DImode CTZ is constrained to be a
small unsigned integer (between 0 and 64), hence the truncation to
32 bits (using a SUBREG) and the following sign extension back to
64 bits are effectively no-ops, so the above should ideally (often)
be simplified to "(set (reg:DI 88) (ctz:DI (reg/v:DI 86 [ x ])))".
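
For illustration, a small reproducer of roughly this shape (hypothetical,
not taken from the PR or the testsuite) should produce essentially the
RTL above when compiled at -O2 on x86_64:

  /* Hypothetical reproducer: __builtin_ctzl returns an 'int', so its
     DImode CTZ result is truncated to SImode, and returning it as a
     'long' then sign extends it back to DImode.  */
  long
  count_trailing (unsigned long x)
  {
    int t = __builtin_ctzl (x);
    return t;
  }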

To implement this, and some closely related transformations, we build
upon the existing val_signbit_known_clear_p predicate.  In the first
chunk, nonzero_bits knows that FFS and ABS can't leave the sign bit
set, so the simplification of ABS (ABS (x)) and ABS (FFS (x)) can
itself be simplified.  The second transformation is that we can
canonicalize SIGN_EXTEND to ZERO_EXTEND (as in the PR 71775 case above)
when the operand's sign bit is known to be clear.  The final two chunks
are for SIGN_EXTEND of a truncating SUBREG, and ZERO_EXTEND of a
truncating SUBREG respectively.  The nonzero_bits of a truncating
SUBREG pessimistically assumes that the upper bits may have an
arbitrary value (by taking the SUBREG), so we need to look deeper at
the SUBREG's operand to confirm that the high bits are known to be
zero.
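
As a hypothetical illustration of the SIGN_EXTEND to ZERO_EXTEND
canonicalization (not code from the patch or its testsuite), FFS never
produces a negative value, so the widening below may be emitted with
either kind of extension and is now canonicalized to ZERO_EXTEND:

  /* Hypothetical example: __builtin_ffs returns a value in 0..32, so
     the sign bit of its SImode result is known clear and the implicit
     widening to 'long' (a sign_extend in the RTL) can be rewritten as
     a zero_extend.  */
  long
  widen_ffs (int x)
  {
    return __builtin_ffs (x);
  }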

Unfortunately, for PR rtl-optimization/71775, ctz:DI on x86_64 with
default architecture options is undefined at zero, so we can't be sure
the upper bits of reg:DI 88 will be sign extended (all zeros or all ones).
nonzero_bits knows this, so the above transformations don't trigger,
but the transformations themselves are perfectly valid for other
operations such as FFS, POPCOUNT and PARITY, and on other targets/-march
settings where CTZ is defined at zero.
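
For instance, a POPCOUNT variant of the PR 71775 pattern (again a
hypothetical sketch, not a testcase from the patch) is well defined for
all inputs, so on a target that expands popcount inline (for example
x86_64 with -mpopcnt) the new SUBREG rules can remove the
truncate-and-extend pair:

  /* Hypothetical example: the DImode popcount is at most 64, so the
     truncation of its result to 'int' and the extension back to 'long'
     are both redundant, leaving just the DImode popcount.  */
  long
  popcount_widened (unsigned long x)
  {
    int p = __builtin_popcountl (x);
    return p;
  }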

2022-08-03  Roger Sayle  <roger@nextmovesoftware.com>
	    Segher Boessenkool  <segher@kernel.crashing.org>
	    Richard Sandiford  <richard.sandiford@arm.com>

gcc/ChangeLog
	* simplify-rtx.cc (simplify_unary_operation_1) <ABS>: Add
	optimizations for CLRSB, PARITY, POPCOUNT, SS_ABS and LSHIFTRT
	that are all positive to complement the existing FFS and
	idempotent ABS simplifications.
	<SIGN_EXTEND>: Canonicalize SIGN_EXTEND to ZERO_EXTEND when
	val_signbit_known_clear_p is true of the operand.
	Simplify sign extensions of SUBREG truncations of operands
	that are already suitably (zero) extended.
	<ZERO_EXTEND>: Simplify zero extensions of SUBREG truncations
	of operands that are already suitably zero extended.
rogersayle committed Aug 3, 2022
1 parent 969a989 commit c23a9c8
1 changed file: gcc/simplify-rtx.cc (57 additions, 3 deletions)
@@ -1366,11 +1366,35 @@ simplify_context::simplify_unary_operation_1 (rtx_code code, machine_mode mode,
         break;

       /* If operand is something known to be positive, ignore the ABS.  */
-      if (GET_CODE (op) == FFS || GET_CODE (op) == ABS
-          || val_signbit_known_clear_p (GET_MODE (op),
-                                        nonzero_bits (op, GET_MODE (op))))
+      if (val_signbit_known_clear_p (GET_MODE (op),
+                                     nonzero_bits (op, GET_MODE (op))))
         return op;

+      /* Using nonzero_bits doesn't (currently) work for modes wider than
+         HOST_WIDE_INT, so the following transformations help simplify
+         ABS for TImode and wider.  */
+      switch (GET_CODE (op))
+        {
+        case ABS:
+        case CLRSB:
+        case FFS:
+        case PARITY:
+        case POPCOUNT:
+        case SS_ABS:
+          return op;
+
+        case LSHIFTRT:
+          if (CONST_INT_P (XEXP (op, 1))
+              && INTVAL (XEXP (op, 1)) > 0
+              && is_a <scalar_int_mode> (mode, &int_mode)
+              && INTVAL (XEXP (op, 1)) < GET_MODE_PRECISION (int_mode))
+            return op;
+          break;
+
+        default:
+          break;
+        }
+
       /* If operand is known to be only -1 or 0, convert ABS to NEG.  */
       if (is_a <scalar_int_mode> (mode, &int_mode)
           && (num_sign_bit_copies (op, int_mode)
@@ -1615,6 +1639,24 @@ simplify_context::simplify_unary_operation_1 (rtx_code code, machine_mode mode,
             }
         }

+      /* We can canonicalize SIGN_EXTEND (op) as ZERO_EXTEND (op) when
+         we know the sign bit of OP must be clear.  */
+      if (val_signbit_known_clear_p (GET_MODE (op),
+                                     nonzero_bits (op, GET_MODE (op))))
+        return simplify_gen_unary (ZERO_EXTEND, mode, op, GET_MODE (op));
+
+      /* (sign_extend:DI (subreg:SI (ctz:DI ...))) is (ctz:DI ...).  */
+      if (GET_CODE (op) == SUBREG
+          && subreg_lowpart_p (op)
+          && GET_MODE (SUBREG_REG (op)) == mode
+          && is_a <scalar_int_mode> (mode, &int_mode)
+          && is_a <scalar_int_mode> (GET_MODE (op), &op_mode)
+          && GET_MODE_PRECISION (int_mode) <= HOST_BITS_PER_WIDE_INT
+          && GET_MODE_PRECISION (op_mode) < GET_MODE_PRECISION (int_mode)
+          && (nonzero_bits (SUBREG_REG (op), mode)
+              & ~(GET_MODE_MASK (op_mode) >> 1)) == 0)
+        return SUBREG_REG (op);
+
 #if defined(POINTERS_EXTEND_UNSIGNED)
       /* As we do not know which address space the pointer is referring to,
          we can do this only if the target does not support different pointer
@@ -1765,6 +1807,18 @@ simplify_context::simplify_unary_operation_1 (rtx_code code, machine_mode mode,
                                      op0_mode);
         }

+      /* (zero_extend:DI (subreg:SI (ctz:DI ...))) is (ctz:DI ...).  */
+      if (GET_CODE (op) == SUBREG
+          && subreg_lowpart_p (op)
+          && GET_MODE (SUBREG_REG (op)) == mode
+          && is_a <scalar_int_mode> (mode, &int_mode)
+          && is_a <scalar_int_mode> (GET_MODE (op), &op_mode)
+          && GET_MODE_PRECISION (int_mode) <= HOST_BITS_PER_WIDE_INT
+          && GET_MODE_PRECISION (op_mode) < GET_MODE_PRECISION (int_mode)
+          && (nonzero_bits (SUBREG_REG (op), mode)
+              & ~GET_MODE_MASK (op_mode)) == 0)
+        return SUBREG_REG (op);
+
 #if defined(POINTERS_EXTEND_UNSIGNED)
       /* As we do not know which address space the pointer is referring to,
          we can do this only if the target does not support different pointer
