aarch64: Correct the maximum shift amount for shifted operands
The aarch64 ISA specification allows a left shift amount to be applied
after extension in the range of 0 to 4 (encoded in the imm3 field).

This is true for at least the following instructions:

 * ADD (extended register)
 * ADDS (extended register)
 * SUB (extended register)

The effect of this patch can be seen when compiling the following code:

uint64_t myadd(uint64_t a, uint64_t b)
{
    return a+(((uint8_t)b)<<4);
}

Without the patch the following sequence will be generated:

0000000000000000 <myadd>:
   0:	d37c1c21 	ubfiz	x1, x1, #4, #8
   4:	8b000020 	add	x0, x1, x0
   8:	d65f03c0 	ret

With the patch the ubfiz will be merged into the add instruction:

0000000000000000 <myadd>:
   0:	8b211000 	add	x0, x0, w1, uxtb #4
   4:	d65f03c0 	ret

gcc/ChangeLog:

	* config/aarch64/aarch64.cc (aarch64_uxt_size): Fix an
	off-by-one in checking the permissible shift amount.
ptomsich committed Jan 28, 2023
1 parent 38bce6f commit 2f2101c
1 changed file: gcc/config/aarch64/aarch64.cc (1 addition, 1 deletion)

@@ -13022,7 +13022,7 @@ aarch64_output_casesi (rtx *operands)
 int
 aarch64_uxt_size (int shift, HOST_WIDE_INT mask)
 {
-  if (shift >= 0 && shift <= 3)
+  if (shift >= 0 && shift <= 4)
     {
       int size;
       for (size = 8; size <= 32; size *= 2)
