Commit graph

206788 commits

Juzhe-Zhong
d1eacedc6d RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31]
Notice we have the following situation:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when
len == NUNITS.  However, we don't need to transform all of them: when len is
in the range [0, 31] it fits the immediate of vsetivli, so no scalar register
is consumed.
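
A hedged sketch of the new policy in GCC-style C (the name and signature are
illustrative; the committed helper is is_vlmax_len_p):

  /* Only treat LEN as a VLMAX AVL when it equals the number of units and
     does not fit the 5-bit immediate of vsetivli, i.e. is outside [0, 31].  */
  static bool
  use_vlmax_avl_p (HOST_WIDE_INT len, HOST_WIDE_INT nunits)
  {
    return len == nunits && !IN_RANGE (len, 0, 31);
  }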

After this patch:

	vsetivli	zero,4,e32,m1,tu,ma
	addi	a4,a5,400
	vlseg4e32.v	v12,(a3)
	vfadd.vf	v3,v13,fa0
	vfadd.vf	v1,v12,fa1
	vlseg4e32.v	v4,(a4)
	vfadd.vf	v2,v14,fa1
	vfmul.vv	v17,v3,v5
	vfmul.vv	v16,v1,v5

Tested on both RV32 and RV64, no regression.

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
	(expand_load_store): Disallow transformation into VLMAX when len is in the range [0, 31].
	(expand_cond_len_op): Ditto.
	(expand_gather_scatter): Ditto.
	(expand_lanes_load_store): Ditto.
	(expand_fold_extract_last): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
	* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
2023-12-29 08:38:03 +08:00
GCC Administrator
7de05ad450 Daily bump. 2023-12-29 00:17:56 +00:00
Rimvydas Jasinskas
2cb93e6686 Fortran: Add Developer Options mini-section to documentation
Separate out -fdump-* options to the new section.  Sort by option name.

While there, document -save-temps intermediates.

gcc/fortran/ChangeLog:

	PR fortran/81615
	* invoke.texi: Add Developer Options section.  Move '-fdump-*'
	to it.  Add small examples about changed '-save-temps' behavior.

Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
2023-12-28 21:07:10 +01:00
David Edelsohn
bf5c00d7ee testsuite: XFAIL linkage testcases on AIX.
The template linkage2.C and linkage3.C testcases expect a
decoration that does not match AIX assembler syntax.  Expect failure.

gcc/testsuite/ChangeLog:
	* g++.dg/template/linkage2.C: XFAIL on AIX.
	* g++.dg/template/linkage3.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-28 14:55:04 -05:00
Uros Bizjak
d74cceb6d4 i386: Cleanup ix86_expand_{unary|binary}_operator issues
Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.

No functional changes.

gcc/ChangeLog:

	* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
	* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
	* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
	and ix86_expand_{unary|binary}_operator prototypes.
	* config/i386/i386.md: Cosmetic changes with the usage of
	TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
	and ix86_{unary|binary}_operator_ok function calls.
2023-12-28 12:31:30 +01:00
Juzhe-Zhong
76f5542c48 RISC-V: Make dynamic LMUL cost model more accurate for conversion codes
Notice the current dynamic LMUL cost model is not accurate for conversion codes.
Refine it; one current case changes from choosing LMUL = 4 to LMUL = 8.

Tested no regression, committed.

Before this patch (LMUL = 4):                  After this patch (LMUL = 8):
        lw      a7,56(sp)                             lw	a7,56(sp)
        ld      t5,0(sp)                              ld	t5,0(sp)
        ld      t1,8(sp)                              ld	t1,8(sp)
        ld      t6,16(sp)                             ld	t6,16(sp)
        ld      t0,24(sp)                             ld	t0,24(sp)
        ld      t3,32(sp)                             ld	t3,32(sp)
        ld      t4,40(sp)                             ld	t4,40(sp)
        ble     a7,zero,.L5                           ble	a7,zero,.L5
.L3:                                               .L3:
        vsetvli a4,a7,e32,m2,ta,ma                    vsetvli	a4,a7,e32,m4,ta
        vle8.v  v1,0(a2)                              vle8.v	v3,0(a2)
        vle8.v  v4,0(a1)                              vle8.v	v16,0(t0)
        vsext.vf4       v8,v1                         vle8.v	v7,0(a1)
        vsext.vf4       v2,v4                         vle8.v	v12,0(t6)
        vsetvli zero,zero,e8,mf2,ta,ma                vle8.v	v2,0(a5)
        vadd.vv v4,v4,v1                              vle8.v	v1,0(t5)
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4	v20,v3
        vle8.v  v5,0(t0)                              vsext.vf4	v8,v7
        vle8.v  v6,0(t6)                              vadd.vv	v8,v8,v20
        vadd.vv v2,v2,v8                              vadd.vv	v8,v8,v8
        vadd.vv v2,v2,v2                              vadd.vv	v8,v8,v20
        vadd.vv v2,v2,v8                              vsetvli	zero,zero,e8,m1
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv	v15,v12,v16
        vadd.vv v6,v6,v5                              vsetvli	zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4	v12,v15
        vle8.v  v8,0(t5)                              vadd.vv	v8,v8,v12
        vle8.v  v9,0(a5)                              vsetvli	zero,zero,e8,m1
        vsext.vf4       v10,v4                        vadd.vv	v7,v7,v3
        vsext.vf4       v12,v6                        vsetvli	zero,zero,e32,m4
        vadd.vv v2,v2,v12                             vsext.vf4	v4,v7
        vadd.vv v2,v2,v10                             vadd.vv	v8,v8,v4
        vsetvli zero,zero,e16,m1,ta,ma                vsetvli	zero,zero,e16,m2
        vncvt.x.x.w     v4,v2                         vncvt.x.x.w	v4,v8
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli	zero,zero,e8,m1
        vadd.vv v6,v2,v2                              vncvt.x.x.w	v4,v4
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv	v15,v3,v4
        vncvt.x.x.w     v4,v4                         vadd.vv	v2,v2,v4
        vadd.vv v5,v5,v4                              vse8.v	v15,0(t4)
        vadd.vv v9,v9,v4                              vadd.vv	v3,v16,v4
        vadd.vv v1,v1,v4                              vse8.v	v2,0(a3)
        vadd.vv v4,v8,v4                              vadd.vv	v1,v1,v4
        vse8.v  v1,0(t4)                              vse8.v	v1,0(a6)
        vse8.v  v9,0(a3)                              vse8.v	v3,0(t1)
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli	zero,zero,e32,m4
        vse8.v  v4,0(a6)                              vsext.vf4	v4,v3
        vsext.vf4       v8,v5                         vadd.vv	v4,v4,v8
        vse8.v  v5,0(t1)                              vsetvli	zero,zero,e64,m8
        vadd.vv v2,v8,v2                              vsext.vf2	v16,v4
        vsetvli zero,zero,e64,m4,ta,ma                vse64.v	v16,0(t3)
        vsext.vf2       v8,v2                         vsetvli	zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vadd.vv	v8,v8,v8
        slli    t2,a4,3                               vsext.vf4	v4,v15
        vse64.v v8,0(t3)                              slli	t2,a4,3
        vsext.vf4       v2,v1                         vadd.vv	v4,v8,v4
        sub     a7,a7,a4                              sub	a7,a7,a4
        vadd.vv v2,v6,v2                              vsetvli	zero,zero,e64,m8
        vsetvli zero,zero,e64,m4,ta,ma                vsext.vf2	v8,v4
        vsext.vf2       v4,v2                         vse64.v	v8,0(a0)
        vse64.v v4,0(a0)                              add	a1,a1,a4
        add     a2,a2,a4                              add	a2,a2,a4
        add     a1,a1,a4                              add	a5,a5,a4
        add     t6,t6,a4                              add	t5,t5,a4
        add     t0,t0,a4                              add	t6,t6,a4
        add     a5,a5,a4                              add	t0,t0,a4
        add     t5,t5,a4                              add	t4,t4,a4
        add     t4,t4,a4                              add	a3,a3,a4
        add     a3,a3,a4                              add	a6,a6,a4
        add     a6,a6,a4                              add	t1,t1,a4
        add     t1,t1,a4                              add	t3,t3,t2
        add     t3,t3,t2                              add	a0,a0,t2
        add     a0,a0,t2                              bne	a7,zero,.L3
        bne     a7,zero,.L3                         .L5:
.L5:                                                  ret
        ret

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): Change interface.
	(get_live_range): New function.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Adapt test.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
2023-12-28 10:32:51 +08:00
GCC Administrator
fb57e402d0 Daily bump. 2023-12-28 00:19:23 +00:00
Xi Ruoyao
f19ceb2d49
LoongArch: Fix infinite secondary reloading of FCCmode [PR113148]
The GCC internal doc says:

     X might be a pseudo-register or a 'subreg' of a pseudo-register,
     which could either be in a hard register or in memory.  Use
     'true_regnum' to find out; it will return -1 if the pseudo is in
     memory and the hard register number if it is in a register.

So "MEM_P (x)" is not enough for checking if we are reloading from/to
the memory.  This bug has caused reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.

Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.

gcc/ChangeLog:

	PR target/113148
	* config/loongarch/loongarch.cc (loongarch_secondary_reload):
	Check if regno == -1 besides MEM_P (x) for reloading FCCmode
	from/to FPR to/from memory.

gcc/testsuite/ChangeLog:

	PR target/113148
	* gcc.target/loongarch/pr113148.c: New test.
2023-12-27 19:02:04 +08:00
Xi Ruoyao
80b8f1e535
LoongArch: Expand left rotate to right rotate with negated amount
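
The expansion relies on the identity that a left rotate by N equals a right
rotate by the negated amount, reduced modulo the bit width; a hedged C
illustration (not part of the patch):

  unsigned int
  rotl32 (unsigned int x, unsigned int n)
  {
    return (x << (n & 31)) | (x >> ((32 - n) & 31));
  }

  /* The same rotation expressed as rotr with a negated amount.  */
  unsigned int
  rotl32_via_rotr (unsigned int x, unsigned int n)
  {
    unsigned int m = (0U - n) & 31;  /* -n modulo 32 */
    return (x >> m) | (x << ((32 - m) & 31));
  }
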
gcc/ChangeLog:

	* config/loongarch/loongarch.md (rotl<mode>3):
	New define_expand.
	* config/loongarch/simd.md (vrotl<mode>3): Likewise.
	(rotl<mode>3): Likewise.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/rotl-with-rotr.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.
2023-12-27 19:02:03 +08:00
Juzhe-Zhong
c4ac073d4f RISC-V: Make known NITERS loop be aware of dynamic lmul cost model liveness information
Consider the following case:

int f[12][100];

void bad1(int v1, int v2)
{
  for (int r = 0; r < 100; r += 4)
    {
      int i = r + 1;
      f[0][r] = f[1][r] * (f[2][r]) - f[1][i] * (f[2][i]);
      f[0][i] = f[1][r] * (f[2][i]) + f[1][i] * (f[2][r]);
      f[0][r+2] = f[1][r+2] * (f[2][r+2]) - f[1][i+2] * (f[2][i+2]);
      f[0][i+2] = f[1][r+2] * (f[2][i+2]) + f[1][i+2] * (f[2][r+2]);
    }
}

Before this patch, LMUL = 8 VLS is picked blindly:

        lui     a4,%hi(f)
        addi    a4,a4,%lo(f)
        addi    sp,sp,-592
        addi    a3,a4,800
        lui     a5,%hi(.LANCHOR0)
        vl8re32.v       v24,0(a3)
        addi    a5,a5,%lo(.LANCHOR0)
        addi    a1,a4,400
        addi    a3,sp,140
        vl8re32.v       v16,0(a1)
        vl4re16.v       v4,0(a5)
        addi    a7,a5,192
        vs4r.v  v4,0(a3)
        addi    t0,a5,64
        addi    a3,sp,336
        li      t2,32
        addi    a2,a5,128
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v4
        vmul.vv v8,v8,v24
        vl8re32.v       v0,0(a7)
        vs8r.v  v8,0(a3)
        vmsltu.vx       v8,v0,t2
        addi    a3,sp,12
        addi    t2,sp,204
        vsm.v   v8,0(t2)
        vl4re16.v       v4,0(t0)
        vl4re16.v       v0,0(a2)
        vs4r.v  v4,0(a3)
        addi    t0,sp,336
        vrgatherei16.vv v8,v24,v4
        addi    a3,sp,208
        vrgatherei16.vv v24,v16,v0
        vs4r.v  v0,0(a3)
        vmul.vv v8,v8,v24
        vlm.v   v0,0(t2)
        vl8re32.v       v24,0(t0)
        addi    a3,sp,208
        vsub.vv v16,v24,v8
        addi    t6,a4,528
        vadd.vv v8,v24,v8
        addi    t5,a4,928
        vmerge.vvm      v8,v8,v16,v0
        addi    t3,a4,128
        vs8r.v  v8,0(a4)
        addi    t4,a4,1056
        addi    t1,a4,656
        addi    a0,a4,256
        addi    a6,a4,1184
        addi    a1,a4,784
        addi    a7,a4,384
        addi    a4,sp,140
        vl4re16.v       v0,0(a3)
        vl8re32.v       v24,0(t6)
        vl4re16.v       v4,0(a4)
        vrgatherei16.vv v16,v24,v0
        addi    a3,sp,12
        vs8r.v  v16,0(t0)
        vl8re32.v       v8,0(t5)
        vrgatherei16.vv v16,v24,v4
        vl4re16.v       v4,0(a3)
        vrgatherei16.vv v24,v8,v4
        vmul.vv v16,v16,v8
        vl8re32.v       v8,0(t0)
        vmul.vv v8,v8,v24
        vsub.vv v24,v16,v8
        vlm.v   v0,0(t2)
        addi    a3,sp,208
        vadd.vv v8,v8,v16
        vl8re32.v       v16,0(t4)
        vmerge.vvm      v8,v8,v24,v0
        vrgatherei16.vv v24,v16,v4
        vs8r.v  v24,0(t0)
        vl4re16.v       v28,0(a3)
        addi    a3,sp,464
        vs8r.v  v8,0(t3)
        vl8re32.v       v8,0(t1)
        vrgatherei16.vv v0,v8,v28
        vs8r.v  v0,0(a3)
        addi    a3,sp,140
        vl4re16.v       v24,0(a3)
        addi    a3,sp,464
        vrgatherei16.vv v0,v8,v24
        vl8re32.v       v24,0(t0)
        vmv8r.v v8,v0
        vl8re32.v       v0,0(a3)
        vmul.vv v8,v8,v16
        vmul.vv v24,v24,v0
        vsub.vv v16,v8,v24
        vadd.vv v8,v8,v24
        vsetivli        zero,4,e32,m8,ta,ma
        vle32.v v24,0(a6)
        vsetvli a4,zero,e32,m8,ta,ma
        addi    a4,sp,12
        vlm.v   v0,0(t2)
        vmerge.vvm      v8,v8,v16,v0
        vl4re16.v       v16,0(a4)
        vrgatherei16.vv v0,v24,v16
        vsetivli        zero,4,e32,m8,ta,ma
        vs8r.v  v0,0(a4)
        addi    a4,sp,208
        vl4re16.v       v0,0(a4)
        vs8r.v  v8,0(a0)
        vle32.v v16,0(a1)
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v0
        vs8r.v  v8,0(a4)
        addi    a4,sp,140
        vl4re16.v       v4,0(a4)
        addi    a5,sp,12
        vrgatherei16.vv v8,v16,v4
        vl8re32.v       v0,0(a5)
        vsetivli        zero,4,e32,m8,ta,ma
        addi    a5,sp,208
        vmv8r.v v16,v8
        vl8re32.v       v8,0(a5)
        vmul.vv v24,v24,v16
        vmul.vv v8,v0,v8
        vsub.vv v16,v24,v8
        vadd.vv v8,v8,v24
        vsetvli a5,zero,e8,m2,ta,ma
        vlm.v   v0,0(t2)
        vsetivli        zero,4,e32,m8,ta,ma
        vmerge.vvm      v8,v8,v16,v0
        vse32.v v8,0(a7)
        addi    sp,sp,592
        jr      ra

This patch makes loops with known NITERS aware of the liveness estimation.  After this patch, LMUL = 4 is chosen:

	lui	a5,%hi(f)
	addi	a5,a5,%lo(f)
	addi	a3,a5,400
	addi	a4,a5,800
	vsetivli	zero,8,e32,m2,ta,ma
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v8,v16
	addi	a3,a5,528
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a4,a5,928
	vmul.vv	v2,v12,v22
	vmul.vv	v6,v8,v18
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmul.vv	v4,v12,v20
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	vsseg4e32.v	v24,(a5)
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v8,v16
	addi	a6,a5,128
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a0,a5,656
	vmul.vv	v2,v12,v22
	addi	a1,a5,1056
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmul.vv	v6,v8,v18
	vmul.vv	v4,v12,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	vsseg4e32.v	v24,(a6)
	vlseg4e32.v	v16,(a0)
	vlseg4e32.v	v8,(a1)
	vmul.vv	v2,v8,v16
	addi	a2,a5,256
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a3,a5,784
	vmul.vv	v2,v12,v22
	addi	a4,a5,1184
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmul.vv	v6,v8,v18
	vmul.vv	v4,v12,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	addi	a5,a5,384
	vsseg4e32.v	v24,(a2)
	vsetivli	zero,1,e32,m2,ta,ma
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v16,v8
	vmul.vv	v6,v18,v8
	vmv.v.v	v24,v18
	vnmsub.vv	v24,v10,v2
	vmul.vv	v4,v20,v12
	vmul.vv	v2,v22,v12
	vmv.v.v	v26,v6
	vmacc.vv	v26,v16,v10
	vmv.v.v	v28,v22
	vnmsub.vv	v28,v14,v4
	vmv.v.v	v30,v2
	vmacc.vv	v30,v20,v14
	vsseg4e32.v	v24,(a5)
	ret

Tested on both RV32 and RV64, no regressions.

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): New function.
	(get_first_lane_point): Ditto.
	(get_last_lane_point): Ditto.
	(max_number_of_live_regs): Refine live point dump.
	(compute_estimated_lmul): Make known NITERS loop be aware of liveness.
	(costs::better_main_loop_than_p): Ditto.
	* config/riscv/riscv-vector-costs.h (struct stmt_point): Add new member.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Adapt test.
	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-3.c: New test.
2023-12-27 17:19:35 +08:00
Chenghui Pan
feaff27b29 LoongArch: Fix ICE when passing two same vector arguments consecutively
The following code causes an ICE on the LoongArch target:

  #include <lsxintrin.h>

  extern void bar (__m128i, __m128i);

  __m128i a;

  void
  foo ()
  {
    bar (a, a);
  }

It is caused by a missing constraint definition in mov<mode>_lsx.  This
patch fixes the template and removes the unnecessary processing from the
loongarch_split_move () function.

This patch also cleans up the redundant definitions around
loongarch_split_move () and loongarch_split_move_p ().

gcc/ChangeLog:

	* config/loongarch/lasx.md: Use loongarch_split_move and
	loongarch_split_move_p directly.
	* config/loongarch/loongarch-protos.h
	(loongarch_split_move): Remove unnecessary argument.
	(loongarch_split_move_insn_p): Delete.
	(loongarch_split_move_insn): Delete.
	* config/loongarch/loongarch.cc
	(loongarch_split_move_insn_p): Delete.
	(loongarch_load_store_insns): Use loongarch_split_move_p
	directly.
	(loongarch_split_move): Remove the unnecessary processing.
	(loongarch_split_move_insn): Delete.
	* config/loongarch/lsx.md: Use loongarch_split_move and
	loongarch_split_move_p directly.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.
2023-12-27 14:54:40 +08:00
Chenghui Pan
183a51935c LoongArch: Fix insn output of vec_concat templates for LASX.
When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the following
instruction block is generated by vec_concatv32qi (which is generated by
vec_initv32qiv16qi) at the entrance of the foo() function:

  vldx    $vr3,$r5,$r6
  vld     $vr2,$r5,0
  xvpermi.q       $xr2,$xr3,0x20

which reverses the high and low 128-bit halves of the vec_initv32qiv16qi
operation.

Judging from other targets' similar implementations and the LSX
implementation of the following RTL representation, the current definition
of "vec_concat<mode>" in lasx.md is wrong:

  (set (op0) (vec_concat (op1) (op2)))

For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20.  This patch fixes the issue and cleans up the vec_concat
template implementation.

gcc/ChangeLog:

	* config/loongarch/lasx.md (vec_concatv4di): Delete.
	(vec_concatv8si): Delete.
	(vec_concatv16hi): Delete.
	(vec_concatv32qi): Delete.
	(vec_concatv4df): Delete.
	(vec_concatv8sf): Delete.
	(vec_concat<mode>): New template with insn output fixed.
2023-12-27 14:54:03 +08:00
Li Wei
245c9ef2b8 LoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.
We found that building with the latest gcc causes a miscompare when running
the SPEC 2006 400.perlbench test with -flto turned on.  Testing showed that
only the LoongArch architecture reports the error.  The first bad commit was
located via git bisect as r14-3773-g5b857e87201335.  Debugging showed that
the problem was the split condition of the *bstrins_<mode>_for_ior_mask
template being empty, when it should actually be consistent with the insn
condition.

gcc/ChangeLog:

	* config/loongarch/loongarch.md: Adjust.
2023-12-27 14:53:50 +08:00
Haochen Gui
d92d26ff36 rs6000: Clean up the pre-checkings of expand_block_compare
Remove the P7 CPU test, as only P7 and above can enter this function and
P7 LE is excluded by the check of targetm.slow_unaligned_access on
word_mode.  Also, performance testing shows that the expanded block compare
is better than the library call on P7 BE when the length is from 16 to 64
bytes.
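
A minimal sketch of the tightened entry condition (assuming, as elsewhere in
the rs6000 port, that TARGET_POPCNTD is the "P7 or newer" gate):

  /* Callers guarantee at least P7, so the old runtime CPU test becomes an
     assertion.  */
  gcc_assert (TARGET_POPCNTD);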

gcc/
	* config/rs6000/rs6000-string.cc (expand_block_compare): Assert
	that only P7 and above can enter this function.  Remove the P7 CPU
	test and let P7 BE do the expansion.

gcc/testsuite/
	* gcc.target/powerpc/block-cmp-4.c: New.
2023-12-27 10:35:14 +08:00
Haochen Gui
daea7777ce rs6000: Call library for block memory compare when optimizing for size
gcc/
	* config/rs6000/rs6000.md (cmpmemsi): Fail when optimizing for size.

gcc/testsuite/
	* gcc.target/powerpc/block-cmp-3.c: New.
2023-12-27 10:35:14 +08:00
Haochen Gui
78bd9e2560 rs6000: Correct definition of macro of fixed point efficient unaligned
Macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
to guard platforms on which fixed-point unaligned load/store is efficient.
It was originally defined as TARGET_EFFICIENT_UNALIGNED_VSX, which is
enabled from P8 on and can be disabled by the -mno-vsx option, so the
definition is improper.  This patch corrects it and calls
slow_unaligned_access to judge whether fixed-point unaligned load/store is
efficient or not.
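
A hedged sketch of the corrected guard (not the exact hunk; `align` stands in
for the alignment of the access being considered):

  /* Overlapping fixed-point unaligned accesses are used only when the target
     hook does not report unaligned word_mode accesses as slow.  */
  if (!targetm.slow_unaligned_access (word_mode, align))
    /* ... emit overlapping unaligned loads/stores ...  */;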

gcc/
	* config/rs6000/rs6000.h (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED):
	Remove.
	* config/rs6000/rs6000-string.cc (select_block_compare_mode):
	Replace TARGET_EFFICIENT_OVERLAPPING_UNALIGNED with
	targetm.slow_unaligned_access.
	(expand_block_compare_gpr): Likewise.
	(expand_block_compare): Likewise.
	(expand_strncmp_gpr_sequence): Likewise.

gcc/testsuite/
	* gcc.target/powerpc/block-cmp-1.c: New.
	* gcc.target/powerpc/block-cmp-2.c: New.
2023-12-27 10:35:13 +08:00
David Edelsohn
f2d47aa70e testsuite: 32 bit AIX 2 byte wchar
32 bit AIX supports 2 byte wchar.  The wchar-multi1.C testcase assumes
4 byte wchar.  Update the testcase to require 4 byte wchar.

gcc/testsuite/ChangeLog:
	* g++.dg/cpp23/wchar-multi1.C: Require 4 byte wchar_t.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-27 00:44:34 +00:00
David Edelsohn
5b7f5e6261 testsuite: AIX csect section name.
AIX sections use the csect directive to name a section.  Check for
csect name in attr-section testcases.

gcc/testsuite/ChangeLog:
	* g++.dg/ext/attr-section1.C: Test for csect section directive.
	* g++.dg/ext/attr-section1a.C: Same.
	* g++.dg/ext/attr-section2.C: Same.
	* g++.dg/ext/attr-section2a.C: Same.
	* g++.dg/ext/attr-section2b.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-26 19:44:11 -05:00
GCC Administrator
4fe33bf9f3 Daily bump. 2023-12-27 00:18:20 +00:00
David Edelsohn
86f535cb46 testsuite: Skip analyzer out-of-bounds-diagram on AIX.
The out-of-bounds diagram tests fail on AIX.

gcc/testsuite/ChangeLog:
	* gcc.dg/analyzer/out-of-bounds-diagram-17.c: Skip on AIX.
	* gcc.dg/analyzer/out-of-bounds-diagram-18.c: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-26 16:44:09 +00:00
David Edelsohn
a004a59f1c testsuite: Skip split DWARF on AIX.
AIX does not support split DWARF.

gcc/testsuite/ChangeLog:
	* gcc.dg/pr111409.c: Skip on AIX.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-26 11:42:51 -05:00
David Edelsohn
9773ca5198 testsuite: Disable strub on AIX.
AIX does not support stack scrubbing. Set strub as unsupported on AIX
and ensure that testcases check for strub support.

gcc/testsuite/ChangeLog:
	* c-c++-common/strub-unsupported-2.c: Require strub.
	* c-c++-common/strub-unsupported-3.c: Same.
	* c-c++-common/strub-unsupported.c: Same.
	* lib/target-supports.exp (check_effective_target_strub): Return 0
	for AIX.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-26 11:38:30 -05:00
Juzhe-Zhong
87dfd70723 RISC-V: Fix typo
gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Fix typo.
2023-12-26 19:54:07 +08:00
Juzhe-Zhong
f83cfb8148 RISC-V: Some minor tweaks on dynamic LMUL cost model
Tweak some code in the dynamic LMUL cost model to make the computation more
predictable and accurate.

Tested on both RV32 and RV64, no regression.

Committed.

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Tweak LMUL estimation.
	(has_unexpected_spills_p): Ditto.
	(costs::record_potential_unexpected_spills): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Add more checks.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-12.c: New test.
	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-2.c: New test.
2023-12-26 17:15:41 +08:00
Di Zhao
6cec7b06b3 Fix compile options of pr110279-1.c and pr110279-2.c
The two testcases are for targets that support FMA, and pr110279-2.c
assumes the reassoc_width of FMUL is 4.

This patch adds the missing options to fix regression test failures on
nvptx/GCN (whose default reassoc_width of FMUL is 1) and x86_64 (which
needs "-mfma").

gcc/testsuite/ChangeLog:

	* gcc.dg/pr110279-1.c: Add "-mcpu=generic" for aarch64; add
	"-mfma" for x86_64.
	* gcc.dg/pr110279-2.c: Replace "-march=armv8.2-a" with
	"-mcpu=generic"; limit the check to be on aarch64.
2023-12-26 16:38:21 +08:00
Jeevitha
7c6615692e testsuite: Add dg-require-effective-target powerpc_pcrel for testcase [PR110320]
Add a dg-require-effective-target directive so that the test case runs only
on powerpc_pcrel targets.

2023-12-26  Jeevitha Palanisamy  <jeevitha@linux.ibm.com>

gcc/testsuite/
	PR target/110320
	* gcc.target/powerpc/pr110320-1.c: Add dg-require-effective-target powerpc_pcrel.
2023-12-25 22:17:54 -06:00
GCC Administrator
07ee6d7b2c Daily bump. 2023-12-26 00:19:10 +00:00
David Edelsohn
8d412b97ce testsuite: Skip analyzer tests on AIX.
Some new analyzer tests fail on AIX.

gcc/testsuite/ChangeLog:
	* c-c++-common/analyzer/capacity-1.c: Skip on AIX.
	* c-c++-common/analyzer/capacity-2.c: Same.
	* c-c++-common/analyzer/fd-glibc-byte-stream-socket.c: Same.
	* c-c++-common/analyzer/fd-manpage-getaddrinfo-client.c: Same.
	* c-c++-common/analyzer/fd-mappage-getaddrinfo-server.c: Same.
	* gcc.dg/analyzer/fd-glibc-byte-stream-connection-server.c: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-25 12:43:46 -05:00
Juzhe-Zhong
ed60b2868a RISC-V: Move RVV V_REGS liveness computation into analyze_loop_vinfo
Currently, we compute RVV V_REGS liveness during better_main_loop_than_p,
which is not the appropriate time to do that: when the code will finally
pick, say, an LMUL = 8 vectorization factor, we compute liveness for
LMUL = 8 multiple times, which is redundant.

Since we leverage the current ARM SVE cost model:

  /* Do one-time initialization based on the vinfo.  */
  loop_vec_info loop_vinfo = dyn_cast<loop_vec_info> (m_vinfo);
  if (!m_analyzed_vinfo)
    {
      if (loop_vinfo)
	analyze_loop_vinfo (loop_vinfo);

      m_analyzed_vinfo = true;
    }

the cost model is analyzed only once for each vinfo.

So here we move dynamic LMUL liveness information into analyze_loop_vinfo.

/* Do one-time initialization of the costs given that we're
   costing the loop vectorization described by LOOP_VINFO.  */
void
costs::analyze_loop_vinfo (loop_vec_info loop_vinfo)
{
  ...

  /* Detect whether the LOOP has unexpected spills.  */
  record_potential_unexpected_spills (loop_vinfo);
}

This avoids redundant computations, and the dynamic LMUL cost model flow
becomes much more reasonable and consistent with the others.

Tested on RV32 and RV64, no regressions.

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Allow
	fractional vector.
	(preferred_new_lmul_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo.
	(has_unexpected_spills_p): New function.
	(costs::record_potential_unexpected_spills): Ditto.
	(costs::better_main_loop_than_p): Move RVV V_REGS liveness computation into
	analyze_loop_vinfo.
	* config/riscv/riscv-vector-costs.h: New functions and variables.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul-mixed-1.c: Robustify test.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/no-dynamic-lmul-1.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/pr111848.c: Ditto.
	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Ditto.
2023-12-25 20:47:59 +08:00
Tamar Christina
fd032cce21 middle-end: explicitly initialize vec_stmts [PR113132]
When configured with --enable-checking=release we get a false positive on
the use of vec_stmts, as the compiler seems unable to notice that it gets
initialized through the pass-by-reference.

This explicitly initializes the local.

gcc/ChangeLog:

	PR bootstrap/113132
	* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize vec_stmts.
2023-12-25 10:58:40 +00:00
Jeevitha
1bbb169fe6 rs6000: Change GPR2 to volatile & non-fixed register for function that does not use TOC [PR110320]
Normally, GPR2 is the TOC pointer and is defined as a fixed and
non-volatile register.  However, it can be used as a volatile register
under PCREL addressing.  Therefore, make r2 non-fixed in FIXED_REGISTERS
and set it back to fixed when the target is not PCREL, and also when the
user explicitly requests a TOC or a fixed r2.  If register r2 is fixed, it
is made non-volatile.  Changes in register preservation roles can be
accomplished with the help of the available target hook
(TARGET_CONDITIONAL_REGISTER_USAGE).
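
A hedged sketch of the conditional-usage logic (simplified; TARGET_PCREL is
assumed here to be the PC-relative gate):

  /* r2 is now non-fixed by default; re-fix it, and thereby make it
     non-volatile, whenever PC-relative addressing is not in use.  */
  if (!TARGET_PCREL)
    {
      fixed_regs[2] = 1;
      call_used_regs[2] = 0;
    }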

2023-12-24  Jeevitha Palanisamy  <jeevitha@linux.ibm.com>

gcc/
	PR target/110320
	* config/rs6000/rs6000.cc (rs6000_conditional_register_usage): Change
	GPR2 to volatile and non-fixed register for PCREL.
	* config/rs6000/rs6000.h (FIXED_REGISTERS): Modify GPR2 to not fixed.

gcc/testsuite/
	PR target/110320
	* gcc.target/powerpc/pr110320-1.c: New testcase.
	* gcc.target/powerpc/pr110320-2.c: New testcase.
	* gcc.target/powerpc/pr110320-3.c: New testcase.

Co-authored-by: Peter Bergner <bergner@linux.ibm.com>
2023-12-25 04:06:54 -06:00
Juzhe-Zhong
0beeddd6b1 RISC-V: Add one more ASM check in PR113112-1.c
gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Add one more ASM check.
2023-12-25 14:36:44 +08:00
Andrew Pinski
59ecd5ff09 match: Improve (a != b) ? (a + b) : (2 * a) pattern [PR19832]
In the testcase provided, we would match f_plus but not g_plus
due to a missing `:c` on the plus operator. This fixes the oversight
there.

Note this was also reported in https://github.com/llvm/llvm-project/issues/76318 .

Committed as obvious after bootstrap/test on x86_64-linux-gnu.
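
A hedged reconstruction of the two shapes (names follow the PR testcase, not
copied verbatim): with `:c` on the plus, both now fold to `a + b`, since the
else-arm `2 * a` only runs when a == b, where 2 * a == a + b.

  int f_plus (int a, int b) { return a != b ? a + b : a * 2; }
  int g_plus (int a, int b) { return a != b ? b + a : a * 2; }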

	PR tree-optimization/19832

gcc/ChangeLog:

	* match.pd (`(a != b) ? (a + b) : (2 * a)`): Add `:c`
	on the plus operator.

gcc/testsuite/ChangeLog:

	* gcc.dg/tree-ssa/phi-opt-same-2.c: New test.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
2023-12-24 20:01:32 -08:00
GCC Administrator
f0269df25a Daily bump. 2023-12-25 00:18:09 +00:00
Tamar Christina
a657c7e351 testsuite: un-xfail TSVC loops that check for exit control flow vectorization
The following three tests now work correctly for targets that have an
implementation of cbranch for vectors, so the XFAILs are conditionally
removed, gated on vect_early_break support.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/tsvc/vect-tsvc-s332.c: Remove xfail when early break
	supported.
	* gcc.dg/vect/tsvc/vect-tsvc-s481.c: Likewise.
	* gcc.dg/vect/tsvc/vect-tsvc-s482.c: Likewise.
2023-12-24 19:30:09 +00:00
Tamar Christina
c5232ec149 testsuite: Add tests for early break vectorization
This adds new tests to check all the early break functionality.
It includes a number of codegen and runtime tests checking the values at
different needle positions in the array.

They also check the values with different array sizes and peeling positions,
data types, VL, ncopies and every other variant I could think of.

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.

I have disabled the tests on x86_64 because the target is missing cbranch
for all types.  I think it should be possible to add it for the missing
types, since all we care about is whether a bit is set or not.

Bootstrap and Regtest on arm-none-linux-gnueabihf still running
and test on arm-none-eabi -march=armv8.1-m.main+mve -mfpu=auto running.

gcc/ChangeLog:

	* doc/sourcebuild.texi (check_effective_target_vect_early_break_hw,
	check_effective_target_vect_early_break): Document.

gcc/testsuite/ChangeLog:

	* lib/target-supports.exp (add_options_for_vect_early_break,
	check_effective_target_vect_early_break_hw,
	check_effective_target_vect_early_break): New.
	* g++.dg/vect/vect-early-break_1.cc: New test.
	* g++.dg/vect/vect-early-break_2.cc: New test.
	* g++.dg/vect/vect-early-break_3.cc: New test.
	* gcc.dg/vect/vect-early-break-run_1.c: New test.
	* gcc.dg/vect/vect-early-break-run_10.c: New test.
	* gcc.dg/vect/vect-early-break-run_2.c: New test.
	* gcc.dg/vect/vect-early-break-run_3.c: New test.
	* gcc.dg/vect/vect-early-break-run_4.c: New test.
	* gcc.dg/vect/vect-early-break-run_5.c: New test.
	* gcc.dg/vect/vect-early-break-run_6.c: New test.
	* gcc.dg/vect/vect-early-break-run_7.c: New test.
	* gcc.dg/vect/vect-early-break-run_8.c: New test.
	* gcc.dg/vect/vect-early-break-run_9.c: New test.
	* gcc.dg/vect/vect-early-break-template_1.c: New test.
	* gcc.dg/vect/vect-early-break-template_2.c: New test.
	* gcc.dg/vect/vect-early-break_1.c: New test.
	* gcc.dg/vect/vect-early-break_10.c: New test.
	* gcc.dg/vect/vect-early-break_11.c: New test.
	* gcc.dg/vect/vect-early-break_12.c: New test.
	* gcc.dg/vect/vect-early-break_13.c: New test.
	* gcc.dg/vect/vect-early-break_14.c: New test.
	* gcc.dg/vect/vect-early-break_15.c: New test.
	* gcc.dg/vect/vect-early-break_16.c: New test.
	* gcc.dg/vect/vect-early-break_17.c: New test.
	* gcc.dg/vect/vect-early-break_18.c: New test.
	* gcc.dg/vect/vect-early-break_19.c: New test.
	* gcc.dg/vect/vect-early-break_2.c: New test.
	* gcc.dg/vect/vect-early-break_20.c: New test.
	* gcc.dg/vect/vect-early-break_21.c: New test.
	* gcc.dg/vect/vect-early-break_22.c: New test.
	* gcc.dg/vect/vect-early-break_23.c: New test.
	* gcc.dg/vect/vect-early-break_24.c: New test.
	* gcc.dg/vect/vect-early-break_25.c: New test.
	* gcc.dg/vect/vect-early-break_26.c: New test.
	* gcc.dg/vect/vect-early-break_27.c: New test.
	* gcc.dg/vect/vect-early-break_28.c: New test.
	* gcc.dg/vect/vect-early-break_29.c: New test.
	* gcc.dg/vect/vect-early-break_3.c: New test.
	* gcc.dg/vect/vect-early-break_30.c: New test.
	* gcc.dg/vect/vect-early-break_31.c: New test.
	* gcc.dg/vect/vect-early-break_32.c: New test.
	* gcc.dg/vect/vect-early-break_33.c: New test.
	* gcc.dg/vect/vect-early-break_34.c: New test.
	* gcc.dg/vect/vect-early-break_35.c: New test.
	* gcc.dg/vect/vect-early-break_36.c: New test.
	* gcc.dg/vect/vect-early-break_37.c: New test.
	* gcc.dg/vect/vect-early-break_38.c: New test.
	* gcc.dg/vect/vect-early-break_39.c: New test.
	* gcc.dg/vect/vect-early-break_4.c: New test.
	* gcc.dg/vect/vect-early-break_40.c: New test.
	* gcc.dg/vect/vect-early-break_41.c: New test.
	* gcc.dg/vect/vect-early-break_42.c: New test.
	* gcc.dg/vect/vect-early-break_43.c: New test.
	* gcc.dg/vect/vect-early-break_44.c: New test.
	* gcc.dg/vect/vect-early-break_45.c: New test.
	* gcc.dg/vect/vect-early-break_46.c: New test.
	* gcc.dg/vect/vect-early-break_47.c: New test.
	* gcc.dg/vect/vect-early-break_48.c: New test.
	* gcc.dg/vect/vect-early-break_49.c: New test.
	* gcc.dg/vect/vect-early-break_5.c: New test.
	* gcc.dg/vect/vect-early-break_50.c: New test.
	* gcc.dg/vect/vect-early-break_51.c: New test.
	* gcc.dg/vect/vect-early-break_52.c: New test.
	* gcc.dg/vect/vect-early-break_53.c: New test.
	* gcc.dg/vect/vect-early-break_54.c: New test.
	* gcc.dg/vect/vect-early-break_55.c: New test.
	* gcc.dg/vect/vect-early-break_56.c: New test.
	* gcc.dg/vect/vect-early-break_57.c: New test.
	* gcc.dg/vect/vect-early-break_58.c: New test.
	* gcc.dg/vect/vect-early-break_59.c: New test.
	* gcc.dg/vect/vect-early-break_6.c: New test.
	* gcc.dg/vect/vect-early-break_60.c: New test.
	* gcc.dg/vect/vect-early-break_61.c: New test.
	* gcc.dg/vect/vect-early-break_62.c: New test.
	* gcc.dg/vect/vect-early-break_63.c: New test.
	* gcc.dg/vect/vect-early-break_64.c: New test.
	* gcc.dg/vect/vect-early-break_65.c: New test.
	* gcc.dg/vect/vect-early-break_66.c: New test.
	* gcc.dg/vect/vect-early-break_67.c: New test.
	* gcc.dg/vect/vect-early-break_68.c: New test.
	* gcc.dg/vect/vect-early-break_69.c: New test.
	* gcc.dg/vect/vect-early-break_7.c: New test.
	* gcc.dg/vect/vect-early-break_70.c: New test.
	* gcc.dg/vect/vect-early-break_71.c: New test.
	* gcc.dg/vect/vect-early-break_72.c: New test.
	* gcc.dg/vect/vect-early-break_73.c: New test.
	* gcc.dg/vect/vect-early-break_74.c: New test.
	* gcc.dg/vect/vect-early-break_75.c: New test.
	* gcc.dg/vect/vect-early-break_76.c: New test.
	* gcc.dg/vect/vect-early-break_77.c: New test.
	* gcc.dg/vect/vect-early-break_78.c: New test.
	* gcc.dg/vect/vect-early-break_79.c: New test.
	* gcc.dg/vect/vect-early-break_8.c: New test.
	* gcc.dg/vect/vect-early-break_80.c: New test.
	* gcc.dg/vect/vect-early-break_81.c: New test.
	* gcc.dg/vect/vect-early-break_82.c: New test.
	* gcc.dg/vect/vect-early-break_83.c: New test.
	* gcc.dg/vect/vect-early-break_84.c: New test.
	* gcc.dg/vect/vect-early-break_85.c: New test.
	* gcc.dg/vect/vect-early-break_86.c: New test.
	* gcc.dg/vect/vect-early-break_87.c: New test.
	* gcc.dg/vect/vect-early-break_88.c: New test.
	* gcc.dg/vect/vect-early-break_89.c: New test.
	* gcc.dg/vect/vect-early-break_9.c: New test.
	* gcc.dg/vect/vect-early-break_90.c: New test.
	* gcc.dg/vect/vect-early-break_91.c: New test.
	* gcc.dg/vect/vect-early-break_92.c: New test.
	* gcc.dg/vect/vect-early-break_93.c: New test.
2023-12-24 19:30:09 +00:00
Tamar Christina
1bcc07aeb4 AArch64: Add implementation for vector cbranch for Advanced SIMD
This adds an implementation of the conditional branch optab for AArch64.

For example:

void f1 ()
{
  for (int i = 0; i < N; i++)
    {
      b[i] += a[i];
      if (a[i] > 0)
	break;
    }
}

For 128-bit vectors we generate:

        cmgt    v1.4s, v1.4s, #0
        umaxp   v1.4s, v1.4s, v1.4s
        fmov    x3, d1
        cbnz    x3, .L8

and for 64-bit vectors we can omit the compression:

        cmgt    v1.2s, v1.2s, #0
        fmov    x2, d1
        cbz     x2, .L13

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (cbranch<mode>4): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/vect-early-break-cbranch.c: New test.
	* gcc.target/aarch64/vect-early-break-cbranch.c: New test.
2023-12-24 19:30:09 +00:00
Tamar Christina
01f4251b87 middle-end: Support vectorization of loops with multiple exits.
This patch adds initial support for early break vectorization in GCC.  In
other words, it implements support for vectorization of loops with multiple
exits.  The support is added for any target that implements a vector
cbranch optab; this includes both fully masked and non-masked targets.

Depending on the operation, the vectorizer may also require support for
boolean mask reductions using Inclusive OR/Bitwise AND.  This is however
only checked when the comparison would produce multiple statements.

This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit.  Before this patch the vectorizer always picked the
natural loop latch-connected exit as the main exit.

After this patch the vectorizer is free to choose any exit it deems
appropriate as the main exit.  This means that even if the main exit is not
countable (i.e. the termination condition could not be determined) we might
still be able to vectorize should one of the other exits be countable.

In such situations the loop is reflowed, which enables vectorization of many
other loop forms.

Concretely the kind of loops supported are of the forms:

 for (int i = 0; i < N; i++)
 {
   <statements1>
   if (<condition>)
     {
       ...
       <action>;
     }
   <statements2>
 }

where <action> can be:
 - break
 - return
 - goto

Any number of statements can be used before the <action> occurs.

Since this is an initial version for GCC 14 it has the following limitations and
features:

- Only fixed sized iterations and buffers are supported.  That is to say any
  vectors loaded or stored must be to statically allocated arrays with known
  sizes.  N must also be known.  This limitation is because our primary target
  for this optimization is SVE.  For VLA SVE we can't easily do cross-page
  iteration checks.  The result is likely to also not be beneficial.  For that
  reason we punt support for variable buffers till we have First-Faulting
  support in GCC 15.
- Any stores in <statements1> should not be to the same objects as in
  <condition>.  Loads are fine as long as they don't have the possibility to
  alias.  More concretely, we block RAW dependencies when the intermediate value
  can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported
- Any number of loop early exits are supported.
- No support for epilogue vectorization.  The only epilogue supported is the
  scalar final one.  Peeling code supports it but the code motion code cannot
  find instructions to make the move in the epilog.
- Early breaks are only supported for inner loop vectorization.

With the help of IPA and LTO this still gets hit quite often.  During bootstrap
it hit rather frequently.  Additionally TSVC s332, s481 and s482 all pass now
since these are tests for support for early exit vectorization.

This implementation does not support completely handling the early break inside
the vector loop itself but instead supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code to
actually do the final VF iterations which handles all the code in <action>.

For the scalar loop we know that whatever exit you take you have to perform at
most VF iterations.  For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.

That is to say, the first vector loop executes so long as the early exit isn't
needed.  Once the exit is taken, the scalar code will perform at most VF extra
iterations.  The exact number depends on peeling, the iteration start, and which
exit was taken (natural or early).  For this scalar loop, all early exits are
treated the same.

When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break.  If this is not possible we decline to vectorize.  The
analysis and code motion also ensure that no RAW dependency is introduced by
the move of the stores.

This means that we check at the start of iterations whether we are going to exit
or not.  During the analysis phase we check whether we are allowed to do this
moving of statements.  Also note that we move only the scalar statements, and
do so after peeling but just before we start transforming statements.

With this the vector flow no longer necessarily needs to match that of the
scalar code.  In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.

Codegen:

for e.g.

unsigned vect_a[N];
unsigned vect_b[N];

unsigned test4(unsigned x)
{
 unsigned ret = 0;
 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i] > x)
     break;
   vect_a[i] = x;

 }
 return ret;
}

We generate for Adv. SIMD:

test4:
        adrp    x2, .LC0
        adrp    x3, .LANCHOR0
        dup     v2.4s, w0
        add     x3, x3, :lo12:.LANCHOR0
        movi    v4.4s, 0x4
        add     x4, x3, 3216
        ldr     q1, [x2, #:lo12:.LC0]
        mov     x1, 0
        mov     w2, 0
        .p2align 3,,7
.L3:
        ldr     q0, [x3, x1]
        add     v3.4s, v1.4s, v2.4s
        add     v1.4s, v1.4s, v4.4s
        cmhi    v0.4s, v0.4s, v2.4s
        umaxp   v0.4s, v0.4s, v0.4s
        fmov    x5, d0
        cbnz    x5, .L6
        add     w2, w2, 1
        str     q3, [x1, x4]
        str     q2, [x3, x1]
        add     x1, x1, 16
        cmp     w2, 200
        bne     .L3
        mov     w7, 3
.L2:
        lsl     w2, w2, 2
        add     x5, x3, 3216
        add     w6, w2, w0
        sxtw    x4, w2
        ldr     w1, [x3, x4, lsl 2]
        str     w6, [x5, x4, lsl 2]
        cmp     w0, w1
        bcc     .L4
        add     w1, w2, 1
        str     w0, [x3, x4, lsl 2]
        add     w6, w1, w0
        sxtw    x1, w1
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        add     w4, w2, 2
        str     w0, [x3, x1, lsl 2]
        sxtw    x1, w4
        add     w6, w1, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
        add     w2, w2, 3
        cmp     w7, 3
        beq     .L4
        sxtw    x1, w2
        add     w2, w2, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w2, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
.L4:
        mov     w0, 0
        ret
        .p2align 2,,3
.L6:
        mov     w7, 4
        b       .L2

and for SVE:

test4:
        adrp    x2, .LANCHOR0
        add     x2, x2, :lo12:.LANCHOR0
        add     x5, x2, 3216
        mov     x3, 0
        mov     w1, 0
        cntw    x4
        mov     z1.s, w0
        index   z0.s, #0, #1
        ptrue   p1.b, all
        ptrue   p0.s, all
        .p2align 3,,7
.L3:
        ld1w    z2.s, p1/z, [x2, x3, lsl 2]
        add     z3.s, z0.s, z1.s
        cmplo   p2.s, p0/z, z1.s, z2.s
        b.any   .L2
        st1w    z3.s, p1, [x5, x3, lsl 2]
        add     w1, w1, 1
        st1w    z1.s, p1, [x2, x3, lsl 2]
        add     x3, x3, x4
        incw    z0.s
        cmp     w3, 803
        bls     .L3
.L5:
        mov     w0, 0
        ret
        .p2align 2,,3
.L2:
        cntw    x5
        mul     w1, w1, w5
        cbz     w5, .L5
        sxtw    x1, w1
        sub     w5, w5, #1
        add     x5, x5, x1
        add     x6, x2, 3216
        b       .L6
        .p2align 2,,3
.L14:
        str     w0, [x2, x1, lsl 2]
        cmp     x1, x5
        beq     .L5
        mov     x1, x4
.L6:
        ldr     w3, [x2, x1, lsl 2]
        add     w4, w0, w1
        str     w4, [x6, x1, lsl 2]
        add     x4, x1, 1
        cmp     w0, w3
        bcs     .L14
        mov     w0, 0
        ret

On the workloads this work is based on we see between 2-3x performance uplift
using this patch.

Follow up plan:
 - Boolean vectorization has several shortcomings.  I've filed PR110223 with the
   bigger ones that cause vectorization to fail with this patch.
 - SLP support.  This is planned for GCC 15, as for the majority of cases build
   SLP itself fails.  This means I'll need to spend time making this more
   robust first.  Additionally it requires:
     * Adding support for vectorizing CFG (gconds)
     * Support for CFG to differ between vector and scalar loops.
   Both of which would be disruptive to the tree and I suspect I'll be handling
   fallouts from this patch for a while.  So I plan to work on the surrounding
   building blocks first for the remainder of the year.

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a
 -mcpu=neoverse-v1
 -mcpu=neoverse-n2

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.

gcc/ChangeLog:

	* tree-if-conv.cc (idx_within_array_bound): Expose.
	* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
	(vect_analyze_data_ref_dependences): Use it.
	* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
	(vect_set_loop_controls_directly,
	vect_set_loop_condition_partial_vectors,
	vect_set_loop_condition_partial_vectors_avx512,
	vect_set_loop_condition_normal): Support multiple exits.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
	multiple exits.
	(slpeel_can_duplicate_loop_p): Change vectorizer from looking at BB
	count and instead look at loop shape.
	(vect_update_ivs_after_vectorizer): Drop asserts.
	(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
	(vect_do_peeling): Support multiple exits.
	(vect_loop_versioning): Likewise.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
	early_breaks.
	(vect_analyze_loop_form): Support loop flows with more than single BB
	loop body.
	(vect_create_loop_vinfo): Support niters analysis for multiple exits.
	(vect_analyze_loop): Likewise.
	(vect_get_vect_def): New.
	(vect_create_epilog_for_reduction): Support early exit reductions.
	(vectorizable_live_operation_1): New.
	(find_connected_edge): New.
	(vectorizable_live_operation): Support early exit live operations.
	(move_early_exit_stmts): New.
	(vect_transform_loop): Use it.
	* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
	(vect_recog_bitfield_ref_pattern): Support gconds and bools.
	(vect_recog_gcond_pattern): New.
	(possible_vector_mask_operation_p): Support gcond masks.
	(vect_determine_mask_precision): Likewise.
	(vect_mark_pattern_stmts): Set gcond def type.
	(can_vectorize_live_stmts): Force early break inductions to be live.
	* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
	early breaks.
	(vect_mark_stmts_to_be_vectorized): Process gcond usage.
	(perm_mask_for_reverse): Expose.
	(vectorizable_comparison_1): New.
	(vectorizable_early_exit): New.
	(vect_analyze_stmt): Support early break and gcond.
	(vect_transform_stmt): Likewise.
	(vect_is_simple_use): Likewise.
	(vect_get_vector_types_for_stmt): Likewise.
	* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
	numbering.
	* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
	(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
	LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
	LOOP_VINFO_EARLY_BRK_VUSES): New.
	(is_loop_header_bb_p): Drop assert.
	(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
	early_break_vuses.
	(vect_iv_increment_position, perm_mask_for_reverse,
	ref_within_array_bound): New.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
2023-12-24 19:29:32 +00:00
Tamar Christina
f1dcc0fe37 middle-end: prevent LIM from hoisting vector compares from gconds if target does not support it.
LIM notices that in some cases the condition and the results are loop
invariant and tries to move them out of the loop.

While the resulting code is operationally sound, moving the compare out of the
gcond results in generating code that no longer branches, so cbranch is no
longer applicable.  As such I now add code to check during this motion whether
the target supports flag-setting vector comparison as a general operation.

I have tried writing a GIMPLE testcase for this but the gimple FE seems to be
having some trouble with the vector types.  It seems to fail parsing.

The early break code testsuite however has a test for this
(vect-early-break_67.c).
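
A hedged sketch of the shape of the new gate (illustrative, not the exact hunk
in determine_max_movement; `vectype` stands in for the type of the hoisting
candidate):

  /* Refuse to hoist a vector comparison feeding a gcond when the target has
     no cbranch optab for the vector mode, since the motion would strip the
     branch the vectorizer relies on.  */
  if (VECTOR_TYPE_P (vectype)
      && optab_handler (cbranch_optab, TYPE_MODE (vectype)) == CODE_FOR_nothing)
    return false;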

gcc/ChangeLog:

	* tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h
	and optabs-tree.h and check for vector compare motion out of gcond.
2023-12-24 19:17:13 +00:00
Tamar Christina
0994ddd86f testsuite: Add more pragma novector to new tests
This updates the testsuite and adds more #pragma GCC novector to various tests
that would otherwise vectorize the vector result checking code.

This cleans out the testsuite since the last rebase and prepares for the landing
of the early break patch.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/no-scevccp-slp-30.c: Add pragma GCC novector to abort
	loop.
	* gcc.dg/vect/no-scevccp-slp-31.c: Likewise.
	* gcc.dg/vect/no-section-anchors-vect-69.c: Likewise.
	* gcc.target/aarch64/vect-xorsign_exec.c: Likewise.
	* gcc.target/i386/avx512er-vrcp28ps-3.c: Likewise.
	* gcc.target/i386/avx512er-vrsqrt28ps-3.c: Likewise.
	* gcc.target/i386/avx512er-vrsqrt28ps-5.c: Likewise.
	* gcc.target/i386/avx512f-ceil-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceil-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceilf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceilf-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floor-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floor-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floorf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floorf-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-rint-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-rintf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-round-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-roundf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-trunc-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-truncf-vec-1.c: Likewise.
	* gcc.target/i386/vect-alignment-peeling-1.c: Likewise.
	* gcc.target/i386/vect-alignment-peeling-2.c: Likewise.
	* gcc.target/i386/vect-pack-trunc-1.c: Likewise.
	* gcc.target/i386/vect-pack-trunc-2.c: Likewise.
	* gcc.target/i386/vect-perm-even-1.c: Likewise.
	* gcc.target/i386/vect-unpack-1.c: Likewise.
2023-12-24 19:16:40 +00:00
John David Anglin
7dbde0c56a hppa: Fix pr110279-1.c on hppa
2023-12-24  John David Anglin  <danglin@gcc.gnu.org>

gcc/testsuite/ChangeLog:

	* gcc.dg/pr110279-1.c: Add -march=2.0 option on hppa*-*-*.
2023-12-24 19:03:59 +00:00
Pan Li
bd901d7673 RISC-V: XFail the signbit-5 run test for RVV
This patch XFAILs the signbit-5 run test for RVV.  The test has a known
limitation, stated at the beginning of the file: "This test does not
work when the truth type does not match vector type."  For RVV, the
vector truth type is not an integer type.

The riscv-sim target boards below pick up `-march=rv64gcv` when building
the test elf, so RVV cannot bypass this test the way aarch64_sve does
with the additional option `-march=armv8-a`.

  riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow

For RVV, we use dg-xfail-run-if for this case, as is already done for `amdgcn`.
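
The directive in the test then has roughly this shape (the exact comment
string in signbit-5.c may differ; this is a sketch):

  /* { dg-xfail-run-if "truth type does not match vector type" { riscv_v } } */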

signbit-5.c passes with the configurations below; the failures under other
configurations need further investigation.

* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64imafdcv/-mabi=lp64d/-mcmodel=medlow

gcc/testsuite/ChangeLog:

	* gcc.dg/signbit-5.c: XFail for the riscv_v.

Signed-off-by: Pan Li <pan2.li@intel.com>
2023-12-24 09:49:52 +08:00
Hans-Peter Nilsson
3d03630b12 CRIS: Fix PR middle-end/113109; "throw" failing
TL;DR: the "dse1" pass removed the eh-return-address store.  The
PA also marks its EH_RETURN_HANDLER_RTX as volatile for the same
reason, as does visium.  See PR32769 - it's the same thing on the PA.

Conceptually, it's logical that stores to incoming args are
optimized out on the return path, or when no loads are seen - at
least before epilogue expansion, when the subsequent load isn't yet
visible in the RTL, as is the case for the "dse1" pass.

I haven't looked into why this problem, which appeared for the PA
already in 2007, was seen for CRIS only recently (with
r14-6674-g4759383245ac97).
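
The approach, in a hedged sketch (the committed function may differ in
details such as the slot address, which is illustrative here): hand out a
MEM marked volatile for the EH return handler slot, so passes like dse1
cannot prove the store to it dead.

  rtx
  cris_eh_return_handler_rtx (void)
  {
    rtx mem = gen_rtx_MEM (Pmode,
                           plus_constant (Pmode, hard_frame_pointer_rtx, 4));
    /* Volatile, so dse1 cannot delete the store on the EH return path.  */
    MEM_VOLATILE_P (mem) = 1;
    return mem;
  }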

	PR middle-end/113109
	* config/cris/cris.cc (cris_eh_return_handler_rtx): New function.
	* config/cris/cris-protos.h (cris_eh_return_handler_rtx): Prototype.
	* config/cris/cris.h (EH_RETURN_HANDLER_RTX): Redefine to call
	cris_eh_return_handler_rtx.
2023-12-24 01:40:58 +01:00
GCC Administrator
d2ae7cb2ef Daily bump. 2023-12-24 00:17:37 +00:00
Xi Ruoyao
310dc75e70 LoongArch: Add sign_extend pattern for 32-bit rotate shift
Remove a redundant sign extension.
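
A hedged example of the affected shape: on LA64 the 32-bit rotate result is
already sign-extended to 64 bits, so the extension implied by the wider
return type should be free.

  long
  rotr32 (unsigned int x, unsigned int n)
  {
    unsigned int r = (x >> (n & 31)) | (x << (-n & 31));
    return (int) r;   /* Previously emitted a separate sign extension.  */
  }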

gcc/ChangeLog:

	* config/loongarch/loongarch.md (rotrsi3_extend): New
	define_insn.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/rotrw.c: New test.
2023-12-23 20:58:34 +08:00
Xi Ruoyao
78607d1229 LoongArch: Implement FCCmode reload and cstore<ANYF:mode>4
We used a branch to load floating-point comparison results into a GPR.
This is very slow when the branch is not predictable.

Implement movfcc so we can reload FCCmode into GPRs, FPRs, and MEM.
Then implement cstore<ANYF:mode>4.
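
A hedged example of what cstore<ANYF:mode>4 enables (the instruction names
in the comment are illustrative; the exact sequence may differ):

  int
  fp_le (double a, double b)
  {
    return a <= b;   /* fcmp.cle.d + movcf2gr instead of a branch.  */
  }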

gcc/ChangeLog:

	* config/loongarch/loongarch-tune.h
	(loongarch_rtx_cost_data::movcf2gr): New field.
	(loongarch_rtx_cost_data::movcf2gr_): New method.
	(loongarch_rtx_cost_data::use_movcf2gr): New method.
	* config/loongarch/loongarch-def.cc
	(loongarch_rtx_cost_data::loongarch_rtx_cost_data): Set movcf2gr
	to COSTS_N_INSNS (7) and movgr2cf to COSTS_N_INSNS (15), based
	on timing on LA464.
	(loongarch_cpu_rtx_cost_data): Set movcf2gr and movgr2cf to
	COSTS_N_INSNS (1) for LA664.
	(loongarch_rtx_cost_optimize_size): Set movcf2gr and movgr2cf to
	COSTS_N_INSNS (1) + 1.
	* config/loongarch/predicates.md (loongarch_fcmp_operator): New
	predicate.
	* config/loongarch/loongarch.md (movfcc): Change to
	define_expand.
	(movfcc_internal): New define_insn.
	(fcc_to_<X:mode>): New define_insn.
	(cstore<ANYF:mode>4): New define_expand.
	* config/loongarch/loongarch.cc
	(loongarch_hard_regno_mode_ok_uncached): Allow FCCmode in FPRs
	and GPRs.
	(loongarch_secondary_reload): Reload FCCmode via FPR and/or GPR.
	(loongarch_emit_float_compare): Call gen_reg_rtx instead of
	loongarch_allocate_fcc.
	(loongarch_allocate_fcc): Remove.
	(loongarch_move_to_gpr_cost): Handle FCC_REGS -> GR_REGS.
	(loongarch_move_from_gpr_cost): Handle GR_REGS -> FCC_REGS.
	(loongarch_register_move_cost): Handle FCC_REGS -> FCC_REGS,
	FCC_REGS -> FP_REGS, and FP_REGS -> FCC_REGS.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/movcf2gr.c: New test.
	* gcc.target/loongarch/movcf2gr-via-fr.c: New test.
2023-12-23 20:58:33 +08:00
Thomas Schwinge
c0bf7ea189 GCN, nvptx: Basic '__cxa_guard_{acquire,abort,release}' for C++ static local variables support
For now, this is for single-threaded GCN and nvptx target use only; an
extension for multi-threaded offloading use is to follow later.  Eventually,
this will switch to libstdc++-v3/libsupc++ proper.
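
For single-threaded use, such guard functions can be as simple as the
following hedged sketch (the actual c++-minimal/guard.c may differ): the
guard's first byte records whether the static local has been initialized.

  typedef long long __guard;   /* Itanium C++ ABI: 64-bit guard object.  */

  int
  __cxa_guard_acquire (__guard *g)
  {
    return *(char *) g == 0;   /* Nonzero: caller runs the initializer.  */
  }

  void
  __cxa_guard_release (__guard *g)
  {
    *(char *) g = 1;           /* Initialization completed.  */
  }

  void
  __cxa_guard_abort (__guard *g)
  {
    (void) g;                  /* Initializer threw; leave guard clear.  */
  }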

	libgcc/
	* c++-minimal/README: New.
	* c++-minimal/guard.c: New.
	* config/gcn/t-amdgcn (LIB2ADD): Add it.
	* config/nvptx/t-nvptx (LIB2ADD): Likewise.
2023-12-23 10:10:02 +01:00
YunQiang Su
079455458e MIPS: Don't add nan2008 option for -mtune=native
Users may wish just use -mtune=native for performance tuning only.
Let's don't make trouble for its case.

gcc/

	* config/mips/driver-native.cc (host_detect_local_cpu):
	Don't add nan2008 option for -mtune=native.
2023-12-23 16:46:55 +08:00
YunQiang Su
384dbb0b4e MIPS: Put the ret to the end of args of reconcat [PR112759]
The function `reconcat` cannot append string(s) after a NULL argument,
as the concatenation stops at the first NULL.

Let's always put `ret` at the end, as it may be NULL.
We keep using reconcat here because it makes it easier to add
more hardware feature detection later, for example via hwcap.
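
A hedged illustration of the pitfall (the option string and helper are made
up; reconcat is libiberty's, declared in libiberty.h):

  #include "libiberty.h"

  static char *
  append_opt (char *ret)
  {
    /* Wrong: if ret is NULL, the vararg scan stops at it and
       " -mnan=2008" is silently dropped:
         ret = reconcat (ret, ret, " -mnan=2008", NULL);  */

    /* Right: the possibly-NULL ret goes last.  */
    return reconcat (ret, " -mnan=2008", ret, NULL);
  }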

gcc/

	PR target/112759
	* config/mips/driver-native.cc (host_detect_local_cpu):
	Put the ret to the end of args of reconcat.
2023-12-23 16:46:23 +08:00
Juzhe-Zhong
2902300340 RISC-V: Make PHI initial value occupy live V_REG in dynamic LMUL cost model analysis
Consider the following case:

foo:
        ble     a0,zero,.L11
        lui     a2,%hi(.LANCHOR0)
        addi    sp,sp,-128
        addi    a2,a2,%lo(.LANCHOR0)
        mv      a1,a0
        vsetvli a6,zero,e32,m8,ta,ma
        vid.v   v8
        vs8r.v  v8,0(sp)                     ---> spill
.L3:
        vl8re32.v       v16,0(sp)            ---> reload
        vsetvli a4,a1,e8,m2,ta,ma
        li      a3,0
        vsetvli a5,zero,e32,m8,ta,ma
        vmv8r.v v0,v16
        vmv.v.x v8,a4
        vmv.v.i v24,0
        vadd.vv v8,v16,v8
        vmv8r.v v16,v24
        vs8r.v  v8,0(sp)                    ---> spill
.L4:
        addiw   a3,a3,1
        vadd.vv v8,v0,v16
        vadd.vi v16,v16,1
        vadd.vv v24,v24,v8
        bne     a0,a3,.L4
        vsetvli zero,a4,e32,m8,ta,ma
        sub     a1,a1,a4
        vse32.v v24,0(a2)
        slli    a4,a4,2
        add     a2,a2,a4
        bne     a1,zero,.L3
        li      a0,0
        addi    sp,sp,128
        jr      ra
.L11:
        li      a0,0
        ret

Pick unexpected LMUL = 8.
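
At the source level the loop looks roughly like this (a hedged
reconstruction; the committed test is pr113112-1.c and its exact source may
differ):

  int res[1024];

  int
  foo (int n)
  {
    for (int i = 0; i < n; i++)
      {
        int sum = 0;
        for (int j = 0; j < n; j++)
          sum += i + j;
        res[i] = sum;
      }
    return 0;
  }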

The root cause is that we didn't account for the PHI initial value in the dynamic LMUL calculation:

  # j_17 = PHI <j_11(9), 0(5)>                       ---> # vect_vec_iv_.8_24 = PHI <_25(9), { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }(5)>

We didn't count { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } as consuming a vector register, yet it does allocate a vector register group.

This patch fixes the missing count.  With it, we pick the optimal LMUL (LMUL = M4):

foo:
	ble	a0,zero,.L9
	lui	a4,%hi(.LANCHOR0)
	addi	a4,a4,%lo(.LANCHOR0)
	mv	a2,a0
	vsetivli	zero,16,e32,m4,ta,ma
	vid.v	v20
.L3:
	vsetvli	a3,a2,e8,m1,ta,ma
	li	a5,0
	vsetivli	zero,16,e32,m4,ta,ma
	vmv4r.v	v16,v20
	vmv.v.i	v12,0
	vmv.v.x	v4,a3
	vmv4r.v	v8,v12
	vadd.vv	v20,v20,v4
.L4:
	addiw	a5,a5,1
	vmv4r.v	v4,v8
	vadd.vi	v8,v8,1
	vadd.vv	v4,v16,v4
	vadd.vv	v12,v12,v4
	bne	a0,a5,.L4
	slli	a5,a3,2
	vsetvli	zero,a3,e32,m4,ta,ma
	sub	a2,a2,a3
	vse32.v	v12,0(a4)
	add	a4,a4,a5
	bne	a2,zero,.L3
.L9:
	li	a0,0
	ret

Tested with --with-arch=gcv; no regressions.

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (max_number_of_live_regs): Refine dump information.
	(preferred_new_lmul_p): Make PHI initial value into live regs calculation.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: New test.
2023-12-23 08:59:03 +08:00