Commit graph

206824 commits

Author SHA1 Message Date
Jakub Jelinek
a945c346f5 Update copyright years. 2024-01-03 12:19:35 +01:00
Jakub Jelinek
9afc19159c Small tweaks for update-copyright.py
update-copyright.py --this-year FAILs on two spots in the modula2
directories.
One is gpl_v3_without_node.texi, I think that is similar to other
license files which we already exclude from updates.
And the other is GmcOptions.cc, which has lines like
  mcPrintf_printf0 ((const char *) "Copyright ", 10);
  mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
  mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
which update-copyright.py obviously can't grok.  The file is generated
and doesn't contain a normal Copyright year that should be updated, so I think
it is also ok to skip it.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

	* update-copyright.py (GenericFilter): Skip gpl_v3_without_node.texi.
	(GCCFilter): Skip GmcOptions.cc.
2024-01-03 12:11:32 +01:00
Jakub Jelinek
4e053a7e19 Update copyright dates.
Manual part of copyright year updates.

2024-01-03  Jakub Jelinek  <jakub@redhat.com>

gcc/
	* gcc.cc (process_command): Update copyright notice dates.
	* gcov-dump.cc (print_version): Ditto.
	* gcov.cc (print_version): Ditto.
	* gcov-tool.cc (print_version): Ditto.
	* gengtype.cc (create_file): Ditto.
	* doc/cpp.texi: Bump @copying's copyright year.
	* doc/cppinternals.texi: Ditto.
	* doc/gcc.texi: Ditto.
	* doc/gccint.texi: Ditto.
	* doc/gcov.texi: Ditto.
	* doc/install.texi: Ditto.
	* doc/invoke.texi: Ditto.
gcc/ada/
	* gnat_ugn.texi: Bump @copying's copyright year.
	* gnat_rm.texi: Likewise.
gcc/d/
	* gdc.texi: Bump @copyrights-d year.
gcc/fortran/
	* gfortranspec.cc (lang_specific_driver): Update copyright notice
	dates.
	* gfc-internals.texi: Bump @copying's copyright year.
	* gfortran.texi: Ditto.
	* intrinsic.texi: Ditto.
	* invoke.texi: Ditto.
gcc/go/
	* gccgo.texi: Bump @copyrights-go year.
libgomp/
	* libgomp.texi: Bump @copying's copyright year.
libitm/
	* libitm.texi: Bump @copying's copyright year.
libquadmath/
	* libquadmath.texi: Bump @copying's copyright year.
2024-01-03 11:44:34 +01:00
Jakub Jelinek
6a720d41ff Update Copyright year in ChangeLog files
2023 -> 2024
2024-01-03 11:35:18 +01:00
Jakub Jelinek
8c22aed4b0 Rotate ChangeLog files.
Rotate ChangeLog files for ChangeLogs with yearly cadence.
2024-01-03 11:29:39 +01:00
Xi Ruoyao
87acfc3619
LoongArch: Provide fmin/fmax RTL pattern for vectors
We already had smin/smax RTL pattern using vfmin/vfmax instructions.
But for smin/smax, it's unspecified what will happen if either operand
contains any NaN operands.  So we would not vectorize the loop with
-fno-finite-math-only (the default for all optimization levels except
-Ofast).

But, LoongArch vfmin/vfmax instruction is IEEE-754-2008 conformant so we
can also use them and vectorize the loop.
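
As an illustration (not taken from the patch; the function and names below are
made up), a loop like the following is the kind that can now be vectorized
without -ffast-math, because calls to fmax map to the new fmax<mode>3 vector
pattern and vfmax keeps the IEEE-754-2008 NaN semantics:

  #include <math.h>

  void
  vec_fmax (double *restrict r, const double *a, const double *b, int n)
  {
    /* NaN behavior stays well defined, so -ffinite-math-only is not
       needed for the vectorizer to use vfmax here.  */
    for (int i = 0; i < n; i++)
      r[i] = fmax (a[i], b[i]);
  }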

gcc/ChangeLog:

	* config/loongarch/simd.md (fmax<mode>3): New define_insn.
	(fmin<mode>3): Likewise.
	(reduc_fmax_scal_<mode>3): New define_expand.
	(reduc_fmin_scal_<mode>3): Likewise.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/vfmax-vfmin.c: New test.
2024-01-03 18:23:48 +08:00
Juzhe-Zhong
a43bd82554 RISC-V: Make liveness be aware of rgroup number of LENS[dynamic LMUL]
This patch fixes the following situation:
vl4re16.v       v12,0(a5)
...
vl4re16.v       v16,0(a3)
vs4r.v  v12,0(a5)
...
vl4re16.v       v4,0(a0)
vs4r.v  v16,0(a3)
...
vsetvli a3,zero,e16,m4,ta,ma
...
vmv.v.x v8,t6
vmsgeu.vv       v2,v16,v8
vsub.vv v16,v16,v8
vs4r.v  v16,0(a5)
...
vs4r.v  v4,0(a0)
vmsgeu.vv       v1,v4,v8
...
vsub.vv v4,v4,v8
slli    a6,a4,2
vs4r.v  v4,0(a5)
...
vsub.vv v4,v12,v8
vmsgeu.vv       v3,v12,v8
vs4r.v  v4,0(a5)
...

There are many spills, all of which are 'vs4r.v'.  The root cause is that we don't count
vector REG liveness with respect to the rgroup controls.

_29 = _25->iatom[0]; is transformed into the following vect statement with 4 different loop_len (loop_len_74, loop_len_75, loop_len_76, loop_len_77).

  vect__29.11_78 = .MASK_LEN_LOAD (vectp_sb.9_72, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_74, 0);
  vect__29.12_80 = .MASK_LEN_LOAD (vectp_sb.9_79, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_75, 0);
  vect__29.13_82 = .MASK_LEN_LOAD (vectp_sb.9_81, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_76, 0);
  vect__29.14_84 = .MASK_LEN_LOAD (vectp_sb.9_83, 32B, { -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 }, loop_len_77, 0);

where 4 is the number of LENS (LOOP_VINFO_LENS (loop_vinfo).length ()).

Count liveness according to LOOP_VINFO_LENS (loop_vinfo).length () to compute liveness more accurately:

vsetivli	zero,8,e16,m1,ta,ma
vmsgeu.vi	v19,v14,8
vadd.vi	v18,v14,-8
vmsgeu.vi	v17,v1,8
vadd.vi	v16,v1,-8
vlm.v	v15,0(a5)
...

Tested with no regressions, ok for trunk?

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Add rgroup info.
	(max_number_of_live_regs): Ditto.
	(has_unexpected_spills_p): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-5.c: New test.
2024-01-03 17:20:56 +08:00
Patrick Palka
a138b99646 libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175]
The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
inadvertently increased the execution time of this test by over 5x due
to making the two main loops actually run in the signed_p case instead
of being dead code.

To compensate, this patch cuts the relevant loops' range [-1000,1000] by
10x as proposed in the PR.  This shouldn't significantly weaken the test
since the same important edge cases are still checked in the smaller range
and/or elsewhere.  On my machine this reduces the test's execution time by
roughly 10x (and 1.6x relative to before r14-205).

	PR testsuite/113175

libstdc++-v3/ChangeLog:

	* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
	'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
	(test03): Likewise.
2024-01-02 21:31:20 -05:00
GCC Administrator
45c807b794 Daily bump. 2024-01-03 00:17:41 +00:00
Jun Sha (Joshua)
152cd65bf4 RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns
This patch replaces csr_operand with vector_length_operand in the vsetvl
patterns.  This allows future changes in the vector code (i.e. in the
vector_length_operand predicate) without affecting scalar patterns that
use the csr_operand predicate.

gcc/ChangeLog:

	* config/riscv/vector.md:
	Use vector_length_operand for vsetvl patterns.

Co-authored-by: Jin Ma <jinma@linux.alibaba.com>
Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com>
Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
2024-01-02 20:37:19 +01:00
Andreas Schwab
ae11ee8f85 libsanitizer: Enable LSan and TSan for riscv64
libsanitizer:
	* configure.tgt (riscv64-*-linux*): Enable LSan and TSan.
2024-01-02 18:52:30 +01:00
Szabolcs Nagy
046cea56fd aarch64: fortran: Adjust vect-8.f90 for libmvec
With a new glibc, one more loop can be vectorized via simd exp in libmvec.

Found by the Linaro TCWG CI.

gcc/testsuite/ChangeLog:

	* gfortran.dg/vect/vect-8.f90: Accept more vectorized loops.
2024-01-02 10:54:10 +00:00
Juzhe-Zhong
76f069fef7 RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern
In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx pattern with dummy len and dummy mask using an overly simple
solution, which causes a redundant vsetvli in the following case:

	vsetvli	a5,a2,e8,m1,ta,ma
	vle32.v	v8,0(a0)
	vsetivli	zero,16,e32,m4,tu,mu   ----> We should apply VLMAX instead of a CONST_INT AVL
	slli	a4,a5,2
	vand.vv	v0,v8,v16
	vand.vv	v4,v8,v12
	vmseq.vi	v0,v0,0
	sub	a2,a2,a5
	vneg.v	v4,v8,v0.t
	vsetvli	zero,a5,e32,m4,ta,ma

The root cause above is the following codes:

is_vlmax_len_p (...)
   return poly_int_rtx_p (len, &value)
        && known_eq (value, GET_MODE_NUNITS (mode))
        && !satisfies_constraint_K (len);            ---> incorrect check.

Actually, we should not exclude the VLMAX situation that has an AVL in the range [0,31].

After removing the check above, we will have the following issue:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

Since all the following operations (vfadd.vf ... etc.) are COND_LEN_xxx with dummy len and dummy mask,
we simplify operations with dummy len and dummy mask into VLMAX with the TA and MA policy.

So, after this patch, both cases now have optimal codegen:

case 1:
	vsetvli	a5,a2,e32,m1,ta,mu
	vle32.v	v2,0(a0)
	slli	a4,a5,2
	vand.vv	v1,v2,v3
	vand.vv	v0,v2,v4
	sub	a2,a2,a5
	vmseq.vi	v0,v0,0
	vneg.v	v1,v2,v0.t
	vse32.v	v1,0(a1)

case 2:
	vsetivli zero,4,e32,m1,tu,ma
	addi a4,a5,400
	vlseg4e32.v v12,(a3)
	vfadd.vf v3,v13,fa0
	vfadd.vf v1,v12,fa1
	vlseg4e32.v v4,(a4)
	vfadd.vf v2,v14,fa1
	vfmul.vv v17,v3,v5
	vfmul.vv v16,v1,v5

This patch is just an additional fix for the previously approved patch.
Tested on both RV32 and RV64 newlib with no regressions.  Committed.

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (is_vlmax_len_p): Remove satisfies_constraint_K.
	(expand_cond_len_op): Add simplification of dummy len and dummy mask.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/base/vf_avl-3.c: New test.
2024-01-02 17:12:15 +08:00
Di Zhao
b041bd4ec2 aarch64: add 'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA'
This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
pipelined FMAs in reassociation. Also, set this option by default
for Ampere CPUs.
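
As a rough sketch of what the option means (illustrative code, not part of the
patch): when the FMA units are fully pipelined, reassociation can profitably
split one long dependent chain of fused multiply-adds into several independent
accumulators, similar to rewriting the first function below by hand into the
second.  GCC only performs this reassociation itself when floating-point
reassociation is allowed (e.g. with -ffast-math).

  /* One dependent fmadd chain: every iteration waits on the previous one.  */
  double dot1 (const double *a, const double *b, int n)
  {
    double s = 0.0;
    for (int i = 0; i < n; i++)
      s += a[i] * b[i];
    return s;
  }

  /* Reassociated form with two independent chains, which keeps fully
     pipelined FMA units busy.  */
  double dot2 (const double *a, const double *b, int n)
  {
    double s0 = 0.0, s1 = 0.0;
    int i;
    for (i = 0; i + 1 < n; i += 2)
      {
        s0 += a[i] * b[i];
        s1 += a[i + 1] * b[i + 1];
      }
    if (i < n)
      s0 += a[i] * b[i];   /* odd trailing element */
    return s0 + s1;
  }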

gcc/ChangeLog:

	* config/aarch64/aarch64-tuning-flags.def
	(AARCH64_EXTRA_TUNING_OPTION): New tuning option
	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
	* config/aarch64/aarch64.cc
	(aarch64_override_options_internal): Set
	param_fully_pipelined_fma according to tuning option.
	* config/aarch64/tuning_models/ampere1.h: Add
	AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
	* config/aarch64/tuning_models/ampere1a.h: Likewise.
	* config/aarch64/tuning_models/ampere1b.h: Likewise.
2024-01-02 12:35:03 +08:00
Feng Wang
6be6305fb6 RISC-V: Modify copyright year of vector-crypto.md
gcc/ChangeLog:
	* config/riscv/vector-crypto.md: Modify copyright year.
2024-01-02 02:22:52 +00:00
Juzhe-Zhong
d2e40f2867 RISC-V: Declare STMT_VINFO_TYPE (...) as local variable
Committed.

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc: Move STMT_VINFO_TYPE (...) to local.
2024-01-02 10:10:27 +08:00
Lulu Cheng
3c20e6263a LoongArch: Added TLS Le Relax support.
Check whether the assembler supports TLS LE relax.  If it does, the assembly
instruction sequence for TLS LE relax will be generated by default.

The original way to obtain the tls le symbol address:
    lu12i.w $rd, %le_hi20(sym)
    ori $rd, $rd, %le_lo12(sym)
    add.{w/d} $rd, $rd, $tp

If the assembler supports tls le relax, the following sequence is generated:

    lu12i.w $rd, %le_hi20_r(sym)
    add.{w/d} $rd,$rd,$tp,%le_add_r(sym)
    addi.{w/d} $rd,$rd,%le_lo12_r(sym)
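
For reference, a minimal example (not the patch's testcase) of code that uses
the local-exec TLS model and therefore the %le_* sequences above when built
into an executable:

  /* A static __thread variable accessed from the same executable is
     typically addressed with the local-exec (LE) TLS model shown above.  */
  static __thread int counter;

  int
  bump_counter (void)
  {
    return ++counter;
  }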

gcc/ChangeLog:

	* config.in: Regenerate.
	* config/loongarch/loongarch-opts.h (HAVE_AS_TLS_LE_RELAXATION): Define.
	* config/loongarch/loongarch.cc (loongarch_legitimize_tls_address):
	Added TLS Le Relax support.
	(loongarch_print_operand_reloc): Add the output string of TLS Le Relax.
	* config/loongarch/loongarch.md (@add_tls_le_relax<mode>): New template.
	* configure: Regenerate.
	* configure.ac: Check if binutils supports TLS le relax.

gcc/testsuite/ChangeLog:

	* lib/target-supports.exp: Add a function to check whether binutils
	supports TLS Le Relax.
	* gcc.target/loongarch/tls-le-relax.c: New test.
2024-01-02 09:33:27 +08:00
Feng Wang
d3d6a96d45 RISC-V: Add crypto machine descriptions
Co-Authored by: Songhe Zhu <zhusonghe@eswincomputing.com>
Co-Authored by: Ciyan Pan <panciyan@eswincomputing.com>
gcc/ChangeLog:

	* config/riscv/iterators.md: Add rotate insn name.
	* config/riscv/riscv.md: Add new insns name for crypto vector.
	* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
	* config/riscv/vector.md: Add the corresponding attr for crypto vector.
	* config/riscv/vector-crypto.md: New file.  The machine descriptions for crypto vector.
2024-01-02 01:17:23 +00:00
Juzhe-Zhong
9a29b00365 RISC-V: Count pointer type SSA into RVV regs liveness for dynamic LMUL cost model
This patch fixes the following case of choosing an unexpectedly big LMUL, which causes register spills.

Before this patch, choosing LMUL = 4:

	addi	sp,sp,-160
	addiw	t1,a2,-1
	li	a5,7
	bleu	t1,a5,.L16
	vsetivli	zero,8,e64,m4,ta,ma
	vmv.v.x	v4,a0
	vs4r.v	v4,0(sp)                        ---> spill to the stack.
	vmv.v.x	v4,a1
	addi	a5,sp,64
	vs4r.v	v4,0(a5)                        ---> spill to the stack.

The root cause is the following codes:

                  if (poly_int_tree_p (var)
                      || (is_gimple_val (var)
                         && !POINTER_TYPE_P (TREE_TYPE (var))))

We count the variable as consuming an RVV reg group when it is not POINTER_TYPE.

It is right for load/store STMT for example:

_1 = (MEM)*addr -->  addr won't be allocated an RVV vector group.

However, we find it is not right for a non-load/store STMT:

_3 = _1 == x_8(D);

_1 is pointer type too, but we do allocate an RVV register group for it.

So after this patch, we are choosing the perfect LMUL for the testcase in this patch:

	ble	a2,zero,.L17
	addiw	a7,a2,-1
	li	a5,3
	bleu	a7,a5,.L15
	srliw	a5,a7,2
	slli	a6,a5,1
	add	a6,a6,a5
	lui	a5,%hi(replacements)
	addi	t1,a5,%lo(replacements)
	slli	a6,a6,5
	lui	t4,%hi(.LANCHOR0)
	lui	t3,%hi(.LANCHOR0+8)
	lui	a3,%hi(.LANCHOR0+16)
	lui	a4,%hi(.LC1)
	vsetivli	zero,4,e16,mf2,ta,ma
	addi	t4,t4,%lo(.LANCHOR0)
	addi	t3,t3,%lo(.LANCHOR0+8)
	addi	a3,a3,%lo(.LANCHOR0+16)
	addi	a4,a4,%lo(.LC1)
	add	a6,t1,a6
	addi	a5,a5,%lo(replacements)
	vle16.v	v18,0(t4)
	vle16.v	v17,0(t3)
	vle16.v	v16,0(a3)
	vmsgeu.vi	v25,v18,4
	vadd.vi	v24,v18,-4
	vmsgeu.vi	v23,v17,4
	vadd.vi	v22,v17,-4
	vlm.v	v21,0(a4)
	vmsgeu.vi	v20,v16,4
	vadd.vi	v19,v16,-4
	vsetvli	zero,zero,e64,m2,ta,mu
	vmv.v.x	v12,a0
	vmv.v.x	v14,a1
.L4:
	vlseg3e64.v	v6,(a5)
	vmseq.vv	v2,v6,v12
	vmseq.vv	v0,v8,v12
	vmsne.vv	v1,v8,v12
	vmand.mm	v1,v1,v2
	vmerge.vvm	v2,v8,v14,v0
	vmv1r.v	v0,v1
	addi	a4,a5,24
	vmerge.vvm	v6,v6,v14,v0
	vmerge.vim	v2,v2,0,v0
	vrgatherei16.vv	v4,v6,v18
	vmv1r.v	v0,v25
	vrgatherei16.vv	v4,v2,v24,v0.t
	vs1r.v	v4,0(a5)
	addi	a3,a5,48
	vmv1r.v	v0,v21
	vmv2r.v	v4,v2
	vcompress.vm	v4,v6,v0
	vs1r.v	v4,0(a4)
	vmv1r.v	v0,v23
	addi	a4,a5,72
	vrgatherei16.vv	v4,v6,v17
	vrgatherei16.vv	v4,v2,v22,v0.t
	vs1r.v	v4,0(a3)
	vmv1r.v	v0,v20
	vrgatherei16.vv	v4,v6,v16
	addi	a5,a5,96
	vrgatherei16.vv	v4,v2,v19,v0.t
	vs1r.v	v4,0(a4)
	bne	a6,a5,.L4

No spills, no "sp" register used.

Tested on both RV32 and RV64, no regressions.

Ok for trunk?

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (compute_nregs_for_mode): Fix
	pointer type liveness count.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-4.c: New test.
2024-01-02 08:22:58 +08:00
GCC Administrator
c8170fe5d4 Daily bump. 2024-01-02 00:18:25 +00:00
GCC Administrator
3a7dd24ead Daily bump. 2024-01-01 00:18:40 +00:00
Roger Sayle
79e1b23b91 i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c
This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, that started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.

The function:

unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
  return x+y;
}

currently generates:

foo:    movq    %rdx, %rcx
        movq    %rdi, %rax
        movq    %rsi, %rdx
        addq    %rcx, %rax
        adcq    $0, %rdx
        ret

and with this patch, we now generate:

foo:	movq    %rdi, %rax
        addq    %rdx, %rax
        movq    %rsi, %rdx
        adcq    $0, %rdx

which is optimal.

2023-12-31  Uros Bizjak  <ubizjak@gmail.com>
	    Roger Sayle  <roger@nextmovesoftware.com>

gcc/ChangeLog
	PR target/43644
	* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
	order of instructions after split, to minimize number of moves.

gcc/testsuite/ChangeLog
	PR target/43644
	* gcc.target/i386/pr43644-2.c: Expect 2 movq instructions.
2023-12-31 21:37:24 +00:00
Hans-Peter Nilsson
26fe2808d8 libstdc++ testsuite/20_util/hash/quality.cc: Increase timeout 3x
Testing for mmix (a 64-bit target using Knuth's simulator).  The test
is largely pruned for simulators, but still needs 5m57s on my 3.5-year-old
laptop to run to successful completion.  Perhaps slow hosted targets could
also have problems, so increase the timeout limit, not just for simulators
but for everyone, and by more than a factor of 2.

	* testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.
2023-12-31 18:17:13 +01:00
François Dumont
505110bb91 libstdc++: [_Hashtable] Extend the small size optimization
A number of methods were still not using the small size optimization, which
is to prefer an O(N) linear search over a hash computation as long as N is small.
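
The idea, sketched in plain C below rather than the actual libstdc++
internals (the function, names and threshold are only illustrative, and the
hash-based path is omitted):

  #include <stddef.h>
  #include <string.h>

  /* For a container holding only a handful of keys, comparing every stored
     key directly is cheaper than computing the hash and probing a bucket.  */
  int
  contains_when_small (const char *const *keys, size_t n, const char *key)
  {
    for (size_t i = 0; i < n; i++)   /* O(N) equality checks, no hashing */
      if (strcmp (keys[i], key) == 0)
        return 1;
    return 0;
  }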

libstdc++-v3/ChangeLog:

	* include/bits/hashtable.h: Move comment about all equivalent values
	being next to each other in the class documentation header.
	(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
	(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
2023-12-31 18:00:29 +01:00
François Dumont
91b334d027 libstdc++: [_Hashtable] Enhance performance benches
Add benchmarks for insert with hint and for the before-begin cache.

libstdc++-v3/ChangeLog:

	* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
	w/o copy to see potential impact of memory fragmentation enhancements.
	* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
	functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
	slow or not to bench w/o hash code cache.
	* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
	previous one but using std::unordered_set.
	* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
	Check performance of range-insertion compared to individual insertions.
	* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
	but after a copy to demonstrate impact of enhancements regarding memory fragmentation.
2023-12-31 17:53:27 +01:00
GCC Administrator
03fb8f274c Daily bump. 2023-12-31 00:16:46 +00:00
Martin Uecker
38c33fd2b5 C: Fix type compatibility for structs with variable sized fields.
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f,
which fails for -march=... because the DECL_FIELD_BIT_OFFSETs are set
inconsistently for types with and without a variable-sized field.  This
is fixed by testing for DECL_ALIGN instead.  The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be DECL_FIELDs.

gcc/c:
	* c-typeck.cc (tagged_types_tu_compatible_p): Revise.

gcc/testsuite:
	* gcc.dg/c23-tag-9.c: New test.
2023-12-30 22:32:51 +01:00
Joseph Myers
77f30e22f1 MAINTAINERS: Update my email address
There will be another update in January.

	* MAINTAINERS: Update my email address.
2023-12-30 00:27:32 +00:00
GCC Administrator
ab7f670157 Daily bump. 2023-12-30 00:17:48 +00:00
Jan Hubicka
467cc398e6 Disable FMADD in chains for Zen4 and generic
This patch disables the use of FMA in the matrix multiplication loop for generic (for
x86-64-v3) and zen4.  I tested this on zen4 and a Xeon Gold 6212U.

For Intel this is neutral both on the matrix multiplication microbenchmark
(attached) and spec2k17 where the difference was within noise for Core.

On Core the micro-benchmark runs as follows:

With FMA:

       578,500,241      cycles:u                         #    3.645 GHz
                ( +-  0.12% )
       753,318,477      instructions:u                   #    1.30  insn per
cycle              ( +-  0.00% )
       125,417,701      branches:u                       #  790.227 M/sec
                ( +-  0.00% )
          0.159146 +- 0.000363 seconds time elapsed  ( +-  0.23% )

No FMA:

       577,573,960      cycles:u                         #    3.514 GHz
                ( +-  0.15% )
       878,318,479      instructions:u                   #    1.52  insn per
cycle              ( +-  0.00% )
       125,417,702      branches:u                       #  763.035 M/sec
                ( +-  0.00% )
          0.164734 +- 0.000321 seconds time elapsed  ( +-  0.19% )

So the cycle count is unchanged and a discrete multiply+add takes the same time
as FMA.

While on zen:

With FMA:
         484875179      cycles:u                         #    3.599 GHz
             ( +-  0.05% )  (82.11%)
         752031517      instructions:u                   #    1.55  insn per
cycle
         125106525      branches:u                       #  928.712 M/sec
             ( +-  0.03% )  (85.09%)
            128356      branch-misses:u                  #    0.10% of all
branches          ( +-  0.06% )  (83.58%)

No FMA:
         375875209      cycles:u                         #    3.592 GHz
             ( +-  0.08% )  (80.74%)
         875725341      instructions:u                   #    2.33  insn per
cycle
         124903825      branches:u                       #    1.194 G/sec
             ( +-  0.04% )  (84.59%)
          0.105203 +- 0.000188 seconds time elapsed  ( +-  0.18% )

The difference is that Cores understand the fact that fmadd does not need
all three parameters to start computation, while Zen cores don't.

Since this seems a noticeable win on zen and not a loss on Core, it seems like a
good default for generic.

#include <stdio.h>
#include <time.h>

#ifndef SIZE
#define SIZE 1000   /* assumed matrix dimension; define as needed */
#endif

float a[SIZE][SIZE];
float b[SIZE][SIZE];
float c[SIZE][SIZE];

void init(void)
{
   int i, j, k;
   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         a[i][j] = (float)i + j;
         b[i][j] = (float)i - j;
         c[i][j] = 0.0f;
      }
   }
}

void mult(void)
{
   int i, j, k;

   for(i=0; i<SIZE; ++i)
   {
      for(j=0; j<SIZE; ++j)
      {
         for(k=0; k<SIZE; ++k)
         {
            c[i][j] += a[i][k] * b[k][j];
         }
      }
   }
}

int main(void)
{
   clock_t s, e;

   init();
   s=clock();
   mult();
   e=clock();
   printf("        mult took %10d clocks\n", (int)(e-s));

   return 0;

}

gcc/ChangeLog:

	* config/i386/x86-tune.def (X86_TUNE_AVOID_128FMA_CHAINS,
	X86_TUNE_AVOID_256FMA_CHAINS): Enable for znver4 and Core.
2023-12-29 23:51:03 +01:00
Tamar Christina
984bdeaa39 AArch64: Update costing for vector conversions [PR110625]
In gimple the operation

short _8;
double _9;
_9 = (double) _8;

denotes two operations on AArch64.  First we have to widen from short to
long and then convert this integer to a double.
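
A minimal loop exhibiting that conversion (a sketch, not the testcase added by
the patch; the function name is made up):

  void
  widen_and_convert (double *restrict out, const short *restrict in, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = (double) in[i];   /* widen the short, then convert to double */
  }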

Currently however we only count the widen/truncate operations:

(double) _5 6 times vec_promote_demote costs 12 in body
(double) _5 12 times vec_promote_demote costs 24 in body

but not the actual conversion operation, which needs an additional 12
instructions in the attached testcase.   Without this the attached testcase ends
up incorrectly thinking that it's beneficial to vectorize the loop at a very
high VF = 8 (4x unrolled).

Because we can't change the mid-end to account for this, the costing code in the
backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions as follows:

1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
   different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion then reduce the
   cost of the current operation by the extra amount we added for the last one.

with the patch we get:

(double) _5 6 times vec_promote_demote costs 24 in body
(double) _5 12 times vec_promote_demote costs 36 in body

which correctly accounts for 30 operations.

This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
and using the new generic Armv9-a cost model.

gcc/ChangeLog:

	PR target/110625
	* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
	Adjust throughput and latency calculations for vector conversions.
	(class aarch64_vector_costs): Add m_num_last_promote_demote.

gcc/testsuite/ChangeLog:

	PR target/110625
	* gcc.target/aarch64/pr110625_4.c: New test.
	* gcc.target/aarch64/sve/unpack_fcvt_signed_1.c: Add
	--param aarch64-sve-compare-costs=0.
	* gcc.target/aarch64/sve/unpack_fcvt_unsigned_1.c: Likewise.
2023-12-29 15:58:29 +00:00
Xi Ruoyao
748a4e9069
LoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC)
gcc/ChangeLog:

	* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
	For the condition, remove unneeded trailing "\" and move "&&" to
	follow GNU coding style.  NFC.
2023-12-29 20:07:54 +08:00
Xi Ruoyao
8b61d109b1
LoongArch: Replace -mexplicit-relocs=auto simple-used address peephole2 with combine
The problem with peephole2 is it uses a naive sliding-window algorithm
and misses many cases.  For example:

    float a[10000];
    float t() { return a[0] + a[8000]; }

is compiled to:

    la.local    $r13,a
    la.local    $r12,a+32768
    fld.s       $f1,$r13,0
    fld.s       $f0,$r12,-768
    fadd.s      $f0,$f1,$f0

by trunk.  But as we've explained in r14-4851, the following would be
better with -mexplicit-relocs=auto:

    pcalau12i   $r13,%pc_hi20(a)
    pcalau12i   $r12,%pc_hi20(a+32000)
    fld.s       $f1,$r13,%pc_lo12(a)
    fld.s       $f0,$r12,%pc_lo12(a+32000)
    fadd.s      $f0,$f1,$f0

However the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized.  Using a define_insn_and_rewrite in the combine pass
works around the issue.

gcc/ChangeLog:

	* config/loongarch/predicates.md
	(symbolic_pcrel_offset_operand): New define_predicate.
	(mem_simple_ldst_operand): Likewise.
	* config/loongarch/loongarch-protos.h
	(loongarch_rewrite_mem_for_simple_ldst): Declare.
	* config/loongarch/loongarch.cc
	(loongarch_rewrite_mem_for_simple_ldst): Implement.
	* config/loongarch/loongarch.md (simple_load<mode>): New
	define_insn_and_rewrite.
	(simple_load_<su>ext<SUBDI:mode><GPR:mode>): Likewise.
	(simple_store<mode>): Likewise.
	(define_peephole2): Remove la.local/[f]ld peepholes.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
	New test.
	* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
	New test.
2023-12-29 20:07:53 +08:00
Uros Bizjak
1e7f9abb89 i386: Fix TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter [PR113133]
The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
The splitter changes SFmode of the output operand to V4SFmode, but the vector
mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
returns NULL_RTX in this case and the compilation fails with invalid RTX.

The patch removes support for x/ymm16+ registers with TARGET_EVEX512.  The
support should be restored once ix86_hard_regno_mode_ok is fixed to allow
16-byte modes in x/ymm16+ with TARGET_EVEX512.

	PR target/113133

gcc/ChangeLog:

	* config/i386/i386.md
	(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
	Do not handle xmm16+ with TARGET_EVEX512.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr113133-1.c: New test.
	* gcc.target/i386/pr113133-2.c: New test.
2023-12-29 09:53:01 +01:00
Andrew Pinski
200531d5b9 Fix gen-vect-26.c testcase after loops with multiple exits [PR113167]
This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that checks the result.
We only want to test the first loop to see if it can be vectorized.
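
For reference, a sketch of the pattern described (not the exact testcase;
the array names and size are made up):

  #include <stdlib.h>

  #define N 128
  extern int out[N], expected[N];

  void
  check_results (void)
  {
    /* Keep the checking loop scalar so only the loop under test is
       considered when scanning the vectorizer dumps.  */
    #pragma GCC novector
    for (int i = 0; i < N; i++)
      if (out[i] != expected[i])
        abort ();
  }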

Committed as obvious after testing on x86_64-linux-gnu with -m32.

gcc/testsuite/ChangeLog:

	PR testsuite/113167
	* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
	as novector.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
2023-12-28 20:49:47 -08:00
Juzhe-Zhong
7dc868cb31 RISC-V: Robostify testcase pr113112-1.c
The redundant dump check is fragile, easily changed, and not necessary.

Tested on both RV32/RV64 no regression.

Remove it and committed.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Remove redundant checks.
2023-12-29 09:41:16 +08:00
Juzhe-Zhong
d1eacedc6d RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31]
Notice we have the following situation:

        vsetivli        zero,4,e32,m1,ta,ma
        vlseg4e32.v     v4,(a5)
        vlseg4e32.v     v12,(a3)
        vsetvli a5,zero,e32,m1,tu,ma             ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
        vfadd.vf        v3,v13,fa0
        vfadd.vf        v1,v12,fa1
        vfmul.vv        v17,v3,v5
        vfmul.vv        v16,v1,v5

The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them, since when len is in the range [0,31] we don't
need to consume scalar registers.

After this patch:

	vsetivli	zero,4,e32,m1,tu,ma
	addi	a4,a5,400
	vlseg4e32.v	v12,(a3)
	vfadd.vf	v3,v13,fa0
	vfadd.vf	v1,v12,fa1
	vlseg4e32.v	v4,(a4)
	vfadd.vf	v2,v14,fa1
	vfmul.vv	v17,v3,v5
	vfmul.vv	v16,v1,v5

Tested on both RV32 and RV64, no regressions.

Ok for trunk?

gcc/ChangeLog:

	* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
	(expand_load_store): Disallow transformation into VLMAX when len is in the range [0,31].
	(expand_cond_len_op): Ditto.
	(expand_gather_scatter): Ditto.
	(expand_lanes_load_store): Ditto.
	(expand_fold_extract_last): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
	* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
2023-12-29 08:38:03 +08:00
GCC Administrator
7de05ad450 Daily bump. 2023-12-29 00:17:56 +00:00
Rimvydas Jasinskas
2cb93e6686 Fortran: Add Developer Options mini-section to documentation
Separate out -fdump-* options to the new section.  Sort by option name.

While there, document -save-temps intermediates.

gcc/fortran/ChangeLog:

	PR fortran/81615
	* invoke.texi: Add Developer Options section.  Move '-fdump-*'
	to it.  Add small examples about changed '-save-temps' behavior.

Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
2023-12-28 21:07:10 +01:00
David Edelsohn
bf5c00d7ee testsuite: XFAIL linkage testcases on AIX.
The template linkage2.C and linkage3.C testcases expect a
decoration that does not match AIX assembler syntax.  Expect failure.

gcc/testsuite/ChangeLog:
	* g++.dg/template/linkage2.C: XFAIL on AIX.
	* g++.dg/template/linkage3.C: Same.

Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
2023-12-28 14:55:04 -05:00
Uros Bizjak
d74cceb6d4 i386: Cleanup ix86_expand_{unary|binary}_operator issues
Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.

No functional changes.

gcc/ChangeLog:

	* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
	* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
	* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
	and ix86_expand_{unary|binary}_operator prototypes.
	* config/i386/i386.md: Cosmetic changes with the usage of
	TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
	and ix86_{unary|binary}_operator_ok function calls.
2023-12-28 12:31:30 +01:00
Juzhe-Zhong
76f5542c48 RISC-V: Make dynamic LMUL cost model more accurate for conversion codes
Notice the current dynamic LMUL cost model is not accurate for conversion codes.
Refine it for them; the case below changes from choosing LMUL = 4 to LMUL = 8.

Tested with no regressions, committed.

Before this patch (LMUL = 4):                  After this patch (LMUL = 8):
        lw      a7,56(sp)                             lw	a7,56(sp)
        ld      t5,0(sp)                              ld	t5,0(sp)
        ld      t1,8(sp)                              ld	t1,8(sp)
        ld      t6,16(sp)                             ld	t6,16(sp)
        ld      t0,24(sp)                             ld	t0,24(sp)
        ld      t3,32(sp)                             ld	t3,32(sp)
        ld      t4,40(sp)                             ld	t4,40(sp)
        ble     a7,zero,.L5                           ble	a7,zero,.L5
.L3:                                               .L3:
        vsetvli a4,a7,e32,m2,ta,ma                    vsetvli	a4,a7,e32,m4,ta
        vle8.v  v1,0(a2)                              vle8.v	v3,0(a2)
        vle8.v  v4,0(a1)                              vle8.v	v16,0(t0)
        vsext.vf4       v8,v1                         vle8.v	v7,0(a1)
        vsext.vf4       v2,v4                         vle8.v	v12,0(t6)
        vsetvli zero,zero,e8,mf2,ta,ma                vle8.v	v2,0(a5)
        vadd.vv v4,v4,v1                              vle8.v	v1,0(t5)
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4	v20,v3
        vle8.v  v5,0(t0)                              vsext.vf4	v8,v7
        vle8.v  v6,0(t6)                              vadd.vv	v8,v8,v20
        vadd.vv v2,v2,v8                              vadd.vv	v8,v8,v8
        vadd.vv v2,v2,v2                              vadd.vv	v8,v8,v20
        vadd.vv v2,v2,v8                              vsetvli	zero,zero,e8,m1
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv	v15,v12,v16
        vadd.vv v6,v6,v5                              vsetvli	zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vsext.vf4	v12,v15
        vle8.v  v8,0(t5)                              vadd.vv	v8,v8,v12
        vle8.v  v9,0(a5)                              vsetvli	zero,zero,e8,m1
        vsext.vf4       v10,v4                        vadd.vv	v7,v7,v3
        vsext.vf4       v12,v6                        vsetvli	zero,zero,e32,m4
        vadd.vv v2,v2,v12                             vsext.vf4	v4,v7
        vadd.vv v2,v2,v10                             vadd.vv	v8,v8,v4
        vsetvli zero,zero,e16,m1,ta,ma                vsetvli	zero,zero,e16,m2
        vncvt.x.x.w     v4,v2                         vncvt.x.x.w	v4,v8
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli	zero,zero,e8,m1
        vadd.vv v6,v2,v2                              vncvt.x.x.w	v4,v4
        vsetvli zero,zero,e8,mf2,ta,ma                vadd.vv	v15,v3,v4
        vncvt.x.x.w     v4,v4                         vadd.vv	v2,v2,v4
        vadd.vv v5,v5,v4                              vse8.v	v15,0(t4)
        vadd.vv v9,v9,v4                              vadd.vv	v3,v16,v4
        vadd.vv v1,v1,v4                              vse8.v	v2,0(a3)
        vadd.vv v4,v8,v4                              vadd.vv	v1,v1,v4
        vse8.v  v1,0(t4)                              vse8.v	v1,0(a6)
        vse8.v  v9,0(a3)                              vse8.v	v3,0(t1)
        vsetvli zero,zero,e32,m2,ta,ma                vsetvli	zero,zero,e32,m4
        vse8.v  v4,0(a6)                              vsext.vf4	v4,v3
        vsext.vf4       v8,v5                         vadd.vv	v4,v4,v8
        vse8.v  v5,0(t1)                              vsetvli	zero,zero,e64,m8
        vadd.vv v2,v8,v2                              vsext.vf2	v16,v4
        vsetvli zero,zero,e64,m4,ta,ma                vse64.v	v16,0(t3)
        vsext.vf2       v8,v2                         vsetvli	zero,zero,e32,m4
        vsetvli zero,zero,e32,m2,ta,ma                vadd.vv	v8,v8,v8
        slli    t2,a4,3                               vsext.vf4	v4,v15
        vse64.v v8,0(t3)                              slli	t2,a4,3
        vsext.vf4       v2,v1                         vadd.vv	v4,v8,v4
        sub     a7,a7,a4                              sub	a7,a7,a4
        vadd.vv v2,v6,v2                              vsetvli	zero,zero,e64,m8
        vsetvli zero,zero,e64,m4,ta,ma                vsext.vf2	v8,v4
        vsext.vf2       v4,v2                         vse64.v	v8,0(a0)
        vse64.v v4,0(a0)                              add	a1,a1,a4
        add     a2,a2,a4                              add	a2,a2,a4
        add     a1,a1,a4                              add	a5,a5,a4
        add     t6,t6,a4                              add	t5,t5,a4
        add     t0,t0,a4                              add	t6,t6,a4
        add     a5,a5,a4                              add	t0,t0,a4
        add     t5,t5,a4                              add	t4,t4,a4
        add     t4,t4,a4                              add	a3,a3,a4
        add     a3,a3,a4                              add	a6,a6,a4
        add     a6,a6,a4                              add	t1,t1,a4
        add     t1,t1,a4                              add	t3,t3,t2
        add     t3,t3,t2                              add	a0,a0,t2
        add     a0,a0,t2                              bne	a7,zero,.L3
        bne     a7,zero,.L3                         .L5:
.L5:                                                  ret
        ret

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): Change interface.
	(get_live_range): New function.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Adapt test.
	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
2023-12-28 10:32:51 +08:00
GCC Administrator
fb57e402d0 Daily bump. 2023-12-28 00:19:23 +00:00
Xi Ruoyao
f19ceb2d49
LoongArch: Fix infinite secondary reloading of FCCmode [PR113148]
The GCC internal doc says:

     X might be a pseudo-register or a 'subreg' of a pseudo-register,
     which could either be in a hard register or in memory.  Use
     'true_regnum' to find out; it will return -1 if the pseudo is in
     memory and the hard register number if it is in a register.

So "MEM_P (x)" is not enough for checking if we are reloading from/to
the memory.  This bug has caused the reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.

Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.

gcc/ChangeLog:

	PR target/113148
	* config/loongarch/loongarch.cc (loongarch_secondary_reload):
	Check if regno == -1 besides MEM_P (x) for reloading FCCmode
	from/to FPR to/from memory.

gcc/testsuite/ChangeLog:

	PR target/113148
	* gcc.target/loongarch/pr113148.c: New test.
2023-12-27 19:02:04 +08:00
Xi Ruoyao
80b8f1e535
LoongArch: Expand left rotate to right rotate with negated amount
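
The underlying identity, shown as a scalar C sketch (illustrative only, not
part of the patch): a rotate left by n bits is a rotate right by the negated
amount modulo the width, so a target with only rotate-right instructions can
still expand rotl:

  unsigned int
  rotl32 (unsigned int x, unsigned int n)
  {
    n &= 31;
    return (x << n) | (x >> ((32 - n) & 31));
  }

  unsigned int
  rotl32_via_rotr (unsigned int x, unsigned int n)
  {
    unsigned int r = (0u - n) & 31;              /* negated amount mod 32 */
    return (x >> r) | (x << ((32 - r) & 31));    /* i.e. rotate right by r */
  }
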
gcc/ChangeLog:

	* config/loongarch/loongarch.md (rotl<mode>3):
	New define_expand.
	* config/loongarch/simd.md (vrotl<mode>3): Likewise.
	(rotl<mode>3): Likewise.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/rotl-with-rotr.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
	* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
	* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.
2023-12-27 19:02:03 +08:00
Juzhe-Zhong
c4ac073d4f RISC-V: Make known NITERS loop be aware of dynamic lmul cost model liveness information
Consider the following case:

int f[12][100];

void bad1(int v1, int v2)
{
  for (int r = 0; r < 100; r += 4)
    {
      int i = r + 1;
      f[0][r] = f[1][r] * (f[2][r]) - f[1][i] * (f[2][i]);
      f[0][i] = f[1][r] * (f[2][i]) + f[1][i] * (f[2][r]);
      f[0][r+2] = f[1][r+2] * (f[2][r+2]) - f[1][i+2] * (f[2][i+2]);
      f[0][i+2] = f[1][r+2] * (f[2][i+2]) + f[1][i+2] * (f[2][r+2]);
    }
}

Pick up LMUL = 8 VLS blindly:

        lui     a4,%hi(f)
        addi    a4,a4,%lo(f)
        addi    sp,sp,-592
        addi    a3,a4,800
        lui     a5,%hi(.LANCHOR0)
        vl8re32.v       v24,0(a3)
        addi    a5,a5,%lo(.LANCHOR0)
        addi    a1,a4,400
        addi    a3,sp,140
        vl8re32.v       v16,0(a1)
        vl4re16.v       v4,0(a5)
        addi    a7,a5,192
        vs4r.v  v4,0(a3)
        addi    t0,a5,64
        addi    a3,sp,336
        li      t2,32
        addi    a2,a5,128
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v4
        vmul.vv v8,v8,v24
        vl8re32.v       v0,0(a7)
        vs8r.v  v8,0(a3)
        vmsltu.vx       v8,v0,t2
        addi    a3,sp,12
        addi    t2,sp,204
        vsm.v   v8,0(t2)
        vl4re16.v       v4,0(t0)
        vl4re16.v       v0,0(a2)
        vs4r.v  v4,0(a3)
        addi    t0,sp,336
        vrgatherei16.vv v8,v24,v4
        addi    a3,sp,208
        vrgatherei16.vv v24,v16,v0
        vs4r.v  v0,0(a3)
        vmul.vv v8,v8,v24
        vlm.v   v0,0(t2)
        vl8re32.v       v24,0(t0)
        addi    a3,sp,208
        vsub.vv v16,v24,v8
        addi    t6,a4,528
        vadd.vv v8,v24,v8
        addi    t5,a4,928
        vmerge.vvm      v8,v8,v16,v0
        addi    t3,a4,128
        vs8r.v  v8,0(a4)
        addi    t4,a4,1056
        addi    t1,a4,656
        addi    a0,a4,256
        addi    a6,a4,1184
        addi    a1,a4,784
        addi    a7,a4,384
        addi    a4,sp,140
        vl4re16.v       v0,0(a3)
        vl8re32.v       v24,0(t6)
        vl4re16.v       v4,0(a4)
        vrgatherei16.vv v16,v24,v0
        addi    a3,sp,12
        vs8r.v  v16,0(t0)
        vl8re32.v       v8,0(t5)
        vrgatherei16.vv v16,v24,v4
        vl4re16.v       v4,0(a3)
        vrgatherei16.vv v24,v8,v4
        vmul.vv v16,v16,v8
        vl8re32.v       v8,0(t0)
        vmul.vv v8,v8,v24
        vsub.vv v24,v16,v8
        vlm.v   v0,0(t2)
        addi    a3,sp,208
        vadd.vv v8,v8,v16
        vl8re32.v       v16,0(t4)
        vmerge.vvm      v8,v8,v24,v0
        vrgatherei16.vv v24,v16,v4
        vs8r.v  v24,0(t0)
        vl4re16.v       v28,0(a3)
        addi    a3,sp,464
        vs8r.v  v8,0(t3)
        vl8re32.v       v8,0(t1)
        vrgatherei16.vv v0,v8,v28
        vs8r.v  v0,0(a3)
        addi    a3,sp,140
        vl4re16.v       v24,0(a3)
        addi    a3,sp,464
        vrgatherei16.vv v0,v8,v24
        vl8re32.v       v24,0(t0)
        vmv8r.v v8,v0
        vl8re32.v       v0,0(a3)
        vmul.vv v8,v8,v16
        vmul.vv v24,v24,v0
        vsub.vv v16,v8,v24
        vadd.vv v8,v8,v24
        vsetivli        zero,4,e32,m8,ta,ma
        vle32.v v24,0(a6)
        vsetvli a4,zero,e32,m8,ta,ma
        addi    a4,sp,12
        vlm.v   v0,0(t2)
        vmerge.vvm      v8,v8,v16,v0
        vl4re16.v       v16,0(a4)
        vrgatherei16.vv v0,v24,v16
        vsetivli        zero,4,e32,m8,ta,ma
        vs8r.v  v0,0(a4)
        addi    a4,sp,208
        vl4re16.v       v0,0(a4)
        vs8r.v  v8,0(a0)
        vle32.v v16,0(a1)
        vsetvli a5,zero,e32,m8,ta,ma
        vrgatherei16.vv v8,v16,v0
        vs8r.v  v8,0(a4)
        addi    a4,sp,140
        vl4re16.v       v4,0(a4)
        addi    a5,sp,12
        vrgatherei16.vv v8,v16,v4
        vl8re32.v       v0,0(a5)
        vsetivli        zero,4,e32,m8,ta,ma
        addi    a5,sp,208
        vmv8r.v v16,v8
        vl8re32.v       v8,0(a5)
        vmul.vv v24,v24,v16
        vmul.vv v8,v0,v8
        vsub.vv v16,v24,v8
        vadd.vv v8,v8,v24
        vsetvli a5,zero,e8,m2,ta,ma
        vlm.v   v0,0(t2)
        vsetivli        zero,4,e32,m8,ta,ma
        vmerge.vvm      v8,v8,v16,v0
        vse32.v v8,0(a7)
        addi    sp,sp,592
        jr      ra

This patch makes loops with known NITERS aware of the liveness estimation; after this patch, LMUL = 4 is chosen:

	lui	a5,%hi(f)
	addi	a5,a5,%lo(f)
	addi	a3,a5,400
	addi	a4,a5,800
	vsetivli	zero,8,e32,m2,ta,ma
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v8,v16
	addi	a3,a5,528
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a4,a5,928
	vmul.vv	v2,v12,v22
	vmul.vv	v6,v8,v18
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmul.vv	v4,v12,v20
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	vsseg4e32.v	v24,(a5)
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v8,v16
	addi	a6,a5,128
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a0,a5,656
	vmul.vv	v2,v12,v22
	addi	a1,a5,1056
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmul.vv	v6,v8,v18
	vmul.vv	v4,v12,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	vsseg4e32.v	v24,(a6)
	vlseg4e32.v	v16,(a0)
	vlseg4e32.v	v8,(a1)
	vmul.vv	v2,v8,v16
	addi	a2,a5,256
	vmv.v.v	v24,v10
	vnmsub.vv	v24,v18,v2
	addi	a3,a5,784
	vmul.vv	v2,v12,v22
	addi	a4,a5,1184
	vmv.v.v	v30,v2
	vmacc.vv	v30,v14,v20
	vmul.vv	v6,v8,v18
	vmul.vv	v4,v12,v20
	vmv.v.v	v26,v6
	vmacc.vv	v26,v10,v16
	vmv.v.v	v28,v14
	vnmsub.vv	v28,v22,v4
	addi	a5,a5,384
	vsseg4e32.v	v24,(a2)
	vsetivli	zero,1,e32,m2,ta,ma
	vlseg4e32.v	v16,(a3)
	vlseg4e32.v	v8,(a4)
	vmul.vv	v2,v16,v8
	vmul.vv	v6,v18,v8
	vmv.v.v	v24,v18
	vnmsub.vv	v24,v10,v2
	vmul.vv	v4,v20,v12
	vmul.vv	v2,v22,v12
	vmv.v.v	v26,v6
	vmacc.vv	v26,v16,v10
	vmv.v.v	v28,v22
	vnmsub.vv	v28,v14,v4
	vmv.v.v	v30,v2
	vmacc.vv	v30,v20,v14
	vsseg4e32.v	v24,(a5)
	ret

Tested on both RV32 and RV64, no regressions.

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): New function.
	(get_first_lane_point): Ditto.
	(get_last_lane_point): Ditto.
	(max_number_of_live_regs): Refine live point dump.
	(compute_estimated_lmul): Make unknown NITERS loop be aware of liveness.
	(costs::better_main_loop_than_p): Ditto.
	* config/riscv/riscv-vector-costs.h (struct stmt_point): Add new member.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c:
	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-3.c: New test.
2023-12-27 17:19:35 +08:00
Chenghui Pan
feaff27b29 LoongArch: Fix ICE when passing two same vector argument consecutively
The following code will cause an ICE on the LoongArch target:

  #include <lsxintrin.h>

  extern void bar (__m128i, __m128i);

  __m128i a;

  void
  foo ()
  {
    bar (a, a);
  }

It is caused by a missing constraint definition in mov<mode>_lsx.  This
patch fixes the template and removes the unnecessary processing from
the loongarch_split_move () function.

This patch also cleans up the redundant definitions from
loongarch_split_move () and loongarch_split_move_p ().

gcc/ChangeLog:

	* config/loongarch/lasx.md: Use loongarch_split_move and
	loongarch_split_move_p directly.
	* config/loongarch/loongarch-protos.h
	(loongarch_split_move): Remove unnecessary argument.
	(loongarch_split_move_insn_p): Delete.
	(loongarch_split_move_insn): Delete.
	* config/loongarch/loongarch.cc
	(loongarch_split_move_insn_p): Delete.
	(loongarch_load_store_insns): Use loongarch_split_move_p
	directly.
	(loongarch_split_move): Remove the unnecessary processing.
	(loongarch_split_move_insn): Delete.
	* config/loongarch/lsx.md: Use loongarch_split_move and
	loongarch_split_move_p directly.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.
2023-12-27 14:54:40 +08:00
Chenghui Pan
183a51935c LoongArch: Fix insn output of vec_concat templates for LASX.
When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the following
instruction block is generated by vec_concatv32qi (which is
generated by vec_initv32qiv16qi) at the entrance of the foo() function:

  vldx    $vr3,$r5,$r6
  vld     $vr2,$r5,0
  xvpermi.q       $xr2,$xr3,0x20

This causes the high and low 128-bit parts of the vec_initv32qiv16qi
operation to be reversed.

According to other targets' similar implementations and the LSX implementation
for the following RTL representation, the current definition of
"vec_concat<mode>" in lasx.md is wrong:

  (set (op0) (vec_concat (op1) (op2)))

For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20.  This patch fixes this issue and cleans up the vec_concat
template implementation.

gcc/ChangeLog:

	* config/loongarch/lasx.md (vec_concatv4di): Delete.
	(vec_concatv8si): Delete.
	(vec_concatv16hi): Delete.
	(vec_concatv32qi): Delete.
	(vec_concatv4df): Delete.
	(vec_concatv8sf): Delete.
	(vec_concat<mode>): New template with insn output fixed.
2023-12-27 14:54:03 +08:00
Li Wei
245c9ef2b8 LoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.
We found that using the latest compiled gcc causes a miscompare error
when running the spec2006 400.perlbench test with -flto turned on.  After testing,
it was found that only the LoongArch architecture reports the error.
The first bad commit was located through git bisect as
r14-3773-g5b857e87201335.  Through debugging, it was found that the problem
is that the split condition of the *bstrins_<mode>_for_ior_mask template is
empty; it should actually be consistent with the insn condition.

gcc/ChangeLog:

	* config/loongarch/loongarch.md: Adjust.
2023-12-27 14:53:50 +08:00
Haochen Gui
d92d26ff36 rs6000: Clean up the pre-checkings of expand_block_compare
Remove the P7 CPU test, as only P7 and above can enter this function and P7 LE is
excluded by the check of targetm.slow_unaligned_access on word_mode.
Also, performance testing shows the expansion of block compare is better than the
library call on P7 BE when the length is from 16 bytes to 64 bytes.
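
For context, the kind of call this expansion handles (a sketch, not the new
testcase): a memcmp with a small constant length, which the backend can
expand inline instead of calling the library routine.

  #include <string.h>

  int
  blocks_equal (const void *a, const void *b)
  {
    /* Constant length in the 16..64 byte range discussed above.  */
    return memcmp (a, b, 32) == 0;
  }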

gcc/
	* config/rs6000/rs6000-string.cc (expand_block_compare): Assert
	that only P7 and above can enter this function.  Remove the P7 CPU
	test and let P7 BE do the expand.

gcc/testsuite/
	* gcc.target/powerpc/block-cmp-4.c: New.
2023-12-27 10:35:14 +08:00