Notice we have the following situation:
vsetivli zero,4,e32,m1,ta,ma
vlseg4e32.v v4,(a5)
vlseg4e32.v v12,(a3)
vsetvli a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 under fixed-vlmax
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when
len == NUNITS. However, we don't need to transform all of them: when len is a
constant in the range [0, 31], it fits into vsetivli's 5-bit immediate, so it
does not need to consume a scalar register.
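A minimal sketch of the new is_vlmax_len_p predicate, assuming GCC's poly_int
helpers; the actual implementation in riscv-v.cc may differ in detail:

/* Return true only if LEN should be treated as a VLMAX AVL: it must equal
   the number of units, and a constant in [0, 31] is excluded because it
   fits vsetivli's 5-bit immediate and needs no scalar register.  */
static bool
is_vlmax_len_p (machine_mode mode, rtx len)
{
  poly_int64 value;
  if (!poly_int_rtx_p (len, &value)
      || !known_eq (value, GET_MODE_NUNITS (mode)))
    return false;
  return !(value.is_constant () && IN_RANGE (value.to_constant (), 0, 31));
}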
After this patch:
vsetivli zero,4,e32,m1,tu,ma
addi a4,a5,400
vlseg4e32.v v12,(a3)
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vlseg4e32.v v4,(a4)
vfadd.vf v2,v14,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
Tested on both RV32 and RV64 with no regressions.
Ok for trunk?
gcc/ChangeLog:
* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
(expand_load_store): Disallow transformation into VLMAX when len is in the range [0, 31].
(expand_cond_len_op): Ditto.
(expand_gather_scatter): Ditto.
(expand_lanes_load_store): Ditto.
(expand_fold_extract_last): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
Separate out the -fdump-* options into a new section. Sort by option name.
While there, document the -save-temps intermediates.
gcc/fortran/ChangeLog:
PR fortran/81615
* invoke.texi: Add Developer Options section. Move '-fdump-*'
to it. Add small examples about changed '-save-temps' behavior.
Signed-off-by: Rimvydas Jasinskas <rimvydas.jas@gmail.com>
The template linkage2.C and linkage3.C testcases expect a
decoration that does not match AIX assembler syntax. Expect failure.
gcc/testsuite/ChangeLog:
* g++.dg/template/linkage2.C: XFAIL on AIX.
* g++.dg/template/linkage3.C: Same.
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.
No functional changes.
gcc/ChangeLog:
* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
and ix86_expand_{unary|binary}_operator prototypes.
* config/i386/i386.md: Cosmetic changes with the usage of
TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
and ix86_{unary|binary}_operator_ok function calls.
The GCC internal doc says:
X might be a pseudo-register or a 'subreg' of a pseudo-register,
which could either be in a hard register or in memory. Use
'true_regnum' to find out; it will return -1 if the pseudo is in
memory and the hard register number if it is in a register.
So "MEM_P (x)" is not enough for checking if we are reloading from/to
the memory. This bug has caused reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.
Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.
gcc/ChangeLog:
PR target/113148
* config/loongarch/loongarch.cc (loongarch_secondary_reload):
Check if regno == -1 besides MEM_P (x) for reloading FCCmode
from/to FPR to/from memory.
gcc/testsuite/ChangeLog:
PR target/113148
* gcc.target/loongarch/pr113148.c: New test.
gcc/ChangeLog:
* config/loongarch/loongarch.md (rotl<mode>3):
New define_expand.
* config/loongarch/simd.md (vrotl<mode>3): Likewise.
(rotl<mode>3): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/rotl-with-rotr.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.
The following code causes an ICE on the LoongArch target:
#include <lsxintrin.h>
extern void bar (__m128i, __m128i);
__m128i a;
void
foo ()
{
bar (a, a);
}
It is caused by a missing constraint definition in mov<mode>_lsx. This
patch fixes the template and removes the unnecessary processing from the
loongarch_split_move () function.
This patch also cleans up the redundant definitions in
loongarch_split_move () and loongarch_split_move_p ().
gcc/ChangeLog:
* config/loongarch/lasx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
* config/loongarch/loongarch-protos.h
(loongarch_split_move): Remove unnecessary argument.
(loongarch_split_move_insn_p): Delete.
(loongarch_split_move_insn): Delete.
* config/loongarch/loongarch.cc
(loongarch_split_move_insn_p): Delete.
(loongarch_load_store_insns): Use loongarch_split_move_p
directly.
(loongarch_split_move): Remove the unnecessary processing.
(loongarch_split_move_insn): Delete.
* config/loongarch/lsx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.
When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the following
instruction block is generated by vec_concatv32qi (which is
generated by vec_initv32qiv16qi) at the entry of the foo() function:
vldx $vr3,$r5,$r6
vld $vr2,$r5,0
xvpermi.q $xr2,$xr3,0x20
This reverses the high and low 128-bit parts of the
vec_initv32qiv16qi operation.
According to other targets' similar implementations and the LSX implementation
of the following RTL representation, the current definition of
"vec_concat<mode>" in lasx.md is wrong:
(set (op0) (vec_concat (op1) (op2)))
For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20. This patch fixes the issue and cleans up the vec_concat
template implementation.
gcc/ChangeLog:
* config/loongarch/lasx.md (vec_concatv4di): Delete.
(vec_concatv8si): Delete.
(vec_concatv16hi): Delete.
(vec_concatv32qi): Delete.
(vec_concatv4df): Delete.
(vec_concatv8sf): Delete.
(vec_concat<mode>): New template with insn output fixed.
We found that using the latest compiled gcc causes a miscompare error
when running the spec2006 400.perlbench test with -flto turned on. After
testing, it was found that only the LoongArch architecture reports the error.
The first bad commit was located through the git bisect command as
r14-3773-g5b857e87201335. Through debugging, it was found that the problem
was that the split condition of the *bstrins_<mode>_for_ior_mask template was
empty, while it should actually be consistent with the insn condition.
gcc/ChangeLog:
* config/loongarch/loongarch.md: Adjust.
Remove the P7 CPU test, as only P7 and above can enter this function and P7 LE
is excluded by the check of targetm.slow_unaligned_access on word_mode.
Also, performance testing shows that the expansion of block compare is better
than the library call on P7 BE when the length is from 16 bytes to 64 bytes.
gcc/
* config/rs6000/rs6000-string.cc (expand_block_compare): Assert that
only P7 and above can enter this function. Remove the P7 CPU test and
let P7 BE do the expansion.
gcc/testsuite/
* gcc.target/powerpc/block-cmp-4.c: New.
Macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
to guard the platforms which are efficient on fixed-point unaligned
load/store. It was originally defined by TARGET_EFFICIENT_UNALIGNED_VSX,
which is enabled from P8 on and can be disabled by the -mno-vsx option. So the
definition is improper. This patch corrects it and calls
slow_unaligned_access to judge whether fixed-point unaligned load/store is
efficient or not.
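A rough sketch of the replacement test; the alignment argument here is a
placeholder:

/* Fixed-point unaligned (overlapping) accesses are usable iff the target
   hook does not report them as slow for the mode.  */
if (!targetm.slow_unaligned_access (word_mode, align))
  /* ... emit overlapping unaligned loads/stores ...  */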
gcc/
* config/rs6000/rs6000.h (TARGET_EFFICIENT_OVERLAPPING_UNALIGNED):
Remove.
* config/rs6000/rs6000-string.cc (select_block_compare_mode):
Replace TARGET_EFFICIENT_OVERLAPPING_UNALIGNED with
targetm.slow_unaligned_access.
(expand_block_compare_gpr): Likewise.
(expand_block_compare): Likewise.
(expand_strncmp_gpr_sequence): Likewise.
gcc/testsuite/
* gcc.target/powerpc/block-cmp-1.c: New.
* gcc.target/powerpc/block-cmp-2.c: New.
AIX sections use the csect directive to name a section. Check for
csect name in attr-section testcases.
gcc/testsuite/ChangeLog:
* g++.dg/ext/attr-section1.C: Test for csect section directive.
* g++.dg/ext/attr-section1a.C: Same.
* g++.dg/ext/attr-section2.C: Same.
* g++.dg/ext/attr-section2a.C: Same.
* g++.dg/ext/attr-section2b.C: Same.
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
The out-of-bounds diagram tests fail on AIX.
gcc/testsuite/ChangeLog:
* gcc.dg/analyzer/out-of-bounds-diagram-17.c: Skip on AIX.
* gcc.dg/analyzer/out-of-bounds-diagram-18.c: Same.
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
AIX does not support stack scrubbing. Set strub as unsupported on AIX
and ensure that testcases check for strub support.
gcc/testsuite/ChangeLog:
* c-c++-common/strub-unsupported-2.c: Require strub.
* c-c++-common/strub-unsupported-3.c: Same.
* c-c++-common/strub-unsupported.c: Same.
* lib/target-supports.exp (check_effective_target_strub): Return 0
for AIX.
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
The two testcases are for targets that support FMA, and
pr110279-2.c assumes a reassoc_width of 4 for FMUL.
This patch adds the missing options, to fix regression test failures
on nvptx/GCN (where the default reassoc_width for FMUL is 1) and x86_64
(which needs "-mfma").
gcc/testsuite/ChangeLog:
* gcc.dg/pr110279-1.c: Add "-mcpu=generic" for aarch64; add
"-mfma" for x86_64.
* gcc.dg/pr110279-2.c: Replace "-march=armv8.2-a" with
"-mcpu=generic"; limit the check to be on aarch64.
Add a dg-require-effective-target directive so that the test case runs only
on powerpc_pcrel targets.
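For reference, the directive has this shape in the test file:

/* { dg-require-effective-target powerpc_pcrel } */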
2023-12-26 Jeevitha Palanisamy <jeevitha@linux.ibm.com>
gcc/testsuite/
PR target/110320
* gcc.target/powerpc/pr110320-1.c: Add dg-require-effective-target powerpc_pcrel.
Currently, we compute RVV V_REGS liveness during better_main_loop_than_p, which
is not the appropriate time to do it: when, for example, the code will finally
pick the LMUL = 8 vectorization factor, we compute liveness for LMUL = 8
multiple times, which is redundant.
Since we leverage the current ARM SVE cost model:
/* Do one-time initialization based on the vinfo. */
loop_vec_info loop_vinfo = dyn_cast<loop_vec_info> (m_vinfo);
if (!m_analyzed_vinfo)
{
if (loop_vinfo)
analyze_loop_vinfo (loop_vinfo);
m_analyzed_vinfo = true;
}
the cost model analysis is done only once for each vinfo.
So we move the dynamic LMUL liveness computation into analyze_loop_vinfo:
/* Do one-time initialization of the costs given that we're
costing the loop vectorization described by LOOP_VINFO. */
void
costs::analyze_loop_vinfo (loop_vec_info loop_vinfo)
{
...
/* Detect whether the LOOP has unexpected spills. */
record_potential_unexpected_spills (loop_vinfo);
}
This way we avoid redundant computations, and the dynamic LMUL cost model flow
becomes more reasonable and consistent with the others.
Tested on RV32 and RV64 with no regressions.
gcc/ChangeLog:
* config/riscv/riscv-vector-costs.cc (compute_estimated_lmul): Allow
fractional vector.
(preferred_new_lmul_p): Move RVV V_REGS liveness computation into analyze_loop_vinfo.
(has_unexpected_spills_p): New function.
(costs::record_potential_unexpected_spills): Ditto.
(costs::better_main_loop_than_p): Move RVV V_REGS liveness computation into
analyze_loop_vinfo.
* config/riscv/riscv-vector-costs.h: New functions and variables.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul-mixed-1.c: Robustify test.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul1-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-10.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul4-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-10.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-11.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-2.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-3.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-4.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-5.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-6.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-7.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-8.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul8-9.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/no-dynamic-lmul-1.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/pr111848.c: Ditto.
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: Ditto.
When configured with --enable-checking=release we get a false
positive about the use of vec_stmts, as the compiler seems unable
to notice it gets initialized through the pass-by-reference.
This explicitly initializes the local.
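A minimal illustration of the warning pattern, not the actual vectorizer code:

/* The compiler cannot always see that init_through_ref writes *p before
   any use, so it may warn that x is used uninitialized.  */
static void init_through_ref (int *p) { *p = 42; }

int
f (void)
{
  int x = 0;	/* explicit initialization silences the false positive */
  init_through_ref (&x);
  return x;
}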
gcc/ChangeLog:
PR bootstrap/113132
* tree-vect-loop.cc (vect_create_epilog_for_reduction): Initialize vec_stmts.
Normally, GPR2 is the TOC pointer and is defined as a fixed and non-volatile
register. However, it can be used as volatile for PCREL addressing. Therefore,
modify r2 to be non-fixed in FIXED_REGISTERS, and set it to fixed if it is not
PCREL and also when the user explicitly requests TOC or fixed. If the register
r2 is fixed, it is made non-volatile. Changes in register preservation roles
can be accomplished with the help of the available target hook
(TARGET_CONDITIONAL_REGISTER_USAGE).
2023-12-24 Jeevitha Palanisamy <jeevitha@linux.ibm.com>
gcc/
PR target/110320
* config/rs6000/rs6000.cc (rs6000_conditional_register_usage): Change
GPR2 to volatile and non-fixed register for PCREL.
* config/rs6000/rs6000.h (FIXED_REGISTERS): Modify GPR2 to not fixed.
gcc/testsuite/
PR target/110320
* gcc.target/powerpc/pr110320-1.c: New testcase.
* gcc.target/powerpc/pr110320-2.c: New testcase.
* gcc.target/powerpc/pr110320-3.c: New testcase.
Co-authored-by: Peter Bergner <bergner@linux.ibm.com>
In the testcase provided, we would match f_plus but not g_plus
due to a missing `:c` on the plus operator. This fixes the oversight
there.
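A hypothetical reduction of the two cases; both should now fold to a + b:

int f_plus (int a, int b) { return (a != b) ? (a + b) : (2 * a); }
int g_plus (int a, int b) { return (a != b) ? (b + a) : (2 * a); }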
Note this was reported in https://github.com/llvm/llvm-project/issues/76318 .
Committed as obvious after bootstrap/test on x86_64-linux-gnu.
PR tree-optimization/19832
gcc/ChangeLog:
* match.pd (`(a != b) ? (a + b) : (2 * a)`): Add `:c`
on the plus operator.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/phi-opt-same-2.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
The following three tests now work correctly for targets that have an
implementation of cbranch for vectors, so the XFAILs are conditionally removed,
gated on vect_early_break support.
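A hypothetical shape of the adjusted directive; the dump name and count are
placeholders:

/* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" { xfail { ! vect_early_break } } } } */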
gcc/testsuite/ChangeLog:
* gcc.dg/vect/tsvc/vect-tsvc-s332.c: Remove xfail when early break
supported.
* gcc.dg/vect/tsvc/vect-tsvc-s481.c: Likewise.
* gcc.dg/vect/tsvc/vect-tsvc-s482.c: Likewise.
This adds new tests to check for all the early break functionality.
It includes a number of codegen and runtime tests checking the values at
different needles in the array.
They also check the values on different array sizes and peeling positions,
datatypes, VL, ncopies and every other variant I could think of.
Additionally it also contains reduced cases from issues found running over
various codebases.
Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
Also regtested with:
-march=armv8.3-a+sve
-march=armv8.3-a+nosve
-march=armv9-a
Bootstrapped and regtested on x86_64-pc-linux-gnu with no issues.
On the tests where I have disabled x86_64, it's because the target is missing
cbranch for all types. I think it should be possible to add them for the
missing types, since all we care about is whether a bit is set or not.
Bootstrap and regtest on arm-none-linux-gnueabihf are still running,
as is testing on arm-none-eabi with -march=armv8.1-m.main+mve -mfpu=auto.
gcc/ChangeLog:
* doc/sourcebuild.texi (check_effective_target_vect_early_break_hw,
check_effective_target_vect_early_break): Document.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp (add_options_for_vect_early_break,
check_effective_target_vect_early_break_hw,
check_effective_target_vect_early_break): New.
* g++.dg/vect/vect-early-break_1.cc: New test.
* g++.dg/vect/vect-early-break_2.cc: New test.
* g++.dg/vect/vect-early-break_3.cc: New test.
* gcc.dg/vect/vect-early-break-run_1.c: New test.
* gcc.dg/vect/vect-early-break-run_10.c: New test.
* gcc.dg/vect/vect-early-break-run_2.c: New test.
* gcc.dg/vect/vect-early-break-run_3.c: New test.
* gcc.dg/vect/vect-early-break-run_4.c: New test.
* gcc.dg/vect/vect-early-break-run_5.c: New test.
* gcc.dg/vect/vect-early-break-run_6.c: New test.
* gcc.dg/vect/vect-early-break-run_7.c: New test.
* gcc.dg/vect/vect-early-break-run_8.c: New test.
* gcc.dg/vect/vect-early-break-run_9.c: New test.
* gcc.dg/vect/vect-early-break-template_1.c: New test.
* gcc.dg/vect/vect-early-break-template_2.c: New test.
* gcc.dg/vect/vect-early-break_1.c: New test.
* gcc.dg/vect/vect-early-break_10.c: New test.
* gcc.dg/vect/vect-early-break_11.c: New test.
* gcc.dg/vect/vect-early-break_12.c: New test.
* gcc.dg/vect/vect-early-break_13.c: New test.
* gcc.dg/vect/vect-early-break_14.c: New test.
* gcc.dg/vect/vect-early-break_15.c: New test.
* gcc.dg/vect/vect-early-break_16.c: New test.
* gcc.dg/vect/vect-early-break_17.c: New test.
* gcc.dg/vect/vect-early-break_18.c: New test.
* gcc.dg/vect/vect-early-break_19.c: New test.
* gcc.dg/vect/vect-early-break_2.c: New test.
* gcc.dg/vect/vect-early-break_20.c: New test.
* gcc.dg/vect/vect-early-break_21.c: New test.
* gcc.dg/vect/vect-early-break_22.c: New test.
* gcc.dg/vect/vect-early-break_23.c: New test.
* gcc.dg/vect/vect-early-break_24.c: New test.
* gcc.dg/vect/vect-early-break_25.c: New test.
* gcc.dg/vect/vect-early-break_26.c: New test.
* gcc.dg/vect/vect-early-break_27.c: New test.
* gcc.dg/vect/vect-early-break_28.c: New test.
* gcc.dg/vect/vect-early-break_29.c: New test.
* gcc.dg/vect/vect-early-break_3.c: New test.
* gcc.dg/vect/vect-early-break_30.c: New test.
* gcc.dg/vect/vect-early-break_31.c: New test.
* gcc.dg/vect/vect-early-break_32.c: New test.
* gcc.dg/vect/vect-early-break_33.c: New test.
* gcc.dg/vect/vect-early-break_34.c: New test.
* gcc.dg/vect/vect-early-break_35.c: New test.
* gcc.dg/vect/vect-early-break_36.c: New test.
* gcc.dg/vect/vect-early-break_37.c: New test.
* gcc.dg/vect/vect-early-break_38.c: New test.
* gcc.dg/vect/vect-early-break_39.c: New test.
* gcc.dg/vect/vect-early-break_4.c: New test.
* gcc.dg/vect/vect-early-break_40.c: New test.
* gcc.dg/vect/vect-early-break_41.c: New test.
* gcc.dg/vect/vect-early-break_42.c: New test.
* gcc.dg/vect/vect-early-break_43.c: New test.
* gcc.dg/vect/vect-early-break_44.c: New test.
* gcc.dg/vect/vect-early-break_45.c: New test.
* gcc.dg/vect/vect-early-break_46.c: New test.
* gcc.dg/vect/vect-early-break_47.c: New test.
* gcc.dg/vect/vect-early-break_48.c: New test.
* gcc.dg/vect/vect-early-break_49.c: New test.
* gcc.dg/vect/vect-early-break_5.c: New test.
* gcc.dg/vect/vect-early-break_50.c: New test.
* gcc.dg/vect/vect-early-break_51.c: New test.
* gcc.dg/vect/vect-early-break_52.c: New test.
* gcc.dg/vect/vect-early-break_53.c: New test.
* gcc.dg/vect/vect-early-break_54.c: New test.
* gcc.dg/vect/vect-early-break_55.c: New test.
* gcc.dg/vect/vect-early-break_56.c: New test.
* gcc.dg/vect/vect-early-break_57.c: New test.
* gcc.dg/vect/vect-early-break_58.c: New test.
* gcc.dg/vect/vect-early-break_59.c: New test.
* gcc.dg/vect/vect-early-break_6.c: New test.
* gcc.dg/vect/vect-early-break_60.c: New test.
* gcc.dg/vect/vect-early-break_61.c: New test.
* gcc.dg/vect/vect-early-break_62.c: New test.
* gcc.dg/vect/vect-early-break_63.c: New test.
* gcc.dg/vect/vect-early-break_64.c: New test.
* gcc.dg/vect/vect-early-break_65.c: New test.
* gcc.dg/vect/vect-early-break_66.c: New test.
* gcc.dg/vect/vect-early-break_67.c: New test.
* gcc.dg/vect/vect-early-break_68.c: New test.
* gcc.dg/vect/vect-early-break_69.c: New test.
* gcc.dg/vect/vect-early-break_7.c: New test.
* gcc.dg/vect/vect-early-break_70.c: New test.
* gcc.dg/vect/vect-early-break_71.c: New test.
* gcc.dg/vect/vect-early-break_72.c: New test.
* gcc.dg/vect/vect-early-break_73.c: New test.
* gcc.dg/vect/vect-early-break_74.c: New test.
* gcc.dg/vect/vect-early-break_75.c: New test.
* gcc.dg/vect/vect-early-break_76.c: New test.
* gcc.dg/vect/vect-early-break_77.c: New test.
* gcc.dg/vect/vect-early-break_78.c: New test.
* gcc.dg/vect/vect-early-break_79.c: New test.
* gcc.dg/vect/vect-early-break_8.c: New test.
* gcc.dg/vect/vect-early-break_80.c: New test.
* gcc.dg/vect/vect-early-break_81.c: New test.
* gcc.dg/vect/vect-early-break_82.c: New test.
* gcc.dg/vect/vect-early-break_83.c: New test.
* gcc.dg/vect/vect-early-break_84.c: New test.
* gcc.dg/vect/vect-early-break_85.c: New test.
* gcc.dg/vect/vect-early-break_86.c: New test.
* gcc.dg/vect/vect-early-break_87.c: New test.
* gcc.dg/vect/vect-early-break_88.c: New test.
* gcc.dg/vect/vect-early-break_89.c: New test.
* gcc.dg/vect/vect-early-break_9.c: New test.
* gcc.dg/vect/vect-early-break_90.c: New test.
* gcc.dg/vect/vect-early-break_91.c: New test.
* gcc.dg/vect/vect-early-break_92.c: New test.
* gcc.dg/vect/vect-early-break_93.c: New test.
Hi All,
This adds an implementation of the conditional branch optab for AArch64.
For example:
void f1 ()
{
for (int i = 0; i < N; i++)
{
b[i] += a[i];
if (a[i] > 0)
break;
}
}
For 128-bit vectors we generate:
cmgt v1.4s, v1.4s, #0
umaxp v1.4s, v1.4s, v1.4s
fmov x3, d1
cbnz x3, .L8
and for 64-bit vectors we can omit the compression:
cmgt v1.2s, v1.2s, #0
fmov x2, d1
cbz x2, .L13
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (cbranch<mode>4): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/sve/vect-early-break-cbranch.c: New test.
* gcc.target/aarch64/vect-early-break-cbranch.c: New test.
Hi All,
This patch adds initial support for early break vectorization in GCC. In other
words it implements support for vectorization of loops with multiple exits.
The support is added for any target that implements a vector cbranch optab,
this includes both fully masked and non-masked targets.
Depending on the operation, the vectorizer may also require support for boolean
mask reductions using Inclusive OR/Bitwise AND. This is however only checked
when the comparison would produce multiple statements.
This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit. Before this patch the vectorizer always picked the
natural loop latch connected exit as the main exit.
After this patch the vectorizer is free to choose any exit it deems appropriate
as the main exit. This means that even if the main exit is not countable (i.e.
the termination condition could not be determined) we might still be able to
vectorize should one of the other exits be countable.
In such situations the loop is reflowed, which enables vectorization of many
other loop forms.
Concretely the kind of loops supported are of the forms:
for (int i = 0; i < N; i++)
{
<statements1>
if (<condition>)
{
...
<action>;
}
<statements2>
}
where <action> can be:
- break
- return
- goto
Any number of statements can be used before the <action> occurs.
Since this is an initial version for GCC 14 it has the following limitations and
features:
- Only fixed sized iterations and buffers are supported. That is to say any
vectors loaded or stored must be to statically allocated arrays with known
sizes. N must also be known. This limitation is because our primary target
for this optimization is SVE. For VLA SVE we can't easily do cross page
iteration checks. The result is likely to also not be beneficial. For that
reason we punt support for variable buffers till we have First-Faulting
support in GCC 15.
- any stores in <statements1> should not be to the same objects as in
<condition>. Loads are fine as long as they don't have the possibility to
alias. More concretely, we block RAW dependencies when the intermediate value
can't be separated from the store, or the store itself can't be moved.
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported.
- Any number of loop early exits are supported.
- No support for epilogue vectorization. The only epilogue supported is the
scalar final one. Peeling code supports it but the code motion code cannot
find instructions to make the move in the epilog.
- Early breaks are only supported for inner loop vectorization.
With the help of IPA and LTO this still gets hit quite often. During bootstrap
it hit rather frequently. Additionally TSVC s332, s481 and s482 all pass now
since these are tests for support for early exit vectorization.
This implementation does not support completely handling the early break inside
the vector loop itself but instead supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code to
actually do the final VF iterations which handles all the code in <action>.
For the scalar loop we know that whatever exit you take you have to perform at
most VF iterations. For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.
That is to say, the first vector loop executes so long as the early exit isn't
needed. Once the exit is taken, the scalar code will perform at most VF extra
iterations. The exact number depends on peeling, the iteration start, and which
exit was taken (natural or early). For this scalar loop, all early exits are
treated the same.
When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break. If this is not possible we decline to vectorize. The
analysis and code motion also take into account that they don't introduce a RAW
dependency after the move of the stores.
This means that we check at the start of iterations whether we are going to exit
or not. During the analysis phase we check whether we are allowed to do this
movement of statements. Also note that we move only the scalar statements, and
only do so after peeling but just before we start transforming statements.
With this the vector flow no longer necessarily needs to match that of the
scalar code. In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.
Codegen, for example:
unsigned vect_a[N];
unsigned vect_b[N];
unsigned test4(unsigned x)
{
unsigned ret = 0;
for (int i = 0; i < N; i++)
{
vect_b[i] = x + i;
if (vect_a[i] > x)
break;
vect_a[i] = x;
}
return ret;
}
We generate for Adv. SIMD:
test4:
adrp x2, .LC0
adrp x3, .LANCHOR0
dup v2.4s, w0
add x3, x3, :lo12:.LANCHOR0
movi v4.4s, 0x4
add x4, x3, 3216
ldr q1, [x2, #:lo12:.LC0]
mov x1, 0
mov w2, 0
.p2align 3,,7
.L3:
ldr q0, [x3, x1]
add v3.4s, v1.4s, v2.4s
add v1.4s, v1.4s, v4.4s
cmhi v0.4s, v0.4s, v2.4s
umaxp v0.4s, v0.4s, v0.4s
fmov x5, d0
cbnz x5, .L6
add w2, w2, 1
str q3, [x1, x4]
str q2, [x3, x1]
add x1, x1, 16
cmp w2, 200
bne .L3
mov w7, 3
.L2:
lsl w2, w2, 2
add x5, x3, 3216
add w6, w2, w0
sxtw x4, w2
ldr w1, [x3, x4, lsl 2]
str w6, [x5, x4, lsl 2]
cmp w0, w1
bcc .L4
add w1, w2, 1
str w0, [x3, x4, lsl 2]
add w6, w1, w0
sxtw x1, w1
ldr w4, [x3, x1, lsl 2]
str w6, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
add w4, w2, 2
str w0, [x3, x1, lsl 2]
sxtw x1, w4
add w6, w1, w0
ldr w4, [x3, x1, lsl 2]
str w6, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
str w0, [x3, x1, lsl 2]
add w2, w2, 3
cmp w7, 3
beq .L4
sxtw x1, w2
add w2, w2, w0
ldr w4, [x3, x1, lsl 2]
str w2, [x5, x1, lsl 2]
cmp w0, w4
bcc .L4
str w0, [x3, x1, lsl 2]
.L4:
mov w0, 0
ret
.p2align 2,,3
.L6:
mov w7, 4
b .L2
and for SVE:
test4:
adrp x2, .LANCHOR0
add x2, x2, :lo12:.LANCHOR0
add x5, x2, 3216
mov x3, 0
mov w1, 0
cntw x4
mov z1.s, w0
index z0.s, #0, #1
ptrue p1.b, all
ptrue p0.s, all
.p2align 3,,7
.L3:
ld1w z2.s, p1/z, [x2, x3, lsl 2]
add z3.s, z0.s, z1.s
cmplo p2.s, p0/z, z1.s, z2.s
b.any .L2
st1w z3.s, p1, [x5, x3, lsl 2]
add w1, w1, 1
st1w z1.s, p1, [x2, x3, lsl 2]
add x3, x3, x4
incw z0.s
cmp w3, 803
bls .L3
.L5:
mov w0, 0
ret
.p2align 2,,3
.L2:
cntw x5
mul w1, w1, w5
cbz w5, .L5
sxtw x1, w1
sub w5, w5, #1
add x5, x5, x1
add x6, x2, 3216
b .L6
.p2align 2,,3
.L14:
str w0, [x2, x1, lsl 2]
cmp x1, x5
beq .L5
mov x1, x4
.L6:
ldr w3, [x2, x1, lsl 2]
add w4, w0, w1
str w4, [x6, x1, lsl 2]
add x4, x1, 1
cmp w0, w3
bcs .L14
mov w0, 0
ret
On the workloads this work is based on, we see a 2-3x performance uplift
using this patch.
Follow up plan:
- Boolean vectorization has several shortcomings. I've filed PR110223 with the
bigger ones that cause vectorization to fail with this patch.
- SLP support. This is planned for GCC 15, as for the majority of the cases
building SLP itself fails. This means I'll need to spend time making this more
robust first. Additionally it requires:
* Adding support for vectorizing CFG (gconds)
* Support for CFG to differ between vector and scalar loops.
Both of which would be disruptive to the tree, and I suspect I'll be handling
fallout from this patch for a while. So I plan to work on the surrounding
building blocks first for the remainder of the year.
Additionally it also contains reduced cases from issues found running over
various codebases.
Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
Also regtested with:
-march=armv8.3-a+sve
-march=armv8.3-a+nosve
-march=armv9-a
-mcpu=neoverse-v1
-mcpu=neoverse-n2
Bootstrapped and regtested on x86_64-pc-linux-gnu with no issues.
Bootstrapped and regtested on arm-none-linux-gnueabihf with no issues.
gcc/ChangeLog:
* tree-if-conv.cc (idx_within_array_bound): Expose.
* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
(vect_analyze_data_ref_dependences): Use it.
* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
(vect_set_loop_controls_directly,
vect_set_loop_condition_partial_vectors,
vect_set_loop_condition_partial_vectors_avx512,
vect_set_loop_condition_normal): Support multiple exits.
(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
multiple exits.
(slpeel_can_duplicate_loop_p): Change the vectorizer from looking at BB
count to looking at loop shape.
(vect_update_ivs_after_vectorizer): Drop asserts.
(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
(vect_do_peeling): Support multiple exits.
(vect_loop_versioning): Likewise.
* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
early_breaks.
(vect_analyze_loop_form): Support loop flows with more than single BB
loop body.
(vect_create_loop_vinfo): Support niters analysis for multiple exits.
(vect_analyze_loop): Likewise.
(vect_get_vect_def): New.
(vect_create_epilog_for_reduction): Support early exit reductions.
(vectorizable_live_operation_1): New.
(find_connected_edge): New.
(vectorizable_live_operation): Support early exit live operations.
(move_early_exit_stmts): New.
(vect_transform_loop): Use it.
* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
(vect_recog_bitfield_ref_pattern): Support gconds and bools.
(vect_recog_gcond_pattern): New.
(possible_vector_mask_operation_p): Support gcond masks.
(vect_determine_mask_precision): Likewise.
(vect_mark_pattern_stmts): Set gcond def type.
(can_vectorize_live_stmts): Force early break inductions to be live.
* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
early breaks.
(vect_mark_stmts_to_be_vectorized): Process gcond usage.
(perm_mask_for_reverse): Expose.
(vectorizable_comparison_1): New.
(vectorizable_early_exit): New.
(vect_analyze_stmt): Support early break and gcond.
(vect_transform_stmt): Likewise.
(vect_is_simple_use): Likewise.
(vect_get_vector_types_for_stmt): Likewise.
* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
numbering.
* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
LOOP_VINFO_EARLY_BRK_VUSES): New.
(is_loop_header_bb_p): Drop assert.
(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
early_break_vuses.
(vect_iv_increment_position, perm_mask_for_reverse,
ref_within_array_bound): New.
(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
LIM notices that in some cases the condition and the results are loop
invariant and tries to move them out of the loop.
While the resulting code is operationally sound, moving the compare out of the
gcond results in generating code that no longer branches, so cbranch is no
longer applicable. As such I now add a check during this motion to see
if the target supports flag-setting vector comparisons as a general operation.
I have tried writing a GIMPLE testcase for this but the gimple FE seems to be
having some trouble with the vector types. It seems to fail parsing.
The early break code testsuite however has a test for this
(vect-early-break_67.c).
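A rough sketch of the added check; expand_vec_cmp_expr_p and truth_type_for
come from optabs-tree.h and the tree API, but the exact guard in the patch may
differ:

/* Before moving a vector comparison out of a gcond, make sure the target
   can compute it as a normal value-producing operation; otherwise keep it
   in place so cbranch still applies.  */
tree op0 = TREE_OPERAND (cond, 0);
tree vectype = TREE_TYPE (op0);
if (VECTOR_TYPE_P (vectype)
    && !expand_vec_cmp_expr_p (vectype, truth_type_for (vectype),
			       TREE_CODE (cond)))
  return false;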
gcc/ChangeLog:
* tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h
and optabs-tree.h and check for vector compare motion out of gcond.
This patch XFAILs the signbit-5 run test for RVV, given the case has a
limitation noted at the beginning of the test file: "This test does not
work when the truth type does not match vector type." That is, the RVV
vector truth type is not an integer type.
A riscv-sim target board like the one below will pick up `-march=rv64gcv`
when building the run-test ELF. Thus, RVV cannot bypass this test case the
way aarch64_sve does with the additional option `-march=armv8-a`.
riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow
For RVV, we leverage dg-xfail-run-if for this case, as `amdgcn` does.
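A hypothetical shape of the directive; the comment text is a placeholder:

/* { dg-xfail-run-if "truth type does not match vector type" { riscv_v } } */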
The signbit-5.c test passed with the configurations below, but the failures
in other configurations need further investigation.
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64imafdcv/-mabi=lp64d/-mcmodel=medlow
gcc/testsuite/ChangeLog:
* gcc.dg/signbit-5.c: XFAIL for riscv_v.
Signed-off-by: Pan Li <pan2.li@intel.com>
TL;DR: the "dse1" pass removed the eh-return-address store. The
PA also marks its EH_RETURN_HANDLER_RTX as volatile, for the same
reason, as does visum. See PR32769 - it's the same thing on PA.
Conceptually, it's logical that stores to incoming args are
optimized out on the return path or if no loads are seen -
at least before epilogue expansion, when the subsequent load
isn't seen in the RTL, as is the case for the "dse1" pass.
I haven't looked into why this problem, which appeared for the PA
already in 2007, was seen for CRIS only recently (with
r14-6674-g4759383245ac97).
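A minimal sketch of the new function; the MEM construction below is
illustrative rather than the exact CRIS code:

/* Return the EH return-handler slot as a volatile MEM so that DSE does
   not remove the store to it, as the PA and visium ports do.  */
rtx
cris_eh_return_handler_rtx (void)
{
  rtx mem = gen_frame_mem (Pmode,
			   plus_constant (Pmode, arg_pointer_rtx, -4));
  MEM_VOLATILE_P (mem) = true;
  return mem;
}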
PR middle-end/113109
* config/cris/cris.cc (cris_eh_return_handler_rtx): New function.
* config/cris/cris-protos.h (cris_eh_return_handler_rtx): Prototype.
* config/cris/cris.h (EH_RETURN_HANDLER_RTX): Redefine to call
cris_eh_return_handler_rtx.
We used a branch to load floating-point comparison results into a GPR.
This is very slow when the branch is not predictable.
Implement movfcc so we can reload FCCmode into GPRs, FPRs, and MEM.
Then implement cstore<ANYF:mode>4.
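A motivating example, hypothetical but representative:

/* Materializing the comparison result in a GPR used to need a branch
   over the FCC register; with cstore<ANYF:mode>4 it no longer does.  */
int
fp_less (double a, double b)
{
  return a < b;
}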
gcc/ChangeLog:
* config/loongarch/loongarch-tune.h
(loongarch_rtx_cost_data::movcf2gr): New field.
(loongarch_rtx_cost_data::movcf2gr_): New method.
(loongarch_rtx_cost_data::use_movcf2gr): New method.
* config/loongarch/loongarch-def.cc
(loongarch_rtx_cost_data::loongarch_rtx_cost_data): Set movcf2gr
to COSTS_N_INSNS (7) and movgr2cf to COSTS_N_INSNS (15), based
on timing on LA464.
(loongarch_cpu_rtx_cost_data): Set movcf2gr and movgr2cf to
COSTS_N_INSNS (1) for LA664.
(loongarch_rtx_cost_optimize_size): Set movcf2gr and movgr2cf to
COSTS_N_INSNS (1) + 1.
* config/loongarch/predicates.md (loongarch_fcmp_operator): New
predicate.
* config/loongarch/loongarch.md (movfcc): Change to
define_expand.
(movfcc_internal): New define_insn.
(fcc_to_<X:mode>): New define_insn.
(cstore<ANYF:mode>4): New define_expand.
* config/loongarch/loongarch.cc
(loongarch_hard_regno_mode_ok_uncached): Allow FCCmode in GPRs
and FPRs.
(loongarch_secondary_reload): Reload FCCmode via FPR and/or GPR.
(loongarch_emit_float_compare): Call gen_reg_rtx instead of
loongarch_allocate_fcc.
(loongarch_allocate_fcc): Remove.
(loongarch_move_to_gpr_cost): Handle FCC_REGS -> GR_REGS.
(loongarch_move_from_gpr_cost): Handle GR_REGS -> FCC_REGS.
(loongarch_register_move_cost): Handle FCC_REGS -> FCC_REGS,
FCC_REGS -> FP_REGS, and FP_REGS -> FCC_REGS.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/movcf2gr.c: New test.
* gcc.target/loongarch/movcf2gr-via-fr.c: New test.
For now, for single-threaded GCN, nvptx target use only; extension for
multi-threaded offloading use is to follow later. Eventually switch to
libstdc++-v3/libsupc++ proper.
libgcc/
* c++-minimal/README: New.
* c++-minimal/guard.c: New.
* config/gcn/t-amdgcn (LIB2ADD): Add it.
* config/nvptx/t-nvptx (LIB2ADD): Likewise.
Users may wish to use -mtune=native for performance tuning only.
Let's not make trouble for that case.
gcc/
* config/mips/driver-native.cc (host_detect_local_cpu):
Don't add the nan2008 option for -mtune=native.
The function `reconcat` cannot append string(s) to NULL,
as the concat process stops at the first NULL.
Let's always put `ret` at the end, as it may be NULL.
We keep using reconcat here because it makes it easier
to add more hardware feature detection later, for example
via hwcap.
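An illustration of the pitfall; the option string is a stand-in, not the
exact driver code:

/* concat/reconcat stop at the first NULL argument, so a NULL ret in the
   middle silently drops everything after it.  */
ret = reconcat (ret, ret, "-mnan=2008 ", NULL);  /* empty if ret == NULL */
ret = reconcat (ret, "-mnan=2008 ", ret, NULL);  /* safe: ret comes last */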
gcc/
PR target/112759
* config/mips/driver-native.cc (host_detect_local_cpu):
Put ret at the end of the args of reconcat.