Commit graph

Tamar Christina
c5232ec149 testsuite: Add tests for early break vectorization
This adds new tests to check all the early break functionality.
It includes a number of codegen and runtime tests checking the values at
different needles in the array.

They also check the values for different array sizes and peeling positions,
data types, VL, ncopies and every other variant I could think of.
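
A rough sketch of the kind of loop these runtime tests exercise (the array
sizes, element types and needle positions vary per test; the names here are
purely illustrative and not taken from the testsuite):

#include <stdlib.h>

#define N 803
unsigned a[N];

/* Return the index of the first element above the needle, or N if none is
   found.  The early exit makes this a multiple-exit loop for the
   vectorizer.  */
__attribute__ ((noipa))
int find_first_above (unsigned needle)
{
  for (int i = 0; i < N; i++)
    if (a[i] > needle)
      return i;
  return N;
}

int main (void)
{
  for (int i = 0; i < N; i++)
    a[i] = i;
  /* Needle in the middle of the array; a[401] is the first value
     above 400.  */
  if (find_first_above (400) != 401)
    abort ();
  return 0;
}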

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.

The tests where I have disabled x86_64 are disabled because the target is
missing cbranch for all types.  I think it should be possible to add them for
the missing types, since all we care about is whether a bit is set or not.

Bootstrap and regtest on arm-none-linux-gnueabihf are still running, as is a
test run on arm-none-eabi with -march=armv8.1-m.main+mve -mfpu=auto.

gcc/ChangeLog:

	* doc/sourcebuild.texi (check_effective_target_vect_early_break_hw,
	check_effective_target_vect_early_break): Document.

gcc/testsuite/ChangeLog:

	* lib/target-supports.exp (add_options_for_vect_early_break,
	check_effective_target_vect_early_break_hw,
	check_effective_target_vect_early_break): New.
	* g++.dg/vect/vect-early-break_1.cc: New test.
	* g++.dg/vect/vect-early-break_2.cc: New test.
	* g++.dg/vect/vect-early-break_3.cc: New test.
	* gcc.dg/vect/vect-early-break-run_1.c: New test.
	* gcc.dg/vect/vect-early-break-run_10.c: New test.
	* gcc.dg/vect/vect-early-break-run_2.c: New test.
	* gcc.dg/vect/vect-early-break-run_3.c: New test.
	* gcc.dg/vect/vect-early-break-run_4.c: New test.
	* gcc.dg/vect/vect-early-break-run_5.c: New test.
	* gcc.dg/vect/vect-early-break-run_6.c: New test.
	* gcc.dg/vect/vect-early-break-run_7.c: New test.
	* gcc.dg/vect/vect-early-break-run_8.c: New test.
	* gcc.dg/vect/vect-early-break-run_9.c: New test.
	* gcc.dg/vect/vect-early-break-template_1.c: New test.
	* gcc.dg/vect/vect-early-break-template_2.c: New test.
	* gcc.dg/vect/vect-early-break_1.c: New test.
	* gcc.dg/vect/vect-early-break_10.c: New test.
	* gcc.dg/vect/vect-early-break_11.c: New test.
	* gcc.dg/vect/vect-early-break_12.c: New test.
	* gcc.dg/vect/vect-early-break_13.c: New test.
	* gcc.dg/vect/vect-early-break_14.c: New test.
	* gcc.dg/vect/vect-early-break_15.c: New test.
	* gcc.dg/vect/vect-early-break_16.c: New test.
	* gcc.dg/vect/vect-early-break_17.c: New test.
	* gcc.dg/vect/vect-early-break_18.c: New test.
	* gcc.dg/vect/vect-early-break_19.c: New test.
	* gcc.dg/vect/vect-early-break_2.c: New test.
	* gcc.dg/vect/vect-early-break_20.c: New test.
	* gcc.dg/vect/vect-early-break_21.c: New test.
	* gcc.dg/vect/vect-early-break_22.c: New test.
	* gcc.dg/vect/vect-early-break_23.c: New test.
	* gcc.dg/vect/vect-early-break_24.c: New test.
	* gcc.dg/vect/vect-early-break_25.c: New test.
	* gcc.dg/vect/vect-early-break_26.c: New test.
	* gcc.dg/vect/vect-early-break_27.c: New test.
	* gcc.dg/vect/vect-early-break_28.c: New test.
	* gcc.dg/vect/vect-early-break_29.c: New test.
	* gcc.dg/vect/vect-early-break_3.c: New test.
	* gcc.dg/vect/vect-early-break_30.c: New test.
	* gcc.dg/vect/vect-early-break_31.c: New test.
	* gcc.dg/vect/vect-early-break_32.c: New test.
	* gcc.dg/vect/vect-early-break_33.c: New test.
	* gcc.dg/vect/vect-early-break_34.c: New test.
	* gcc.dg/vect/vect-early-break_35.c: New test.
	* gcc.dg/vect/vect-early-break_36.c: New test.
	* gcc.dg/vect/vect-early-break_37.c: New test.
	* gcc.dg/vect/vect-early-break_38.c: New test.
	* gcc.dg/vect/vect-early-break_39.c: New test.
	* gcc.dg/vect/vect-early-break_4.c: New test.
	* gcc.dg/vect/vect-early-break_40.c: New test.
	* gcc.dg/vect/vect-early-break_41.c: New test.
	* gcc.dg/vect/vect-early-break_42.c: New test.
	* gcc.dg/vect/vect-early-break_43.c: New test.
	* gcc.dg/vect/vect-early-break_44.c: New test.
	* gcc.dg/vect/vect-early-break_45.c: New test.
	* gcc.dg/vect/vect-early-break_46.c: New test.
	* gcc.dg/vect/vect-early-break_47.c: New test.
	* gcc.dg/vect/vect-early-break_48.c: New test.
	* gcc.dg/vect/vect-early-break_49.c: New test.
	* gcc.dg/vect/vect-early-break_5.c: New test.
	* gcc.dg/vect/vect-early-break_50.c: New test.
	* gcc.dg/vect/vect-early-break_51.c: New test.
	* gcc.dg/vect/vect-early-break_52.c: New test.
	* gcc.dg/vect/vect-early-break_53.c: New test.
	* gcc.dg/vect/vect-early-break_54.c: New test.
	* gcc.dg/vect/vect-early-break_55.c: New test.
	* gcc.dg/vect/vect-early-break_56.c: New test.
	* gcc.dg/vect/vect-early-break_57.c: New test.
	* gcc.dg/vect/vect-early-break_58.c: New test.
	* gcc.dg/vect/vect-early-break_59.c: New test.
	* gcc.dg/vect/vect-early-break_6.c: New test.
	* gcc.dg/vect/vect-early-break_60.c: New test.
	* gcc.dg/vect/vect-early-break_61.c: New test.
	* gcc.dg/vect/vect-early-break_62.c: New test.
	* gcc.dg/vect/vect-early-break_63.c: New test.
	* gcc.dg/vect/vect-early-break_64.c: New test.
	* gcc.dg/vect/vect-early-break_65.c: New test.
	* gcc.dg/vect/vect-early-break_66.c: New test.
	* gcc.dg/vect/vect-early-break_67.c: New test.
	* gcc.dg/vect/vect-early-break_68.c: New test.
	* gcc.dg/vect/vect-early-break_69.c: New test.
	* gcc.dg/vect/vect-early-break_7.c: New test.
	* gcc.dg/vect/vect-early-break_70.c: New test.
	* gcc.dg/vect/vect-early-break_71.c: New test.
	* gcc.dg/vect/vect-early-break_72.c: New test.
	* gcc.dg/vect/vect-early-break_73.c: New test.
	* gcc.dg/vect/vect-early-break_74.c: New test.
	* gcc.dg/vect/vect-early-break_75.c: New test.
	* gcc.dg/vect/vect-early-break_76.c: New test.
	* gcc.dg/vect/vect-early-break_77.c: New test.
	* gcc.dg/vect/vect-early-break_78.c: New test.
	* gcc.dg/vect/vect-early-break_79.c: New test.
	* gcc.dg/vect/vect-early-break_8.c: New test.
	* gcc.dg/vect/vect-early-break_80.c: New test.
	* gcc.dg/vect/vect-early-break_81.c: New test.
	* gcc.dg/vect/vect-early-break_82.c: New test.
	* gcc.dg/vect/vect-early-break_83.c: New test.
	* gcc.dg/vect/vect-early-break_84.c: New test.
	* gcc.dg/vect/vect-early-break_85.c: New test.
	* gcc.dg/vect/vect-early-break_86.c: New test.
	* gcc.dg/vect/vect-early-break_87.c: New test.
	* gcc.dg/vect/vect-early-break_88.c: New test.
	* gcc.dg/vect/vect-early-break_89.c: New test.
	* gcc.dg/vect/vect-early-break_9.c: New test.
	* gcc.dg/vect/vect-early-break_90.c: New test.
	* gcc.dg/vect/vect-early-break_91.c: New test.
	* gcc.dg/vect/vect-early-break_92.c: New test.
	* gcc.dg/vect/vect-early-break_93.c: New test.
2023-12-24 19:30:09 +00:00
Tamar Christina
1bcc07aeb4 AArch64: Add implementation for vector cbranch for Advanced SIMD
This adds an implementation for conditional branch optab for AArch64.

For example:

void f1 ()
{
  for (int i = 0; i < N; i++)
    {
      b[i] += a[i];
      if (a[i] > 0)
	break;
    }
}

For 128-bit vectors we generate:

        cmgt    v1.4s, v1.4s, #0
        umaxp   v1.4s, v1.4s, v1.4s
        fmov    x3, d1
        cbnz    x3, .L8

and for 64-bit vectors we can omit the compression:

        cmgt    v1.2s, v1.2s, #0
        fmov    x2, d1
        cbz     x2, .L13

gcc/ChangeLog:

	* config/aarch64/aarch64-simd.md (cbranch<mode>4): New.

gcc/testsuite/ChangeLog:

	* gcc.target/aarch64/sve/vect-early-break-cbranch.c: New test.
	* gcc.target/aarch64/vect-early-break-cbranch.c: New test.
2023-12-24 19:30:09 +00:00
Tamar Christina
01f4251b87 middle-end: Support vectorization of loops with multiple exits.
This patch adds initial support for early break vectorization in GCC. In other
words it implements support for vectorization of loops with multiple exits.
The support is added for any target that implements a vector cbranch optab,
this includes both fully masked and non-masked targets.

Depending on the operation, the vectorizer may also require support for boolean
mask reductions using Inclusive OR/Bitwise AND.  This is however only checked
when the comparison would produce multiple statements.

This also fully decouples the vectorizer's notion of exit from the existing loop
infrastructure's exit.  Before this patch the vectorizer always picked the
natural loop latch connected exit as the main exit.

After this patch the vectorizer is free to choose any exit it deems appropriate
as the main exit.  This means that even if the main exit is not countable (i.e.
the termination condition could not be determined) we might still be able to
vectorize should one of the other exits be countable.

In such situations the loop is reflowed, which enables vectorization of many
other loop forms.

Concretely the kind of loops supported are of the forms:

 for (int i = 0; i < N; i++)
 {
   <statements1>
   if (<condition>)
     {
       ...
       <action>;
     }
   <statements2>
 }

where <action> can be:
 - break
 - return
 - goto

Any number of statements can be used before the <action> occurs.

Since this is an initial version for GCC 14 it has the following limitations and
features:

- Only fixed-size iterations and buffers are supported.  That is to say any
  vectors loaded or stored must be to statically allocated arrays with known
  sizes.  N must also be known.  This limitation is because our primary target
  for this optimization is SVE.  For VLA SVE we can't easily do cross-page
  iteration checks, and the result is also likely not to be beneficial.  For
  that reason we punt support for variable buffers until we have First-Faulting
  support in GCC 15.
- Any stores in <statements1> should not be to the same objects as in
  <condition>.  Loads are fine as long as they don't have the possibility to
  alias.  More concretely, we block RAW dependencies when the intermediate value
  can't be separated from the store, or the store itself can't be moved (see
  the sketch after this list).
- Prologue peeling, alignment peeling and loop versioning are supported.
- Fully masked loops, unmasked loops and partially masked loops are supported.
- Any number of loop early exits are supported.
- No support for epilogue vectorization.  The only epilogue supported is the
  scalar final one.  Peeling code supports it but the code motion code cannot
  find instructions to make the move in the epilogue.
- Early breaks are only supported for inner loop vectorization.
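
A hedged sketch of the store restriction above (illustrative only; whether a
particular loop is actually vectorized also depends on alias analysis and on
what earlier passes do to it):

#define N 1024
unsigned a[N], b[N];

/* Vectorizable: the store before the break is to b[], distinct from the a[]
   read by the condition, so it can be moved to after the exit check (this
   mirrors the test4 example further down).  */
int f_ok (unsigned x)
{
  for (int i = 0; i < N; i++)
    {
      b[i] = x + i;
      if (a[i] > x)
        return i;
    }
  return -1;
}

/* Likely rejected: iteration i stores a[i + 1] and iteration i + 1 reads it
   in the condition, a RAW dependency that would be broken by moving the
   store past the exit check.  */
int f_raw (unsigned x)
{
  for (int i = 0; i < N - 1; i++)
    {
      a[i + 1] = x + i;
      if (a[i] > x)
        return i;
    }
  return -1;
}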

With the help of IPA and LTO this still gets hit quite often.  During bootstrap
it hit rather frequently.  Additionally TSVC s332, s481 and s482 all pass now,
since these are tests for early exit vectorization support.

This implementation does not support completely handling the early break inside
the vector loop itself.  Instead it supports adding checks such that if we know
that we have to exit in the current iteration then we branch to scalar code,
which actually does the final VF iterations and handles all the code in <action>.

For the scalar loop we know that whichever exit is taken, at most VF iterations
have to be performed.  For vector code we only care about the state of fully
performed iterations and reset the scalar code to the (partially) remaining loop.

That is to say, the first vector loop executes so long as the early exit isn't
needed.  Once the exit is taken, the scalar code will perform at most VF extra
iterations.  The exact number depends on peeling, the iteration start and which
exit was taken (natural or early).   For this scalar loop, all early exits are
treated the same.

When we vectorize we move any statement not related to the early break itself
and that would be incorrect to execute before the break (i.e. has side effects)
to after the break.  If this is not possible we decline to vectorize.  The
analysis and code motion also takes into account that it doesn't introduce a RAW
dependency after the move of the stores.

This means that we check at the start of iterations whether we are going to exit
or not.  During the analysis phase we check whether we are allowed to do this
moving of statements.  Also note that we only move the scalar statements, and
we do so after peeling but just before we start transforming statements.

With this the vector flow no longer necessarily needs to match that of the
scalar code.  In addition most of the infrastructure is in place to support
general control flow safely, however we are punting this to GCC 15.

Codegen:

For example:

unsigned vect_a[N];
unsigned vect_b[N];

unsigned test4(unsigned x)
{
 unsigned ret = 0;
 for (int i = 0; i < N; i++)
 {
   vect_b[i] = x + i;
   if (vect_a[i] > x)
     break;
   vect_a[i] = x;

 }
 return ret;
}

We generate for Adv. SIMD:

test4:
        adrp    x2, .LC0
        adrp    x3, .LANCHOR0
        dup     v2.4s, w0
        add     x3, x3, :lo12:.LANCHOR0
        movi    v4.4s, 0x4
        add     x4, x3, 3216
        ldr     q1, [x2, #:lo12:.LC0]
        mov     x1, 0
        mov     w2, 0
        .p2align 3,,7
.L3:
        ldr     q0, [x3, x1]
        add     v3.4s, v1.4s, v2.4s
        add     v1.4s, v1.4s, v4.4s
        cmhi    v0.4s, v0.4s, v2.4s
        umaxp   v0.4s, v0.4s, v0.4s
        fmov    x5, d0
        cbnz    x5, .L6
        add     w2, w2, 1
        str     q3, [x1, x4]
        str     q2, [x3, x1]
        add     x1, x1, 16
        cmp     w2, 200
        bne     .L3
        mov     w7, 3
.L2:
        lsl     w2, w2, 2
        add     x5, x3, 3216
        add     w6, w2, w0
        sxtw    x4, w2
        ldr     w1, [x3, x4, lsl 2]
        str     w6, [x5, x4, lsl 2]
        cmp     w0, w1
        bcc     .L4
        add     w1, w2, 1
        str     w0, [x3, x4, lsl 2]
        add     w6, w1, w0
        sxtw    x1, w1
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        add     w4, w2, 2
        str     w0, [x3, x1, lsl 2]
        sxtw    x1, w4
        add     w6, w1, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w6, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
        add     w2, w2, 3
        cmp     w7, 3
        beq     .L4
        sxtw    x1, w2
        add     w2, w2, w0
        ldr     w4, [x3, x1, lsl 2]
        str     w2, [x5, x1, lsl 2]
        cmp     w0, w4
        bcc     .L4
        str     w0, [x3, x1, lsl 2]
.L4:
        mov     w0, 0
        ret
        .p2align 2,,3
.L6:
        mov     w7, 4
        b       .L2

and for SVE:

test4:
        adrp    x2, .LANCHOR0
        add     x2, x2, :lo12:.LANCHOR0
        add     x5, x2, 3216
        mov     x3, 0
        mov     w1, 0
        cntw    x4
        mov     z1.s, w0
        index   z0.s, #0, #1
        ptrue   p1.b, all
        ptrue   p0.s, all
        .p2align 3,,7
.L3:
        ld1w    z2.s, p1/z, [x2, x3, lsl 2]
        add     z3.s, z0.s, z1.s
        cmplo   p2.s, p0/z, z1.s, z2.s
        b.any   .L2
        st1w    z3.s, p1, [x5, x3, lsl 2]
        add     w1, w1, 1
        st1w    z1.s, p1, [x2, x3, lsl 2]
        add     x3, x3, x4
        incw    z0.s
        cmp     w3, 803
        bls     .L3
.L5:
        mov     w0, 0
        ret
        .p2align 2,,3
.L2:
        cntw    x5
        mul     w1, w1, w5
        cbz     w5, .L5
        sxtw    x1, w1
        sub     w5, w5, #1
        add     x5, x5, x1
        add     x6, x2, 3216
        b       .L6
        .p2align 2,,3
.L14:
        str     w0, [x2, x1, lsl 2]
        cmp     x1, x5
        beq     .L5
        mov     x1, x4
.L6:
        ldr     w3, [x2, x1, lsl 2]
        add     w4, w0, w1
        str     w4, [x6, x1, lsl 2]
        add     x4, x1, 1
        cmp     w0, w3
        bcs     .L14
        mov     w0, 0
        ret

On the workloads this work is based on we see between 2-3x performance uplift
using this patch.

Follow up plan:
 - Boolean vectorization has several shortcomings.  I've filed PR110223 with the
   bigger ones that cause vectorization to fail with this patch.
 - SLP support.  This is planned for GCC 15 as for the majority of cases building
   SLP itself fails.  This means I'll need to spend time making this more
   robust first.  Additionally it requires:
     * Adding support for vectorizing CFG (gconds)
     * Support for CFG to differ between vector and scalar loops.
   Both of which would be disruptive to the tree and I suspect I'll be handling
   fallouts from this patch for a while.  So I plan to work on the surrounding
   building blocks first for the remainder of the year.

Additionally it also contains reduced cases from issues found running over
various codebases.

Bootstrapped Regtested on aarch64-none-linux-gnu and no issues.

Also regtested with:
 -march=armv8.3-a+sve
 -march=armv8.3-a+nosve
 -march=armv9-a
 -mcpu=neoverse-v1
 -mcpu=neoverse-n2

Bootstrapped Regtested x86_64-pc-linux-gnu and no issues.
Bootstrap and Regtest on arm-none-linux-gnueabihf and no issues.

gcc/ChangeLog:

	* tree-if-conv.cc (idx_within_array_bound): Expose.
	* tree-vect-data-refs.cc (vect_analyze_early_break_dependences): New.
	(vect_analyze_data_ref_dependences): Use it.
	* tree-vect-loop-manip.cc (vect_iv_increment_position): New.
	(vect_set_loop_controls_directly,
	vect_set_loop_condition_partial_vectors,
	vect_set_loop_condition_partial_vectors_avx512,
	vect_set_loop_condition_normal): Support multiple exits.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Support LCSSA peeling for
	multiple exits.
	(slpeel_can_duplicate_loop_p): Change the vectorizer from looking at BB
	count to looking at loop shape.
	(vect_update_ivs_after_vectorizer): Drop asserts.
	(vect_gen_vector_loop_niters_mult_vf): Support peeled vector iterations.
	(vect_do_peeling): Support multiple exits.
	(vect_loop_versioning): Likewise.
	* tree-vect-loop.cc (_loop_vec_info::_loop_vec_info): Initialise
	early_breaks.
	(vect_analyze_loop_form): Support loop flows with more than single BB
	loop body.
	(vect_create_loop_vinfo): Support niters analysis for multiple exits.
	(vect_analyze_loop): Likewise.
	(vect_get_vect_def): New.
	(vect_create_epilog_for_reduction): Support early exit reductions.
	(vectorizable_live_operation_1): New.
	(find_connected_edge): New.
	(vectorizable_live_operation): Support early exit live operations.
	(move_early_exit_stmts): New.
	(vect_transform_loop): Use it.
	* tree-vect-patterns.cc (vect_init_pattern_stmt): Support gcond.
	(vect_recog_bitfield_ref_pattern): Support gconds and bools.
	(vect_recog_gcond_pattern): New.
	(possible_vector_mask_operation_p): Support gcond masks.
	(vect_determine_mask_precision): Likewise.
	(vect_mark_pattern_stmts): Set gcond def type.
	(can_vectorize_live_stmts): Force early break inductions to be live.
	* tree-vect-stmts.cc (vect_stmt_relevant_p): Add relevancy analysis for
	early breaks.
	(vect_mark_stmts_to_be_vectorized): Process gcond usage.
	(perm_mask_for_reverse): Expose.
	(vectorizable_comparison_1): New.
	(vectorizable_early_exit): New.
	(vect_analyze_stmt): Support early break and gcond.
	(vect_transform_stmt): Likewise.
	(vect_is_simple_use): Likewise.
	(vect_get_vector_types_for_stmt): Likewise.
	* tree-vectorizer.cc (pass_vectorize::execute): Update exits for value
	numbering.
	* tree-vectorizer.h (enum vect_def_type): Add vect_condition_def.
	(LOOP_VINFO_EARLY_BREAKS, LOOP_VINFO_EARLY_BRK_STORES,
	LOOP_VINFO_EARLY_BREAKS_VECT_PEELED, LOOP_VINFO_EARLY_BRK_DEST_BB,
	LOOP_VINFO_EARLY_BRK_VUSES): New.
	(is_loop_header_bb_p): Drop assert.
	(class loop): Add early_breaks, early_break_stores, early_break_dest_bb,
	early_break_vuses.
	(vect_iv_increment_position, perm_mask_for_reverse,
	ref_within_array_bound): New.
	(slpeel_tree_duplicate_loop_to_edge_cfg): Update for early breaks.
2023-12-24 19:29:32 +00:00
Tamar Christina
f1dcc0fe37 middle-end: prevent LIM from hoisting vector compares from gconds if target does not support it.
LIM notices that in some cases the condition and the results are loop
invariant and tries to move them out of the loop.

While the resulting code is operationally sound, moving the compare out of the
gcond results in generating code that no longer branches, so cbranch is no
longer applicable.  As such I now add code to check during this motion to see
if the target supports flag-setting vector comparison as a general operation.

I have tried writing a GIMPLE testcase for this, but the GIMPLE FE seems to
have some trouble with the vector types; it fails to parse them.

The early break code testsuite however has a test for this
(vect-early-break_67.c).

gcc/ChangeLog:

	* tree-ssa-loop-im.cc (determine_max_movement): Import insn-codes.h
	and optabs-tree.h and check for vector compare motion out of gcond.
2023-12-24 19:17:13 +00:00
Tamar Christina
0994ddd86f testsuite: Add more pragma novector to new tests
This updates the testsuite and adds more #pragma GCC novector to various tests
that would otherwise vectorize the vector result checking code.
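
A minimal sketch of the pattern being updated (illustrative, not one of the
listed tests): the result-checking loop is annotated so the vectorizer leaves
it alone and it does not perturb the scan-dump counts.

extern void abort (void);

#define N 64
float a[N], b[N];

int main (void)
{
  /* ... the loop under test fills a[] and b[] ... */

  /* Result-checking loop: keep it scalar.  */
#pragma GCC novector
  for (int i = 0; i < N; i++)
    if (a[i] != b[i])
      abort ();
  return 0;
}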

This cleans out the testsuite since the last rebase and prepares for the landing
of the early break patch.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/no-scevccp-slp-30.c: Add pragma GCC novector to abort
	loop.
	* gcc.dg/vect/no-scevccp-slp-31.c: Likewise.
	* gcc.dg/vect/no-section-anchors-vect-69.c: Likewise.
	* gcc.target/aarch64/vect-xorsign_exec.c: Likewise.
	* gcc.target/i386/avx512er-vrcp28ps-3.c: Likewise.
	* gcc.target/i386/avx512er-vrsqrt28ps-3.c: Likewise.
	* gcc.target/i386/avx512er-vrsqrt28ps-5.c: Likewise.
	* gcc.target/i386/avx512f-ceil-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceil-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceilf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-ceilf-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floor-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floor-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floorf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-floorf-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-rint-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-rintf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-round-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-roundf-sfix-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-trunc-vec-1.c: Likewise.
	* gcc.target/i386/avx512f-truncf-vec-1.c: Likewise.
	* gcc.target/i386/vect-alignment-peeling-1.c: Likewise.
	* gcc.target/i386/vect-alignment-peeling-2.c: Likewise.
	* gcc.target/i386/vect-pack-trunc-1.c: Likewise.
	* gcc.target/i386/vect-pack-trunc-2.c: Likewise.
	* gcc.target/i386/vect-perm-even-1.c: Likewise.
	* gcc.target/i386/vect-unpack-1.c: Likewise.
2023-12-24 19:16:40 +00:00
John David Anglin
7dbde0c56a hppa: Fix pr110279-1.c on hppa
2023-12-24  John David Anglin  <danglin@gcc.gnu.org>

gcc/testsuite/ChangeLog:

	* gcc.dg/pr110279-1.c: Add -march=2.0 option on hppa*-*-*.
2023-12-24 19:03:59 +00:00
Pan Li
bd901d7673 RISC-V: XFail the signbit-5 run test for RVV
This patch XFAILs the signbit-5 run test for RVV.  The test has a documented
limitation, noted at the beginning of the file: "This test does not work when
the truth type does not match vector type."  That is the case for RVV, whose
vector truth type is not an integer type.

A riscv-sim target board like the one below picks up `-march=rv64gcv` when
building the run-test ELF.  Thus RVV cannot sidestep this test case the way
aarch64_sve can with the additional option `-march=armv8-a`.

  riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow

For RVV, we use dg-xfail-run-if for this case, as is already done for `amdgcn`.
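
The directive added is along these lines (a sketch; the exact comment string
and target selector in signbit-5.c may differ):

/* { dg-xfail-run-if "truth type does not match vector type" { riscv_v } } */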

signbit-5.c passes with the configurations below, but the failures with the
other configurations need further investigation.

* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl1024b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl256b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=dynamic/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m2/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m4/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-lmul=m8/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64gcv_zvl512b/-mabi=lp64d/-mcmodel=medlow/--param=riscv-autovec-preference=fixed-vlmax
* riscv-sim/-march=rv64imafdcv/-mabi=lp64d/-mcmodel=medlow

gcc/testsuite/ChangeLog:

	* gcc.dg/signbit-5.c: XFail for the riscv_v.

Signed-off-by: Pan Li <pan2.li@intel.com>
2023-12-24 09:49:52 +08:00
Hans-Peter Nilsson
3d03630b12 CRIS: Fix PR middle-end/113109; "throw" failing
TL;DR: the "dse1" pass removed the eh-return-address store.  The
PA also marks its EH_RETURN_HANDLER_RTX as volatile, for the same
reason, as does visium.  See PR32769 - it's the same thing on PA.

Conceptually, it's logical that stores to incoming args are
optimized out on the return path or if no loads are seen -
at least before epilogue expansion, when the subsequent load
isn't seen in the RTL, as is the case for the "dse1" pass.

I haven't looked into why this problem, that appeared for the PA
already in 2007, was seen for CRIS only recently (with
r14-6674-g4759383245ac97).

	PR middle-end/113109
	* config/cris/cris.cc (cris_eh_return_handler_rtx): New function.
	* config/cris/cris-protos.h (cris_eh_return_handler_rtx): Prototype.
	* config/cris/cris.h (EH_RETURN_HANDLER_RTX): Redefine to call
	cris_eh_return_handler_rtx.
2023-12-24 01:40:58 +01:00
GCC Administrator
d2ae7cb2ef Daily bump. 2023-12-24 00:17:37 +00:00
Xi Ruoyao
310dc75e70 LoongArch: Add sign_extend pattern for 32-bit rotate shift
Remove a redundant sign extension.
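
A hedged sketch of the affected pattern (illustrative, not necessarily the new
test): a 32-bit rotate whose result is then used sign-extended.  On LoongArch64
the rotr.w result is already sign-extended in the 64-bit register, so the
separate extension previously emitted for such a use is redundant.

long
f (unsigned x, unsigned n)
{
  /* Recognised as a 32-bit rotate right (rotr.w); n is in [0, 31].  */
  unsigned r = (x >> n) | (x << ((-n) & 31));
  /* Sign-extended use of the 32-bit result.  */
  return (int) r;
}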

gcc/ChangeLog:

	* config/loongarch/loongarch.md (rotrsi3_extend): New
	define_insn.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/rotrw.c: New test.
2023-12-23 20:58:34 +08:00
Xi Ruoyao
78607d1229 LoongArch: Implement FCCmode reload and cstore<ANYF:mode>4
We used a branch to load floating-point comparison results into a GPR.
This is very slow when the branch is not predictable.

Implement movfcc so we can reload FCCmode into GPRs, FPRs, and MEM.
Then implement cstore<ANYF:mode>4.
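
A hedged sketch of the kind of code this affects (illustrative): materialising
a floating-point comparison result in a general-purpose register, which
previously went through a branch sequence.

/* Previously compiled to a compare plus a branch to set the result; with
   cstore<ANYF:mode>4 the FCC value can be moved into a GPR directly.  */
int
fp_less (double a, double b)
{
  return a < b;
}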

gcc/ChangeLog:

	* config/loongarch/loongarch-tune.h
	(loongarch_rtx_cost_data::movcf2gr): New field.
	(loongarch_rtx_cost_data::movcf2gr_): New method.
	(loongarch_rtx_cost_data::use_movcf2gr): New method.
	* config/loongarch/loongarch-def.cc
	(loongarch_rtx_cost_data::loongarch_rtx_cost_data): Set movcf2gr
	to COSTS_N_INSNS (7) and movgr2cf to COSTS_N_INSNS (15), based
	on timing on LA464.
	(loongarch_cpu_rtx_cost_data): Set movcf2gr and movgr2cf to
	COSTS_N_INSNS (1) for LA664.
	(loongarch_rtx_cost_optimize_size): Set movcf2gr and movgr2cf to
	COSTS_N_INSNS (1) + 1.
	* config/loongarch/predicates.md (loongarch_fcmp_operator): New
	predicate.
	* config/loongarch/loongarch.md (movfcc): Change to
	define_expand.
	(movfcc_internal): New define_insn.
	(fcc_to_<X:mode>): New define_insn.
	(cstore<ANYF:mode>4): New define_expand.
	* config/loongarch/loongarch.cc
	(loongarch_hard_regno_mode_ok_uncached): Allow FCCmode in GPRs
	and GPRs.
	(loongarch_secondary_reload): Reload FCCmode via FPR and/or GPR.
	(loongarch_emit_float_compare): Call gen_reg_rtx instead of
	loongarch_allocate_fcc.
	(loongarch_allocate_fcc): Remove.
	(loongarch_move_to_gpr_cost): Handle FCC_REGS -> GR_REGS.
	(loongarch_move_from_gpr_cost): Handle GR_REGS -> FCC_REGS.
	(loongarch_register_move_cost): Handle FCC_REGS -> FCC_REGS,
	FCC_REGS -> FP_REGS, and FP_REGS -> FCC_REGS.

gcc/testsuite/ChangeLog:

	* gcc.target/loongarch/movcf2gr.c: New test.
	* gcc.target/loongarch/movcf2gr-via-fr.c: New test.
2023-12-23 20:58:33 +08:00
Thomas Schwinge
c0bf7ea189 GCN, nvptx: Basic '__cxa_guard_{acquire,abort,release}' for C++ static local variables support
For now, this is for single-threaded GCN and nvptx target use only; an
extension for multi-threaded offloading use is to follow later.  Eventually
this will switch to libstdc++-v3/libsupc++ proper.

	libgcc/
	* c++-minimal/README: New.
	* c++-minimal/guard.c: New.
	* config/gcn/t-amdgcn (LIB2ADD): Add it.
	* config/nvptx/t-nvptx (LIB2ADD): Likewise.
2023-12-23 10:10:02 +01:00
YunQiang Su
079455458e MIPS: Don't add nan2008 option for -mtune=native
Users may wish to use -mtune=native for performance tuning only.
Let's not cause trouble for that case.

gcc/

	* config/mips/driver-native.cc (host_detect_local_cpu):
	Don't add the nan2008 option for -mtune=native.
2023-12-23 16:46:55 +08:00
YunQiang Su
384dbb0b4e MIPS: Put the ret to the end of args of reconcat [PR112759]
The function `reconcat` cannot append string(s) after a NULL,
as the concatenation stops at the first NULL.

Let's always put `ret` at the end, as it may be NULL.  We keep
using reconcat here because it makes it easier to add more
hardware feature detection later, for example via hwcap.
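
A hedged illustration of the underlying libiberty behaviour that motivates the
change (concat and reconcat treat the first NULL argument as the terminator;
link against libiberty to build this outside the GCC tree):

#include <stdio.h>

/* Declaration from libiberty; the variable argument list ends at the
   first NULL.  */
extern char *concat (const char *first, ...);

int main (void)
{
  char *ret = NULL;   /* e.g. no feature string detected yet */

  /* With `ret` first, the NULL terminates the list immediately and the
     option string is lost (the result is an empty string).  */
  char *bad = concat (ret, " -mnan=2008", NULL);

  /* With the possibly-NULL string last, the other arguments survive.  */
  char *good = concat (" -mnan=2008", ret, NULL);

  printf ("bad=\"%s\" good=\"%s\"\n", bad, good);
  return 0;
}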

gcc/

	PR target/112759
	* config/mips/driver-native.cc (host_detect_local_cpu):
	Put ret at the end of the args of reconcat.
2023-12-23 16:46:23 +08:00
Juzhe-Zhong
2902300340 RISC-V: Make PHI initial value occupy live V_REG in dynamic LMUL cost model analysis
Consider this following case:

foo:
        ble     a0,zero,.L11
        lui     a2,%hi(.LANCHOR0)
        addi    sp,sp,-128
        addi    a2,a2,%lo(.LANCHOR0)
        mv      a1,a0
        vsetvli a6,zero,e32,m8,ta,ma
        vid.v   v8
        vs8r.v  v8,0(sp)                     ---> spill
.L3:
        vl8re32.v       v16,0(sp)            ---> reload
        vsetvli a4,a1,e8,m2,ta,ma
        li      a3,0
        vsetvli a5,zero,e32,m8,ta,ma
        vmv8r.v v0,v16
        vmv.v.x v8,a4
        vmv.v.i v24,0
        vadd.vv v8,v16,v8
        vmv8r.v v16,v24
        vs8r.v  v8,0(sp)                    ---> spill
.L4:
        addiw   a3,a3,1
        vadd.vv v8,v0,v16
        vadd.vi v16,v16,1
        vadd.vv v24,v24,v8
        bne     a0,a3,.L4
        vsetvli zero,a4,e32,m8,ta,ma
        sub     a1,a1,a4
        vse32.v v24,0(a2)
        slli    a4,a4,2
        add     a2,a2,a4
        bne     a1,zero,.L3
        li      a0,0
        addi    sp,sp,128
        jr      ra
.L11:
        li      a0,0
        ret

An unexpected LMUL = 8 is picked.

The root cause is that we didn't involve the PHI initial value in the dynamic LMUL calculation:

  # j_17 = PHI <j_11(9), 0(5)>                       ---> # vect_vec_iv_.8_24 = PHI <_25(9), { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }(5)>

We didn't count { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 } as consuming a vector register, but it does allocate a vector register group for it.

This patch fixes the missing count.  After this patch we pick the expected LMUL (LMUL = M4):

foo:
	ble	a0,zero,.L9
	lui	a4,%hi(.LANCHOR0)
	addi	a4,a4,%lo(.LANCHOR0)
	mv	a2,a0
	vsetivli	zero,16,e32,m4,ta,ma
	vid.v	v20
.L3:
	vsetvli	a3,a2,e8,m1,ta,ma
	li	a5,0
	vsetivli	zero,16,e32,m4,ta,ma
	vmv4r.v	v16,v20
	vmv.v.i	v12,0
	vmv.v.x	v4,a3
	vmv4r.v	v8,v12
	vadd.vv	v20,v20,v4
.L4:
	addiw	a5,a5,1
	vmv4r.v	v4,v8
	vadd.vi	v8,v8,1
	vadd.vv	v4,v16,v4
	vadd.vv	v12,v12,v4
	bne	a0,a5,.L4
	slli	a5,a3,2
	vsetvli	zero,a3,e32,m4,ta,ma
	sub	a2,a2,a3
	vse32.v	v12,0(a4)
	add	a4,a4,a5
	bne	a2,zero,.L3
.L9:
	li	a0,0
	ret

Tested on --with-arch=gcv with no regressions.

	PR target/113112

gcc/ChangeLog:

	* config/riscv/riscv-vector-costs.cc (max_number_of_live_regs): Refine dump information.
	(preferred_new_lmul_p): Take the PHI initial value into account in the live regs calculation.

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c: New test.
2023-12-23 08:59:03 +08:00
GCC Administrator
0a529d196b Daily bump. 2023-12-23 00:17:03 +00:00
Martin Uecker
8c8d4b565c c23: construct composite type for tagged types
Support for constructing composite types for structs and unions
in C23.
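
A hedged sketch of what this enables (illustrative, not one of the new tests):
in C23, two struct definitions with the same tag and compatible members are
compatible types, and the conditional operator now yields their composite type.

/* Compile with -std=c23.  The two definitions of struct s are compatible,
   and the conditional operator must construct their composite type, in
   which the array bound of *p is known.  */
struct s { int (*p)[]; } *a;

unsigned long
f (int cond)
{
  struct s { int (*p)[3]; } *b = 0;
  return sizeof (*(cond ? a : b)->p);   /* 3 * sizeof (int) */
}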

gcc/c:
	* c-typeck.cc (composite_type_internal): Adapted from
	composite_type to support structs and unions.
	(composite_type): New wrapper function.
	(build_conditional_operator): Return composite type.
	* c-decl.cc (finish_struct): Allow NULL for
	enclosing_struct_parse_info.

gcc/testsuite:
	* gcc.dg/c23-tag-alias-6.c: New test.
	* gcc.dg/c23-tag-alias-7.c: New test.
	* gcc.dg/c23-tag-composite-1.c: New test.
	* gcc.dg/c23-tag-composite-2.c: New test.
	* gcc.dg/c23-tag-composite-3.c: New test.
	* gcc.dg/c23-tag-composite-4.c: New test.
	* gcc.dg/c23-tag-composite-5.c: New test.
	* gcc.dg/c23-tag-composite-6.c: New test.
	* gcc.dg/c23-tag-composite-7.c: New test.
	* gcc.dg/c23-tag-composite-8.c: New test.
	* gcc.dg/c23-tag-composite-9.c: New test.
	* gcc.dg/c23-tag-composite-10.c: New test.
	* gcc.dg/gnu23-tag-composite-1.c: New test.
	* gcc.dg/gnu23-tag-composite-2.c: New test.
	* gcc.dg/gnu23-tag-composite-3.c: New test.
	* gcc.dg/gnu23-tag-composite-4.c: New test.
	* gcc.dg/gnu23-tag-composite-5.c: New test.
2023-12-22 21:12:21 +01:00
Sandra Loosemore
3dd45dee7f OpenMP: Add prettyprinter support for context selectors.
With the change to use enumerators instead of strings to represent
context selector and selector-set names, the default tree-list output
for dumping selectors is less helpful for debugging and harder to use
in test cases.  This patch adds support for dumping context selectors
using syntax similar to that used for input to the compiler.
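
For reference, a hedged sketch of the kind of declaration involved
(illustrative): the selector in the match clause below is attached to the base
function as the "omp declare variant base" attribute, and the dump now prints
it in this input-like syntax instead of a raw tree list.

void f_simd (double *a, int n);

#pragma omp declare variant (f_simd) match (construct={simd})
void f (double *a, int n);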

gcc/ChangeLog
	* omp-general.cc (omp_context_name_list_prop): Remove static qualifier.
	* omp-general.h (omp_context_name_list_prop): Declare.
	* tree-cfg.cc (dump_function_to_file): Intercept
	"omp declare variant base" attribute for special handling.
	* tree-pretty-print.cc: Include omp-general.h.
	(dump_omp_context_selector): New.
	(print_omp_context_selector): New.
	* tree-pretty-print.h (print_omp_context_selector): Declare.
2023-12-22 16:38:33 +00:00
Jakub Jelinek
cefae511ed combine: Don't optimize paradoxical SUBREG AND CONST_INT on WORD_REGISTER_OPERATIONS targets [PR112758]
As discussed in the PR, the following testcase is miscompiled on RISC-V
64-bit, because num_sign_bit_copies in one spot pretends the bits in
a paradoxical SUBREG beyond SUBREG_REG SImode are all sign bit copies:
5444              /* For paradoxical SUBREGs on machines where all register operations
5445                 affect the entire register, just look inside.  Note that we are
5446                 passing MODE to the recursive call, so the number of sign bit
5447                 copies will remain relative to that mode, not the inner mode.
5448
5449                 This works only if loads sign extend.  Otherwise, if we get a
5450                 reload for the inner part, it may be loaded from the stack, and
5451                 then we lose all sign bit copies that existed before the store
5452                 to the stack.  */
5453              if (WORD_REGISTER_OPERATIONS
5454                  && load_extend_op (inner_mode) == SIGN_EXTEND
5455                  && paradoxical_subreg_p (x)
5456                  && MEM_P (SUBREG_REG (x)))
and then optimizes based on that in one place, but then the
r7-1077 optimization kicks in, treats all the upper bits in the
paradoxical SUBREG as undefined, and performs another
optimization.  The r7-1077 optimization is done only if SUBREG_REG
is either a REG or MEM, from the discussions in the PR seems that if
it is a REG, the upper bits in paradoxical SUBREG on
WORD_REGISTER_OPERATIONS targets aren't really undefined, but we can't
tell what values they have because we don't see the operation which
computed that REG, and for MEM it depends on load_extend_op - if
it is SIGN_EXTEND, the upper bits are sign bit copies and so something
not really usable for the optimization, if ZERO_EXTEND, they are zeros
and it is usable for the optimization, for UNKNOWN I think it is better
to punt as well.

So, the following patch basically disables the r7-1077 optimization
on WORD_REGISTER_OPERATIONS unless we know it is still ok for sure,
which is either if sub_width is >= BITS_PER_WORD because then the
WORD_REGISTER_OPERATIONS rules don't apply, or load_extend_op on a MEM
is ZERO_EXTEND.

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

	PR rtl-optimization/112758
	* combine.cc (make_compound_operation_int): Optimize AND of a SUBREG
	based on nonzero_bits of SUBREG_REG and constant mask on
	WORD_REGISTER_OPERATIONS targets only if it is a zero extending
	MEM load.

	* gcc.c-torture/execute/pr112758.c: New test.
2023-12-22 12:29:34 +01:00
Jakub Jelinek
0a6aa19275 symtab-thunks: Use aggregate_value_p even on is_gimple_reg_type returns [PR112941]
Large/huge _BitInt types are returned in memory and the bitint lowering
pass right now relies on that.
The gimplification etc. use aggregate_value_p to see if it should be
returned in memory or not and use
  <retval> = _123;
  return <retval>;
rather than
  return _123;
But expand_thunk, used e.g. by IPA-ICF, was performing an optimization,
assuming is_gimple_reg_type values are always passed in registers, and was not calling
aggregate_value_p in that case.  The following patch changes it to match
what the gimplification etc. are doing.

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/112941
	* symtab-thunks.cc (expand_thunk): Check aggregate_value_p regardless
	of whether is_gimple_reg_type (restype) or not.

	* gcc.dg/bitint-60.c: New test.
2023-12-22 12:28:54 +01:00
Jakub Jelinek
f5198f0264 lower-bitint: Handle unreleased SSA_NAMEs from earlier passes gracefully [PR113102]
On the following testcase earlier passes leave around an unreleased
SSA_NAME - non-GIMPLE_NOP SSA_NAME_DEF_STMT which isn't in any bb.
The following patch makes bitint lowering resistant against those;
the first hunk is where we'd, for certain kinds of stmts, try to amend
them and the latter is where we'd otherwise try to remove them,
neither of which works.  The other loops over all SSA_NAMEs either
already also check gimple_bb (SSA_NAME_DEF_STMT (s)) or it doesn't
matter that much if we process it or not (worst case it means e.g.
the pass wouldn't return early even when it otherwise could).

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113102
	* gimple-lower-bitint.cc (gimple_lower_bitint): Handle unreleased
	large/huge _BitInt SSA_NAMEs.

	* gcc.dg/bitint-59.c: New test.
2023-12-22 12:28:06 +01:00
Jakub Jelinek
d3defa435e lower-bitint: Fix handle_cast ICE [PR113102]
My recent change to use m_data[save_data_cnt] instead of
m_data[save_data_cnt + 1] when inside of a loop (m_bb is non-NULL)
broke the following testcase.  When we create a PHI node on the loop
using prepare_data_in_out, both m_data[save_data_cnt{, + 1}] are
computed and the fix was right, but there are also cases when we in
a loop (m_bb non-NULL) emit a nested cast with too few limbs and
then just use constant indexes for all accesses - in that case
only m_data[save_data_cnt + 1] is initialized and m_data[save_data_cnt]
is NULL.  In those cases, we want to use the former.

2023-12-22  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113102
	* gimple-lower-bitint.cc (bitint_large_huge::handle_cast): Only
	use m_data[save_data_cnt] if it is non-NULL.

	* gcc.dg/bitint-58.c: New test.
2023-12-22 12:27:05 +01:00
Christophe Lyon
05d353b794 Allow overriding EXPECT
While investigating possible race conditions in the GCC testsuites
caused by buffering issues, I wanted to investigate workarounds
similar to GDB's READ1 [1], and I noticed it was not always possible
to override EXPECT when running 'make check'.

This patch adds the missing support in various Makefiles.

I was not able to test the patch for all the libraries updated here,
but I confirmed it works as intended/needed for libstdc++.

libatomic, libitm, libgomp already work as intended because their
Makefiles do not have:
MAKEOVERRIDES=

Tested on (native) aarch64-linux-gnu, confirmed the patch introduces
the behaviour I want in gcc, g++, gfortran and libstdc++.

I updated (but could not test) libgm2, libphobos, libquadmath and
libssp for consistency since their Makefiles have MAKEOVERRIDES=

libffi, libgo, libsanitizer seem to need a similar update, but they
are imported from their respective upstream repo, so should not be
patched here.

[1] https://github.com/bminor/binutils-gdb/blob/master/gdb/testsuite/README#L269

2023-12-21  Christophe Lyon  <christophe.lyon@linaro.org>

	gcc/
	* Makefile.in: Allow overriding EXPECT.

	libgm2/
	* Makefile.am: Allow overriding EXPECT.
	* Makefile.in: Regenerate.

	libphobos/
	* Makefile.am: Allow overriding EXPECT.
	* Makefile.in: Regenerate.

	libquadmath/
	* Makefile.am: Allow overriding EXPECT.
	* Makefile.in: Regenerate.

	libssp/
	* Makefile.am: Allow overriding EXPECT.
	* Makefile.in: Regenerate.

	libstdc++-v3/
	* Makefile.am: Allow overriding EXPECT.
	* Makefile.in: Regenerate.
2023-12-22 10:24:56 +00:00
Ken Matsui
c4d1d1adf7 c++: testsuite: Remove testsuite_tr1.h includes
This patch removes the testsuite_tr1.h dependency from g++.dg/ext/is_*.C
tests, since the header is supposed to be used only by libstdc++, not by the
front end.  This also includes test code consistency fixes.

For the record this fixes the test failures reported at
https://gcc.gnu.org/pipermail/gcc-patches/2023-December/641058.html

gcc/testsuite/ChangeLog:

	* g++.dg/ext/is_array.C: Remove testsuite_tr1.h.  Add necessary
	definitions accordingly.  Tweak macros for consistency across
	test codes.
	* g++.dg/ext/is_bounded_array.C: Likewise.
	* g++.dg/ext/is_function.C: Likewise.
	* g++.dg/ext/is_member_function_pointer.C: Likewise.
	* g++.dg/ext/is_member_object_pointer.C: Likewise.
	* g++.dg/ext/is_member_pointer.C: Likewise.
	* g++.dg/ext/is_object.C: Likewise.
	* g++.dg/ext/is_reference.C: Likewise.
	* g++.dg/ext/is_scoped_enum.C: Likewise.

Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org>
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jason Merrill <jason@redhat.com>
2023-12-22 01:57:30 -08:00
chenxiaolong
5bd5ef9957 LoongArch: Add asm modifiers to the LSX and LASX directives in the doc.
gcc/ChangeLog:

	* doc/extend.texi: Add the asm modifiers for vectors to the doc.
	* doc/md.texi: Refine the description of the modifier 'f' in the doc.
2023-12-22 17:44:26 +08:00
Jason Merrill
2488771b6d c++: computed goto from catch block [PR81438]
As with PR 37722, we don't clean up the exception object if a computed goto
leaves a catch block, but we can warn about that.

	PR c++/81438

gcc/cp/ChangeLog:

	* decl.cc (poplevel_named_label_1): Handle leaving catch.
	(check_previous_goto_1): Likewise.
	(check_goto_1): Likewise.

gcc/testsuite/ChangeLog:

	* g++.dg/ext/label15.C: Require indirect_jumps.
	* g++.dg/ext/label16.C: New test.
2023-12-21 22:18:00 -05:00
Sandra Loosemore
5cb79aa2bd Testsuite: Fix failures in g++.dg/analyzer/placement-new-size.C
This testcase was failing on uses of int8_t, int64_t, etc without
including <stdint.h>.

gcc/testsuite/ChangeLog
	* g++.dg/analyzer/placement-new-size.C: Include <stdint.h>.  Also
	add missing newline to end of file.
2023-12-22 02:27:05 +00:00
Jason Merrill
d26f589e61 c++: sizeof... mangling with alias template [PR95298]
We were getting sizeof... mangling wrong when the argument after
substitution was a pack expansion that is not a simple T..., such as
list<T>... in variadic-mangle4.C or (A+1)... in variadic-mangle5.C.  In the
former case we ICEd; in the latter case we wrongly mangled it as sZ
<expression>.

	PR c++/95298

gcc/cp/ChangeLog:

	* mangle.cc (write_expression): Handle v18 sizeof... bug.
	* pt.cc (tsubst_pack_expansion): Keep TREE_VEC for sizeof...
	(tsubst_expr): Don't strip TREE_VEC here.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp0x/variadic-mangle2.C: Add non-member.
	* g++.dg/cpp0x/variadic-mangle4.C: New test.
	* g++.dg/cpp0x/variadic-mangle5.C: New test.
	* g++.dg/cpp0x/variadic-mangle5a.C: New test.
2023-12-21 19:19:50 -05:00
Jason Merrill
2fa122cae5 testsuite: suppress mangling compatibility aliases
Recently a mangling test failed on a target with no mangling alias support
because I hadn't updated the expected mangling, but it was still passing on
x86_64-pc-linux-gnu because of the alias for the old mangling.  So let's
avoid these aliases in mangling tests.

gcc/testsuite/ChangeLog:

	* g++.dg/abi/mangle-arm-crypto.C: Specify -fabi-compat-version.
	* g++.dg/abi/mangle-concepts1.C
	* g++.dg/abi/mangle-neon-aarch64.C
	* g++.dg/abi/mangle-neon.C
	* g++.dg/abi/mangle-regparm.C
	* g++.dg/abi/mangle-regparm1a.C
	* g++.dg/abi/mangle-ttp1.C
	* g++.dg/abi/mangle-union1.C
	* g++.dg/abi/mangle1.C
	* g++.dg/abi/mangle13.C
	* g++.dg/abi/mangle15.C
	* g++.dg/abi/mangle16.C
	* g++.dg/abi/mangle18-1.C
	* g++.dg/abi/mangle19-1.C
	* g++.dg/abi/mangle20-1.C
	* g++.dg/abi/mangle22.C
	* g++.dg/abi/mangle23.C
	* g++.dg/abi/mangle24.C
	* g++.dg/abi/mangle25.C
	* g++.dg/abi/mangle26.C
	* g++.dg/abi/mangle27.C
	* g++.dg/abi/mangle28.C
	* g++.dg/abi/mangle29.C
	* g++.dg/abi/mangle3-2.C
	* g++.dg/abi/mangle3.C
	* g++.dg/abi/mangle30.C
	* g++.dg/abi/mangle31.C
	* g++.dg/abi/mangle32.C
	* g++.dg/abi/mangle33.C
	* g++.dg/abi/mangle34.C
	* g++.dg/abi/mangle35.C
	* g++.dg/abi/mangle36.C
	* g++.dg/abi/mangle37.C
	* g++.dg/abi/mangle39.C
	* g++.dg/abi/mangle40.C
	* g++.dg/abi/mangle43.C
	* g++.dg/abi/mangle44.C
	* g++.dg/abi/mangle45.C
	* g++.dg/abi/mangle46.C
	* g++.dg/abi/mangle47.C
	* g++.dg/abi/mangle48.C
	* g++.dg/abi/mangle49.C
	* g++.dg/abi/mangle5.C
	* g++.dg/abi/mangle50.C
	* g++.dg/abi/mangle51.C
	* g++.dg/abi/mangle52.C
	* g++.dg/abi/mangle53.C
	* g++.dg/abi/mangle54.C
	* g++.dg/abi/mangle55.C
	* g++.dg/abi/mangle56.C
	* g++.dg/abi/mangle57.C
	* g++.dg/abi/mangle58.C
	* g++.dg/abi/mangle59.C
	* g++.dg/abi/mangle6.C
	* g++.dg/abi/mangle60.C
	* g++.dg/abi/mangle61.C
	* g++.dg/abi/mangle62.C
	* g++.dg/abi/mangle62a.C
	* g++.dg/abi/mangle63.C
	* g++.dg/abi/mangle64.C
	* g++.dg/abi/mangle65.C
	* g++.dg/abi/mangle66.C
	* g++.dg/abi/mangle68.C
	* g++.dg/abi/mangle69.C
	* g++.dg/abi/mangle7.C
	* g++.dg/abi/mangle70.C
	* g++.dg/abi/mangle71.C
	* g++.dg/abi/mangle72.C
	* g++.dg/abi/mangle73.C
	* g++.dg/abi/mangle74.C
	* g++.dg/abi/mangle75.C
	* g++.dg/abi/mangle76.C
	* g++.dg/abi/mangle77.C
	* g++.dg/abi/mangle78.C
	* g++.dg/abi/mangle8.C
	* g++.dg/abi/mangle9.C: Likewise.
2023-12-21 19:19:34 -05:00
GCC Administrator
cdfaa4aa52 Daily bump. 2023-12-22 00:18:02 +00:00
Arsen Arsenović
ec2ec24a4d libstdc++: implement std::generator
libstdc++-v3/ChangeLog:

	* include/Makefile.am: Install std/generator, bits/elements_of.h
	as freestanding.
	* include/Makefile.in: Regenerate.
	* include/bits/version.def: Add __cpp_lib_generator.
	* include/bits/version.h: Regenerate.
	* include/precompiled/stdc++.h: Include <generator>.
	* include/std/ranges: Include bits/elements_of.h
	* include/bits/elements_of.h: New file.
	* include/std/generator: New file.
	* testsuite/24_iterators/range_generators/01.cc: New test.
	* testsuite/24_iterators/range_generators/02.cc: New test.
	* testsuite/24_iterators/range_generators/copy.cc: New test.
	* testsuite/24_iterators/range_generators/except.cc: New test.
	* testsuite/24_iterators/range_generators/synopsis.cc: New test.
	* testsuite/24_iterators/range_generators/subrange.cc: New test.
2023-12-21 22:59:22 +01:00
Arsen Arsenović
a6bbaab273 libstdc++: add missing include in ranges_util.h
libstdc++-v3/ChangeLog:

	* include/bits/ranges_util.h: Add missing <bits/invoke.h>
	include.
2023-12-21 22:54:28 +01:00
Andrew Pinski
df5df10355 Document cond_copysign and cond_len_copysign optabs [PR112951]
This adds the documentation for cond_copysign and cond_len_copysign optabs.
Also reorders optabs.def to match the order in which the internal functions
were done.

gcc/ChangeLog:

	PR middle-end/112951
	* doc/md.texi (cond_copysign): Document.
	(cond_len_copysign): Likewise.
	* optabs.def: Reorder cond_copysign to be before
	cond_fmin. Likewise for cond_len_copysign.

Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
2023-12-21 12:23:17 -08:00
Patrick Palka
619a9539ee c++: fix -Wparentheses for bool-like class types
Since r14-4977-g0f2e2080685e75 we now issue a -Wparentheses warning for

  extern std::vector<bool> v;
  bool b = v[0] = true; // warning: suggest parentheses around assignment used as truth value [-Wparentheses]

I intended for that commit to just allow the existing diagnostics to
happen in a template context as well, but the refactoring of
is_assignment_op_expr_p caused us for this -Wparentheses warning from
convert_for_assignment to now consider user-defined operator= expressions
instead of just built-in operator=.  And since std::vector<bool> is really
a bitset, whose operator[] returns a class type with such a user-defined
operator= (taking bool), we now warn here when we didn't use to.

That we now accept user-defined operator= expressions is generally good,
but arguably "boolish" class types should be treated like ordinary bool
as far as the warning is concerned.  To that end this patch suppresses
the warning for such types, specifically when the class type can be
implicitly converted to and assigned from bool.  This criterion captures
the std::vector<bool>::reference of libstdc++ at least.

gcc/cp/ChangeLog:

	* cp-tree.h (maybe_warn_unparenthesized_assignment): Add
	'nested_p' bool parameter.
	* semantics.cc (boolish_class_type_p_cache): Define.
	(boolish_class_type_p): Define.
	(maybe_warn_unparenthesized_assignment): Add 'nested_p'
	bool parameter.  Suppress the warning for nested assignments
	to bool and bool-like class types.
	(maybe_convert_cond): Pass nested_p=false to
	maybe_warn_unparenthesized_assignment.
	* typeck.cc (convert_for_assignment): Pass nested_p=true to
	maybe_warn_unparenthesized_assignment.  Remove now redundant
	check for 'rhs' having bool type.

gcc/testsuite/ChangeLog:

	* g++.dg/warn/Wparentheses-34.C: New test.
2023-12-21 15:00:55 -05:00
Patrick Palka
9a65c8ee65 c++: [[deprecated]] on template redecl [PR84542]
The deprecated and unavailable attributes weren't working when used on
a template redeclaration ultimately because we weren't merging the
corresponding tree flags in duplicate_decls.

	PR c++/84542

gcc/cp/ChangeLog:

	* decl.cc (merge_attribute_bits): Merge TREE_DEPRECATED
	and TREE_UNAVAILABLE.

gcc/testsuite/ChangeLog:

	* g++.dg/ext/attr-deprecated-2.C: No longer XFAIL.
	* g++.dg/ext/attr-unavailable-12.C: New test.
2023-12-21 14:33:56 -05:00
Patrick Palka
7226f825db c++: visibility wrt template and ptrmem targs [PR70413]
When constraining the visibility of an instantiation, we weren't
properly considering the visibility of PTRMEM_CST and TEMPLATE_DECL
template arguments.

This patch fixes this.  It turns out we don't maintain the relevant
visibility flags for alias templates (e.g. TREE_PUBLIC is never set),
so continue to ignore alias template template arguments for now.

	PR c++/70413
	PR c++/107906

gcc/cp/ChangeLog:

	* decl2.cc (min_vis_expr_r): Handle PTRMEM_CST and TEMPLATE_DECL
	other than those for alias templates.

gcc/testsuite/ChangeLog:

	* g++.dg/template/linkage2.C: New test.
	* g++.dg/template/linkage3.C: New test.
	* g++.dg/template/linkage4.C: New test.
	* g++.dg/template/linkage4a.C: New test.
2023-12-21 13:53:43 -05:00
Andre Vieira (lists)
135bb9e371 omp: Fix simdclone arguments with veclen lower than simdlen [PR113040]
This patch fixes an issue introduced by:
commit ea4a3d08f1
Author: Andre Vieira <andre.simoesdiasvieira@arm.com>
Date:   Wed Nov 1 17:02:41 2023 +0000

     omp: Reorder call for TARGET_SIMD_CLONE_ADJUST

The problem was that after this patch we no longer added multiple
arguments for vector arguments where the veclen was lower than the simdlen.
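
A hedged sketch of the situation (illustrative): with simdlen(8) but a 128-bit
vector length, each vector argument of the clone only holds four floats, so
the clone has to receive two vector arguments for each such parameter, which
is what had stopped happening.

/* For a 128-bit ISA the simdlen(8) clone of this function needs two V4SF
   arguments for the single parameter x.  */
#pragma omp declare simd simdlen(8) notinbranch
float
scale (float x)
{
  return x * 2.0f;
}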

Bootstrapped and regression tested on x86_64-pc-linux-gnu and
aarch64-unknown-linux-gnu.

gcc/ChangeLog:

	PR middle-end/113040
	* omp-simd-clone.cc (simd_clone_adjust_argument_types): Add multiple
	vector arguments where simdlen is larger than veclen.
2023-12-21 10:36:29 -08:00
Uros Bizjak
2766b83759 i386: Fix shifts with high register input operand [PR113044]
The move to the output operand should use high register input operand.

	PR target/113044

gcc/ChangeLog:

	* config/i386/i386.md (*ashlqi_ext<mode>_1): Move from the
	high register of the input operand.
	(*<insn>qi_ext<mode>_1): Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/i386/pr113044.c: New test.
2023-12-21 15:58:03 +01:00
Vladimir N. Makarov
be977db17c Revert "[PR112918][LRA]: Fixing IRA ICE on m68k"
This reverts commit 989e67f827.
2023-12-21 09:38:43 -05:00
Julian Brown
144c531fe2 OpenMP/OpenACC: Reorganise OMP map clause handling in gimplify.cc
This patch has been separated out from the C++ "declare mapper"
support patch.  It contains just the gimplify.cc rearrangement
work, mostly moving gimplification from gimplify_scan_omp_clauses
to gimplify_adjust_omp_clauses for map clauses.

The motivation for doing this was that we don't know if we need to
instantiate mappers implicitly until the body of an offload region has
been scanned, i.e. in gimplify_adjust_omp_clauses, but we also need the
un-gimplified form of clauses to sort by base-pointer dependencies after
mapper instantiation has taken place.

The patch also reimplements the "present" clause sorting code to avoid
another sorting pass on mapping nodes.

This version of the patch is based on the version posted for og13, and
additionally incorporates a follow-on fix for DECL_VALUE_EXPR handling
in gimplify_adjust_omp_clauses:

"OpenMP/OpenACC: Reorganise OMP map clause handling in gimplify.cc"
https://gcc.gnu.org/pipermail/gcc-patches/2023-June/622223.html

Parts of:
"OpenMP: OpenMP 5.2 semantics for pointers with unmapped target"
https://gcc.gnu.org/pipermail/gcc-patches/2023-June/623351.html

2023-12-16  Julian Brown  <julian@codesourcery.com>

gcc/
	* gimplify.cc (omp_segregate_mapping_groups): Handle "present" groups.
	(gimplify_scan_omp_clauses): Use mapping group functionality to
	iterate through mapping nodes.  Remove most gimplification of
	OMP_CLAUSE_MAP nodes from here, but still populate ctx->variables
	splay tree.
	(gimplify_adjust_omp_clauses): Move most gimplification of
	OMP_CLAUSE_MAP nodes here.

libgomp/
	* testsuite/libgomp.fortran/target-enter-data-6.f90: Remove XFAIL.
2023-12-21 13:12:12 +00:00
Alex Coplan
aca1f9d7ca aarch64: Prevent moving throwing accesses in ldp/stp pass [PR113093]
As the PR shows, there was nothing to prevent the ldp/stp pass from
trying to move throwing insns, which led to an RTL verification
failure.

This patch fixes that.
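
A hypothetical reduction (not the committed pr113093.c), assumed to be
compiled with -fnon-call-exceptions at -O2 on aarch64: the two loads may
throw, so the pass must not move them past the end of their basic block
while trying to fuse them into an ldp.

int
sum_pair (int *p)
{
  int a = p[0];  // potentially-throwing access
  int b = p[1];  // fusion candidate for an ldp with the load above
  return a + b;
}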

gcc/ChangeLog:

	PR target/113093
	* config/aarch64/aarch64-ldp-fusion.cc (latest_hazard_before):
	If the insn is throwing, record the previous insn as a hazard to
	prevent moving it from the end of the BB.

gcc/testsuite/ChangeLog:

	PR target/113093
	* gcc.dg/pr113093.c: New test.
2023-12-21 10:52:44 +00:00
Juzhe-Zhong
41a5f67db3 RISC-V: Add dynamic LMUL test for x264
While evaluating x264 performance, I noticed that the best LMUL for such
a case with -march=rv64gcv is LMUL = 2.

LMUL = 1:

x264_pixel_8x8:
	add	a4,a1,a2
	addi	a6,a0,16
	vsetivli	zero,4,e8,mf4,ta,ma
	add	a5,a4,a2
	vle8.v	v12,0(a6)
	vle8.v	v2,0(a4)
	addi	a6,a0,4
	addi	a4,a4,4
	vle8.v	v11,0(a6)
	vle8.v	v9,0(a4)
	addi	a6,a1,4
	addi	a4,a0,32
	vle8.v	v13,0(a0)
	vle8.v	v1,0(a1)
	vle8.v	v4,0(a6)
	vle8.v	v8,0(a4)
	vle8.v	v7,0(a5)
	vwsubu.vv	v3,v13,v1
	add	a3,a5,a2
	addi	a6,a0,20
	addi	a4,a0,36
	vle8.v	v10,0(a6)
	vle8.v	v6,0(a4)
	addi	a5,a5,4
	vle8.v	v5,0(a5)
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v1,v12,v2
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vmv1r.v	v2,v1
	vwadd.vv	v1,v3,v2
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v2,v11,v4
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmslt.vi	v0,v2,0
	vneg.v	v2,v2,v0.t
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v3,v10,v9
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmv1r.v	v4,v2
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.vv	v2,v4,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v3,v8,v7
	vsetvli	zero,zero,e16,mf2,ta,mu
	add	a4,a3,a2
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v1,v1,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	add	a5,a4,a2
	vwsubu.vv	v3,v6,v5
	addi	a6,a0,48
	vsetvli	zero,zero,e16,mf2,ta,mu
	vle8.v	v16,0(a3)
	vle8.v	v12,0(a4)
	addi	a3,a3,4
	addi	a4,a4,4
	vle8.v	v17,0(a6)
	vle8.v	v14,0(a3)
	vle8.v	v10,0(a4)
	vle8.v	v8,0(a5)
	add	a6,a5,a2
	addi	a3,a0,64
	addi	a4,a0,80
	addi	a5,a5,4
	vle8.v	v13,0(a3)
	vle8.v	v4,0(a5)
	vle8.v	v9,0(a4)
	vle8.v	v6,0(a6)
	vmslt.vi	v0,v3,0
	addi	a7,a0,52
	vneg.v	v3,v3,v0.t
	vle8.v	v15,0(a7)
	vwadd.wv	v2,v2,v3
	addi	a3,a0,68
	addi	a4,a0,84
	vle8.v	v11,0(a3)
	vle8.v	v5,0(a4)
	addi	a5,a0,96
	vle8.v	v7,0(a5)
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v3,v17,v16
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v1,v1,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v3,v15,v14
	vsetvli	zero,zero,e16,mf2,ta,mu
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v2,v2,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	vwsubu.vv	v3,v13,v12
	vsetvli	zero,zero,e16,mf2,ta,mu
	slli	a4,a2,3
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v1,v1,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	sub	a4,a4,a2
	vwsubu.vv	v3,v11,v10
	vsetvli	zero,zero,e16,mf2,ta,mu
	add	a1,a1,a4
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v2,v2,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	lbu	a7,0(a1)
	vwsubu.vv	v3,v9,v8
	lbu	a5,112(a0)
	vsetvli	zero,zero,e16,mf2,ta,mu
	subw	a5,a5,a7
	vmslt.vi	v0,v3,0
	lbu	a3,113(a0)
	vneg.v	v3,v3,v0.t
	lbu	a4,1(a1)
	vwadd.wv	v1,v1,v3
	addi	a6,a6,4
	vsetvli	zero,zero,e8,mf4,ta,ma
	subw	a3,a3,a4
	vwsubu.vv	v3,v5,v4
	addi	a2,a0,100
	vsetvli	zero,zero,e16,mf2,ta,mu
	vle8.v	v4,0(a6)
	sraiw	a6,a5,31
	vle8.v	v5,0(a2)
	sraiw	a7,a3,31
	vmslt.vi	v0,v3,0
	xor	a2,a5,a6
	vneg.v	v3,v3,v0.t
	vwadd.wv	v2,v2,v3
	vsetvli	zero,zero,e8,mf4,ta,ma
	lbu	a4,114(a0)
	vwsubu.vv	v3,v7,v6
	lbu	t1,2(a1)
	vsetvli	zero,zero,e16,mf2,ta,mu
	subw	a2,a2,a6
	xor	a6,a3,a7
	vmslt.vi	v0,v3,0
	subw	a4,a4,t1
	vneg.v	v3,v3,v0.t
	lbu	t1,3(a1)
	vwadd.wv	v1,v1,v3
	lbu	a5,115(a0)
	subw	a6,a6,a7
	vsetvli	zero,zero,e8,mf4,ta,ma
	li	a7,0
	vwsubu.vv	v3,v5,v4
	sraiw	t3,a4,31
	vsetvli	zero,zero,e16,mf2,ta,mu
	subw	a5,a5,t1
	vmslt.vi	v0,v3,0
	vneg.v	v3,v3,v0.t
	vwadd.wv	v2,v2,v3
	sraiw	t1,a5,31
	vsetvli	zero,zero,e32,m1,ta,ma
	xor	a4,a4,t3
	vadd.vv	v1,v1,v2
	vmv.s.x	v2,a7
	vredsum.vs	v1,v1,v2
	vmv.x.s	a7,v1
	addw	a2,a7,a2
	subw	a4,a4,t3
	addw	a6,a6,a2
	xor	a2,a5,t1
	lbu	a3,116(a0)
	lbu	t4,4(a1)
	addw	a4,a4,a6
	subw	a2,a2,t1
	lbu	a5,5(a1)
	subw	a3,a3,t4
	addw	a2,a2,a4
	lbu	a4,117(a0)
	lbu	t1,6(a1)
	sraiw	a7,a3,31
	subw	a4,a4,a5
	lbu	a5,118(a0)
	sraiw	a6,a4,31
	subw	a5,a5,t1
	xor	a3,a3,a7
	lbu	t1,7(a1)
	lbu	a0,119(a0)
	sraiw	a1,a5,31
	subw	a0,a0,t1
	subw	a3,a3,a7
	xor	a4,a4,a6
	addw	a3,a3,a2
	subw	a4,a4,a6
	sraiw	a2,a0,31
	xor	a5,a5,a1
	addw	a4,a4,a3
	subw	a5,a5,a1
	xor	a0,a0,a2
	addw	a5,a5,a4
	subw	a0,a0,a2
	addw	a0,a0,a5
	ret

LMUL = dynamic

x264_pixel_8x8:
	add	a7,a1,a2
	vsetivli	zero,8,e8,mf2,ta,ma
	add	a6,a7,a2
	vle8.v	v1,0(a1)
	add	a3,a6,a2
	vle8.v	v2,0(a7)
	add	a4,a3,a2
	vle8.v	v13,0(a0)
	vle8.v	v7,0(a4)
	vwsubu.vv	v4,v13,v1
	vle8.v	v11,0(a6)
	vle8.v	v9,0(a3)
	add	a5,a4,a2
	addi	t1,a0,16
	vle8.v	v5,0(a5)
	vle8.v	v3,0(t1)
	addi	a7,a0,32
	addi	a6,a0,48
	vle8.v	v12,0(a7)
	vle8.v	v10,0(a6)
	addi	a3,a0,64
	addi	a4,a0,80
	vle8.v	v8,0(a3)
	vle8.v	v6,0(a4)
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v4,0
	vneg.v	v4,v4,v0.t
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v3,v2
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vwadd.vv	v2,v4,v1
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v12,v11
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vwadd.wv	v2,v2,v1
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v10,v9
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vwadd.wv	v2,v2,v1
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v8,v7
	vsetvli	zero,zero,e16,m1,ta,mu
	slli	a4,a2,3
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vwadd.wv	v2,v2,v1
	vsetvli	zero,zero,e8,mf2,ta,ma
	sub	a4,a4,a2
	vwsubu.vv	v1,v6,v5
	vsetvli	zero,zero,e16,m1,ta,mu
	addi	a3,a0,96
	vmslt.vi	v0,v1,0
	vle8.v	v7,0(a3)
	vneg.v	v1,v1,v0.t
	add	a5,a5,a2
	vwadd.wv	v2,v2,v1
	vle8.v	v6,0(a5)
	addi	a0,a0,112
	add	a1,a1,a4
	vle8.v	v5,0(a0)
	vle8.v	v4,0(a1)
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v7,v6
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vwadd.wv	v2,v2,v1
	vsetvli	zero,zero,e32,m2,ta,ma
	li	a5,0
	vmv.s.x	v1,a5
	vredsum.vs	v1,v2,v1
	vmv.x.s	a0,v1
	vsetvli	zero,zero,e8,mf2,ta,ma
	vwsubu.vv	v1,v5,v4
	vsetvli	zero,zero,e16,m1,ta,mu
	vmslt.vi	v0,v1,0
	vneg.v	v1,v1,v0.t
	vsetivli	zero,1,e32,m1,ta,ma
	vmv.s.x	v2,a5
	vsetivli	zero,8,e16,m1,ta,ma
	vwredsumu.vs	v1,v1,v2
	vsetivli	zero,0,e32,m1,ta,ma
	vmv.x.s	a5,v1
	addw	a0,a0,a5
	ret

With --param=riscv-autovec-lmul=dynamic, which is able to pick the best
LMUL (M2), we get much better codegen and a noticeable performance gain.

Add a test to keep future changes from regressing x264 performance.
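
For reference, a hedged sketch of the sum-of-absolute-differences kernel
behind x264_pixel_8x8 (the name and the fixed stride of 16 are assumptions
read off the assembly above, not the committed dynamic-lmul2-7.c):

int
sad_8x8 (unsigned char *pix1, unsigned char *pix2, int stride2)
{
  int sum = 0;
  // pix1 advances by a fixed stride of 16, matching the constant offsets
  // in the assembly above; pix2 advances by the runtime stride.
  for (int y = 0; y < 8; y++, pix1 += 16, pix2 += stride2)
    for (int x = 0; x < 8; x++)
      sum += __builtin_abs (pix1[x] - pix2[x]);
  return sum;
}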

gcc/testsuite/ChangeLog:

	* gcc.dg/vect/costmodel/riscv/rvv/dynamic-lmul2-7.c: New test.
2023-12-21 18:43:41 +08:00
Jakub Jelinek
b99353c33f Fix -Wcalloc-transposed-args warning in collect2.cc and work around -Walloc-size warning
This fixes one warning and works around another one, where we allocate
less memory than the size of the type we cast the result to.

2023-12-21  Jakub Jelinek  <jakub@redhat.com>

	* gimple-fold.cc (maybe_fold_comparisons_from_match_pd):
	Use unsigned char buffers for lhs1 and lhs2 instead of allocating
	them through XALLOCA.
	* collect2.cc (maybe_run_lto_and_relink): Swap xcalloc arguments.
2023-12-21 11:21:40 +01:00
Richard Sandiford
803d222e53 aarch64: Fix early RA handling of deleted insns [PR113094]
The testcase constructs a sequence of insns that are fully dead
and yet (due to forced options) are not removed as such.  This
triggered a case where we would emit a meaningless reload for a
to-be-deleted insn.

We can't delete the insns first because that might disrupt the
iteration ranges.  So this patch turns them into notes before
the walk and then continues to delete them properly afterwards.

gcc/
	PR target/113094
	* config/aarch64/aarch64-early-ra.cc (apply_allocation): Stub
	out instructions that are going to be deleted before iterating
	over the rest.

gcc/testsuite/
	PR target/113094
	* gcc.target/aarch64/pr113094.c: New test.
2023-12-21 10:20:19 +00:00
Richard Sandiford
81dfa84e35 aarch64: Fix cut-&-pasto in early RA pass [PR112948]
As the PR notes, there was a cut-&-pasto in find_strided_accesses.
I've not been able to find a testcase that shows the problem.

gcc/
	PR target/112948
	* config/aarch64/aarch64-early-ra.cc (find_strided_accesses): Fix
	cut-&-pasto.
2023-12-21 10:20:19 +00:00
Jakub Jelinek
b9dc16cbe2 c++: Enable -Walloc-size and -Wcalloc-transposed-args warnings for C++
The following patch enables the -Walloc-size and -Wcalloc-transposed-args
warnings for C++ as well.

We track just 6 arguments for SIZEOF_EXPR for the calloc purposes
because alloc_size 1,2, 2,3 and 3,4 pairs show up in the wild, so we
need at least 5 to cover those rather than 3, and we don't want to waste
too much compile time/memory on it.
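
A hedged C++ illustration of what the newly enabled warnings diagnose
(simplified, not the committed testcases):

#include <cstdlib>

void *
make (std::size_t n)
{
  long long *p = (long long *) std::malloc (1);    // -Walloc-size: 1 byte is smaller
                                                   // than the pointed-to long long
  int *q = (int *) std::calloc (sizeof (int), n);  // -Wcalloc-transposed-args:
                                                   // sizeof belongs in the second argument
  std::free (p);
  return q;
}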

2023-12-21  Jakub Jelinek  <jakub@redhat.com>

gcc/c-family/
	* c.opt (Walloc-size): Enable also for C++ and ObjC++.
gcc/cp/
	* cp-gimplify.cc (cp_genericize_r): If warn_alloc_size, call
	warn_for_alloc_size for -Walloc-size diagnostics.
	* semantics.cc (finish_call_expr): If warn_calloc_transposed_args,
	call warn_for_calloc for -Wcalloc-transposed-args diagnostics.
gcc/testsuite/
	* g++.dg/warn/Walloc-size-1.C: New test.
	* g++.dg/warn/Wcalloc-transposed-args-1.C: New test.
2023-12-21 11:17:08 +01:00
Jakub Jelinek
0e7f5039c5 ubsan: Add workaround for missing bitint libubsan support for shifts [PR113092]
libubsan still doesn't support bitints, so ubsan contains a workaround and
emits value 0 and TK_Unknown kind for those.  If the shift's second operand
has a large/huge _BitInt type, this results in internal errors in libubsan
though, so the following patch provides a temporary workaround for that:
in the rare case where the last operand has a _BitInt type wider than
__int128 (or long long on 32-bit arches), it will pretend the shift count
has that type, saturated to its range.  IMHO better than crashing in
the library.  If the value fits into the __int128 (or long long) range,
it will be printed correctly (it will just say that it has __int128/long
long type rather than, say, _BitInt(255)); if it doesn't, the user will at
least know that it is a very large negative or very large positive value.

2023-12-21  Jakub Jelinek  <jakub@redhat.com>

	PR sanitizer/113092
	* c-ubsan.cc (ubsan_instrument_shift): Workaround for missing
	ubsan _BitInt support for the shift count.

	* gcc.dg/ubsan/bitint-4.c: New test.
2023-12-21 11:16:02 +01:00
Jakub Jelinek
3d1bdbf64c lower-bitint: Avoid nested casts in muldiv/float operands [PR112941]
Multiplication/division/modulo/float operands are handled by libgcc calls
and so need to be passed as an array of limbs plus a precision argument,
using handle_operand_addr.  That code can't deal with more than one cast,
so the following patch avoids merging those cases.
.MUL_OVERFLOW calls use the same code, but we don't actually try to merge
the operands in that case already.

2023-12-21  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/112941
	* gimple-lower-bitint.cc (gimple_lower_bitint): Disallow merging
	a cast with multiplication, division or conversion to floating point
	if rhs1 of the cast is result of another single use cast in the same
	bb.

	* gcc.dg/bitint-56.c: New test.
	* gcc.dg/bitint-57.c: New test.
2023-12-21 11:13:42 +01:00
chenxiaolong
297ed1ac52 LoongArch: Fix builtin function prototypes for LASX in doc.
gcc/ChangeLog:

	* doc/extend.texi: Fix two problems found in the previously
	submitted documentation: incorrect function return types and the
	use of variable names instead of the actual parameter types.
2023-12-21 17:42:23 +08:00
Jiajie Chen
84ad1b5303 LoongArch: extend.texi: Fix typos in LSX intrinsics
Several typos have been found and fixed: missing semicolons, variable
names used instead of types, duplicate functions and wrong types.

gcc/ChangeLog:

	* doc/extend.texi (__lsx_vabsd_di): Remove extra `i' in name.
	(__lsx_vfrintrm_d, __lsx_vfrintrm_s, __lsx_vfrintrne_d,
	__lsx_vfrintrne_s, __lsx_vfrintrp_d, __lsx_vfrintrp_s, __lsx_vfrintrz_d,
	__lsx_vfrintrz_s): Fix return types.
	(__lsx_vld, __lsx_vldi, __lsx_vldrepl_b, __lsx_vldrepl_d,
	__lsx_vldrepl_h, __lsx_vldrepl_w, __lsx_vmaxi_b, __lsx_vmaxi_d,
	__lsx_vmaxi_h, __lsx_vmaxi_w, __lsx_vmini_b, __lsx_vmini_d,
	__lsx_vmini_h, __lsx_vmini_w, __lsx_vsrani_d_q, __lsx_vsrarni_d_q,
	__lsx_vsrlni_d_q, __lsx_vsrlrni_d_q, __lsx_vssrani_d_q,
	__lsx_vssrarni_d_q, __lsx_vssrarni_du_q, __lsx_vssrlni_d_q,
	__lsx_vssrlrni_du_q, __lsx_vst, __lsx_vstx, __lsx_vssrani_du_q,
	__lsx_vssrlni_du_q, __lsx_vssrlrni_d_q): Add missing semicolon.
	(__lsx_vpickve2gr_bu, __lsx_vpickve2gr_hu): Fix typo in return
	type.
	(__lsx_vstelm_b, __lsx_vstelm_d, __lsx_vstelm_h,
	__lsx_vstelm_w): Use imm type for the last argument.
	(__lsx_vsigncov_b, __lsx_vsigncov_h, __lsx_vsigncov_w,
	__lsx_vsigncov_d): Remove duplicate definitions.
2023-12-21 17:42:07 +08:00