There are two cases:
1. hwasan-poison-optimisation.c is supposed to scan for a call to
__hwasan_tag_mismatch4, but x86 has a different mnemonic (call) from
aarch64 (bl), so adjust the testcase to scan for either call or bl.
2. alloca-outside-caught.c/vararray-outside-caught.c are supposed to
scan for mismatched tags and expect the tag corresponding to the
out-of-bounds memory to be 00, but on x86 the contiguous stack is
allocated to other local variables/arrays which are assigned a
different tag, so there are still mismatches. Adjust the testcases to
scan for XX/XX instead of XX/00.
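One plausible form of the adjusted directive for case 1 (an assumed
sketch, not necessarily the committed pattern):

/* { dg-final { scan-assembler "(call|bl)\t__hwasan_tag_mismatch4" } } */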
gcc/testsuite/ChangeLog:
* c-c++-common/hwasan/alloca-outside-caught.c: Adjust
testcase.
* c-c++-common/hwasan/hwasan-poison-optimisation.c: Ditto.
* c-c++-common/hwasan/vararray-outside-caught.c: Ditto.
Currently, when exporting names from the GMF, or within header modules,
for a set of constrained partial specialisations we only emit the first
one. This is because the 'type_specialization' list only includes a
single specialization per template+argument list; constraints are not
considered here.
The existing code uses a separate 'partial_specializations' list to
track this instead, but currently it's only used for declarations in the
module purview. This patch makes use of this list for all declarations.
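For context, a minimal sketch of the shape that lost specializations
(hypothetical; the committed tests are the concept-9/concept-10 ones):

template<typename T> struct S {};
template<typename T> requires (sizeof(T) > 1) struct S<T*> {};  // emitted
template<typename T> requires (sizeof(T) == 1) struct S<T*> {}; // dropped

Both partial specializations of S<T*> are distinct because they differ
only in constraints, but only the first was streamed out.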
PR c++/113405
gcc/cp/ChangeLog:
* module.cc (set_defining_module): Track partial specialisations
for all declarations.
gcc/testsuite/ChangeLog:
* g++.dg/modules/concept-9.h: New test.
* g++.dg/modules/concept-9_a.C: New test.
* g++.dg/modules/concept-9_b.C: New test.
* g++.dg/modules/concept-10_a.H: New test.
* g++.dg/modules/concept-10_b.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Currently, importing a namespace declaration marks it as imported, and
so marks it as originating from the module that it was imported from.
This is usually harmless, but causes problems with nested namespaces.
In the linked PR, what happens is that the namespace 'A' imported from
the module ends up not being considered when creating the 'A' namespace
within its own TU, and thus it has its 'cp_binding_level' recreated.
However, by this point 'A::B' has already been imported, and so the
'level_chain' member no longer correctly points at 'A's binding level,
so the sanity check for this in 'resume_scope' ICEs.
Since as far as I can tell there's no reason for imported namespaces to
be attached to any specific module (namespace declarations with external
linkage are always attached to the global module by [module.unit] p7.2),
this patch just removes the 'imported' flag, which stops code from
caring about its originating module.
This patch also makes some minor adjustments to existing tests to cater
for the new dumped name.
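A minimal sketch of the failing shape (hypothetical; the committed tests
are namespace-5_{a,b,c}.C):

// interface of module M
export module M;
export namespace A { namespace B { void f (); } }

// importing TU
import M;
namespace A { namespace B { void g (); } }  // reopening A::B hit the
                                            // 'resume_scope' assert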
PR c++/100707
gcc/cp/ChangeLog:
* name-lookup.cc (add_imported_namespace): Don't mark namespaces
as imported.
gcc/testsuite/ChangeLog:
* g++.dg/modules/indirect-1_b.C: Adjust to handle namespaces not
being attached to the module they were imported from.
* g++.dg/modules/indirect-1_c.C: Likewise.
* g++.dg/modules/indirect-2_b.C: Likewise.
* g++.dg/modules/indirect-2_c.C: Likewise.
* g++.dg/modules/indirect-3_b.C: Likewise.
* g++.dg/modules/indirect-3_c.C: Likewise.
* g++.dg/modules/indirect-4_b.C: Likewise.
* g++.dg/modules/indirect-4_c.C: Likewise.
* g++.dg/modules/namespace-5_a.C: New test.
* g++.dg/modules/namespace-5_b.C: New test.
* g++.dg/modules/namespace-5_c.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
I tested my last commit on aarch64, but vect_long_mult was not actually
invoked, and I didn't notice that I was missing a `[` in front of
check_effective_target_aarch64_sve.
When I ran the testsuite on x86_64, I got the failure.
Committed as obvious after testing on x86_64.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp (check_effective_target_vect_long_mult): Fix
small typo for aarch64*-*-*.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
I can't actually find anything in the ISA manual that makes Ztso imply
A. In theory the memory ordering is just a different thing from the set
of available instructions (i.e., Ztso without A would still imply TSO for
loads and stores). It also seems like a configuration that could be
sane to build: without A it's all but impossible to write any meaningful
multi-core code, and TSO is really cheap for a single core.
That said, I think it's kind of reasonable to provide A to users asking
for Ztso. So maybe even if this was a mistake it's the right thing to
do?
gcc/ChangeLog:
* common/config/riscv/riscv-common.cc (riscv_implied_info):
Remove {"ztso", "a"}.
Here we handle the operator expression u < v inconsistently: in a SFINAE
context we accept it, and in a non-SFINAE context we reject it with
error: request for member 'operator<=>' is ambiguous
as per [class.member.lookup]/6. This inconsistency is ultimately
because we neglect to propagate error_mark_node after recursing in
add_operator_candidates; fixed like so.
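A hypothetical reduction of the inconsistent shape (the committed test
is spaceship-sfinae3.C):

#include <compare>
struct B1 { auto operator<=>(const B1&) const = default; };
struct B2 { auto operator<=>(const B2&) const = default; };
struct U : B1, B2 {};
// For 'u < v', looking up the rewritten candidate operator<=> in U is
// ambiguous; this is now diagnosed consistently in both SFINAE and
// non-SFINAE contexts.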
PR c++/113529
gcc/cp/ChangeLog:
* call.cc (add_operator_candidates): Propagate error_mark_node
result after recursing to find rewritten candidates.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/spaceship-sfinae3.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
gcc/fortran/ChangeLog:
PR fortran/113377
* trans-expr.cc (conv_dummy_value): New.
(gfc_conv_procedure_call): Factor code for handling dummy arguments
with the VALUE attribute in the scalar case into conv_dummy_value().
Reuse and adjust for calling elemental procedures.
gcc/testsuite/ChangeLog:
PR fortran/113377
* gfortran.dg/optional_absent_10.f90: New test.
On aarch64, vectorization of `long` multiply can be done if SVE is enabled
or if long is 32-bit (ILP32). It can also be done for constants, but there
is no effective-target test for that just yet.
Built and tested on aarch64-linux-gnu with no regressions (also tested with SVE enabled).
gcc/testsuite/ChangeLog:
PR testsuite/109705
* lib/target-supports.exp (check_effective_target_vect_long_mult):
Fix aarch64*-*-* checks.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
PR 108007 is another manifestation where we rely on DCE to clean up
after IPA-SRA, and if the user explicitly switches DCE off, IPA-SRA
can leave behind statements which are fed uninitialized values and
trap, even though their results are themselves never used.
I have already fixed this for unused parameters in callees; this bug
shows that almost the same thing can happen for removed returns, on
the side of callers. This means that the issue has to be fixed
elsewhere, in call redirection. This patch adds a function which
looks for (and through, using a work-list) uses of operations fed by
specific SSA names and removes them all.
That would have been easy if it wasn't for debug statements during
tree-inline (from which call redirection is also invoked). Debug
statements are decoupled from the rest at this point and iterating
over uses of SSAs does not bring them up. During tree-inline they are
handled specially at the end, I assume in order to make sure that the
relative ordering of UIDs is the same with and without debug info.
This means that during tree-inline we need to make a hash of killed
SSAs, that we already have in copy_body_data, available to the
function doing the purging. So the patch duly also does that, making
the interface slightly ugly. Moreover, all newly unused SSA names
need to be freed and as PR 112616 showed, it must be done in a defined
order, which is what newly added ipa_release_ssas_in_hash does.
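A hypothetical sketch of the caller-side problem (the committed tests
are pr108007.c and pr112616.c); compile with -O2 -fno-tree-dce:

static int __attribute__((noinline))
callee (int x)
{
  return x;
}

void
caller (int x)
{
  int r = callee (x);
  int t = 100 / r;  /* t is never used, so IPA-SRA removes callee's
                       return value; with DCE off, the division remains,
                       is fed an uninitialized value and can trap.  */
}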
gcc/ChangeLog:
2024-01-12 Martin Jambor <mjambor@suse.cz>
PR ipa/108007
PR ipa/112616
* cgraph.h (cgraph_edge): Add a parameter to
redirect_call_stmt_to_callee.
* ipa-param-manipulation.h (ipa_param_adjustments): Add a
parameter to modify_call.
(ipa_release_ssas_in_hash): Declare.
* cgraph.cc (cgraph_edge::redirect_call_stmt_to_callee): New
parameter killed_ssas, pass it to padjs->modify_call.
* ipa-param-manipulation.cc (purge_all_uses): New function.
(ipa_param_adjustments::modify_call): New parameter killed_ssas.
Instead of substituting uses, invoke purge_all_uses. If
hash of killed SSAs has not been provided, create a temporary one
and release SSAs that have been added to it.
(compare_ssa_versions): New function.
(ipa_release_ssas_in_hash): Likewise.
* tree-inline.cc (redirect_all_calls): Create
id->killed_new_ssa_names earlier, pass it to edge redirection,
adjust a comment.
(copy_body): Release SSAs in id->killed_new_ssa_names.
gcc/testsuite/ChangeLog:
2024-01-15 Martin Jambor <mjambor@suse.cz>
PR ipa/108007
PR ipa/112616
* gcc.dg/ipa/pr108007.c: New test.
* gcc.dg/ipa/pr112616.c: Likewise.
The problem here is that the builtin apply mechanism thinks the FP registers
are to be used because get_raw_arg_mode does not return VOIDmode. This
fixes that oversight: the backend now returns VOIDmode for non-general-regs
if TARGET_GENERAL_REGS_ONLY is true.
Built and tested for aarch64-linux-gnu with no regressions.
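A hypothetical reduction (the committed test is builtin_apply-1.c);
compile with -mgeneral-regs-only:

int bar (int x) { return x; }

void
foo (int x)
{
  /* The untyped-call machinery previously saved and restored FP
     registers here even though only general registers are available.  */
  __builtin_apply ((void (*) ()) bar, __builtin_apply_args (), 32);
}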
PR target/113486
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_get_reg_raw_mode): For
TARGET_GENERAL_REGS_ONLY, return VOIDmode for non-GP_REGNUM_P regno.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/builtin_apply-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
match.pd transforms (zero_one == 0) ? y : z <op> y into
((typeof(y))zero_one * z) <op> y. Add splitters to recognize
this expression and generate SFB instructions.
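The kind of source shape involved, as a hypothetical sketch:

long
f (long cond, long y, long z)
{
  long bit = cond & 1;             /* zero_one */
  return bit == 0 ? y : (z ^ y);   /* match.pd rewrites this into
                                      (bit * z) ^ y */
}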
gcc/ChangeLog:
PR target/113095
* config/riscv/sfb.md: New splitters to rewrite single bit
sign extension as the condition to SFB instructions.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sfb.c: New test.
* gcc.target/riscv/pr113095.c: New test.
-falign-functions is ignored in cold code, since it is an optimization intended to
improve instruction prefetch. In some cases it is necessary to force alignment for
all functions, so this patch adds -fmin-function-alignment for this purpose.
gcc/ChangeLog:
PR middle-end/88345
* common.opt: (flimit-function-alignment): Reorder alphabetically.
(fmin-function-alignment): New parameter.
* doc/invoke.texi: (-fmin-function-alignment): Document.
(-falign-functions,-falign-loops,-falign-labels): Mention that
alignments are ignored in cold code.
* varasm.cc (assemble_start_function): Handle min-function-alignment.
As suggested in the ticket, this replaces the expansion by converting the
Advanced SIMD types to SVE types and simply printing out an SVE register for
these instructions.
This fixes the subreg issues since there are no subregs involved anymore.
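A sketch of the kind of code affected (hypothetical; the committed tests
are the pr109636 ones):

typedef int v4si __attribute__ ((vector_size (16)));

v4si
f (v4si a, v4si b)
{
  /* With SVE available this can now be emitted as an SVE sdiv operating
     on the low 128 bits of an SVE register, avoiding subregs.  */
  return a / b;
}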
gcc/ChangeLog:
PR target/109636
* config/aarch64/aarch64-simd.md (<su_optab>div<mode>3,
mulv2di3): Remove.
* config/aarch64/iterators.md (VQDIV): Remove.
(SVE_FULL_SDI_SIMD, SVE_FULL_HSDI_SIMD_DI,
SVE_I_SIMD_DI): New.
(VPRED, sve_lane_con): Add V4SI and V2DI.
* config/aarch64/aarch64-sve.md (<optab><mode>3,
@aarch64_pred_<optab><mode>): Support Advanced SIMD types.
(mul<mode>3): New, split from <optab><mode>3.
(@aarch64_pred_<optab><mode>, *post_ra_<optab><mode>3): New.
* config/aarch64/aarch64-sve2.md (@aarch64_mul_lane_<mode>,
*aarch64_mul_unpredicated_<mode>): Change SVE_FULL_HSDI to
SVE_FULL_HSDI_SIMD_DI.
gcc/testsuite/ChangeLog:
PR target/109636
* gcc.target/aarch64/sve/pr109636_1.c: New test.
* gcc.target/aarch64/sve/pr109636_2.c: New test.
* gcc.target/aarch64/sve2/pr109636_1.c: New test.
The AArch64 vector PCS does not allow simd calls with simdlen 1;
however, due to a bug, we currently do allow it for num == 0.
This causes us to emit a symbol that doesn't exist, and we fail to link.
gcc/ChangeLog:
PR tree-optimization/113552
* config/aarch64/aarch64.cc
(aarch64_simd_clone_compute_vecsize_and_simdlen): Block simdlen 1.
gcc/testsuite/ChangeLog:
PR tree-optimization/113552
* gcc.target/aarch64/pr113552.c: New test.
* gcc.target/aarch64/simd_pcs_attribute-3.c: Remove bogus check.
When the check for exceeding the param_ipa_cp_value_list_size limit was
modified to be ignored when generating values from self-recursive
calls, it should have been changed from "is equal to" to "is equal to
or greater than". This omission manifests itself as PR 113490.
When I examined the condition I also noticed that the parameter should
come from the callee rather than the caller, since the value list is
associated with the former and not the latter. In practice the limit
is of course very likely to be the same, but I fixed this aspect of
the condition too. I briefly audited all other uses of opt_for_fn in
ipa-cp.cc and all the others looked OK.
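An illustrative sketch of the corrected check (not the literal source):

/* Bail out once the number of values reaches the limit, taking the
   parameter from the callee rather than the caller.  */
if (val_count >= opt_for_fn (cs->callee->decl,
                             param_ipa_cp_value_list_size))
  return false;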
gcc/ChangeLog:
2024-01-19 Martin Jambor <mjambor@suse.cz>
PR ipa/113490
* ipa-cp.cc (ipcp_lattice<valtype>::add_value): Bail out if value
count is equal or greater than the limit. Use the limit from the
callee.
gcc/testsuite/ChangeLog:
2024-01-22 Martin Jambor <mjambor@suse.cz>
PR ipa/113490
* gcc.dg/ipa/pr113490.c: New test.
PR analyzer/112927 reports a false positive from -Wanalyzer-tainted-size
seen on the Linux kernel's drivers/char/ipmi/ipmi_devintf.c with the
analyzer kernel plugin.
The issue is that in:
(A):
if (msg->data_len > 272) {
return -90;
}
(B):
n = msg->data_len;
__check_object_size(to, n);
n = copy_from_user(to, from, n);
the analyzer is treating __check_object_size as having arbitrary side
effects and, in particular, as possibly modifying msg->data_len. Hence the
sanitization that occurs at (A) above is treated as being for a
different value than the size obtained at (B), hence the bogus warning
at the call to copy_from_user.
Fixed by extending the analyzer kernel plugin to "teach" it that
__check_object_size has no side effects.
gcc/testsuite/ChangeLog:
PR analyzer/112927
* gcc.dg/plugin/analyzer_kernel_plugin.c
(class known_function___check_object_size): New.
(kernel_analyzer_init_cb): Register it.
* gcc.dg/plugin/plugin.exp: Add taint-pr112927.c.
* gcc.dg/plugin/taint-pr112927.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This patch fixes a bug in gcc/m2/gm2-libs/FIO.mod which failed to cast the
position parameter for lseek into the correct type. The patch casts the
pos parameter to SYSTEM.CSSIZE_T.
gcc/m2/ChangeLog:
PR modula2/113559
* gm2-libs/FIO.mod (SetPositionFromBeginning): Convert pos into
CSSIZE_T during call to lseek.
(SetPositionFromEnd): Convert pos into CSSIZE_T during call to
lseek.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
A couple of gcc.dg/vect/vect-simd-clone-1*.c tests FAIL on 32-bit
Solaris/x86 since 20230222:
FAIL: gcc.dg/vect/vect-simd-clone-16c.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
FAIL: gcc.dg/vect/vect-simd-clone-16d.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
FAIL: gcc.dg/vect/vect-simd-clone-17c.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
FAIL: gcc.dg/vect/vect-simd-clone-17d.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
FAIL: gcc.dg/vect/vect-simd-clone-18c.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
FAIL: gcc.dg/vect/vect-simd-clone-18d.c scan-tree-dump-times vect
"[\\\\n\\\\r] [^\\\\n]* = foo\\\\.simdclone" 2
The problem is that the 32-bit Solaris/x86 triple still uses i386,
although gcc defaults to -mpentium4. However, the tests only handle
x86_64* and i686*, even though they don't seem to require any
specific ISA extension not covered by vect_simd_clones.
To fix this, the tests now allow generic i?86. At the same time, I've
removed the wildcards from x86_64* and i686* since DejaGnu uses the
canonical forms.
Tested on i386-pc-solaris2.11 and i686-pc-linux-gnu.
2024-01-24 Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
gcc/testsuite:
PR target/113556
* gcc.dg/vect/vect-simd-clone-16c.c: Don't wildcard x86_64 in
target specs. Allow any i?86 target instead of i686 only.
* gcc.dg/vect/vect-simd-clone-16d.c: Likewise.
* gcc.dg/vect/vect-simd-clone-17c.c: Likewise.
* gcc.dg/vect/vect-simd-clone-17d.c: Likewise.
* gcc.dg/vect/vect-simd-clone-18c.c: Likewise.
* gcc.dg/vect/vect-simd-clone-18d.c: Likewise.
gcc.target/i386/pr80833-1.c FAILs on 32-bit Solaris/x86 since 20220609:
FAIL: gcc.target/i386/pr80833-1.c scan-assembler pextrd
Unlike e.g. Linux/i686, 32-bit Solaris/x86 defaults to -mstackrealign,
so this patch overrides that to match.
Tested on i386-pc-solaris2.11 and i686-pc-linux-gnu.
2024-01-23 Rainer Orth <ro@CeBiTec.Uni-Bielefeld.DE>
gcc/testsuite:
* gcc.target/i386/pr80833-1.c: Add -mno-stackrealign to dg-options.
GAS has supported explicit relocs since 2001, and %pcrel_hi/low were
introduced in 2014. In the future, we may introduce more.
Let's convert the -mexplicit-relocs option to accept the values:
none, base, pcrel.
We also update gcc/configure.ac to set the option's default based on
the gas support detected when GCC itself is built.
gcc
* configure.ac: Detect the explicit relocs support for
mips, and define C macro MIPS_EXPLICIT_RELOCS.
* config.in: Regenerated.
* configure: Regenerated.
* doc/invoke.texi (MIPS Options): Add -mexplicit-relocs.
* config/mips/mips-opts.h: Define enum mips_explicit_relocs.
* config/mips/mips.cc (mips_set_compression_mode): Sorry if
!TARGET_EXPLICIT_RELOCS instead of just setting it.
* config/mips/mips.h: Define TARGET_EXPLICIT_RELOCS and
TARGET_EXPLICIT_RELOCS_PCREL with mips_opt_explicit_relocs.
* config/mips/mips.opt: Introduce -mexplicit-relocs= option
and define -m(no-)explicit-relocs as aliases.
Since, to the best of my knowledge, all reported regressions related to
the ldp/stp fusion pass have now been fixed, and PGO+LTO bootstrap with
--enable-languages=all is working again with the passes enabled, this
patch turns the passes back on by default, as agreed with Jakub here:
https://gcc.gnu.org/pipermail/gcc-patches/2024-January/642478.html
gcc/ChangeLog:
* config/aarch64/aarch64.opt (-mearly-ldp-fusion): Set default
to 1.
(-mlate-ldp-fusion): Likewise.
This renames main_exit_p to last_val_reduc_p to more accurately
reflect what the value is calculating.
gcc/ChangeLog:
* tree-vect-loop.cc (vect_get_vect_def,
vect_create_epilog_for_reduction): Rename main_exit_p to
last_val_reduc_p.
This fixes a bug where vect_create_epilog_for_reduction does not handle the
case where all exits are early exits. In this case we should do as the
induction handling code does and not have a main exit.
This shows that some new miscompiles are happening (stage3 is likely miscompiled)
but that's unrelated to this patch and I'll look at it next.
gcc/ChangeLog:
PR tree-optimization/113364
* tree-vect-loop.cc (vect_create_epilog_for_reduction): If all
exits are early exits then we must reduce from the first offset
for all of them.
gcc/testsuite/ChangeLog:
PR tree-optimization/113364
* gcc.dg/vect/vect-early-break_107-pr113364.c: New test.
When removing the first node of a bucket it is useless to check if this bucket
is the one containing the _M_before_begin node. The bucket's before-begin node is
already transferred to the next pointed-to bucket regardless of whether it is the
container's before-begin node.
libstdc++-v3/ChangeLog:
* include/bits/hashtable.h (_Hashtable<>::_M_remove_bucket_begin): Remove
_M_before_begin check and cleanup implementation.
Co-authored-by: Théo Papadopoulo <papadopoulo@gmail.com>
The reduced testcase for pr113429 (cam4 failure) needed additional
modules so it wasn't committed.
The fuzzer found a C testcase that was also fixed by pr113429's fix.
Add it as a regression test.
PR target/113429
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/vsetvl/pr113429.c: New test.
Signed-off-by: Patrick O'Neill <patrick@rivosinc.com>
The SPEC 2017 wrf benchmark exposes unreasonable memory usage in the VSETVL
PASS; that is, the VSETVL PASS consumes over 33 GB of memory, which makes it
impossible to compile SPEC 2017 wrf on a laptop.
The root cause is these memory-wasting variables:
unsigned num_exprs = num_bbs * num_regs;
sbitmap *avl_def_loc = sbitmap_vector_alloc (num_bbs, num_exprs);
sbitmap *m_kill = sbitmap_vector_alloc (num_bbs, num_exprs);
m_avl_def_in = sbitmap_vector_alloc (num_bbs, num_exprs);
m_avl_def_out = sbitmap_vector_alloc (num_bbs, num_exprs);
Each of these vectors holds num_bbs * num_bbs * num_regs bits, so memory
grows quadratically with the number of basic blocks. I found that
compute_avl_def_data can be achieved with the RTL_SSA framework, so
replace the implementation with one based on RTL_SSA.
After this patch, the memory-hog issue is fixed.
simple vsetvl memory usage (valgrind --tool=massif --pages-as-heap=yes --massif-out-file=massif.out)
is 1.673 GB.
lazy vsetvl memory usage (valgrind --tool=massif --pages-as-heap=yes --massif-out-file=massif.out)
is 2.441 GB.
Tested on both RV32 and RV64, no regression.
gcc/ChangeLog:
PR target/113495
* config/riscv/riscv-vsetvl.cc (get_expr_id): Remove.
(get_regno): Ditto.
(get_bb_index): Ditto.
(pre_vsetvl::compute_avl_def_data): Ditto.
(pre_vsetvl::earliest_fuse_vsetvl_info): Fix large memory usage.
(pre_vsetvl::pre_global_vsetvl_info): Ditto.
gcc/testsuite/ChangeLog:
PR target/113495
* gcc.target/riscv/rvv/vsetvl/avl_single-107.c: Adapt test.
This disables the new test added by r14-8168 on machines that don't have
TLS support, such as bare-metal ARM.
gcc/testsuite/ChangeLog:
* g++.dg/modules/pr113292_c.C: Require TLS.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
-Wdangling-reference checks if a function receives a temporary as its
argument, and only warns if any of the arguments was a temporary. But
we should not warn when the temporary represents a lambda, or we generate
false positives as in the attached testcases.
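A hypothetical sketch of the false-positive shape (the committed tests
are Wdangling-reference14.C through Wdangling-reference16.C):

#include <algorithm>

int
smallest (const int &x, const int &y)
{
  /* The lambda argument is a temporary, which used to trigger the
     warning, but the returned reference binds to x or y and does not
     dangle.  */
  const int &r = std::min (x, y, [] (int a, int b) { return a < b; });
  return r;
}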
PR c++/113256
PR c++/111607
PR c++/109640
gcc/cp/ChangeLog:
* call.cc (do_warn_dangling_reference): Don't warn if the temporary
is of lambda type.
gcc/testsuite/ChangeLog:
* g++.dg/warn/Wdangling-reference14.C: New test.
* g++.dg/warn/Wdangling-reference15.C: New test.
* g++.dg/warn/Wdangling-reference16.C: New test.
As the following testcase shows, I forgot to call c_fully_fold on the
operands of __atomic_*/__sync_* calls on a _BitInt address; the expressions
are then used inside of TARGET_EXPR initializers etc. and are never fully
folded later, which means we can ICE e.g. on C_MAYBE_CONST_EXPR trees
inside of those.
The following patch fixes it, while the function currently is only called
in the C FE because C++ doesn't support BITINT_TYPE, I think guarding the
calls on !c_dialect_cxx () is safer.
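A minimal sketch of the kind of call involved (hypothetical; the
committed test is gcc.dg/bitint-77.c):

_BitInt(256) v;

void
f (_BitInt(256) x)
{
  /* Lowered via a compare-and-swap loop; its operands must be fully
     folded first.  */
  __atomic_fetch_add (&v, x, __ATOMIC_SEQ_CST);
}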
2024-01-23 Jakub Jelinek <jakub@redhat.com>
PR c/113518
gcc/c-family/ChangeLog:
* c-common.cc (atomic_bitint_fetch_using_cas_loop): Call c_fully_fold
on lhs_addr, val and model for C.
gcc/testsuite/ChangeLog:
* gcc.dg/bitint-77.c: New test.
Ccmp is not used if the result of the and/ior is used by both
a GIMPLE_COND and a GIMPLE_ASSIGN. This improves the code generation
here by using ccmp in this case.
Two changes are required: first, we need to allow the outer statement's
result to be used more than once.
The second change is that during expansion of the gimple, we need
to try using ccmp. This is needed because we don't expand the SSA
name of the lhs but rather expand directly from the gimple.
A small note on the ccmp_4.c testcase: we should be able to get slightly
better than with this patch but it is one extra instruction compared to
before.
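A hypothetical sketch of the shape that can now use ccmp (the committed
tests are ccmp_3.c through ccmp_5.c):

int g (int);

int
f (int a, int b)
{
  int t = (a > 0) & (b < 10);  /* the and of two compares is used both
                                  by the branch below (GIMPLE_COND) and
                                  as a value (GIMPLE_ASSIGN) */
  if (t)
    return g (a);
  return t;
}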
PR target/100942
gcc/ChangeLog:
* ccmp.cc (ccmp_candidate_p): Add outer argument.
Allow if the outer is true and the lhs is used more
than once.
(expand_ccmp_expr): Update call to ccmp_candidate_p.
* expr.h (expand_expr_real_gassign): Declare.
* expr.cc (expand_expr_real_gassign): New function, split out from...
(expand_expr_real_1): ...here.
* cfgexpand.cc (expand_gimple_stmt_1): Use expand_expr_real_gassign.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/ccmp_3.c: New test.
* gcc.target/aarch64/ccmp_4.c: New test.
* gcc.target/aarch64/ccmp_5.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Co-Authored-By: Richard Sandiford <richard.sandiford@arm.com>
As the PR shows, we were missing code to update debug uses in the
load/store pair fusion pass. This patch fixes that.
The patch tries to give a complete treatment of the debug uses that will
be affected by the changes we make, and in particular makes an effort to
preserve debug info where possible, e.g. when re-ordering an update of
a base register by a constant over a debug use of that register. When
re-ordering loads over a debug use of a transfer register, we reset the
debug insn. Likewise when re-ordering stores over debug uses of mem.
While doing this I noticed that try_promote_writeback used a strange
choice of move_range for the pair insn, in that it chose the previous
nondebug insn instead of the insn itself. Since the insn is being
changed, these move ranges are equivalent (at least in terms of nondebug
insn placement as far as RTL-SSA is concerned), but I think it is more
natural to choose the pair insn itself. This is needed to avoid
incorrectly updating some debug uses.
gcc/ChangeLog:
PR target/113089
* config/aarch64/aarch64-ldp-fusion.cc (reset_debug_use): New.
(fixup_debug_use): New.
(fixup_debug_uses_trailing_add): New.
(fixup_debug_uses): New. Use it ...
(ldp_bb_info::fuse_pair): ... here.
(try_promote_writeback): Call fixup_debug_uses_trailing_add to
fix up debug uses of the base register that are affected by
folding in the trailing add insn.
gcc/testsuite/ChangeLog:
PR target/113089
* gcc.c-torture/compile/pr113089.c: New test.
While working on PR113089, I realised we were missing code to re-parent
trailing nondebug uses of the base register in the case of cancelling
writeback in the load/store pair pass. This patch fixes that.
gcc/ChangeLog:
PR target/113089
* config/aarch64/aarch64-ldp-fusion.cc (ldp_bb_info::fuse_pair):
Update trailing nondebug uses of the base register in the case
of cancelling writeback.
This patch adds some accessors to set_info and use_info to make it
easier to get at and iterate through uses in debug insns.
It is used by the aarch64 load/store pair fusion pass in a subsequent
patch to fix PR113089, i.e. to update debug uses in the pass.
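A sketch of the intended usage (hypothetical; fixup_debug_use stands in
for the pass's handling):

// Walk all uses of a set_info that occur in debug insns.
for (use_info *use : set->debug_insn_uses ())
  fixup_debug_use (use);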
gcc/ChangeLog:
PR target/113089
* rtl-ssa/accesses.h (use_info::next_debug_insn_use): New.
(debug_insn_use_iterator): New.
(set_info::first_debug_insn_use): New.
(set_info::debug_insn_uses): New.
* rtl-ssa/member-fns.inl (use_info::next_debug_insn_use): New.
(set_info::first_debug_insn_use): New.
(set_info::debug_insn_uses): New.
For the testcase in the PR, we try to pair insns where the first has
writeback and the second uses the updated base register. This causes us
to record a hazard against the second insn, thus narrowing the move
range away from the end of the BB.
However, it isn't meaningful to record hazards against the other insn
in the pair, as this doesn't change which pairs can be formed, and also
doesn't change where the pair is formed (from the perspective of
nondebug insns).
To see why this is the case, consider the two cases:
- Suppose we are finding hazards for insns[0]. If we record a hazard
against insns[1], then range.last becomes
insns[1]->prev_nondebug_insn (), but note that this is equivalent to
inserting after insns[1] (since insns[1] is being changed).
- Now consider finding hazards for insns[1]. Suppose we record
insns[0] as a hazard. Then we set range.first = insns[0], which is a
no-op.
As such, it seems better to never record hazards against the other insn
in the pair, as we check whether the insns themselves are suitable for
combination separately (e.g. for ldp checking that they use distinct
transfer registers). Avoiding unnecessarily narrowing the move range
avoids unnecessarily re-ordering over debug insns.
This should also mean that we can only narrow the move range away from
the end of the BB in the case that we record a hazard for insns[0]
against insns[1]->prev_nondebug_insn () or earlier. This means that for
the non-call-exceptions case, either the move range includes insns[1],
or we reject the pair (thus the assert tripped in the PR should always
hold).
gcc/ChangeLog:
PR target/113356
* config/aarch64/aarch64-ldp-fusion.cc (ldp_bb_info::try_fuse_pair):
Don't record hazards against the opposite insn in the pair.
gcc/testsuite/ChangeLog:
PR target/113356
* gcc.target/aarch64/pr113356.C: New test.
When building GCC with --enable-default-ssp, the stack protector is
enabled for got-load.C, causing additional GOT loads for
__stack_chk_guard. So mem/u will be matched more than 2 times and the
test will fail.
Disable stack protector to fix this issue.
gcc/testsuite:
* g++.target/loongarch/got-load.C (dg-options): Add
-fno-stack-protector.
A recent change
(https://gcc.gnu.org/pipermail/gcc-cvs/2023-December/394915.html)
added generic SME support using the `.hidden`, `.type`, and `.size`
pseudo-ops in the assembly sources, but `aarch64-w64-mingw32` does not
support those pseudo-ops. This patch wraps the uses of those
pseudo-ops in macros and ifdefs them on the `__ELF__` define.
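A sketch of the kind of wrapper involved (illustrative; see
aarch64-asm.h for the actual definitions):

#ifdef __ELF__
# define HIDDEN(name) .hidden name
# define SYMBOL_SIZE(name) .size name, . - name
# define SYMBOL_TYPE(name, _type) .type name, _type
#else
# define HIDDEN(name)
# define SYMBOL_SIZE(name)
# define SYMBOL_TYPE(name, _type)
#endif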
libgcc/
* config/aarch64/aarch64-asm.h (HIDDEN, SYMBOL_SIZE, SYMBOL_TYPE)
(ENTRY_ALIGN, GNU_PROPERTY): New macros.
* config/aarch64/__arm_sme_state.S: Use them.
* config/aarch64/__arm_tpidr2_save.S: Likewise.
* config/aarch64/__arm_za_disable.S: Likewise.
* config/aarch64/crti.S: Likewise.
* config/aarch64/lse.S: Likewise.
Fix ia32 test failure:
FAIL: gcc.dg/torture/pr113255.c -O1 (test for excess errors)
Excess errors:
cc1: error: '-mstringop-strategy=rep_8byte' not supported for 32-bit code
PR rtl-optimization/113255
* gcc.dg/torture/pr113255.c (dg-additional-options): Add only
if not ia32.
Fix the m2 build warning and error:
[...]
../../src/gcc/m2/mc/mc.flex:32:9: warning: "alloca" redefined
32 | #define alloca __builtin_alloca
| ^~~~~~
In file included from /usr/include/stdlib.h:587,
from <stdout>:22:
/usr/include/alloca.h:35:10: note: this is the location of the previous definition
35 | # define alloca(size) __builtin_alloca (size)
| ^~~~~~
../../src/gcc/m2/mc/mc.flex: In function 'handleDate':
../../src/gcc/m2/mc/mc.flex:333:25: error: passing argument 1 of 'time' from incompatible point
er type [-Wincompatible-pointer-types]
333 | time_t clock = time ((long *)0);
| ^~~~~~~~~
| |
| long int *
In file included from ../../src/gcc/m2/mc/mc.flex:28:
/usr/include/time.h:76:29: note: expected 'time_t *' {aka 'long long int *'} but argument is of
type 'long int *'
76 | extern time_t time (time_t *__timer) __THROW;
gcc/m2/ChangeLog:
PR bootstrap/113554
* mc/mc.flex (alloca): Don't redefine.
(handleDate): Replace (long *)0 with (time_t *)0 when calling
time.