Since we already set scalable vectorization by default, this flag is redundant.
Also, we are starting full-coverage testing with different compile options,
e.g. --param=riscv-autovec-preference=fixed-vlmax.
To avoid compile-option confusion, remove it.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp: Remove scalable compile option.
While filing a clang request to return 18 on _BitInts for
__builtin_classify_type instead of the -1 they currently return, I've
noticed that we return -1 for vector types.  Initially I wanted to change
the behavior just for the __builtin_classify_type (type) form, as that is
new in GCC 14 and we've returned -1 for __builtin_classify_type on vector
expressions for 20+ years, but I was convinced otherwise, so this changes
the behavior even for that case and now returns 19.
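For illustration, a hedged sketch in the spirit of the builtin-classify-type-1.c
additions (not the test verbatim):

  typedef int v4si __attribute__ ((vector_size (16)));

  int
  main (void)
  {
    v4si v = { 1, 2, 3, 4 };
    /* Both the expression form and the GCC 14 type-name form now return
       19 (vector_type_class) instead of -1.  */
    if (__builtin_classify_type (v) != 19)
      __builtin_abort ();
    if (__builtin_classify_type (v4si) != 19)
      __builtin_abort ();
    return 0;
  }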
2023-11-20 Jakub Jelinek <jakub@redhat.com>
gcc/
* typeclass.h (enum type_class): Add vector_type_class.
* builtins.cc (type_to_class): Return vector_type_class for
VECTOR_TYPE.
* doc/extend.texi (__builtin_classify_type): Mention bit-precise
integer types and vector types.
gcc/testsuite/
* c-c++-common/builtin-classify-type-1.c (main): Add tests for vector
types.
In order to handle masks properly for conditional operations, this patch
teaches vect_recog_mask_conversion_pattern to also handle conditional
operations.  Now we convert e.g.
_mask = *_6;
_ifc123 = COND_OP (_mask, ...);
into
_mask = *_6;
patt200 = (<signed-boolean:1>) _mask;
patt201 = COND_OP (patt200, ...);
This way the mask will be properly recognized as a boolean mask and the
correct vector mask will be generated.
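For reference, a hedged C++ sketch of the kind of source-level conditional
operation involved (the actual reproducer is Fortran, gfortran.dg/pr112406.f90;
the function below is illustrative only):

  void
  cond_add (int n, const int *cond, const double *a, double *out)
  {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
      if (cond[i])      /* loaded mask: "_mask = *_6" in the example above */
        sum += a[i];    /* if-converted into a COND_OP guarded by the mask */
    *out = sum;
  }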
gcc/ChangeLog:
PR middle-end/112406
* tree-vect-patterns.cc (vect_recog_mask_conversion_pattern):
Convert masks for conditional operations as well.
gcc/testsuite/ChangeLog:
* gfortran.dg/pr112406.f90: New test.
On Fri, Nov 17, 2023 at 03:01:04PM +0100, Jakub Jelinek wrote:
> As a follow-up, I'm considering changing in this routine the popcount
> call to IFN_POPCOUNT with 2 arguments and during expansion test costs.
Here is the follow-up which does the rtx costs testing.
2023-11-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/90693
* tree-ssa-math-opts.cc (match_single_bit_test): Mark POPCOUNT with
result only used in equality comparison against 1 with direct optab
support as .POPCOUNT call with 2 arguments.
* internal-fn.h (expand_POPCOUNT): Declare.
* internal-fn.def (DEF_INTERNAL_INT_EXT_FN): New macro, document it,
undefine at the end.
(POPCOUNT): Use it instead of DEF_INTERNAL_INT_FN.
* internal-fn.cc (DEF_INTERNAL_INT_EXT_FN): Define to nothing before
inclusion to define expanders.
(expand_POPCOUNT): New function.
Per the earlier discussions on this PR, the following patch folds
popcount (x) == 1 (and != 1) into (x ^ (x - 1)) > x - 1 (or <=)
if the corresponding popcount optab isn't implemented (I think any
double-word popcount or call will necessarily be slower than the
above cheap 3-op check, and even for -Os it is larger or the same size).
I've noticed e.g. C++ aligned new starts with std::has_single_bit
which does popcount (x) == 1.
As a follow-up, I'm considering changing in this routine the popcount
call to IFN_POPCOUNT with 2 arguments and during expansion test costs.
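A quick sanity check of the equivalence (illustrative only; is_single_bit is a
made-up helper, not code from the patch):

  #include <assert.h>

  static int
  is_single_bit (unsigned int x)
  {
    /* x ^ (x - 1) sets all bits up to and including the lowest set bit,
       so it exceeds x - 1 exactly when x is a power of two.  */
    return (x ^ (x - 1)) > x - 1;
  }

  int
  main (void)
  {
    for (unsigned int x = 0; x < 100000; x++)
      assert (is_single_bit (x) == (__builtin_popcount (x) == 1));
    return 0;
  }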
2023-11-20 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/90693
* tree-ssa-math-opts.cc (match_single_bit_test): New function.
(math_opts_dom_walker::after_dom_children): Call it for EQ_EXPR
and NE_EXPR assignments and GIMPLE_CONDs.
* gcc.target/i386/pr90693.c: New test.
I have noticed we are inconsistent: some DEF_INTERNAL*
macros (most of them) were undefined at the end of internal-fn.def (but in
some cases uselessly undefined again after inclusion), while others were not
(and were sometimes undefined after the inclusion).  I've changed it to always
undefine them at the end of internal-fn.def.
2023-11-20 Jakub Jelinek <jakub@redhat.com>
* internal-fn.def: Document missing DEF_INTERNAL* macros and make sure
they are all undefined at the end.
* internal-fn.cc (lookup_hilo_internal_fn, lookup_evenodd_internal_fn,
widening_fn_p, get_len_internal_fn): Don't undef DEF_INTERNAL_*FN
macros after inclusion of internal-fn.def.
I got spurious fails of tests that required arm_thumb1_movt_ok on a
target cpu that did not support movt. Looking into it, I found the
arm_movt property to have been cut&pasted into other procs that
checked for different properties. They shouldn't share the same test
results cache entry, so I'm changing their prop names. Or rather its
prop name, because the other occurrence was already fixed recently.
for gcc/testsuite/ChangeLog
* lib/target-supports.exp
(check_effective_target_arm_thumb1_cbz_ok): Fix prop name
cut&pasto.
On targets that have -fshort-enums enabled by default, the type casts
in the pr108251 analyzer tests warn that the byte-aligned enums may
not be sufficiently aligned to be a struct connection *.  The function
can't know better; the warning is reasonable, and the code doesn't
expect enums to be shorter and less aligned than the struct.
Rather than use -fno-short-enums, I decided to embrace the warning on
targets that have short_enums enabled by default.
However, C++ doesn't issue the warning, because even with
-fshort-enums, enumeration types are not TYPE_PACKED, and the
expression is not sufficiently simplified by the C++ front-end for
check_and_warn_address_or_pointer_of_packed_member to identify the
insufficiently aligned pointer. So don't expect the warning there.
for gcc/testsuite/ChangeLog
* c-c++-common/analyzer/null-deref-pr108251-smp_fetch_ssl_fc_has_early-O2.c:
Expect "unaligned pointer value" warning on short_enums
targets, but not in c++.
* c-c++-common/analyzer/null-deref-pr108251-smp_fetch_ssl_fc_has_early.c:
Likewise.
I've recently patched scev-3.c and scev-5.c because they only passed by
accident on ia32.  They also fail on some (but not all) arm-eabi
variants.  It seems hard to characterize the conditions in which the
optimization is supposed to pass, but expecting them to fail on ilp32
targets, though probably a little excessive and possibly noisy, is not
quite as alarming as getting a fail in test reports, so I propose
changing the xfail marker from ia32 to ilp32.
I'm also proposing to add a similar marker to scev-4.c. Though it
doesn't appear to be failing for me, I've got reports that suggest it
still does for others, and it certainly did for us as well.
for gcc/testsuite/ChangeLog
* gcc.dg/tree-ssa/scev-3.c: xfail on all ilp32 targets,
though some of these do pass.
* gcc.dg/tree-ssa/scev-4.c: Likewise.
* gcc.dg/tree-ssa/scev-5.c: Likewise.
There should never be a reason to compare more than one level of template
parameters; additional levels are for the enclosing context, which is either
irrelevant (for a template template parameter) or already compared (for a
member template).
Also, the comp_template_parms handling of type parameters was wrongly
checking for TEMPLATE_TYPE_PARM when a type parameter appears here as a
TYPE_DECL.
gcc/cp/ChangeLog:
* pt.cc (comp_template_parms): Just one level.
(template_parameter_lists_equivalent_p): Likewise.
Let's use a more informative name instead of DECL_VIRTUAL_P directly.
gcc/cp/ChangeLog:
* cp-tree.h (DECL_TEMPLATE_PARM_CHECK): New.
(DECL_IMPLICIT_TEMPLATE_PARM_P): New.
(decl_template_parm_check): New.
* mangle.cc (write_closure_template_head): Use it.
* parser.cc (synthesize_implicit_template_parm): Likewise.
* pt.cc (template_parameters_equivalent_p): Likewise.
The x86 backend supports reduc_{and,ior,xor}_scal_m for vector integer
modes.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp (vect_logical_reduc): Add i?86-*-*
and x86_64-*-*.
The BB vectorizer relies on backend support for
.REDUC_{PLUS,IOR,XOR,AND} to vectorize reductions.
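For instance, a hedged sketch of a basic-block reduction the BB vectorizer can
now map to the new expanders (the pr112325 tests differ in detail):

  int
  and4 (const int *p)
  {
    /* A straight-line AND reduction over four lanes; with the new
       reduc_and_scal expander this can become .REDUC_AND.  */
    return p[0] & p[1] & p[2] & p[3];
  }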
gcc/ChangeLog:
PR target/112325
* config/i386/sse.md (reduc_<code>_scal_<mode>): New expander.
(REDUC_ANY_LOGIC_MODE): New iterator.
(REDUC_PLUS_MODE): Extend to VxHI/SI/DImode.
(REDUC_SSE_PLUS_MODE): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr112325-1.c: New test.
* gcc.target/i386/pr112325-2.c: New test.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112537
-mmemcpy-strategy=[auto|libcall|scalar|vector]
auto: Current default behavior; use scalar or vector instructions.
libcall: Always use a library call.
scalar: Only use scalar instructions.
vector: Only use vector instructions.
PR target/112537
gcc/ChangeLog:
* config/riscv/riscv-opts.h (enum riscv_stringop_strategy_enum): Strategy enum.
* config/riscv/riscv-string.cc (riscv_expand_block_move): Disable based on options.
(expand_block_move): Ditto.
* config/riscv/riscv.opt: Add -mmemcpy-strategy=.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/cpymem-strategy-1.c: New test.
* gcc.target/riscv/rvv/base/cpymem-strategy-2.c: New test.
* gcc.target/riscv/rvv/base/cpymem-strategy-3.c: New test.
* gcc.target/riscv/rvv/base/cpymem-strategy-4.c: New test.
* gcc.target/riscv/rvv/base/cpymem-strategy-5.c: New test.
* gcc.target/riscv/rvv/base/cpymem-strategy.h: New test.
Use no suffix at all in the musl dynamic linker name for hard
float ABI. Use -sf and -sp suffixes in musl dynamic linker name
for soft float and single precision ABIs. The following table
outlines the musl interpreter names for the LoongArch64 ABI names.
musl interpreter | LoongArch64 ABI
--------------------------- | -----------------
ld-musl-loongarch64.so.1 | loongarch64-lp64d
ld-musl-loongarch64-sp.so.1 | loongarch64-lp64f
ld-musl-loongarch64-sf.so.1 | loongarch64-lp64s
gcc/ChangeLog:
* config/loongarch/gnu-user.h (MUSL_ABI_SPEC): Modify suffix.
Modules streaming requires DECL_CONTEXT to be set on declarations that
are streamed. This ensures that __cxa_thread_atexit is given translation
unit context much like is already done with many other support
functions.
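For context, a tiny illustration (not the pr99187 test itself) of code that
makes the front end create the __cxa_thread_atexit declaration: a thread_local
object with a non-trivial destructor, e.g. inside a module interface unit:

  struct S { ~S (); };
  thread_local S tls_obj;  // destruction registered via __cxa_thread_atexit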
PR c++/99187
gcc/cp/ChangeLog:
* cp-tree.h (enum cp_tree_index): Add CPTI_THREAD_ATEXIT.
(thread_atexit_node): New.
* decl.cc (get_thread_atexit_node): Cache in thread_atexit_node.
gcc/testsuite/ChangeLog:
* g++.dg/modules/pr99187.C: New test.
Signed-off-by: Nathaniel Shead <nathanieloshead@gmail.com>
Signed-off-by: Nathan Sidwell <nathan@acm.org>
I've been meaning to extract this and upstream it for a long time. The work is
primarily Philipp from VRULL with one case added by Raphael and light bugfixing
on my part.
Essentially there are 10 distinct fusions supported, and they can be selected
individually by building a suitable mask in the uarch tuning structure.
Additional cases can be added -- the bulk of the effort is in recognizing the
two fusible instructions.
The cases supported in this patch are all from the Veyron V1 processor, though
the hope is they will be useful elsewhere. I would encourage those familiar
with other uarch implementations to enable fusion cases for those uarchs and
extend the set of supported cases if any are missing.
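As a rough sketch of the scheme (the RISCV_FUSE_* names, the struct layout and
the query helper below are illustrative stand-ins, not the actual riscv.cc
definitions):

  #include <cstdio>

  enum riscv_fusion_pairs
  {
    RISCV_FUSE_NOTHING  = 0,
    RISCV_FUSE_ZEXTW    = 1 << 0,
    RISCV_FUSE_ZEXTH    = 1 << 1,
    RISCV_FUSE_LUI_ADDI = 1 << 2
    /* ... one bit per supported fusion case ...  */
  };

  struct riscv_tune_param
  {
    /* ... other tuning fields elided ...  */
    unsigned int fusible_ops;  /* mask of riscv_fusion_pairs bits */
  };

  /* A uarch opts into fusions by OR-ing the cases it implements into its
     tuning structure (cf. the new fusible_ops field).  */
  static const riscv_tune_param example_tune = { RISCV_FUSE_ZEXTW | RISCV_FUSE_LUI_ADDI };

  /* riscv_fusion_enabled_p-style query used by the scheduler hooks.  */
  static bool
  fusion_enabled_p (const riscv_tune_param &tune, unsigned op)
  {
    return (tune.fusible_ops & op) != 0;
  }

  int
  main ()
  {
    std::printf ("zext.w fusion: %d\n",
                 fusion_enabled_p (example_tune, RISCV_FUSE_ZEXTW));
    return 0;
  }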
gcc/
* config/riscv/riscv-protos.h (extract_base_offset_in_addr): Prototype.
* config/riscv/riscv.cc (riscv_fusion_pairs): New enum.
(riscv_tune_param): Add fusible_ops field.
(riscv_tune_param_rocket_tune_info): Initialize new field.
(riscv_tune_param_sifive_7_tune_info): Likewise.
(thead_c906_tune_info): Likewise.
(generic_oo_tune_info): Likewise.
(optimize_size_tune_info): Likewise.
(riscv_macro_fusion_p): New function.
(riscv_fusion_enabled_p): Likewise.
(riscv_macro_fusion_pair_p): Likewise.
(TARGET_SCHED_MACRO_FUSION_P): Define.
(TARGET_SCHED_MACRO_FUSION_PAIR_P): Likewise.
(extract_base_offset_in_addr): Moved into riscv.cc from...
* config/riscv/thead.cc: Here.
Co-authored-by: Raphael Zinsly <rzinsly@ventanamicro.com>
Co-authored-by: Jeff Law <jlaw@ventanamicro.com>
This is a fix for a minor problem Jivan and I found while testing the ext-dce work originally from Joern.
The ext-dce pass will transform zero/sign extensions into subreg accesses when
the upper bits are actually unused. So it's more likely with the ext-dce work
to get a sequence like this prior to combine:
(insn 10 9 11 2 (set (reg:SI 144)
        (unspec_volatile [
                (const_int 0 [0])
            ] UNSPECV_FRFLAGS)) "j.c":11:3 discrim 1 362 {riscv_frflags}
     (nil))
(insn 11 10 55 2 (set (reg:DI 140 [ _12 ])
        (subreg:DI (reg:SI 144) 0)) "j.c":11:3 discrim 1 206 {*movdi_64bit}
     (expr_list:REG_DEAD (reg:SI 144)
        (nil)))
When we try to combine insn 10->11 we'll ultimately call simplify_subreg with
something like
(subreg:DI (unspec_volatile [...]) 0)
Note the lack of a mode on the unspec_volatile. That in turn will cause
simplify_subreg to trigger an assertion.
The modeless unspec is generated by the RISC-V backend and the more I've
pondered this issue over the last few days the more I'm convinced it's a
backend bug. Basically if the LHS of the set has a mode, then the RHS of the
set should have a mode as well.
I've audited the various backends and only found a few problems which are fixed
by this patch.  I've tested the relevant ports in my tester: c6x, sh, mips and
s390[x].
There are other patterns that are potentially problematical in various ports.
They have a REG destination and an UNSPEC source, but the REG has no mode in
the pattern. Since it wasn't clear what mode to give the UNSPEC, I left those
alone.
gcc/
* config/c6x/c6x.md (mvilc): Add mode to UNSPEC source.
* config/mips/mips.md (rdhwr_synci_step_<mode>): Likewise.
* config/riscv/riscv.md (riscv_frcsr, riscv_frflags): Likewise.
* config/s390/s390.md (@split_stack_call<mode>): Likewise.
(@split_stack_cond_call<mode>): Likewise.
* config/sh/sh.md (sp_switch_1): Likewise.
AIX doesn't support IEEE 128-bit floating point.  Don't add the -mfloat128
option on AIX.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp (add_options_for___float128): Only add
-mfloat128 to powerpc*-*-linux*.
Signed-off-by: David Edelsohn <dje.gcc@gmail.com>
A command like "make -j 2 check-gcc-c check-gcc-c++" run in the top level of
a fresh build directory does not work reliably. That will spawn two
independent make processes inside the "gcc" directory, and each of those
will attempt to create site.exp if it doesn't exist and will interfere with
each other, often producing a corrupted or empty site.exp.  Resolve that by
making these targets depend on a new phony target which makes sure site.exp
is created first before starting the recursive makes.
ChangeLog:
* Makefile.in: Regenerate.
* Makefile.tpl: Add dependency on site.exp to check-gcc-* targets.
The various decls relating to rich_location are in
libcpp/include/line-map.h, but they don't relate to line maps.
Split them out to their own header: libcpp/include/rich-location.h
No functional change intended.
gcc/ChangeLog:
* Makefile.in (CPPLIB_H): Add libcpp/include/rich-location.h.
* coretypes.h (class rich_location): New forward decl.
gcc/analyzer/ChangeLog:
* analyzer.h: Include "rich-location.h".
gcc/c-family/ChangeLog:
* c-lex.cc: Include "rich-location.h".
gcc/cp/ChangeLog:
* mapper-client.cc: Include "rich-location.h".
gcc/ChangeLog:
* diagnostic.h: Include "rich-location.h".
* edit-context.h (class fixit_hint): New forward decl.
* gcc-rich-location.h: Include "rich-location.h".
* genmatch.cc: Likewise.
* pretty-print.h: Likewise.
gcc/rust/ChangeLog:
* rust-location.h: Include "rich-location.h".
libcpp/ChangeLog:
* Makefile.in (TAGS_SOURCES): Add "include/rich-location.h".
* include/cpplib.h (class rich_location): New forward decl.
* include/line-map.h (class range_label)
(enum range_display_kind, struct location_range)
(class semi_embedded_vec, class rich_location, class label_text)
(class range_label, class fixit_hint): Move to...
* include/rich-location.h: ...this new file.
* internal.h: Include "rich-location.h".
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This patch:
- adds support to the analyzer for tracking API-private state
for which we don't have a decl (such as strtok's internal state),
- uses it to implement a new -Wanalyzer-undefined-behavior-strtok which
warns when strtok (NULL, delim) is called as the first call to
strtok after main.
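A hedged sketch of the kind of misuse the new warning flags (the actual
strtok-*.c tests are more thorough):

  #include <string.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* The very first strtok call in the program passes NULL, so it reads
       strtok's internal state before any call has set it: undefined
       behavior, and -Wanalyzer-undefined-behavior-strtok fires here.  */
    char *tok = strtok (NULL, " ");
    if (tok)
      printf ("%s\n", tok);
    return 0;
  }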
gcc/analyzer/ChangeLog:
PR analyzer/107573
* analyzer.h (register_known_functions): Add region_model_manager
param.
* analyzer.opt (Wanalyzer-undefined-behavior-strtok): New.
* call-summary.cc
(call_summary_replay::convert_region_from_summary_1): Handle
RK_PRIVATE.
* engine.cc (impl_run_checkers): Pass model manager to
register_known_functions.
* kf.cc (class undefined_function_behavior): New.
(class kf_strtok): New.
(register_known_functions): Add region_model_manager param.
Use it to register "strtok".
* region-model-manager.cc
(region_model_manager::get_or_create_conjured_svalue): Add "idx"
param.
* region-model-manager.h
(region_model_manager::get_or_create_conjured_svalue): Add "idx"
param.
(region_model_manager::get_root_region): New accessor.
* region-model.cc (region_model::scan_for_null_terminator): Handle
"expr" being null.
(region_model::get_representative_path_var_1): Handle RK_PRIVATE.
* region-model.h (region_model::called_from_main_p): Make public.
* region.cc (region::get_memory_space): Handle RK_PRIVATE.
(region::can_have_initial_svalue_p): Handle MEMSPACE_PRIVATE.
(private_region::dump_to_pp): New.
* region.h (MEMSPACE_PRIVATE): New.
(RK_PRIVATE): New.
(class private_region): New.
(is_a_helper <const private_region *>::test): New.
* store.cc (store::replay_call_summary_cluster): Handle
RK_PRIVATE.
* svalue.h (struct conjured_svalue::key_t): Add "idx" param to
ctor and "m_idx" field.
(class conjured_svalue::conjured_svalue): Likewise.
gcc/ChangeLog:
PR analyzer/107573
* doc/invoke.texi: Add -Wanalyzer-undefined-behavior-strtok.
gcc/testsuite/ChangeLog:
PR analyzer/107573
* c-c++-common/analyzer/strtok-1.c: New test.
* c-c++-common/analyzer/strtok-2.c: New test.
* c-c++-common/analyzer/strtok-3.c: New test.
* c-c++-common/analyzer/strtok-4.c: New test.
* c-c++-common/analyzer/strtok-cppreference.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This optimizes the simple case of formatting a single string, integer
or bool, with no format-specifier (so no padding, alignment, alternate
form etc.)
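A small usage sketch of the case this targets (a plain "{}" with a string,
integer or bool argument; illustrative, not from the libstdc++ tests):

  #include <format>
  #include <iostream>
  #include <string>

  int
  main ()
  {
    std::string a = std::format ("{}", 42);      // integer, no format-spec
    std::string b = std::format ("{}", true);    // bool
    std::string c = std::format ("{}", "text");  // string
    // Anything with a format-spec, e.g. "{:>10}", still takes the general path.
    std::cout << a << ' ' << b << ' ' << c << '\n';
    return 0;
  }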
libstdc++-v3/ChangeLog:
PR libstdc++/110801
* include/std/format (_Sink_iter::_M_reserve): New member
function.
(_Sink::_Reservation): New nested class.
(_Sink::_M_reserve, _Sink::_M_bump): New virtual functions.
(_Seq_sink::_M_reserve, _Seq_sink::_M_bump): New virtual
overrides.
(_Iter_sink<O, ContigIter>::_M_reserve): Likewise.
(__do_vformat_to): Use new functions to optimize "{}" case.
Even if !HAVE_AS_SUPPORT_CALL36, const_call_insn_operand should still
return false when -mexplicit-relocs=none -mcmodel=medium to make
loongarch_legitimize_call_address emit la.local or la.global.
gcc/ChangeLog:
* config/loongarch/predicates.md (const_call_insn_operand):
Remove buggy "HAVE_AS_SUPPORT_CALL36" conditions. Change "1" to
"true" to make the coding style consistent.
This option (CPUCFG word 0x3 bit 23) means "the hardware guarantees that
two loads on the same address won't be reordered with each other".  Thus
we can omit the "load-load" barrier dbar 0x700.
This is only a micro-optimization because dbar 0x700 is already treated
as nop if the hardware supports LD_SEQ_SA.
gcc/ChangeLog:
* config/loongarch/loongarch.cc (loongarch_print_operand): Don't
print dbar 0x700 if TARGET_LD_SEQ_SA.
* config/loongarch/sync.md (atomic_load<mode>): Likewise.
With -mdiv32, we can assume div.w[u] and mod.w[u] work on the low 32 bits
of a 64-bit GPR even if it's not sign-extended.
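Illustrative only (not from the patch or its tests): the sort of 32-bit
division this helps, where the operands are truncations of 64-bit values and
need not be sign-extended first:

  int
  div_low32 (long a, long b)
  {
    /* With -mdiv32, div.w can operate on the low 32 bits directly,
       without first sign-extending the registers.  */
    return (int) a / (int) b;
  }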
gcc/ChangeLog:
* config/loongarch/loongarch.md (DIV): New mode iterator.
(<optab:ANY_DIV><mode:GPR>3): Don't expand if TARGET_DIV32.
(<optab:ANY_DIV>di3_fake): Disable if TARGET_DIV32.
(*<optab:ANY_DIV><mode:GPR>3): Allow SImode if TARGET_DIV32.
(<optab:ANY_DIV>si3_extended): New insn if TARGET_DIV32.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/div-div32.c: New test.
* gcc.target/loongarch/div-no-div32.c: New test.
* config/loongarch/loongarch-def.h:
(loongarch_isa_base_features): Declare. Define it in ...
* config/loongarch/loongarch-cpu.cc
(loongarch_isa_base_features): ... here.
(fill_native_cpu_config): If we know the base ISA of the CPU
model from PRID, use it instead of la64 (v1.0). Check if all
expected features of this base ISA are available, and emit a warning
if not.
* config/loongarch/loongarch-opts.cc (config_target_isa): Enable
the features implied by the base ISA if not -march=native.
LoongArch v1.10 introduced the concept of ISA evolution. During ISA
evolution, many independent features can be added and enumerated via
CPUCFG.
Add a data file into genopts storing the CPUCFG word, bit, the name
of the command line option controlling if this feature should be used
for compilation, and the text description.  Make genstr.sh process this
info and add the command line options into loongarch.opt and
loongarch-str.h, and generate a new file loongarch-cpucfg-map.h for
mapping CPUCFG output to the corresponding option. When handling
-march=native, use the information in loongarch-cpucfg-map.h to generate
the corresponding option mask. Enable the features implied by -march
setting unless the user has explicitly disabled the feature.
The added options (-mdiv32 and -mld-seq-sa) are not really handled yet.
They'll be used in the following patches.
gcc/ChangeLog:
* config/loongarch/genopts/isa-evolution.in: New data file.
* config/loongarch/genopts/genstr.sh: Translate info in
isa-evolution.in when generating loongarch-str.h, loongarch.opt,
and loongarch-cpucfg-map.h.
* config/loongarch/genopts/loongarch.opt.in (isa_evolution):
New variable.
* config/loongarch/t-loongarch: (loongarch-cpucfg-map.h): New
rule.
(loongarch-str.h): Depend on isa-evolution.in.
(loongarch.opt): Depend on isa-evolution.in.
(loongarch-cpu.o): Depend on loongarch-cpucfg-map.h.
* config/loongarch/loongarch-str.h: Regenerate.
* config/loongarch/loongarch-def.h (loongarch_isa): Add field
for evolution features. Add helper function to enable features
in this field.
* config/loongarch/loongarch-cpu.cc (fill_native_cpu_config):
Probe native CPU capability and save the corresponding options
into preset.
(cache_cpucfg): Simplify with C++11-style for loop.
(cpucfg_useful_idx, N_CPUCFG_WORDS): Move to ...
* config/loongarch/loongarch.cc
(loongarch_option_override_internal): Enable the ISA evolution
feature options implied by -march and not explicitly disabled.
(loongarch_asm_code_end): New function, print ISA information as
comments in the assembly if -fverbose-asm.  It makes it easier to
debug things like -march=native.
(TARGET_ASM_CODE_END): Define.
* config/loongarch/loongarch.opt: Regenerate.
* config/loongarch/loongarch-cpucfg-map.h: Generate.
(cpucfg_useful_idx, N_CPUCFG_WORDS): ... here.
On LA664, the PRID preset is ISA_BASE_LA64V110 but the base architecture
is guessed as ISA_BASE_LA64V100.  This causes a warning to be output:
cc1: warning: base architecture 'la64' differs from PRID preset '?'
But we've not set the "?" above in loongarch_isa_base_strings, thus it's
a nullptr and then an ICE is triggered.
Add ISA_BASE_LA64V110 to genopts and initialize
loongarch_isa_base_strings[ISA_BASE_LA64V110] correctly to fix the ICE.
The warning itself will be fixed later.
gcc/ChangeLog:
* config/loongarch/genopts/loongarch-strings:
(STR_ISA_BASE_LA64V110): Add.
* config/loongarch/genopts/loongarch.opt.in:
(ISA_BASE_LA64V110): Add.
* config/loongarch/loongarch-def.c
(loongarch_isa_base_strings): Initialize [ISA_BASE_LA64V110]
to STR_ISA_BASE_LA64V110.
* config/loongarch/loongarch.opt: Regenerate.
* config/loongarch/loongarch-str.h: Regenerate.
The code coverage support uses counters to determine which edges in the control
flow graph were executed. If a counter overflows, then the code coverage
information is invalid. Therefore the counter type should be a 64-bit integer.
In multi-threaded applications, it is important that the counter increments are
atomic. This is not the case by default. The user can enable atomic counter
increments through the -fprofile-update=atomic and
-fprofile-update=prefer-atomic options.
If the target supports 64-bit atomic operations, then everything is fine. If
not and -fprofile-update=prefer-atomic was chosen by the user, then non-atomic
counter increments will be used. However, if the target does not support the
required atomic operations and -fprofile-update=atomic was chosen by the user,
then a warning was issued and a forced fallback to non-atomic operations was
done.  This is probably not what a user wants.  There is still hardware on the
market which does not have atomic operations and is used for multi-threaded
applications. A user which selects -fprofile-update=atomic wants consistent
code coverage data and not random data.
This patch removes the fallback to non-atomic operations for
-fprofile-update=atomic if the target platform supports libatomic.  To
mitigate potential performance issues an optimization for systems which
only support 32-bit atomic operations is provided. Here, the edge
counter increments are done like this:
low = __atomic_add_fetch_4 (&counter.low, 1, MEMMODEL_RELAXED);
high_inc = low == 0 ? 1 : 0;
__atomic_add_fetch_4 (&counter.high, high_inc, MEMMODEL_RELAXED);
In gimple_gen_time_profiler() this split operation cannot be used, since the
updated counter value is also required. Here, a library call is emitted. This
is not a performance issue since the update is only done if counters[0] == 0.
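As a self-contained sketch of the split update above (the struct and helper
names are illustrative; the compiler emits the equivalent as GIMPLE in
gen_counter_update, and the low/high halves are selected per endianness there):

  #include <stdint.h>

  struct gcov_counter_parts { uint32_t low; uint32_t high; };

  static void
  counter_increment (struct gcov_counter_parts *c)
  {
    /* Add 1 to the low word; if it wrapped around to zero, carry into
       the high word with a second 32-bit atomic add.  */
    uint32_t low = __atomic_add_fetch (&c->low, 1, __ATOMIC_RELAXED);
    uint32_t high_inc = (low == 0) ? 1 : 0;
    __atomic_add_fetch (&c->high, high_inc, __ATOMIC_RELAXED);
  }

  int
  main ()
  {
    struct gcov_counter_parts ctr = { 0, 0 };
    counter_increment (&ctr);
    return ctr.low == 1 ? 0 : 1;
  }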
gcc/c-family/ChangeLog:
* c-cppbuiltin.cc (c_cpp_builtins): Define
__LIBGCC_HAVE_LIBATOMIC for libgcov.
gcc/ChangeLog:
* doc/invoke.texi (-fprofile-update): Clarify default method. Document
the atomic method behaviour.
* tree-profile.cc (enum counter_update_method): New.
(counter_update): Likewise.
(gen_counter_update): Use counter_update_method. Split the
atomic counter update in two 32-bit atomic operations if
necessary.
(tree_profiling): Select counter_update_method.
libgcc/ChangeLog:
* libgcov.h (GCOV_SUPPORTS_ATOMIC): Always define it.
Set it also to 1, if __LIBGCC_HAVE_LIBATOMIC is defined.
Move the counter update to the new gen_counter_update() helper function. Use
it in gimple_gen_edge_profiler() and gimple_gen_time_profiler(). The resulting
gimple instructions should be identical with the exception of the removed
unshare_expr() call. The unshare_expr() call was used in
gimple_gen_edge_profiler().
gcc/ChangeLog:
* tree-profile.cc (gen_assign_counter_update): New.
(gen_counter_update): Likewise.
(gimple_gen_edge_profiler): Use gen_counter_update().
(gimple_gen_time_profiler): Likewise.
gcc/ChangeLog:
* config/riscv/riscv-target-attr.cc
(riscv_target_attr_parser::parse_arch): Use char[] for
std::unique_ptr to prevent a mismatched new/delete issue.
(riscv_process_one_target_attr): Ditto.
(riscv_process_target_attr): Ditto.
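For background, a hedged illustration of the mismatched new/delete issue fixed
above (not the riscv-target-attr.cc code itself; the function is made up):

  #include <cstring>
  #include <memory>

  void
  copy_string (const char *s)
  {
    std::size_t len = std::strlen (s) + 1;
    /* Wrong: the buffer comes from new[], but std::unique_ptr<char> would
       release it with delete rather than delete[] -- undefined behavior:
         std::unique_ptr<char> bad (new char[len]);
       Right: std::unique_ptr<char[]> matches new[] with delete[].  */
    std::unique_ptr<char[]> buf (new char[len]);
    std::memcpy (buf.get (), s, len);
  }

  int
  main ()
  {
    copy_string ("rv64gcv_zba");
    return 0;
  }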
Missing from earlier commit, which removed the only use of those two
variables.
gcc/testsuite/ChangeLog:
* gfortran.dg/coarray/caf.exp: Remove unused variable.
* gfortran.dg/dg.exp: Remove unused variable.
Because the LA464 memory model allows loads from the same address to be
executed out of order, in the following test example the load on line 23 may
be executed before the load on line 21, resulting in an error.
So when memmodel is MEMMODEL_RELAXED, the load instruction will be followed by
"dbar 0x700" when implementing __atomic_load.
1 void *
2 gomp_ptrlock_get_slow (gomp_ptrlock_t *ptrlock)
3 {
4 int *intptr;
5 uintptr_t oldval = 1;
6
7 __atomic_compare_exchange_n (ptrlock, &oldval, 2, false,
8 MEMMODEL_RELAXED, MEMMODEL_RELAXED);
9
10 /* futex works on ints, not pointers.
11 But a valid work share pointer will be at least
12 8 byte aligned, so it is safe to assume the low
13 32-bits of the pointer won't contain values 1 or 2. */
14 __asm volatile ("" : "=r" (intptr) : "0" (ptrlock));
15 #if __BYTE_ORDER == __BIG_ENDIAN
16 if (sizeof (*ptrlock) > sizeof (int))
17 intptr += (sizeof (*ptrlock) / sizeof (int)) - 1;
18 #endif
19 do
20 do_wait (intptr, 2);
21 while (__atomic_load_n (intptr, MEMMODEL_RELAXED) == 2);
22 __asm volatile ("" : : : "memory");
23 return (void *) __atomic_load_n (ptrlock, MEMMODEL_ACQUIRE);
24 }
gcc/ChangeLog:
* config/loongarch/sync.md (atomic_load<mode>): New template.
1. short and char type calls to __atomic_add_fetch and __atomic_fetch_add are
implemented using amadd{_db}.{b/h}.
2. Use amcas{_db}.{b/h/w/d} to implement __atomic_compare_exchange_n and __atomic_compare_exchange.
3. The short and char types of the functions __atomic_exchange and __atomic_exchange_n are
implemented using amswap{_db}.{b/h}.
gcc/ChangeLog:
* config/loongarch/loongarch-def.h: Add comments.
* config/loongarch/loongarch-opts.h (ISA_BASE_IS_LA64V110): Define macro.
* config/loongarch/loongarch.cc (loongarch_memmodel_needs_rel_acq_fence):
Remove redundant code implementations.
* config/loongarch/sync.md (d): Add QI, HI support.
(atomic_add<mode>): New template.
(atomic_exchange<mode>_short): Likewise.
(atomic_cas_value_strong<mode>_amcas): Likewise.
(atomic_fetch_add<mode>_short): Likewise.
When compiling with '-mcmodel=medium', the function call is made through
'pcaddu18i+jirl' if binutils supports call36, otherwise the
native implementation 'pcalau12i+jirl' is used.
gcc/ChangeLog:
* config.in: Regenerate.
* config/loongarch/loongarch-opts.h (HAVE_AS_SUPPORT_CALL36): Define macro.
* config/loongarch/loongarch.cc (loongarch_legitimize_call_address):
If binutils supports call36, the function call is not split over expand.
* config/loongarch/loongarch.md: Add call36 generation code.
* config/loongarch/predicates.md: Likewise.
* configure: Regenerate.
* configure.ac: Check whether binutils supports call36.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/func-call-medium-5.c: Skip the test if the
assembler supports call36.
* gcc.target/loongarch/func-call-medium-6.c: Likewise.
* gcc.target/loongarch/func-call-medium-7.c: Likewise.
* gcc.target/loongarch/func-call-medium-8.c: Likewise.
* lib/target-supports.exp: Add a function to check whether the
assembler supports the call36 relocation.
* gcc.target/loongarch/func-call-medium-call36-1.c: New test.
* gcc.target/loongarch/func-call-medium-call36.c: New test.
Co-authored-by: Xi Ruoyao <xry111@xry111.site>