Since we already had the infrastructure to optimize
`(x == 0) && (x > y)` to false for integer types,
this extends the same to pointer types as indirectly
requested by PR 96695.
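For example (a hedged sketch, not one of the new testcases), a null test
combined with an ordered pointer comparison can now be folded to false just
like the integer case:

  int f (char *p, char *q)
  {
    /* p == 0 implies p > q is false, so the whole expression folds to 0.  */
    return p == 0 && p > q;
  }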
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
PR tree-optimization/96695
* match.pd (min_value, max_value): Extend to
pointer types too.
gcc/testsuite/ChangeLog:
PR tree-optimization/96695
* gcc.dg/pr96695-1.c: New test.
* gcc.dg/pr96695-10.c: New test.
* gcc.dg/pr96695-11.c: New test.
* gcc.dg/pr96695-12.c: New test.
* gcc.dg/pr96695-2.c: New test.
* gcc.dg/pr96695-3.c: New test.
* gcc.dg/pr96695-4.c: New test.
* gcc.dg/pr96695-5.c: New test.
* gcc.dg/pr96695-6.c: New test.
* gcc.dg/pr96695-7.c: New test.
* gcc.dg/pr96695-8.c: New test.
* gcc.dg/pr96695-9.c: New test.
My apologies (again), I managed to mess up the 64-bit version of the
test case for PR 110792. Unlike the 32-bit version, the 64-bit case
contains exactly the same load instructions, just in a different order
making the correct and incorrect behaviours impossible to distinguish
with a scan-assembler-not. Somewhere between checking that this test
failed in a clean tree without the patch, and getting the escaping
correct, I'd failed to notice that this also FAILs in the patched tree.
Doh! Instead of removing the test completely, I've left it as a
compilation test.
The original fix is tested by the 32-bit test case.
Committed to mainline as obvious. Sorry for the inconvenience.
2023-08-06 Roger Sayle <roger@nextmovesoftware.com>
gcc/testsuite/ChangeLog
PR target/110792
* gcc.target/i386/pr110792.c: Remove dg-final scan-assembler-not.
This is needed to avoid an impossible threading update in a vectorizer testcase,
but it should also reflect reality on most CPUs we care about.
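A minimal sketch of the shape of the change; my_get_cpuid_count is a
hypothetical stand-in, not the actual <cpuid.h> code:

  #include <cpuid.h>

  static __inline int
  my_get_cpuid_count (unsigned int leaf, unsigned int subleaf,
                      unsigned int *eax, unsigned int *ebx,
                      unsigned int *ecx, unsigned int *edx)
  {
    unsigned int max_level = __get_cpuid_max (leaf & 0x80000000, 0);
    /* Tell the compiler the "no cpuid support" path is unlikely.  */
    if (__builtin_expect (max_level == 0 || max_level < leaf, 0))
      return 0;
    __cpuid_count (leaf, subleaf, *eax, *ebx, *ecx, *edx);
    return 1;
  }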
gcc/ChangeLog:
* config/i386/cpuid.h (__get_cpuid_count, __get_cpuid_max): Add
__builtin_expect that CPU likely supports cpuid.
This prevents the useless loop distribution produced in hmmer. With FDO we now
correctly work out that the loop created for the last iteration is not going to
iterate; however, loop distribution still produces a versioned loop that has no
chance to survive the loop vectorizer, since we only keep distributed loops
when loop vectorization succeeds and that requires the number of (header)
iterations to exceed the vectorization factor.
gcc/ChangeLog:
* tree-loop-distribution.cc (loop_distribution::execute): Disable
distribution for loops with estimated iterations 0.
Epilogue peeling expects the scalar loop to have the same number of executions as
the vector loop, which is true at the beginning of vectorization. However, if the
epilogues are vectorized, this is no longer the case. In this situation the
loop preheader is replaced by new guard code with a correct profile, but the
loop body is left unscaled. This leads to a loop that exits more often than
it is entered.
This patch adds logic to scale the frequencies down and also to fix the profile
of the original preheader where necessary.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* tree-vect-loop-manip.cc (vect_do_peeling): Fix profile update of peeled epilogues.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-bitfield-read-1.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-2.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-3.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-4.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-5.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-6.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-read-7.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-write-1.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-write-2.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-write-3.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-write-4.c: Check profile consistency.
* gcc.dg/vect/vect-bitfield-write-5.c: Check profile consistency.
* gcc.dg/vect/vect-epilogues-2.c: Check profile consistency.
* gcc.dg/vect/vect-epilogues.c: Check profile consistency.
* gcc.dg/vect/vect-mask-store-move-1.c: Check profile consistency.
This patch completes the implementation of the ISO module
SysClock.mod. Three new testcases are provided. wrapclock.{cc,def}
are new support files providing access to clock_settime, clock_gettime
and glibc timezone variables.
gcc/m2/ChangeLog:
PR modula2/110779
* gm2-libs-iso/SysClock.mod: Re-implement using wrapclock.
* gm2-libs-iso/wrapclock.def: New file.
libgm2/ChangeLog:
PR modula2/110779
* config.h.in: Regenerate.
* configure: Regenerate.
* configure.ac (GM2_CHECK_LIB): Check for clock_gettime
and clock_settime.
* libm2iso/Makefile.am (M2DEFS): Add wrapclock.def.
* libm2iso/Makefile.in: Regenerate.
* libm2iso/wraptime.cc: Replace HAVE_TIMEVAL with
HAVE_STRUCT_TIMEVAL.
* libm2iso/wrapclock.cc: New file.
gcc/testsuite/ChangeLog:
PR modula2/110779
* gm2/iso/run/pass/m2date.mod: New test.
* gm2/iso/run/pass/testclock.mod: New test.
* gm2/iso/run/pass/testclock2.mod: New test.
Signed-off-by: Gaius Mulley <gaiusmod2@gmail.com>
To avoid false positives, tune the warnings for parameters declared
as arrays with size expressions. Do not warn when more bounds are
specified in the declaration than before.
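A hedged sketch of the kind of redeclaration that is no longer diagnosed
(not taken verbatim from the adjusted tests): the second declaration
supplies a bound that the first left unspecified.

  void g (int n, double a[n][*]);  /* inner bound unspecified */
  void g (int n, double a[n][n]);  /* more bounds specified: no warning now */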
PR c/98536
gcc/c-family/:
* c-warn.cc (warn_parm_array_mismatch): Do not warn if more
bounds are specified.
gcc/testsuite:
* gcc.dg/Wvla-parameter-4.c: Adapt test.
* gcc.dg/attr-access-2.c: Adapt test.
To avoid false diagnostics, use c_inhibit_evaluation_warnings when
a generic association is known not to match during parsing. We may
still generate false positives if the default branch comes earlier than
a specific association that matches.
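A hedged illustration of the effect (in the style of the PRs above): the
unsigned association below cannot be selected for an int argument, so the
division by zero in its branch no longer triggers a warning while parsing.

  int pick (int x)
  {
    return _Generic (x, unsigned int: x / 0, default: x);
  }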
PR c/68193
PR c/97100
PR c/110703
gcc/c/:
* c-parser.cc (c_parser_generic_selection): Inhibit evaluation
warnings in branches that are known not to be taken during parsing.
gcc/testsuite/ChangeLog:
* gcc.dg/pr68193.c: New test.
This patch makes -fanalyzer make use of the function attribute
"alloc_size", allowing -fanalyzer to emit -Wanalyzer-allocation-size,
-Wanalyzer-out-of-bounds, and -Wanalyzer-tainted-allocation-size on
execution paths involving allocations using such functions.
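A hedged sketch of the kind of code that is now diagnosed; my_alloc is a
hypothetical allocator, not something from the testsuite:

  __attribute__ ((alloc_size (1))) void *my_alloc (unsigned long size);

  int *make_ints (void)
  {
    /* 7 bytes is not a multiple of sizeof (int):
       -Wanalyzer-allocation-size on this path.  */
    return (int *) my_alloc (7);
  }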
gcc/analyzer/ChangeLog:
PR analyzer/110426
* bounds-checking.cc (region_model::check_region_bounds): Handle
symbolic base regions.
* call-details.cc: Include "stringpool.h" and "attribs.h".
(call_details::lookup_function_attribute): New function.
* call-details.h (call_details::lookup_function_attribute): New
function decl.
* region-model-manager.cc
(region_model_manager::maybe_fold_binop): Add reference to
PR analyzer/110902.
* region-model-reachability.cc (reachable_regions::handle_sval):
Add symbolic regions for pointers that are conjured svalues for
the LHS of a stmt.
* region-model.cc (region_model::canonicalize): Purge dynamic
extents for regions that aren't referenced.
(get_result_size_in_bytes): New function.
(region_model::on_call_pre): Use get_result_size_in_bytes and
potentially set the dynamic extents of the region pointed to by
the return value.
(region_model::deref_rvalue): Add param "add_nonnull_constraint"
and use it to conditionalize adding the constraint.
(pending_diagnostic_subclass::dubious_allocation_size): Add "stmt"
param to both ctors and use it to initialize new "m_stmt" field.
(pending_diagnostic_subclass::operator==): Use m_stmt; don't use
m_lhs or m_rhs.
(pending_diagnostic_subclass::m_stmt): New field.
(region_model::check_region_size): Generalize to any kind of
pointer svalue by using deref_rvalue rather than checking for
region_svalue. Pass stmt to dubious_allocation_size ctor.
* region-model.h (region_model::deref_rvalue): Add param
"add_nonnull_constraint".
* svalue.cc (conjured_svalue::lhs_value_p): New function.
* svalue.h (conjured_svalue::lhs_value_p): New decl.
gcc/testsuite/ChangeLog:
PR analyzer/110426
* gcc.dg/analyzer/allocation-size-1.c: Update expected message to
reflect consolidation of size and assignment into a single event.
* gcc.dg/analyzer/allocation-size-2.c: Likewise.
* gcc.dg/analyzer/allocation-size-3.c: Likewise.
* gcc.dg/analyzer/allocation-size-4.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-1.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-2.c: Likewise.
* gcc.dg/analyzer/allocation-size-multiline-3.c: Likewise.
* gcc.dg/analyzer/attr-alloc_size-1.c: New test.
* gcc.dg/analyzer/attr-alloc_size-2.c: New test.
* gcc.dg/analyzer/attr-alloc_size-3.c: New test.
* gcc.dg/analyzer/explode-4.c: New test.
* gcc.dg/analyzer/taint-size-1.c: Add test coverage for
__attribute__ alloc_size.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
As mentioned in PR 110202, GCC may be presented with input where control
word of the VPTERNLOG intrinsic implies that some of its operands do not
affect the result. In that case, we can eliminate redundant operands
of the instruction by substituting any other operand in their place.
This removes false dependencies.
For instance, instead of (252 = 0xfc = _MM_TERNLOG_A | _MM_TERNLOG_B)
vpternlogq $252, %zmm2, %zmm1, %zmm0
emit
vpternlogq $252, %zmm0, %zmm1, %zmm0
When VPTERNLOG is invariant w.r.t. the first and second operands, and the
third operand is memory, load the memory into the output operand first, i.e.
instead of (85 = 0x55 = ~_MM_TERNLOG_C)
vpternlogq $85, (%rdi), %zmm1, %zmm0
emit
vmovdqa64 (%rdi), %zmm0
vpternlogq $85, %zmm0, %zmm0, %zmm0
gcc/ChangeLog:
PR target/110202
* config/i386/i386-protos.h
(vpternlog_redundant_operand_mask): Declare.
(substitute_vpternlog_operands): Declare.
* config/i386/i386.cc
(vpternlog_redundant_operand_mask): New helper.
(substitute_vpternlog_operands): New function. Use them...
* config/i386/sse.md: ... here in new VPTERNLOG define_splits.
gcc/testsuite/ChangeLog:
PR target/110202
* gcc.target/i386/invariant-ternlog-1.c: New test.
* gcc.target/i386/invariant-ternlog-2.c: New test.
This patch is inspired by Jakub's work on PR rtl-optimization/110717.
The bitfield example described in comment #2, looks like:
struct S { __int128 a : 69; };
unsigned type bar (struct S *p) {
return p->a;
}
which on x86_64 with -O2 currently generates:
bar: movzbl 8(%rdi), %ecx
movq (%rdi), %rax
andl $31, %ecx
movq %rcx, %rdx
salq $59, %rdx
sarq $59, %rdx
ret
The ANDL $31 is interesting... we first extract an unsigned 69-bit bitfield
by masking/clearing the top bits of the most significant word, and then
it gets sign-extended by left shifting and arithmetic right shifting.
Obviously, this bit-wise AND is redundant; for signed bit-fields, we don't
require these bits to be cleared if we're about to set them appropriately.
This patch eliminates this redundancy in the middle-end, during RTL
expansion, by extending the extract_bit_field APIs so that the integer
UNSIGNEDP argument takes a special value: 0 indicates the field should
be sign extended, 1 (any non-zero value) indicates the field should be
zero extended, and -1 indicates a third option, that we don't care how
or whether the field is extended. By passing and checking this sentinel
value at the appropriate places we avoid the useless bit masking (on
all targets).
For the test case above, with this patch we now generate:
bar: movzbl 8(%rdi), %ecx
movq (%rdi), %rax
movq %rcx, %rdx
salq $59, %rdx
sarq $59, %rdx
ret
2023-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* expmed.cc (extract_bit_field_1): Document that an UNSIGNEDP
value of -1 is equivalent to don't care.
(extract_integral_bit_field): Indicate that we don't require
the most significant word to be zero extended, if we're about
to sign extend it.
(extract_fixed_bit_field_1): Document that an UNSIGNEDP value
of -1 is equivalent to don't care. Don't clear the most
significant bits with AND mask when UNSIGNEDP is -1.
gcc/testsuite/ChangeLog
* gcc.target/i386/pr110717-2.c: New test case.
This patch is the final piece in the series to improve the ABI issues
affecting PR 88873. The previous patches tackled inserting DFmode
values into V2DFmode registers, by introducing insvti_{low,high}part
patterns. This patch improves the extraction of DFmode values from
V2DFmode registers via TImode intermediates.
I'd initially thought this would require new extvti_{low,high}part
patterns to be defined, but all that's required is to recognize that
the SUBREG idioms produced by combine are equivalent to (forms of)
vec_select patterns. The target-independent middle-end can't be sure
that the appropriate vec_select instruction exists on the target,
hence doesn't canonicalize a SUBREG of a vector mode as a vec_select,
but the backend can provide a define_split stating where and when
this is useful, for example, considering whether the operand is in
memory, or whether !TARGET_SSE_MATH and the destination is i387.
For pr88873.c, gcc -O2 -march=cascadelake currently generates:
foo: vpunpcklqdq %xmm3, %xmm2, %xmm7
vpunpcklqdq %xmm1, %xmm0, %xmm6
vpunpcklqdq %xmm5, %xmm4, %xmm2
vmovdqa %xmm7, -24(%rsp)
vmovdqa %xmm6, %xmm1
movq -16(%rsp), %rax
vpinsrq $1, %rax, %xmm7, %xmm4
vmovapd %xmm4, %xmm6
vfmadd132pd %xmm1, %xmm2, %xmm6
vmovapd %xmm6, -24(%rsp)
vmovsd -16(%rsp), %xmm1
vmovsd -24(%rsp), %xmm0
ret
with this patch, we now generate:
foo: vpunpcklqdq %xmm1, %xmm0, %xmm6
vpunpcklqdq %xmm3, %xmm2, %xmm7
vpunpcklqdq %xmm5, %xmm4, %xmm2
vmovdqa %xmm6, %xmm1
vfmadd132pd %xmm7, %xmm2, %xmm1
vmovsd %xmm1, %xmm1, %xmm0
vunpckhpd %xmm1, %xmm1, %xmm1
ret
The improvement is even more dramatic when compared to the original
29 instructions shown in comment #8. GCC 13, for example, required
12 transfers to/from memory.
2023-08-04 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/sse.md (define_split): Convert highpart:DF extract
from V2DFmode register into a sse2_storehpd instruction.
(define_split): Likewise, convert lowpart:DF extract from V2DF
register into a sse2_storelpd instruction.
gcc/testsuite/ChangeLog
* gcc.target/i386/pr88873.c: Tweak to check for improved code.
'-Wflex-array-member-not-at-end (C and C++ only)'
Warn when a structure containing a C99 flexible array member as the
last field is not at the end of another structure. This warning
warns e.g. about
struct flex { int length; char data[]; };
struct mid_flex { int m; struct flex flex_data; int n; };
gcc/ChangeLog:
* doc/invoke.texi (-Wflex-array-member-not-at-end): Document
new option.
For the test case LRA generates wrong code for AVR cpymem_qi insn:
(insn 16 15 17 3 (parallel [
(set (mem:BLK (reg:HI 26 r26) [0 A8])
(mem:BLK (reg:HI 30 r30) [0 A8]))
(unspec [
(const_int 0 [0])
] UNSPEC_CPYMEM)
(use (reg:QI 52))
(clobber (reg:HI 26 r26))
(clobber (reg:HI 30 r30))
(clobber (reg:QI 0 r0))
(clobber (reg:QI 52))
]) "t.c":16:22 132 {cpymem_qi}
The insn gets the same value in r26 and r30. The culprit is clobbering
r30 and using r30 as input. For such a situation LRA wrongly assumes that
r30 does not live before the insn. The patch fixes this.
gcc/ChangeLog:
* lra-lives.cc (process_bb_lives): Check input insn pattern hard regs
against early clobber hard regs.
gcc/testsuite/ChangeLog:
* gcc.target/avr/lra-cpymem_qi.c: New.
FORTRAN currently has a pragma NOVECTOR for indicating that vectorization should
not be applied to a particular loop.
ICC/ICX also has such a pragma for C and C++ called #pragma novector.
As part of this patch series I need a way to easily turn off vectorization of
particular loops, particularly for testsuite reasons.
This patch proposes a #pragma GCC novector that does the same for C
as gfortran does for FORTRAN and what ICC/ICX does for C.
I added only some basic tests here, but the next patch in the series uses this
in the testsuite in about ~800 tests.
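A hedged usage sketch of the new pragma: the loop immediately following
the pragma is excluded from vectorization.

  void scale (int *restrict a, int n)
  {
  #pragma GCC novector
    for (int i = 0; i < n; i++)
      a[i] *= 2;
  }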
gcc/c-family/ChangeLog:
* c-pragma.h (enum pragma_kind): Add PRAGMA_NOVECTOR.
* c-pragma.cc (init_pragma): Use it.
gcc/c/ChangeLog:
* c-parser.cc (c_parser_while_statement, c_parser_do_statement,
c_parser_for_statement, c_parser_statement_after_labels,
c_parse_pragma_novector, c_parser_pragma): Wire through novector and
default to false.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-novector-pragma.c: New test.
FORTRAN currently has a pragma NOVECTOR for indicating that vectorization should
not be applied to a particular loop.
ICC/ICX also has such a pragma for C and C++ called #pragma novector.
As part of this patch series I need a way to easily turn off vectorization of
particular loops, particularly for testsuite reasons.
This patch proposes a #pragma GCC novector that does the same for C++
as gfortran does for FORTRAN and what ICC/ICX does for C++.
I added only some basic tests here, but the next patch in the series uses this
in the testsuite in about ~800 tests.
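A hedged usage sketch for the C++ front end, including the range-based
for case handled via RANGE_FOR_NOVECTOR in the ChangeLog below:

  #include <vector>

  void scale (std::vector<int> &v)
  {
  #pragma GCC novector
    for (int &x : v)
      x *= 2;
  }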
gcc/cp/ChangeLog:
* cp-tree.h (RANGE_FOR_NOVECTOR): New.
(cp_convert_range_for, finish_while_stmt_cond, finish_do_stmt,
finish_for_cond): Add novector param.
* init.cc (build_vec_init): Default novector to false.
* method.cc (build_comparison_op): Likewise.
* parser.cc (cp_parser_statement): Likewise.
(cp_parser_for, cp_parser_c_for, cp_parser_range_for,
cp_convert_range_for, cp_parser_iteration_statement,
cp_parser_omp_for_loop, cp_parser_pragma): Support novector.
(cp_parser_pragma_novector): New.
* pt.cc (tsubst_expr): Likewise.
* semantics.cc (finish_while_stmt_cond, finish_do_stmt,
finish_for_cond): Likewise.
gcc/ChangeLog:
* doc/extend.texi: Document it.
gcc/testsuite/ChangeLog:
* g++.dg/vect/vect.exp: Support the vect- prefix.
* g++.dg/vect/vect-novector-pragma.cc: New test.
In GCC 11 we implemented the vectorizer optab for widening left shifts,
however this optab is only supported for uniform shift constants.
At the moment GCC still has two loop vectorization strategies (classical loop and
SLP based loop vec) and the optab is implemented as a scalar pattern.
This means that when we apply it to a non-uniform constant inside a loop we only
find out during SLP build that the constants aren't uniform. At this point it's
too late and we lose SLP entirely.
Over the years I've tried various options but none of them works well:
1. Dissolving patterns during SLP build (problematic, also dissolves them for
non-slp).
2. Optionally ignoring patterns for SLP build (problematic, ends up interfering
with relevancy detection).
3. Relaxing the constraint on SLP build to allow non-constant values and dissolving
them after SLP build using an SLP pattern. (problematic, ends up breaking
shift reassociation).
As a result we've concluded that for now this pattern should just be removed
and formed during RTL.
The plan is to move this to an SLP only pattern once we remove classical loop
vectorization support from GCC, at which time we can also properly support SVE's
Top and Bottom variants.
This removes the optab and reworks the RTL to recognize both the vector variant
and the intrinsics variant. Also just simplifies all these patterns.
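A hedged sketch of the problematic shape described above: the widening
left shifts use different constant shift amounts, so the scalar pattern
only discovers the non-uniformity at SLP build time.

  void widen_shift (unsigned short *restrict out,
                    unsigned char *restrict in, int n)
  {
    for (int i = 0; i < n; i += 2)
      {
        out[i]     = (unsigned short) in[i]     << 1;
        out[i + 1] = (unsigned short) in[i + 1] << 3;
      }
  }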
gcc/ChangeLog:
PR target/106346
* config/aarch64/aarch64-simd.md (vec_widen_<sur>shiftl_lo_<mode>,
vec_widen_<sur>shiftl_hi_<mode>): Remove.
(aarch64_<sur>shll<mode>_internal): Renamed to...
(aarch64_<su>shll<mode>): .. This.
(aarch64_<sur>shll2<mode>_internal): Renamed to...
(aarch64_<su>shll2<mode>): .. This.
(aarch64_<sur>shll_n<mode>, aarch64_<sur>shll2_n<mode>): Re-use new
optabs.
* config/aarch64/constraints.md (D2, DL): New.
* config/aarch64/predicates.md (aarch64_simd_shll_imm_vec): New.
gcc/testsuite/ChangeLog:
PR target/106346
* gcc.target/aarch64/pr98772.c: Adjust assembly.
* gcc.target/aarch64/vect-widen-shift.c: New test.
Currently we segfault when len == 0 for an attribute list.
Essentially, [cons: =0, 1, 2, 3; attrs: ] segfaults but should be equivalent to
[cons: =0, 1, 2, 3] and [cons: =0, 1, 2, 3; attrs:]. This fixes it by just
returning early and leaving it to the validators whether this should error out
or not.
gcc/ChangeLog:
* gensupport.cc (conlist): Support length 0 attribute.
Boolean comparisons have different costs depending on the mode. For example,
for SVE, a && b doesn't require an additional instruction when a or b
is predicated, by combining the predicate of the one operation into the
second one. At the moment though we only fuse compares, so this update
requires one of the operands to be a comparison.
Scalar code also doesn't require this because the non-ifcvt variant is a series of
branches, where following the branch sequences is itself a natural AND.
Advanced SIMD, however, does require an actual AND to combine the boolean values.
As such this patch discounts Scalar and SVE boolean operation latency and
throughput.
With this patch comparison heavy code prefers SVE as it should, especially in
cases with SVE VL == Advanced SIMD VL where previously the SVE prologue costs
would tip it towards Advanced SIMD.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_bool_compound_p): New.
(aarch64_adjust_stmt_cost, aarch64_vector_costs::count_ops): Use it.
When determining issue rates we currently discount non-constant MLA accumulators
for Advanced SIMD but don't do it for the latency.
This means the costs for Advanced SIMD with a constant accumulator are wrong and
results in us costing SVE and Advanced SIMD the same. This can cause us to
vectorize with Advanced SIMD instead of SVE in some cases.
This patch adds the same discount for SVE and Scalar as we do for issue rate.
This gives a 5% improvement in fotonik3d_r in SPECCPU 2017 on large
Neoverse cores.
gcc/ChangeLog:
* config/aarch64/aarch64.cc (aarch64_multiply_add_p): Update handling
of constants.
(aarch64_adjust_stmt_cost): Use it.
(aarch64_vector_costs::count_ops): Likewise.
(aarch64_vector_costs::add_stmt_cost): Pass vinfo to
aarch64_adjust_stmt_cost.
The following fixes a problem with my last attempt of avoiding
out-of-bound shift values for vectorized right shifts of widened
operands. Instead of truncating the shift amount with a bitwise
AND, we actually need to saturate it to the target precision.
The following does that and adds test coverage for the constant
and invariant but variable case that would previously have failed.
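A hedged worked example of the difference: take a value that fits in 16
bits but was originally right-shifted as a wider type by 17.

  int masked_vs_saturated (void)
  {
    short v = -4;
    int masked    = v >> (17 & 15);  /* v >> 1 == -2: wrong               */
    int saturated = v >> 15;         /* -1, matching the original v >> 17 */
    return masked == saturated;      /* 0: masking changes the result     */
  }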
PR tree-optimization/110838
* tree-vect-patterns.cc (vect_recog_over_widening_pattern):
Fix right-shift value sanitizing. Properly emit external
def mangling in the preheader rather than in the pattern
def sequence where it will fail vectorizing.
* gcc.dg/vect/pr110838.c: New testcase.
On some AArch64 bootstrapped builds, we were getting a flaky test
because the floating point operations in `get_time` were being fused
with the floating point operations in `timevar_accumulate`.
This meant that the rounding behaviour of our multiplication with
`ticks_to_msec` was different when used in `timer::start` and when
performed in `timer::stop`. These extra inaccuracies led to the
testcase `g++.dg/ext/timevar1.C` being flaky on some hardware.
------------------------------
Avoiding the inlining which was agreed to be undesirable. Three
alternative approaches:
1) Use `-ffp-contract=on` to avoid this particular optimisation.
2) Adjusting the code so that the "tolerance" is always of the order of
a "tick".
3) Recording times and elapsed differences in integral values.
- Could be in terms of a standard measurement (e.g. nanoseconds or
microseconds).
- Could be in terms of whatever integral value ("ticks" /
seconds & microseconds / "clock ticks") is returned from the syscall
chosen at configure time.
While `-ffp-contract=on` removes the problem that I bumped into, there
has been a similar bug on x86 that was to do with a different floating
point problem that also happens after `get_time` and
`timevar_accumulate` are both inlined into the same function. Hence
it seems worth choosing a different approach.
Of the two other solutions, recording measurements in integral values
seems the most robust against slightly "off" measurements being
presented to the user -- even though it could avoid the ICE that creates
a flaky test.
I considered storing time in whatever units our syscall returns and
normalising them at the time we print out rather than normalising them
to nanoseconds at the point we record our "current time". The logic
being that normalisation could have some rounding affect (e.g. if
TICKS_PER_SECOND is 3) that would be taken into account in calculations.
I decided against it in order to give the values recorded in
`timevar_time_def` some interpretive value so it's easier to read the
code. Compared to the small rounding that would represent a tiny amount
of time and AIUI can not trigger the same kind of ICE's as we are
attempting to fix, said interpretive value seems more valuable.
Recording time in microseconds seemed reasonable since all obvious
values for ticks and `getrusage` are at microsecond granularity or less
precise. That said, since TICKS_PER_SECOND and CLOCKS_PER_SEC are both
variables given to use by the host system I was not sure of that enough
to make this decision.
------------------------------
timer::all_zero is ignoring rows which are inconsequential to the user
and would be printed out as all zeros. Since upon printing rows we
convert to the same double value and print out the same precision as
before, we return true/false based on the same amount of time as before.
timer::print_row casts to a floating point measurement in units of
seconds as was printed out before.
timer::validate_phases -- I'm printing out nanoseconds here rather than
floating point seconds since this is an error message for when things
have "gone wrong"; printing out the actual nanoseconds that have been
recorded seems like the best approach.
N.b. since we now print out nanoseconds instead of floating point value
the padding requirements are different. Originally we were padding to
24 characters and printing 18 decimal places. This looked odd with the
now visually smaller values getting printed. I judged 13 characters
(corresponding to 2 hours) to be a reasonable point at which our
alignment could start to degrade and this provides a more compact output
for the majority of cases (checked by triggering the error case via
GDB).
------------------------------
N.b. I use a literal 1000000000 for "NANOSEC_PER_SEC". I believe this
would fit in an integer on all hosts that GCC supports, but am not
certain there are not strange integer sizes we support hence am pointing
it out for special attention during review.
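A hedged sketch (not the committed code) of the conversion direction
described above: accumulate uint64_t nanoseconds, and only convert to
floating-point seconds when printing.

  #include <stdint.h>

  #define NANOSEC_PER_SEC 1000000000

  static double
  nanosec_to_floating_sec (uint64_t nanosec)
  {
    return (double) nanosec / NANOSEC_PER_SEC;
  }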
------------------------------
No expected change in generated code.
Bootstrapped and regtested on AArch64 with no regressions.
Hope this is acceptable -- I had originally planned to use
`-ffp-contract` as agreed until I saw mention of the old x86 bug in the
same area which was not to do with floating point contraction of
operations (PR 99903).
gcc/ChangeLog:
PR middle-end/110316
PR middle-end/9903
* timevar.cc (NANOSEC_PER_SEC, TICKS_TO_NANOSEC,
CLOCKS_TO_NANOSEC, nanosec_to_floating_sec, percent_of): New.
(TICKS_TO_MSEC, CLOCKS_TO_MSEC): Remove these macros.
(timer::validate_phases): Use integral arithmetic to check
validity.
(timer::print_row, timer::print): Convert from integral
nanoseconds to floating point seconds before printing.
(timer::all_zero): Change limit to nanosec count instead of
fractional count of seconds.
(make_json_for_timevar_time_def): Convert from integral
nanoseconds to floating point seconds before recording.
* timevar.h (struct timevar_time_def): Update all measurements
to use uint64_t nanoseconds rather than seconds stored in a
double.
The following adjusts the shift simplification patterns to avoid
touching out-of-bound shift value arithmetic right shifts of
possibly negative values. While simplifying those to zero isn't
wrong, it violates the principle of least surprise.
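A hedged illustration of the surprise being avoided: for possibly
negative x the out-of-bound arithmetic right shift below, while undefined,
would naively be expected to produce -1 rather than 0, so it is no longer
simplified to zero.

  int f (int x)
  {
    return x >> 100;  /* out of bounds for 32-bit int; no longer folded
                         to 0 when x may be negative */
  }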
PR tree-optimization/110838
* match.pd (([rl]shift @0 out-of-bounds) -> zero): Restrict
the arithmetic right-shift case to non-negative operands.
This changes gimple_bitwise_inverted_equal_p to use 2 different match patterns
to try to match bit_not wrapped with a possible nop_convert and a comparison
also wrapped with a possible nop_convert. This is to avoid being recursive.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
PR tree-optimization/110874
* gimple-match-head.cc (gimple_bit_not_with_nop): New declaration.
(gimple_maybe_cmp): Likewise.
(gimple_bitwise_inverted_equal_p): Rewrite to use gimple_bit_not_with_nop
and gimple_maybe_cmp instead of being recursive.
* match.pd (bit_not_with_nop): New match pattern.
(maybe_cmp): Likewise.
gcc/testsuite/ChangeLog:
PR tree-optimization/110874
* gcc.c-torture/compile/pr110874-a.c: New test.
Canonicalizes (signed x << c) >> c into the lowest
precision(type) - c bits of x IF those bits have a mode precision or a
precision of 1. Also combines this rule with (unsigned x << c) >> c -> x &
((unsigned)-1 >> c) to prevent a duplicate pattern.
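A hedged worked example with 32-bit int and c == 24: the remaining 8 bits
have mode precision (QImode), so the shift pair is canonicalized as a sign
extension of the low byte.

  int low8_sext (int x)
  {
    return (x << 24) >> 24;  /* equivalent to (int) (signed char) x */
  }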
PR middle-end/101955
* match.pd ((signed x << c) >> c): New canonicalization.
* gcc.dg/pr101955.c: New test.
This patch would like to support the rounding mode API for the
VFNMSAC for the below samples.
* __riscv_vfnmsac_vv_f32m1_rm
* __riscv_vfnmsac_vv_f32m1_rm_m
* __riscv_vfnmsac_vf_f32m1_rm
* __riscv_vfnmsac_vf_f32m1_rm_m
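A hedged usage sketch of the intrinsics listed above; the argument order
(vd, vs1, vs2, frm, vl) and the __RISCV_FRM_RNE constant follow the other
_rm intrinsics and are assumptions here, not taken from the new test.

  #include <riscv_vector.h>

  vfloat32m1_t
  nmsac_rne (vfloat32m1_t vd, vfloat32m1_t vs1, vfloat32m1_t vs2, size_t vl)
  {
    return __riscv_vfnmsac_vv_f32m1_rm (vd, vs1, vs2, __RISCV_FRM_RNE, vl);
  }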
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfnmsac_frm): New class for vfnmsac frm.
(vfnmsac_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfnmsac_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-negate-multiply-sub.c:
New test.
This patch would like to support the rounding mode API for the
VFMSAC for the below samples.
* __riscv_vfmsac_vv_f32m1_rm
* __riscv_vfmsac_vv_f32m1_rm_m
* __riscv_vfmsac_vf_f32m1_rm
* __riscv_vfmsac_vf_f32m1_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfmsac_frm): New class for vfmsac frm.
(vfmsac_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfmsac_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-multiply-sub.c: New test.
This patch would like to support the rounding mode API for the
VFNMACC for the below samples.
* __riscv_vfnmacc_vv_f32m1_rm
* __riscv_vfnmacc_vv_f32m1_rm_m
* __riscv_vfnmacc_vf_f32m1_rm
* __riscv_vfnmacc_vf_f32m1_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfnmacc_frm): New class for vfnmacc.
(vfnmacc_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfnmacc_frm): New function definition.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-negate-multiply-add.c:
New test.
Fix the assertion failure on an empty reduction definition in info_for_reduction.
Even if a stmt is live, it may still have an empty reduction definition. Check the
reduction definition instead of the live info before calling info_for_reduction.
gcc/ChangeLog:
PR target/110625
* config/aarch64/aarch64.cc (aarch64_force_single_cycle): Check
STMT_VINFO_REDUC_DEF to avoid failures in info_for_reduction.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/pr110625_3.c: New testcase.
This patch would like to support the rounding mode API for the
VFMACC for the below samples.
* __riscv_vfmacc_vv_f32m1_rm
* __riscv_vfmacc_vv_f32m1_rm_m
* __riscv_vfmacc_vf_f32m1_rm
* __riscv_vfmacc_vf_f32m1_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class vfmacc_frm): New class for vfmacc frm.
(vfmacc_frm_obj): New declaration.
(BASE): Ditto.
* config/riscv/riscv-vector-builtins-bases.h: Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfmacc_frm): New function definition.
* config/riscv/riscv-vector-builtins.cc
(function_expander::use_ternop_insn): Add frm operand support.
* config/riscv/vector.md: Add vfmuladd to frm_mode.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-multiply-add.c: New test.
This patch would like to support the rounding mode API for the
VFWMUL for the below samples.
* __riscv_vfwmul_vv_f64m2_rm
* __riscv_vfwmul_vv_f64m2_rm_m
* __riscv_vfwmul_vf_f64m2_rm
* __riscv_vfwmul_vf_f64m2_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(vfwmul_frm_obj): New declaration.
(vfwmul_frm): Ditto.
* config/riscv/riscv-vector-builtins-bases.h:
(vfwmul_frm): Ditto.
* config/riscv/riscv-vector-builtins-functions.def
(vfwmul_frm): New function definition.
* config/riscv/vector.md: (frm_mode) Add vfwmul to frm_mode.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-widening-mul.c: New test.
This patch would like to support the rounding mode API for the
VFDIV and VFRDIV for the below samples.
* __riscv_vfdiv_vv_f32m1_rm
* __riscv_vfdiv_vv_f32m1_rm_m
* __riscv_vfdiv_vf_f32m1_rm
* __riscv_vfdiv_vf_f32m1_rm_m
* __riscv_vfrdiv_vf_f32m1_rm
* __riscv_vfrdiv_vf_f32m1_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(binop_frm): New declaration.
(reverse_binop_frm): Likewise.
(BASE): Likewise.
* config/riscv/riscv-vector-builtins-bases.h:
(vfdiv_frm): New extern declaration.
(vfrdiv_frm): Likewise.
* config/riscv/riscv-vector-builtins-functions.def
(vfdiv_frm): New function definition.
(vfrdiv_frm): Likewise.
* config/riscv/vector.md: Add vfdiv to frm_mode.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-div.c: New test.
* gcc.target/riscv/rvv/base/float-point-single-rdiv.c: New test.
Hmmer's internal function has 4 loops. The following is the profile at start:
loop 1:
estimate 472
iterations by profile: 473.497707 (reliable) count in:84821 (precise, freq 0.9979)
loop 2:
estimate 99
iterations by profile: 100.000000 (reliable) count in:39848881 (precise, freq 468.8104)
loop 3:
estimate 99
iterations by profile: 100.000000 (reliable) count in:39848881 (precise, freq 468.8104)
loop 4:
estimate 100
iterations by profile: 100.999596 (reliable) execution count:84167 (precise, freq 0.9902)
So the first loop is the outer loop and the second/third loops are nested. The fourth loop is not critical.
Precise iteration counts are unknown (473 and 100 come from the profile).
Nested loop has following form:
for (k = 1; k <= M; k++) {
    mc[k] = mpp[k-1] + tpmm[k-1];
    if ((sc = ip[k-1] + tpim[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = dpp[k-1] + tpdm[k-1]) > mc[k]) mc[k] = sc;
    if ((sc = xmb + bp[k]) > mc[k]) mc[k] = sc;
    mc[k] += ms[k];
    if (mc[k] < -INFTY) mc[k] = -INFTY;
    dc[k] = dc[k-1] + tpdd[k-1];
    if ((sc = mc[k-1] + tpmd[k-1]) > dc[k]) dc[k] = sc;
    if (dc[k] < -INFTY) dc[k] = -INFTY;
    if (k < M) {
      ic[k] = mpp[k] + tpmi[k];
      if ((sc = ip[k] + tpii[k]) > ic[k]) ic[k] = sc;
      ic[k] += is[k];
      if (ic[k] < -INFTY) ic[k] = -INFTY;
    }
  }
We do quite some belly dancing here.
1) loop-ch slightly misupdates the profile, so the estimate of 99
does not match the profile estimate of 100.
2) loop-split splits on if (k < M) and produces two loops.
It fails to notice that the second loop never iterates.
It used to misupdate the profile a lot, which later caused the internal
loop to become cold. This is fixed now.
3) loop-dist introduces runtime aliasing checks for both loops
4) the tree vectorizer vectorizes some of the copies of the loop it produces
and drops expected iteration counts
5) loop peeling peels the loops with expected low iteration counts
6) complete loop unrolling kills some loops in prologues/epilogues.
We end up with quite a few loops and run out of registers:
iterations by profile: 5.312499 (unreliable, maybe flat)
this is vectorized internal loops after loop peeling
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
iterations by profile: 0.009495 (unreliable, maybe flat)
Those are all versioned/peeled and vectorized variants of the loop never looping
iterations by profile: 100.000008 (unreliable)
iterations by profile: 100.000000 (unreliable)
Those are variants with failed aliasing checks
iterations by profile: 9.662853 (unreliable, maybe flat)
iterations by profile: 4.646072 (unreliable)
iterations by profile: 100.000007 (unreliable)
iterations by profile: 5.312500 (unreliable)
iterations by profile: 473.497707 (reliable)
This is loop 1
iterations by profile: 100.999596 (reliable)
This is the loop 4.
This patch fixes loop iteration estimate update after loop split so we get:
iterations by profile: 5.312499 (unreliable, maybe flat) entry count:12742188 (guessed, freq 149.9081)
This is the remainder of the peeled vectorized loop 2. It misses the estimate, which is correct since after peeling it 6 times it is essentially
impossible to tell what the remaining loop profile is (without histograms).
iterations by profile: 0.009496 (unreliable, maybe flat) entry count:374801 (guessed, freq 4.4094)
Peeled split part of loop 2 (one that never loops). We ought to work this out
but at least w
estimate 99
iterations by profile: 100.000008 (unreliable) entry count:3945039 (guessed, freq 46.4122)
estimate 99
iterations by profile: 100.000000 (unreliable) entry count:35505353 (guessed, freq 417.7100)
estimate 99
iterations by profile: 9.662853 (unreliable, maybe flat) entry count:35505353 (guessed, freq 417.7100)
Profile here mismatches estimate - I will need to work out why.
estimate 5
iterations by profile: 4.646072 (unreliable) entry count:31954818 (guessed, freq 375.9390)
This is vectorized but not peeled loop 3
estimate 99
iterations by profile: 100.000007 (unreliable) entry count:7101070 (guessed, freq 83.5420)
Unvectorized variant of loop 3
estimate 5
iterations by profile: 5.312500 (unreliable) entry count:25563855 (guessed, freq 300.7512)
Another vectorized variant of loop 3
estimate 472
iterations by profile: 473.497707 (reliable) entry count:84821 (precise, freq 0.9979)
Outer loop
estimate 100
iterations by profile: 100.999596 (reliable) entry count:84167 (precise, freq 0.9902)
loop 4, not vectorized/peeled
So there is still work to do on this testcase, but with the patch we prevent 3 useless loops.
Bootstrapped/regtested x86_64-linux, plan to commit it later today.
gcc/ChangeLog:
* tree-ssa-loop-split.cc (split_loop): Update estimated iteration counts.
Profiledbootstrap fails with an ICE in update_loop_exit_probability_scale_dom_bbs
called from loop unrolling.
The reason is that in relatively rare situations we may run into a case where a
loop has multiple exits and all are considered likely, but then we scale down
the profile and one of the exits becomes unlikely.
We pass around unadjusted_exit_count to scale the exit probability correctly. In this
case we may end up using an uninitialized value, and the profile-count type
intentionally bombs on that.
gcc/ChangeLog:
PR bootstrap/110857
* cfgloopmanip.cc (scale_loop_profile): (Un)initialize
unadjusted_exit_count.
Instead of reading the known zero bits in IPA, read the value/mask
pair which is available.
There is a slight change of behavior here. I have removed the check
for SSA_NAME, as the ranger can calculate the range and value/mask for
INTEGER_CST. This simplifies the code a bit, since there's no special
casing when setting the jfunc bits. The default range for VR is
undefined, so I think it's safe just to check for undefined_p().
gcc/ChangeLog:
* ipa-prop.cc (ipa_compute_jump_functions_for_edge): Read global
value/mask.
gcc/testsuite/ChangeLog:
* g++.dg/ipa/pure-const-3.C: Move source to...
* g++.dg/ipa/pure-const-3.h: ...here, and adjust original test
accordingly.
* g++.dg/ipa/pure-const-3b.C: New.