[ This is a partial commit. So not all the cases mentioned by
Xiao are currently handled. ]
This patch recognizes Zicond patterns when the select pattern
has a condition of eq or ne against 0 (using eq as an example), namely:
1 rd = (rs2 == 0) ? non-imm : 0
2 rd = (rs2 == 0) ? non-imm : non-imm
3 rd = (rs2 == 0) ? reg : non-imm
4 rd = (rs2 == 0) ? reg : reg
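As an illustration (not taken from the patch), case 4 above corresponds to a
plain C select on a register condition compared against zero:
```c
/* Illustrative only: case 4, both arms in registers.  With Zicond this
   can be expanded as a czero.nez/czero.eqz pair combined with an or.  */
long
sel (long rs2, long a, long b)
{
  return rs2 == 0 ? a : b;
}
```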
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_expand_conditional_move): Recognize
various Zicond patterns.
* config/riscv/riscv.md (mov<mode>cc): Allow TARGET_ZICOND. Use
sfb_alu_operand for both arms of the conditional move.
Co-authored-by: Jeff Law <jlaw@ventanamicro.com>
This patch adds tests for the following builtins:
__builtin_preserve_enum_value
__builtin_btf_type_id
__builtin_preserve_type_info
gcc/testsuite/ChangeLog:
* gcc.target/bpf/core-builtin-enumvalue.c: New test.
* gcc.target/bpf/core-builtin-enumvalue-errors.c: New test.
* gcc.target/bpf/core-builtin-enumvalue-opt.c: New test.
* gcc.target/bpf/core-builtin-fieldinfo-const-elimination.c: New test.
* gcc.target/bpf/core-builtin-fieldinfo-errors-1.c: Changed.
* gcc.target/bpf/core-builtin-fieldinfo-errors-2.c: Changed.
* gcc.target/bpf/core-builtin-type-based.c: New test.
* gcc.target/bpf/core-builtin-type-id.c: New test.
* gcc.target/bpf/core-support.h: New test.
This patch updates the support for the BPF CO-RE builtins
__builtin_preserve_access_index and __builtin_preserve_field_info,
and adds support for the CO-RE builtins __builtin_btf_type_id,
__builtin_preserve_type_info and __builtin_preserve_enum_value.
These CO-RE relocations are now converted to __builtin_core_reloc which
abstracts all of the original builtins in a polymorphic relocation
specific builtin.
The builtin processing is now split into two stages: the first (pack) is
executed right after the front-end and the second (process) right before
the asm output.
In the expand pass the __builtin_core_reloc is converted to an
unspec:UNSPEC_CORE_RELOC rtx entry.
The data required to process the builtin is now collected in the packing
stage (right after the front-end), preventing the compiler from optimizing
away any of the information required to compose the relocation later on.
At expansion, that information is recovered and CTF/BTF is queried to
construct the information that will be used in the relocation.
At this point the relocation is added to a specific section and the
builtin is expanded to its expected default value.
In order to process __builtin_preserve_enum_value, it was necessary to
hook the front-end to collect the original enum value reference.
This is needed since the parser folds all the enum values to their
integer_cst representation.
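A minimal illustration (not from the patch) of why the hook is needed:
```c
/* Illustrative only: the parser folds the enumerator reference below
   to the integer_cst 42 before the builtin machinery runs, so the
   original enumerator name must be captured via the front-end hook.  */
enum e { FOO = 42 };
int x = FOO;   /* seen by later passes simply as "int x = 42;"  */
```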
More details can be found in core-builtins.cc.
Regtested in host x86_64-linux-gnu and target bpf-unknown-none.
gcc/ChangeLog:
PR target/107844
PR target/107479
PR target/107480
PR target/107481
* config.gcc: Added core-builtins.cc and .o files.
* config/bpf/bpf-passes.def: Removed file.
* config/bpf/bpf-protos.h (bpf_add_core_reloc,
bpf_replace_core_move_operands): New prototypes.
* config/bpf/bpf.cc (enum bpf_builtins, is_attr_preserve_access,
maybe_make_core_relo, bpf_core_field_info, bpf_core_compute,
bpf_core_get_index, bpf_core_new_decl, bpf_core_walk,
bpf_is_valid_preserve_field_info_arg, is_attr_preserve_access,
handle_attr_preserve, pass_data_bpf_core_attr, pass_bpf_core_attr):
Removed.
(def_builtin, bpf_expand_builtin, bpf_resolve_overloaded_builtin): Changed.
* config/bpf/bpf.md (define_expand mov<MM:mode>): Changed.
(mov_reloc_core<mode>): Added.
* config/bpf/core-builtins.cc (struct cr_builtin, enum
cr_decision, struct cr_local, struct cr_final, struct
core_builtin_helpers, enum bpf_plugin_states): Added types.
(builtins_data, core_builtin_helpers, core_builtin_type_defs):
Added variables.
(allocate_builtin_data, get_builtin_data, search_builtin_data,
remove_parser_plugin, compare_same_kind, compare_same_ptr_expr,
compare_same_ptr_type, is_attr_preserve_access, core_field_info,
bpf_core_get_index, compute_field_expr,
pack_field_expr_for_access_index, pack_field_expr_for_preserve_field,
process_field_expr, pack_enum_value, process_enum_value, pack_type,
process_type, bpf_require_core_support, make_core_relo, read_kind,
kind_access_index, kind_preserve_field_info, kind_enum_value,
kind_type_id, kind_preserve_type_info, get_core_builtin_fndecl_for_type,
bpf_handle_plugin_finish_type, bpf_init_core_builtins,
construct_builtin_core_reloc, bpf_resolve_overloaded_core_builtin,
bpf_expand_core_builtin, bpf_add_core_reloc,
bpf_replace_core_move_operands): Added functions.
* config/bpf/core-builtins.h (enum bpf_builtins): Added.
(bpf_init_core_builtins, bpf_expand_core_builtin,
bpf_resolve_overloaded_core_builtin): Added functions.
* config/bpf/coreout.cc (struct bpf_core_extra): Added.
(bpf_core_reloc_add, output_asm_btfext_core_reloc): Changed.
* config/bpf/coreout.h (bpf_core_reloc_add): Changed prototype.
* config/bpf/t-bpf: Added core-builtins.o.
* doc/extend.texi: Added documentation for new BPF builtins.
With additional floating point relations in the pipeline, we can no
longer tell based on the LHS what the relation of X < Y is without knowing
the type of X and Y.
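A minimal C illustration of the problem (assuming the usual IEEE semantics):
```c
/* For floating point, !(x < y) does not imply x >= y because either
   operand may be a NaN, so the relation recorded on the false edge
   depends on the operand ranges, not just on the LHS.  */
int
f (double x, double y)
{
  if (x < y)
    return 1;
  return x >= y;   /* still 0 when x or y is NaN */
}
```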
* gimple-range-fold.cc (fold_using_range::range_of_range_op): Add
ranges to the call to relation_fold_and_or.
(fold_using_range::relation_fold_and_or): Add op1 and op2 ranges.
(fur_source::register_outgoing_edges): Add op1 and op2 ranges.
* gimple-range-fold.h (relation_fold_and_or): Adjust params.
* gimple-range-gori.cc (gori_compute::compute_operand_range): Add
a varying op1 and op2 to call.
* range-op-float.cc (range_operator::op1_op2_relation): New defaults.
(operator_equal::op1_op2_relation): New float version.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op-mixed.h (operator_equal::op1_op2_relation): New float
prototype.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op.cc (range_op_handler::op1_op2_relation): Dispatch new
variations.
(range_operator::op1_op2_relation): Add extra params.
(operator_equal::op1_op2_relation): Ditto.
(operator_not_equal::op1_op2_relation): Ditto.
(operator_lt::op1_op2_relation): Ditto.
(operator_le::op1_op2_relation): Ditto.
(operator_gt::op1_op2_relation): Ditto.
(operator_ge::op1_op2_relation): Ditto.
* range-op.h (range_operator): New prototypes.
(range_op_handler): Ditto.
We've been assuming x == x is VREL_EQ in GORI, but this is not always going to
be true with floating point. Provide an API to return the relation.
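A one-line illustration of why the identity relation cannot be assumed for floats:
```c
/* x == x is false when x is a NaN, so VREL_EQ cannot be assumed.  */
int
is_self_equal (double x)
{
  return x == x;
}
```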
* gimple-range-gori.cc (gori_compute::compute_operand1_range):
Use identity relation.
(gori_compute::compute_operand2_range): Ditto.
* value-relation.cc (get_identity_relation): New.
* value-relation.h (get_identity_relation): New prototype.
Set routines which take a type shouldn't have to pre-set the type of the
underlying range as it is specified as a parameter already.
* value-range.h (Value_Range::set_varying): Set the type.
(Value_Range::set_zero): Ditto.
(Value_Range::set_nonzero): Ditto.
I'm using this hunk locally to more thoroughly exercise the zicond paths
due to inaccuracies elsewhere in the costing model. It was never
supposed to be part of the costing commit though. And as we've seen
it's causing problems with the vector bits.
While my testing isn't complete, this hunk was never supposed to be
pushed and it's causing problems. So I'm just ripping it out.
There's a bigger TODO in this space WRT a top-to-bottom evaluation of
the costing on RISC-V. I'm still formulating what that evaluation is
going to look like, so don't hold your breath waiting on it.
Pushed to the trunk.
gcc/
* config/riscv/riscv.cc (riscv_rtx_costs): Remove errant hunk from
recent commit.
The ICE in PR analyzer/108171 appears to be a dup of the recently fixed
PR analyzer/110882 and is likewise fixed by it; adding this test case.
gcc/testsuite/ChangeLog:
PR analyzer/108171
* gcc.dg/analyzer/pr108171.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
The previous patch missed the vfsub comment for binop_frm; this
patch would like to fix that.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc: Add vfsub.
zstdtest has some inline data where some testcases lack the
uncompressed length field. The test computes the missing length but
still ends up allocating memory for the uncompressed buffer based on
the original (zero) length. Oops. That causes memory corruption if the
allocator returns non-NULL.
libbacktrace/
* zstdtest.c (test_samples): Properly compute the allocation
size for the uncompressed data.
can_div_trunc_p (a, b, &Q, &r) tries to compute a Q and r that
satisfy the usual conditions for truncating division:
(1) a = b * Q + r
(2) |b * Q| <= |a|
(3) |r| < |b|
We can compute Q using the constant component (the case when
all indeterminates are zero). Since |r| < |b| for the constant
case, the requirements for indeterminate xi with coefficients
ai (for a) and bi (for b) are:
(2') |bi * Q| <= |ai|
(3') |ai - bi * Q| <= |bi|
(See the big comment for more details, restrictions, and reasoning).
However, the function works on abstract arithmetic types, and so
it has to be careful not to introduce new overflow. The code
therefore only handled the extreme for (3'), that is:
|ai - bi * Q| = |bi|
for the case where Q is zero.
Looking at it again, the overflow issue is a bit easier to handle than
I'd originally thought (or so I hope). This patch therefore extends the
code to handle |ai - bi * Q| = |bi| for all Q, with Q = 0 no longer
being a separate case.
The net effect is to allow the function to succeed for things like:
(a0 + b1 (Q+1) x) / (b0 + b1 x)
where Q = a0 / b0, with various sign conditions. E.g. we now handle:
(7 + 8x) / (4 + 4x)
with Q = 1 and r = 3 + 4x.
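To spell out the check for that example: the constant term gives
Q = trunc(7 / 4) = 1 with remainder 3 < 4, and for the coefficient of x we
have |bi * Q| = 4 <= 8 = |ai| for (2') and |ai - bi * Q| = |8 - 4| = 4 <= 4 = |bi|
for (3'), i.e. exactly the boundary case that the patch now accepts.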
gcc/
* poly-int.h (can_div_trunc_p): Succeed for more boundary conditions.
gcc/testsuite/
* gcc.dg/plugin/poly-int-tests.h (test_can_div_trunc_p_const)
(test_can_div_trunc_p_const): Add more tests.
The following makes sure to limit the shift operand when vectorizing
(short)((int)x >> 31) via (short)x >> 31 as the out of bounds shift
operand otherwise invokes undefined behavior. When we determine
whether we can demote the operand we know we at most shift in the
sign bit so we can adjust the shift amount.
Note this has the possibility of un-CSEing common shift operands
as there's no good way to share pattern stmts between patterns.
We'd have to separately pattern recognize the definition.
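A sketch of the kind of source involved (assumed shape, not the exact testcase
in the PR):
```c
/* Demoting the computation to short would make the shift amount 31
   out of range; since only the sign bit is shifted in, the pattern
   can instead use the clamped amount 15 for the narrower type.  */
short
f (short x)
{
  return (short) ((int) x >> 31);
}
```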
PR tree-optimization/110838
* tree-vect-patterns.cc (vect_recog_over_widening_pattern):
Adjust the shift operand of RSHIFT_EXPRs.
* gcc.dg/torture/pr110838.c: New testcase.
Sometimes IVOPTs chooses a weird induction variable which downstream
leads to issues. Most of the times we can fend those off during costing
by rejecting the candidate but it looks like the address description
costing synthesizes is different from what we end up generating so
the following fixes things up at code generation time. Specifically
we avoid the create_mem_ref_raw fallback which uses a literal zero
address base with the actual base in index2. For the case in question
we have the address
type = unsigned long
offset = 0
elements = {
[0] = &e * -3,
[1] = (sizetype) a.9_30 * 232,
[2] = ivtmp.28_44 * 4
}
from which we code generate the problematical
_3 = MEM[(long int *)0B + ivtmp.36_9 + ivtmp.28_44 * 4];
which references the object at address zero. The patch below
recognizes the fallback after the fact and transforms the
TARGET_MEM_REF memory reference into a LEA for which this form
isn't problematic:
_24 = &MEM[(long int *)0B + ivtmp.36_34 + ivtmp.28_44 * 4];
_3 = *_24;
hereby avoiding the correctness issue. We'd later conclude the
program terminates at the null pointer dereference and make the
function pure, miscompiling the main function of the testcase.
PR tree-optimization/110702
* tree-ssa-loop-ivopts.cc (rewrite_use_address): When
we created a NULL pointer based access rewrite that to
a LEA.
* gcc.dg/torture/pr110702.c: New testcase.
This rewriting removes algorithm inefficiencies due to unnecessary
recursion and copying. The new version has much smaller and statically known
stack requirements and is additionally up to 2x faster.
gcc/ada/
* libgnat/s-imageb.adb (Set_Image_Based_Unsigned): Rewritten.
* libgnat/s-imagew.adb (Set_Image_Width_Unsigned): Likewise.
The problem is that it is necessary to break the privacy during the
expansion of the Input attribute, which may introduce a view mismatch
with the parameter of the routine checking the invariant of the type.
gcc/ada/
* exp_util.adb (Make_Invariant_Call): Convert the expression to
the type of the formal parameter if need be.
Using the operator of System.Storage_Elements has introduced a range check
that may be tripped on, so this removes the intermediate conversion to the
Storage_Count subtype that is responsible for it.
gcc/ada/
* libgnat/s-dwalin.adb ("-"): New subtraction operator.
(Enable_Cache): Use it to compute the offset.
(Symbolic_Address): Likewise.
statement_sink_location for loads is currently confused about
stores that are not on the paths we are sinking across. The
following replaces the logic that tries to ensure we are not
sinking across stores: instead of walking all immediate virtual
uses and then checking whether the stores found are on the paths
we sink through, we check the live virtual operand at the
sinking location. To obtain the live virtual operand we rely
on the new virtual_operand_live class which provides an overall
cheaper and also more precise way to check the constraints.
* tree-ssa-sink.cc: Include tree-ssa-live.h.
(pass_sink_code::execute): Instantiate virtual_operand_live
and pass it down.
(sink_code_in_bb): Pass down virtual_operand_live.
(statement_sink_location): Get virtual_operand_live and
verify we are not sinking loads across stores by looking up
the live virtual operand at the sink location.
* gcc.dg/tree-ssa/ssa-sink-20.c: New testcase.
The following adds an on-demand global liveness computation class
computing and caching the live-out virtual operand of basic blocks
and answering live-out, live-in and live-on-edge queries. The flow
is optimized for the intended use in code sinking which will query
live-in and possibly can be optimized further when the originating
query is for live-out.
The code relies on up-to-date immediate dominator information and
on an unchanging virtual operand state.
* tree-ssa-live.h (class virtual_operand_live): New.
* tree-ssa-live.cc (virtual_operand_live::init): New.
(virtual_operand_live::get_live_in): Likewise.
(virtual_operand_live::get_live_out): Likewise.
The following swaps the loop splitting pass and the final value
replacement pass to avoid keeping the IV of the earlier loop
live when not necessary. The existing gcc.target/i386/pr87007-5.c
testcase shows that we otherwise fail to elide an empty loop
later. I don't see any good reason why loop splitting would need
final value replacement, all exit values honor the constraints
we place on loop header PHIs automatically.
* passes.def: Exchange loop splitting and final value
replacement passes.
* gcc.target/i386/pr87007-5.c: Make sure we split the loop
and eliminate both in the end.
gcc/ChangeLog:
* config/s390/s390.cc (expand_perm_as_a_vlbr_vstbr_candidate):
New function which handles bswap patterns for vec_perm_const.
(vectorize_vec_perm_const_1): Call new function.
* config/s390/vector.md (*bswap<mode>): Fix operands in output
template.
(*vstbr<mode>): New insn.
gcc/testsuite/ChangeLog:
* gcc.target/s390/s390.exp: Add subdirectory vxe2.
* gcc.target/s390/vxe2/vlbr-1.c: New test.
* gcc.target/s390/vxe2/vstbr-1.c: New test.
* gcc.target/s390/vxe2/vstbr-2.c: New test.
The .spec files used for linking on ppc-vx6, when the rtp-smp runtime
is selected, add -L flags for /lib_smp/ and /lib/.
There was a problem, though: although /lib_smp/ and /lib/ were to be
searched in this order, and the specs files do that correctly, the
compiler would search /lib/ first regardless, because
STARTFILE_PREFIX_SPEC said so, and specs files cannot override that.
With this patch, we arrange for the presence of -msmp to affect
STARTFILE_PREFIX_SPEC, so that the compiler searches /lib_smp/ rather
than /lib/ for crt files. A separate patch for GNAT ensures that when
the rtp-smp runtime is selected, -msmp is passed to the compiler
driver for linking, along with the --specs flags.
for gcc/ChangeLog
* config/vxworks-smp.opt: New. Introduce -msmp.
* config.gcc: Enable it on powerpc* vxworks prior to 7r*.
* config/rs6000/vxworks.h (STARTFILE_PREFIX_SPEC): Choose
lib_smp when -msmp is present in the command line.
* doc/invoke.texi: Document it.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_save_reg_p): Save ra for leaf
functions when -mno-omit-leaf-frame-pointer is enabled.
(riscv_option_override): Override omit-frame-pointer.
(riscv_frame_pointer_required): Save s0 for non-leaf functions.
(TARGET_FRAME_POINTER_REQUIRED): Override definition.
* config/riscv/riscv.opt: Add option support.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/omit-frame-pointer-1.c: New test.
* gcc.target/riscv/omit-frame-pointer-2.c: New test.
* gcc.target/riscv/omit-frame-pointer-3.c: New test.
* gcc.target/riscv/omit-frame-pointer-4.c: New test.
* gcc.target/riscv/omit-frame-pointer-test.c: New test.
Signed-off-by: Yanzhang Wang <yanzhang.wang@intel.com>
This patch is a conservative fix for PR target/110792, a wrong-code
regression affecting doubleword rotations by BITS_PER_WORD, which
effectively swaps the highpart and lowpart words, when the source to be
rotated resides in memory. The issue is that if the register used to
hold the lowpart of the destination is mentioned in the address of
the memory operand, the current define_insn_and_split unintentionally
clobbers it before reading the highpart.
Hence, for the testcase, the incorrectly generated code looks like:
salq $4, %rdi // calculate address
movq WHIRL_S+8(%rdi), %rdi // accidentally clobber addr
movq WHIRL_S(%rdi), %rbp // load (wrong) lowpart
Traditionally, the textbook way to fix this would be to add an
explicit early clobber to the instruction's constraints.
(define_insn_and_split "<insn>32di2_doubleword"
- [(set (match_operand:DI 0 "register_operand" "=r,r,r")
+ [(set (match_operand:DI 0 "register_operand" "=r,r,&r")
(any_rotate:DI (match_operand:DI 1 "nonimmediate_operand" "0,r,o")
(const_int 32)))]
but unfortunately this currently generates significantly worse code,
due to a strange choice of reloads (effectively memcpy), which ends up
looking like:
salq $4, %rdi // calculate address
movdqa WHIRL_S(%rdi), %xmm0 // load the double word in SSE reg.
movaps %xmm0, -16(%rsp) // store the SSE reg back to the stack
movq -8(%rsp), %rdi // load highpart
movq -16(%rsp), %rbp // load lowpart
Note that reload's "&" doesn't distinguish between the memory being
early clobbered, vs the registers used in an addressing mode being
early clobbered.
The fix proposed in this patch is to remove the third alternative, that
allowed offsetable memory as an operand, forcing reload to place the
operand into a register before the rotation. This results in:
salq $4, %rdi
movq WHIRL_S(%rdi), %rax
movq WHIRL_S+8(%rdi), %rdi
movq %rax, %rbp
I believe there's a more advanced solution, by swapping the order of
the loads (if first destination register is mentioned in the address),
or inserting a lea insn (if both destination registers are mentioned
in the address), but this fix is a minimal "safe" solution, that
should hopefully be suitable for backporting.
2023-08-03 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/110792
* config/i386/i386.md (<any_rotate>ti3): For rotations by 64 bits
place operand in a register before gen_<insn>64ti2_doubleword.
(<any_rotate>di3): Likewise, for rotations by 32 bits, place
operand in a register before gen_<insn>32di2_doubleword.
(<any_rotate>32di2_doubleword): Constrain operand to be in register.
(<any_rotate>64ti2_doubleword): Likewise.
gcc/testsuite/ChangeLog
PR target/110792
* g++.target/i386/pr110792.C: New 32-bit C++ test case.
* gcc.target/i386/pr110792.c: New 64-bit C test case.
Update in v2:
* Sync with upstream for the vfmul duplicated declaration.
Original log:
This patch would like to support the rounding mode API for the VFMUL
for the below samples.
* __riscv_vfmul_vv_f32m1_rm
* __riscv_vfmul_vv_f32m1_rm_m
* __riscv_vfmul_vf_f32m1_rm
* __riscv_vfmul_vf_f32m1_rm_m
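A hypothetical usage sketch; the prototype below assumes the usual RVV
intrinsic convention of passing the rounding mode before vl and the
__RISCV_FRM_RNE enumerator, neither of which is spelled out in this patch:
```c
#include <riscv_vector.h>

/* Multiply two f32m1 vectors with an explicit round-to-nearest-even
   rounding mode (assumed prototype, for illustration only).  */
vfloat32m1_t
mul_rne (vfloat32m1_t a, vfloat32m1_t b, size_t vl)
{
  return __riscv_vfmul_vv_f32m1_rm (a, b, __RISCV_FRM_RNE, vl);
}
```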
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(vfmul_frm_obj): New declaration.
(Base): Likewise.
* config/riscv/riscv-vector-builtins-bases.h: Likewise.
* config/riscv/riscv-vector-builtins-functions.def
(vfmul_frm): New function definition.
* config/riscv/vector.md: Add vfmul to frm_mode.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-single-mul.c: New test.
As Jakub noticed in https://gcc.gnu.org/pipermail/gcc-patches/2023-August/626039.html
what I did was not totally correct because sometimes the wrong type was chosen.
So to get back to what the original code did, while keeping the use of bitwise_inverted_equal_p,
we just need to check that the types of the two captures are the same.
Also adds a testcase for the problem Jakub found.
Committed as obvious after a bootstrap and test.
gcc/ChangeLog:
* match.pd (`~X & X`): Check that the types match.
(`~x | x`, `~x ^ x`): Likewise.
gcc/testsuite/ChangeLog:
* gcc.c-torture/execute/20230802-1.c: New test.
This patch would like to remove the redundant declaration.
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.h: Remove
redundant declaration.
This patch would like to support the rounding mode API for the VFWSUB
for the below samples.
* __riscv_vfwsub_vv_f64m2_rm
* __riscv_vfwsub_vv_f64m2_rm_m
* __riscv_vfwsub_vf_f64m2_rm
* __riscv_vfwsub_vf_f64m2_rm_m
* __riscv_vfwsub_wv_f64m2_rm
* __riscv_vfwsub_wv_f64m2_rm_m
* __riscv_vfwsub_wf_f64m2_rm
* __riscv_vfwsub_wf_f64m2_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc (BASE): Add
vfwsub frm.
* config/riscv/riscv-vector-builtins-bases.h: Add declaration.
* config/riscv/riscv-vector-builtins-functions.def (vfwsub_frm):
Add vfwsub function definitions.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-widening-sub.c: New test.
This patch adds a hook to the end of ana::on_finish_translation_unit
which calls relevant stashing-related callbacks registered during plugin
initialization. This feature is used to stash named types and global
variables for a CPython analyzer plugin [PR107646].
gcc/analyzer/ChangeLog:
PR analyzer/107646
* analyzer-language.cc (run_callbacks): New function.
(on_finish_translation_unit): New function.
* analyzer-language.h (GCC_ANALYZER_LANGUAGE_H): New include.
(class translation_unit): New vfuncs.
gcc/c/ChangeLog:
PR analyzer/107646
* c-parser.cc: New functions on stashing values for the
analyzer.
gcc/testsuite/ChangeLog:
PR analyzer/107646
* gcc.dg/plugin/plugin.exp: Add new plugin and test.
* gcc.dg/plugin/analyzer_cpython_plugin.c: New plugin.
* gcc.dg/plugin/cpython-plugin-test-1.c: New test.
Signed-off-by: Eric Feng <ef2648@columbia.edu>
In certain cases a constant may not fit into the mode used to perform a
comparison. This may be the case for sign-extended constants which are
used during an unsigned comparison as e.g. in
(set (reg:CC 100 cc)
(compare:CC (mem:SI (reg/v/f:SI 115 [ a ]) [1 *a_4(D)+0 S4 A64])
(const_int -2147483648 [0xffffffff80000000])))
Fixed by ensuring that the constant fits into comparison mode.
Furthermore, on some targets as e.g. sparc the constant used in a
comparison is chopped off before combine which leads to failing test
cases (see PR 110869). Fixed by not requiring that the source mode has
to be DImode, and excluding sparc from the last two test cases entirely
since there the constant cannot be further reduced.
gcc/ChangeLog:
PR rtl-optimization/110867
* combine.cc (simplify_compare_const): Try the optimization only
in case the constant fits into the comparison mode.
gcc/testsuite/ChangeLog:
PR rtl-optimization/110869
* gcc.dg/cmp-mem-const-1.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-2.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-3.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-4.c: Relax mode for constant.
* gcc.dg/cmp-mem-const-5.c: Exclude sparc since here the
constant is already reduced.
* gcc.dg/cmp-mem-const-6.c: Exclude sparc since here the
constant is already reduced.
So we're being a bit too aggressive with the .opt zicond patterns.
> (define_insn "*czero.eqz.<GPR:mode><X:mode>.opt1"
> [(set (match_operand:GPR 0 "register_operand" "=r")
> (if_then_else:GPR (eq (match_operand:X 1 "register_operand" "r")
> (const_int 0))
> (match_operand:GPR 2 "register_operand" "1")
> (match_operand:GPR 3 "register_operand" "r")))]
> "(TARGET_ZICOND || 1) && rtx_equal_p (operands[1], operands[2])"
> "czero.eqz\t%0,%3,%1"
> )
The RTL semantics here are op0 = (op1 == 0) ? op1 : op2. That maps
directly to czero.eqz, i.e., we select op1 when we know it's zero, op2
otherwise. So this pattern is fine.
> (define_insn "*czero.eqz.<GPR:mode><X:mode>.opt2"
> [(set (match_operand:GPR 0 "register_operand" "=r")
> (if_then_else:GPR (eq (match_operand:X 1 "register_operand" "r")
> (const_int 0))
> (match_operand:GPR 2 "register_operand" "r")
> (match_operand:GPR 3 "register_operand" "1")))]
> "(TARGET_ZICOND || 1) && rtx_equal_p (operands[1], operands[3])"
> "czero.nez\t%0,%2,%1"
> )
The RTL semantics of this pattern are: op0 = (op1 == 0) ? op2 : op1;
That's not something that can be expressed by the zicond extension as it
selects op1 if and only if op1 is not equal to zero.
> (define_insn "*czero.nez.<GPR:mode><X:mode>.opt3"
> [(set (match_operand:GPR 0 "register_operand" "=r")
> (if_then_else:GPR (ne (match_operand:X 1 "register_operand" "r")
> (const_int 0))
> (match_operand:GPR 2 "register_operand" "r")
> (match_operand:GPR 3 "register_operand" "1")))]
> "(TARGET_ZICOND || 1) && rtx_equal_p (operands[1], operands[3])"
> "czero.eqz\t%0,%2,%1"
> )
The RTL semantics of this pattern are op0 = (op1 != 0) ? op2 : op1.
That maps to czero.nez. But the output template uses czero.eqz. Oops.
> (define_insn "*czero.nez.<GPR:mode><X:mode>.opt4"
> [(set (match_operand:GPR 0 "register_operand" "=r")
> (if_then_else:GPR (ne (match_operand:X 1 "register_operand" "r")
> (const_int 0))
> (match_operand:GPR 2 "register_operand" "1")
> (match_operand:GPR 3 "register_operand" "r")))]
> "(TARGET_ZICOND || 1) && rtx_equal_p (operands[1], operands[2])"
> "czero.nez\t%0,%3,%1"
> )
The RTL semantics of this pattern are op0 = (op1 != 0) ? op1 : op2 which
obviously doesn't match to any zicond instruction as op1 is selected
when it is not zero.
So two of the patterns are just totally bogus as they are not
implementable with zicond. They are removed. The asm template for the
.opt3 pattern is fixed to use czero.nez and its name is changed to
.opt2.
gcc/
* config/riscv/zicond.md: Remove incorrect zicond patterns and
renumber/rename them.
(zero.nez.<GPR:MODE><X:mode>.opt2): Fix output string.
The only exported PHI allocation already adds the PHI node to a block.
* tree-phinodes.h (add_phi_node_to_bb): Remove.
* tree-phinodes.cc (add_phi_node_to_bb): Make static.
The following delays sinking of loads within the same innermost
loop when it was unconditional before. That's a not uncommon
issue preventing vectorization when masked loads are not available.
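An illustrative example of the situation (not the PR testcase):
```c
/* Sinking the unconditional load of a[i] into the conditional arm
   turns it into a conditionally executed load, which blocks
   vectorization when masked loads are not available.  */
void
f (int *a, int *b, int *c, int n)
{
  for (int i = 0; i < n; i++)
    {
      int t = a[i];      /* unconditional load */
      if (b[i])
        c[i] = t;        /* its only use is conditional */
    }
}
```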
PR tree-optimization/92335
* tree-ssa-sink.cc (select_best_block): Before loop
optimizations avoid sinking unconditional loads/stores
in innermost loops to conditional executed places.
* gcc.dg/tree-ssa/ssa-sink-10.c: Disable vectorizing.
* gcc.dg/tree-ssa/predcom-9.c: Clone from ssa-sink-10.c,
expect predictive commoning to happen instead of sinking.
* gcc.dg/vect/pr65947-3.c: Adjust.
This slightly improves bitwise_inverted_equal_p
for comparisons. Instead of just comparing the
comparison operands, also valueize them.
This will allow ccp and others to match the two comparisons
without an extra pass happening.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* gimple-match-head.cc (gimple_bitwise_inverted_equal_p): Valueize
the comparison operands before comparing them.
This is a simple patch to move these 2 patterns over to use
bitwise_inverted_equal_p. It also allows us to remove 2 other patterns
which were used on comparisons as they are now handled by
the original pattern.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/ChangeLog:
* match.pd (`~X & X`, `~X | X`): Move over to
use bitwise_inverted_equal_p, removing :c as bitwise_inverted_equal_p
handles that already.
Remove range test simplifications to true/false as they
are now handled by these patterns.
In some cases (usually dealing with bools only), there could be some statements
left behind which are considered trivially dead.
An example is:
```
bool f(bool a, bool b)
{
if (!a && !b)
return 0;
if (!a && b)
return 0;
if (a && !b)
return 0;
return 1;
}
```
Where during phiopt2, the IR had:
```
_3 = ~b_7(D);
_4 = _3 & a_6(D);
_4 != 0 ? 0 : 1
```
match-and-simplify would transform that into:
```
_11 = ~a_6(D);
_12 = b_7(D) | _11;
```
But phiopt would leave around the statements defining _4 and _3.
This helps by marking the conditional's lhs and rhs to see if they are
trivially dead.
OK? Bootstrapped and tested on x86_64-linux-gnu.
gcc/ChangeLog:
* tree-ssa-phiopt.cc (match_simplify_replacement): Mark the cond
statement's lhs and rhs to check if trivially dead.
Rename inserted_exprs to exprs_maybe_dce; also move it so
bitmap is not allocated if not needed.
This patch would like to support the rounding mode API for the VFWADD
VFSUB and VFRSUB as below samples.
* __riscv_vfwadd_vv_f64m2_rm
* __riscv_vfwadd_vv_f64m2_rm_m
* __riscv_vfwadd_vf_f64m2_rm
* __riscv_vfwadd_vf_f64m2_rm_m
* __riscv_vfwadd_wv_f64m2_rm
* __riscv_vfwadd_wv_f64m2_rm_m
* __riscv_vfwadd_wf_f64m2_rm
* __riscv_vfwadd_wf_f64m2_rm_m
Signed-off-by: Pan Li <pan2.li@intel.com>
gcc/ChangeLog:
* config/riscv/riscv-vector-builtins-bases.cc
(class widen_binop_frm): New class for binop frm.
(BASE): Add vfwadd_frm.
* config/riscv/riscv-vector-builtins-bases.h: New declaration.
* config/riscv/riscv-vector-builtins-functions.def
(vfwadd_frm): New function definition.
* config/riscv/riscv-vector-builtins-shapes.cc
(BASE_NAME_MAX_LEN): New macro.
(struct alu_frm_def): Leverage new base class.
(struct build_frm_base): New build base for frm.
(struct widen_alu_frm_def): New struct for widen alu frm.
(SHAPE): Add widen_alu_frm shape.
* config/riscv/riscv-vector-builtins-shapes.h: New declaration.
* config/riscv/vector.md (frm_mode): Add vfwalu type.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/float-point-widening-add.c: New test.
This patch commonizes loop_count_in computation with
expected_loop_iterations_by_profile (and moves it to cfgloopanal.cc rather than
manip) and fixes a roundoff error in scale_loop_profile. I also noticed that
I managed to misapply the template change to gcc.dg/unroll-1.c.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* cfgloop.h (loop_count_in): Declare.
* cfgloopanal.cc (expected_loop_iterations_by_profile): Use count_in.
(loop_count_in): Move here from ...
* cfgloopmanip.cc (loop_count_in): ... here.
(scale_loop_profile): Improve dumping; cast iteration bound to sreal.
gcc/testsuite/ChangeLog:
* gcc.dg/unroll-1.c: Fix template.
Loop distribution and ifcvt introduce versions of loops which may be removed
later if vectorization fails. Ifcvt does this by temporarily breaking the profile
and producing a conditional that has two arms with 100% probability because we
know one of the versions will be removed.
Loop distribution is trickier, since it introduces a test for alignment that
either survives to the final code if vectorization succeeds or is folded away if it
fails.
Here we need to assign some reasonable probabilities for the case vectorization
goes well, so this code adds logic to scale profile back in case we remove the
call.
This is not perfect since we drop precise BB counts to guessed ones. It is not a big
deal since we do not rely much on the reliability of bb counts after this point. The other
option would be to apply the scaling only if vectorization succeeds, which however
needs a bit more work on the tree-loop-distribution side and would need all the code in
this patch with small change that fold_loop_internal_call will have to know how
to adjust if the conditional stays. I decided to go for the easier solution for now.
Bootstrapped/regtested x86_64-linux, committed.
gcc/ChangeLog:
* cfg.cc (scale_strictly_dominated_blocks): New function.
* cfg.h (scale_strictly_dominated_blocks): Declare.
* tree-cfg.cc (fold_loop_internal_call): Fixup CFG profile.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/pr98308.c: Check that profile is consistent.
The following removes the code checking whether a noop copy
is between something involved in the return sequence composed
of a SET and USE. Instead of checking for this special-case
the following makes us only ever remove noop copies between
pseudos - which is the case that is necessary for IRA/LRA
interfacing to function according to the comment. That makes
looking for the return reg special case unnecessary, reducing
the compile-time in LRA non-specific to zero for the testcase.
PR rtl-optimization/110587
* lra-spills.cc (return_regno_p): Remove.
(regno_in_use_p): Likewise.
(lra_final_code_change): Do not remove noop moves
between hard registers.
AVX512FP16 supports vfmaddsubXXXph and vfmsubaddXXXph.
Also remove scalar mode from fmaddsub/fmsubadd pattern since there's
no scalar instruction for that.
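As a hedged illustration of what the extended patterns enable (assumed shape,
not the actual pr81904.c testcase):
```c
/* Alternating subtract/add over _Float16 (n assumed even): even
   elements compute a*b - c and odd elements a*b + c, which can map to
   vfmaddsubXXXph when vectorized with AVX512FP16.  */
void
f (_Float16 *restrict r, const _Float16 *restrict a,
   const _Float16 *restrict b, const _Float16 *restrict c, int n)
{
  for (int i = 0; i < n; i += 2)
    {
      r[i]     = a[i]     * b[i]     - c[i];
      r[i + 1] = a[i + 1] * b[i + 1] + c[i + 1];
    }
}
```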
gcc/ChangeLog:
PR target/81904
* config/i386/sse.md (vec_fmaddsub<mode>4): Extend to vector
HFmode, use mode iterator VFH instead.
(vec_fmsubadd<mode>4): Ditto.
(<sd_mask_codefor>fma_fmaddsub_<mode><sd_maskz_name><round_name>):
Remove scalar mode from iterator, use VFH_AVX512VL instead.
(<sd_mask_codefor>fma_fmsubadd_<mode><sd_maskz_name><round_name>):
Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr81904.c: New test.
vlddqu + vinserti128 will use the shuffle port in addition to the load port,
compared to vbroadcasti128. From a latency perspective, vbroadcasti128 is no
worse than vlddqu + vinserti128.
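An illustration of the shape being combined (assumed, not the exact testcase):
```c
#include <immintrin.h>

/* Loading 128 bits and inserting them into both halves of a 256-bit
   vector can be emitted as a single vbroadcasti128 instead of
   vlddqu + vinserti128.  */
__m256i
dup128 (const __m128i *p)
{
  __m128i lo = _mm_lddqu_si128 (p);
  return _mm256_inserti128_si256 (_mm256_castsi128_si256 (lo), lo, 1);
}
```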
gcc/ChangeLog:
* config/i386/sse.md (*avx2_lddqu_inserti_to_bcasti): New
pre_reload define_insn_and_split.
gcc/testsuite/ChangeLog:
* gcc.target/i386/vlddqu_vinserti128.c: New test.
This patch implements a reasonable cost model for using Zicond to
implement conditional moves. Essentially the Zicond insns are always
COSTS_N_INSNS (1).
Note there is still a problem with the costing model in general that
results in failure to if-convert as often as we should. In simplest
terms the insn costing model sums the cost of the SET_SRC and the
cost of the SET_DEST. Thus the conditional move is considered twice
as costly as it should be. That will have to be addressed separately.
gcc/
* config/riscv/riscv.cc (riscv_rtx_costs): Add costing for
using Zicond to implement some conditional moves.
c-torture/execute/pr59014-2.c fails with the Zicond work on rv64. We
miscompile the "foo" routine because we have eliminated a required sign
extension.
The key routine looks like this:
foo (long long int x, long long int y)
{
if (((int) x | (int) y) != 0)
return 6;
return x + y;
}
So we kind of do the expected thing. We IOR X and Y, sign extend the result
from 32 to 64 bits, then emit a suitable conditional branch. ie:
> (insn 10 4 12 2 (set (reg:DI 142)
> (ior:DI (reg/v:DI 138 [ x ])
> (reg/v:DI 139 [ y ]))) "j.c":6:16 99 {iordi3}
> (nil))
> (insn 12 10 13 2 (set (reg:DI 144)
> (sign_extend:DI (subreg:SI (reg:DI 142) 0))) "j.c":6:6 116 {extendsidi2}
> (nil))
> (jump_insn 13 12 14 2 (set (pc)
> (if_then_else (ne (reg:DI 144)
> (const_int 0 [0]))
> (label_ref:DI 27)
> (pc))) "j.c":6:6 243 {*branchdi}
> (expr_list:REG_DEAD (reg:DI 144)
> (int_list:REG_BR_PROB 233216732 (nil)))
When we if-convert that we generate this sequence:
> (insn 10 4 12 2 (set (reg:DI 142)
> (ior:DI (reg/v:DI 138 [ x ])
> (reg/v:DI 139 [ y ]))) "j.c":6:16 99 {iordi3}
> (nil))
> (insn 12 10 30 2 (set (reg:DI 144)
> (sign_extend:DI (subreg:SI (reg:DI 142) 0))) "j.c":6:6 116 {extendsidi2}
> (nil))
> (insn 30 12 31 2 (set (reg:DI 147)
> (const_int 6 [0x6])) "j.c":8:12 179 {*movdi_64bit}
> (nil))
> (insn 31 30 33 2 (set (reg:DI 146)
> (plus:DI (reg/v:DI 138 [ x ])
> (reg/v:DI 139 [ y ]))) "j.c":8:12 5 {adddi3}
> (nil))
> (insn 33 31 34 2 (set (reg:DI 149)
> (if_then_else:DI (ne:DI (reg:DI 144)
> (const_int 0 [0]))
> (const_int 0 [0])
> (reg:DI 146))) "j.c":8:12 11368 {*czero.nez.didi}
> (nil))
> (insn 34 33 35 2 (set (reg:DI 148)
> (if_then_else:DI (eq:DI (reg:DI 144)
> (const_int 0 [0]))
> (const_int 0 [0])
> (reg:DI 147))) "j.c":8:12 11367 {*czero.eqz.didi}
> (nil))
> (insn 35 34 21 2 (set (reg:DI 137 [ <retval> ])
> (ior:DI (reg:DI 148)
> (reg:DI 149))) "j.c":8:12 99 {iordi3}
> (nil))
Which looks basically OK. The sign extended subreg is a bit worrisome though.
And sure enough when we get into combine:
> Failed to match this instruction:
> (parallel [
> (set (reg:DI 149)
> (if_then_else:DI (eq:DI (subreg:SI (reg:DI 142) 0)
> (const_int 0 [0]))
> (reg:DI 146)
> (const_int 0 [0])))
> (set (reg:DI 144)
> (sign_extend:DI (subreg:SI (reg:DI 142) 0)))
> ])
> Successfully matched this instruction:
> (set (reg:DI 144)
> (sign_extend:DI (subreg:SI (reg:DI 142) 0)))
> Successfully matched this instruction:
> (set (reg:DI 149)
> (if_then_else:DI (eq:DI (subreg:SI (reg:DI 142) 0)
> (const_int 0 [0]))
> (reg:DI 146)
> (const_int 0 [0])))
> allowing combination of insns 12 and 33
Since we need the side effect we first try the PARALLEL with two sets.
That, as expected, fails. Generic combine code then tries to pull apart
the two sets as distinct insns resulting in this conditional move:
> (insn 33 31 34 2 (set (reg:DI 149)
> (if_then_else:DI (eq:DI (subreg:SI (reg:DI 142) 0)
> (const_int 0 [0]))
> (reg:DI 146)
> (const_int 0 [0]))) "j.c":8:12 11347 {*czero.nez.disi}
> (expr_list:REG_DEAD (reg:DI 146)
> (nil)))
Bzzt. We can't actually implement this RTL in the hardware. Basically
it's asking to do 32bit comparison on rv64, ignoring the upper 32 bits
of the input register. That's not actually how zicond works.
The operands to the comparison need to be in DImode for rv64 and SImode
for rv32. That's the X iterator. Note the mode of the comparison
operands may be different than the mode of the destination. ie, we might
have a 64bit comparison and produce a 32bit sign extended result much
like the setcc insns support.
This patch changes the 6 zicond patterns to use the X iterator on the
comparison inputs and fixes the testsuite failure.
gcc/
* config/riscv/zicond.md: Use the X iterator instead of ANYI
on the comparison input operands.
This provides some basic costing to conditional moves. The underlying
primitive of an IF-THEN-ELSE which turns into czero is a single insn
(COSTS_N_INSNS (1)).
But these insns were still consistently showing up with the wrong cost (8
instead of 4). This was chased down to computing the cost of the destination
and the cost of the source independently, then summing them. That seems
horribly wrong for register destinations. So this patch special cases
an INSN that is just a SET of a register destination so that the cost
comes from the SET_SRC.
Long term the whole costing model needs a review.
gcc/
* config/riscv/riscv.cc (riscv_rtx_costs, case IF_THEN_ELSE): Add
Zicond costing.
(case SET): For INSNs that just set a REG, take the cost from the
SET_SRC.
Co-authored-by: Jeff Law <jlaw@ventanamicro.com>