Commit graph

158414 commits

Michael Weiser
35c4515b8b [PATCH, PR83492] Fix selection of aarch64 big-endian shift parameters based on __AARCH64EB__
2017-12-20  Michael Weiser  <michael.weiser@gmx.de>

	PR preprocessor/83492
	* lex.c (search_line_fast) [__ARM_NEON && __ARM_64BIT_STATE]:
	Fix selection of big-endian shift parameters by using
	__ARM_BIG_ENDIAN.

From-SVN: r255896
2017-12-20 15:07:01 +00:00
Alexandre Oliva
67a8d7199f [SFN] debug markers before labels no more
Make sure that gimple and RTL IRs don't have debug markers before
labels.  When we build the CFG, we move labels before any markers
appearing before them.  Then, make sure we don't mistakenly
reintroduce them.

This reverts some of the complexity that had been brought about by the
initial SFN patches.

for  gcc/ChangeLog

	PR bootstrap/83396
	* cfgexpand.c (label_rtx_for_bb): Revert SFN changes that
	allowed debug stmts before labels.
	(expand_gimple_basic_block): Likewise.
	* gimple-iterator.c (gimple_find_edge_insert_loc): Likewise.
	* gimple-iterator.h (gsi_after_labels): Likewise.
	* tree-cfgcleanup.c (remove_forwarder_block): Likewise, but
	rename reused variable, and simplify using gsi_move_before.
	* tree-ssa-tail-merge.c (find_duplicate): Likewise.
	* tree-cfg.c (make_edges, cleanup_dead_labels): Likewise.
	(gimple_can_merge_blocks_p, verify_gimple_in_cfg): Likewise.
	(gimple_verify_flow_info, gimple_block_label): Likewise.
	(make_blocks): Move debug markers after adjacent labels.
	* cfgrtl.c (skip_insns_after_block): Revert SFN changes that
	allowed debug insns outside blocks.
	* df-scan.c (df_insn_delete): Likewise.
	* lra-constraints.c (update_ebb_live_info): Likewise.
	* var-tracking.c (get_first_insn, vt_emit_notes): Likewise.
	(vt_initialize, delete_vta_debug_insns): Likewise.
	(reemit_marker_as_note): Drop BB parm.  Adjust callers.

From-SVN: r255895
2017-12-20 14:48:34 +00:00
Richard Sandiford
8a91d54553 poly_int: store merging
This patch makes pass_store_merging track polynomial sizes
and offsets.  store_immediate_info remains restricted to stores
with a constant offset and size.
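
As a rough sketch of that restriction (not the pass's actual control flow;
poly_uint64::is_constant with an out-parameter is assumed from the poly_int
API, and sketch_mergeable_p is a made-up helper name):

  /* Only accesses whose position and size are compile-time constants
     are recorded as store_immediate_info candidates.  */
  static bool
  sketch_mergeable_p (poly_uint64 bitsize, poly_uint64 bitpos,
                      unsigned HOST_WIDE_INT *const_bitsize,
                      unsigned HOST_WIDE_INT *const_bitpos)
  {
    return (bitsize.is_constant (const_bitsize)
            && bitpos.is_constant (const_bitpos));
  }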

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* poly-int-types.h (round_down_to_byte_boundary): New macro.
	(round_up_to_byte_boundary): Likewise.
	* expr.h (get_bit_range): Add temporary shim.
	* gimple-ssa-store-merging.c (store_operand_info): Change the
	bitsize, bitpos, bitregion_start and bitregion_end fields from
	unsigned HOST_WIDE_INT to poly_uint64.
	(merged_store_group): Likewise load_align_base.
	(compatible_load_p): Update accordingly.
	(imm_store_chain_info::coalesce_immediate_stores): Likewise.
	(split_group, imm_store_chain_info::output_merged_store): Likewise.
	(mem_valid_for_store_merging): Return the bitsize, bitpos,
	bitregion_start and bitregion_end as poly_uint64s rather than
	unsigned HOST_WIDE_INTs.  Track polynomial offsets internally.
	(handled_load): Take the bitsize, bitpos,
	bitregion_start and bitregion_end as poly_uint64s rather than
	unsigned HOST_WIDE_INTs.
	(pass_store_merging::process_store): Update call to
	mem_valid_for_store_merging.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255894
2017-12-20 12:56:50 +00:00
Richard Sandiford
7df9b6f12a poly_int: get_object_alignment_2
This patch makes get_object_alignment_2 track polynomial offsets
and sizes.  The real work is done by get_inner_reference, but we
then need to handle the alignment correctly.
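
As a worked example of the underlying arithmetic (not code from the patch):
if the byte offset has the form 4 + 16x, where x is a runtime invariant
ranging over nonnegative integers, the possible offsets are 4, 20, 36, ...,
so only their common divisor can be relied on:

  gcd (4, 16) = 4   ==>   at most 4-byte (32-bit) alignment is guaranteed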

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* builtins.c (get_object_alignment_2): Track polynomial offsets
	and sizes.  Update the alignment handling.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255893
2017-12-20 12:56:32 +00:00
Richard Sandiford
06889da8cc poly_int: expand_debug_expr
This patch makes expand_debug_expr track polynomial memory offsets.
It simplifies the handling of the case in which the reference is not
to the first byte of the base, which seemed non-trivial enough to
make it worth splitting out as a separate patch.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree.h (get_inner_reference): Add a version that returns the
	offset and size as poly_int64_pods rather than HOST_WIDE_INTs.
	* cfgexpand.c (expand_debug_expr): Track polynomial offsets.  Simplify
	the case in which bitpos is not associated with the first byte.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255892
2017-12-20 12:56:24 +00:00
Richard Sandiford
a85d87b20c poly_int: get_inner_reference_aff
This patch makes get_inner_reference_aff return the size as a
poly_widest_int rather than a widest_int.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-affine.h (get_inner_reference_aff): Return the size as a
	poly_widest_int.
	* tree-affine.c (get_inner_reference_aff): Likewise.
	* tree-data-ref.c (dr_may_alias_p): Update accordingly.
	* tree-ssa-loop-im.c (mem_refs_may_alias_p): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255891
2017-12-20 12:56:12 +00:00
Richard Sandiford
c036acdeec poly_int: pointer_may_wrap_p
This patch changes the bitpos argument to pointer_may_wrap_p from
HOST_WIDE_INT to poly_int64.  A later patch makes the callers track
polynomial offsets.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* fold-const.c (pointer_may_wrap_p): Take the offset as a
	poly_int64 rather than a HOST_WIDE_INT.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255890
2017-12-20 12:56:05 +00:00
Richard Sandiford
4a022c701b poly_int: symbolic_number
This patch changes symbolic_number::bytepos from a HOST_WIDE_INT
to a poly_int64.  perform_symbolic_merge can cope with symbolic
offsets as long as the difference between the two offsets is
constant.  (This could happen for a constant-sized field that
occurs at a variable offset, for example.)
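
For example (numbers invented for illustration), two accesses at byte
positions 8x + 4 and 8x + 6, with x a runtime invariant, both have variable
offsets, yet

  (8x + 6) - (8x + 4) = 2

so the difference that perform_symbolic_merge relies on is still a
compile-time constant.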

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* gimple-ssa-store-merging.c (symbolic_number::bytepos): Change from
	HOST_WIDE_INT to poly_int64_pod.
	(perform_symbolic_merge): Update accordingly.
	(bswap_replace): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255889
2017-12-20 12:55:57 +00:00
Richard Sandiford
cc8bea0916 poly_int: aff_tree
This patch changes the type of aff_tree::offset from widest_int to
poly_widest_int and adjusts the function interfaces in the same way.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-affine.h (aff_tree::offset): Change from widest_int
	to poly_widest_int.
	(wide_int_ext_for_comb): Delete.
	(aff_combination_const, aff_comb_cannot_overlap_p): Take the
	constants as poly_widest_int rather than widest_int.
	(aff_combination_constant_multiple_p): Return the multiplier
	as a poly_widest_int.
	(aff_combination_zero_p, aff_combination_singleton_var_p): Handle
	polynomial offsets.
	* tree-affine.c (wide_int_ext_for_comb): Make original widest_int
	version static and add an overload for poly_widest_int.
	(aff_combination_const, aff_combination_add_cst)
	(wide_int_constant_multiple_p, aff_comb_cannot_overlap_p): Take
	the constants as poly_widest_int rather than widest_int.
	(tree_to_aff_combination): Generalize INTEGER_CST case to
	poly_int_tree_p.
	(aff_combination_to_tree): Track offsets as poly_widest_ints.
	(aff_combination_add_product, aff_combination_mult): Handle
	polynomial offsets.
	(aff_combination_constant_multiple_p): Return the multiplier
	as a poly_widest_int.
	* tree-predcom.c (determine_offset): Return the offset as a
	poly_widest_int.
	(split_data_refs_to_components, suitable_component_p): Update
	accordingly.
	(valid_initializer_p): Update call to
	aff_combination_constant_multiple_p.
	* tree-ssa-address.c (addr_to_parts): Handle polynomial offsets.
	* tree-ssa-loop-ivopts.c (get_address_cost_ainc): Take the step
	as a poly_int64 rather than a HOST_WIDE_INT.
	(get_address_cost): Handle polynomial offsets.
	(iv_elimination_compare_lt): Likewise.
	(rewrite_use_nonlinear_expr): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255888
2017-12-20 12:55:45 +00:00
Richard Sandiford
a90c88042b poly_int: get_addr_base_and_unit_offset
This patch changes the values returned by
get_addr_base_and_unit_offset from HOST_WIDE_INT to poly_int64.

maxsize in gimple_fold_builtin_memory_op goes from HOST_WIDE_INT
to poly_uint64 (rather than poly_int64) to match the previous use
of tree_fits_uhwi_p.
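
A sketch of the caller-side pattern after this change (the exact signature
is assumed: the base is returned and the byte offset is written through a
poly_int64 pointer; sketch_split_address is a made-up name):

  static tree
  sketch_split_address (tree ref, poly_int64 *byte_offset)
  {
    /* BASE is NULL_TREE if REF cannot be decomposed; otherwise the
       access is at BASE + *BYTE_OFFSET bytes, where the offset may now
       carry a runtime-invariant component.  */
    tree base = get_addr_base_and_unit_offset (ref, byte_offset);
    return base;
  }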

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-dfa.h (get_addr_base_and_unit_offset_1): Return the offset
	as a poly_int64_pod rather than a HOST_WIDE_INT.
	(get_addr_base_and_unit_offset): Likewise.
	* tree-dfa.c (get_addr_base_and_unit_offset_1): Likewise.
	(get_addr_base_and_unit_offset): Likewise.
	* doc/match-and-simplify.texi: Change off from HOST_WIDE_INT
	to poly_int64 in example.
	* fold-const.c (fold_binary_loc): Update call to
	get_addr_base_and_unit_offset.
	* gimple-fold.c (gimple_fold_builtin_memory_op): Likewise.
	(maybe_canonicalize_mem_ref_addr): Likewise.
	(gimple_fold_stmt_to_constant_1): Likewise.
	* gimple-ssa-warn-restrict.c (builtin_memref::builtin_memref):
	Likewise.
	* ipa-param-manipulation.c (ipa_modify_call_arguments): Likewise.
	* match.pd: Likewise.
	* omp-low.c (lower_omp_target): Likewise.
	* tree-sra.c (build_ref_for_offset): Likewise.
	(build_debug_ref_for_model): Likewise.
	* tree-ssa-address.c (maybe_fold_tmr): Likewise.
	* tree-ssa-alias.c (ao_ref_init_from_ptr_and_size): Likewise.
	* tree-ssa-ccp.c (optimize_memcpy): Likewise.
	* tree-ssa-forwprop.c (forward_propagate_addr_expr_1): Likewise.
	(constant_pointer_difference): Likewise.
	* tree-ssa-loop-niter.c (expand_simple_operations): Likewise.
	* tree-ssa-phiopt.c (jump_function_from_stmt): Likewise.
	* tree-ssa-pre.c (create_component_ref_by_pieces_1): Likewise.
	* tree-ssa-sccvn.c (vn_reference_fold_indirect): Likewise.
	(vn_reference_maybe_forwprop_address, vn_reference_lookup_3): Likewise.
	(set_ssa_val_to): Likewise.
	* tree-ssa-strlen.c (get_addr_stridx, addr_stridxptr)
	(maybe_diag_stxncpy_trunc): Likewise.
	* tree-vrp.c (vrp_prop::check_array_ref): Likewise.
	* tree.c (build_simple_mem_ref_loc): Likewise.
	(array_at_struct_end_p): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255887
2017-12-20 12:55:37 +00:00
Richard Sandiford
588db50c8c poly_int: get_ref_base_and_extent
This patch changes the types of the bit offsets and sizes returned
by get_ref_base_and_extent to poly_int64.

There are some callers that can't sensibly operate on polynomial
offsets or handle cases where the offset and size aren't known
exactly.  This includes the IPA devirtualisation code (since
there's no defined way of having vtables at variable offsets)
and some parts of the DWARF code.  The patch therefore adds
a helper function get_ref_base_and_extent_hwi that either returns
exact HOST_WIDE_INT bit positions and sizes or returns a null
base to indicate failure.
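
A sketch of how such a caller would use the helper (parameter list assumed
from the description above; sketch_use_exact_extent is a made-up name):

  static void
  sketch_use_exact_extent (tree ref)
  {
    HOST_WIDE_INT offset, size;
    bool reverse;
    tree base = get_ref_base_and_extent_hwi (ref, &offset, &size, &reverse);
    if (!base)
      return;  /* The offset or size is not an exact HOST_WIDE_INT.  */
    /* Otherwise OFFSET and SIZE are plain constants, which is all the
       devirtualisation and DWARF users need.  */
  }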

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-dfa.h (get_ref_base_and_extent): Return the base, size and
	max_size as poly_int64_pods rather than HOST_WIDE_INTs.
	(get_ref_base_and_extent_hwi): Declare.
	* tree-dfa.c (get_ref_base_and_extent): Return the base, size and
	max_size as poly_int64_pods rather than HOST_WIDE_INTs.
	(get_ref_base_and_extent_hwi): New function.
	* cfgexpand.c (expand_debug_expr): Update call to
	get_ref_base_and_extent.
	* dwarf2out.c (add_var_loc_to_decl): Likewise.
	* gimple-fold.c (get_base_constructor): Return the offset as a
	poly_int64_pod rather than a HOST_WIDE_INT.
	(fold_const_aggregate_ref_1): Track polynomial sizes and offsets.
	* ipa-polymorphic-call.c
	(ipa_polymorphic_call_context::set_by_invariant)
	(extr_type_from_vtbl_ptr_store): Track polynomial offsets.
	(ipa_polymorphic_call_context::ipa_polymorphic_call_context)
	(check_stmt_for_type_change): Use get_ref_base_and_extent_hwi
	rather than get_ref_base_and_extent.
	(ipa_polymorphic_call_context::get_dynamic_type): Likewise.
	* ipa-prop.c (ipa_load_from_parm_agg, compute_complex_assign_jump_func)
	(get_ancestor_addr_info, determine_locally_known_aggregate_parts):
	Likewise.
	* ipa-param-manipulation.c (ipa_get_adjustment_candidate): Update
	call to get_ref_base_and_extent.
	* tree-sra.c (create_access, get_access_for_expr): Likewise.
	* tree-ssa-alias.c (ao_ref_base, aliasing_component_refs_p)
	(stmt_kills_ref_p): Likewise.
	* tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
	* tree-ssa-scopedtables.c (avail_expr_hash, equal_mem_array_ref_p):
	Likewise.
	* tree-ssa-sccvn.c (vn_reference_lookup_3): Likewise.
	Use get_ref_base_and_extent_hwi rather than get_ref_base_and_extent
	when calling native_encode_expr.
	* tree-ssa-structalias.c (get_constraint_for_component_ref): Update
	call to get_ref_base_and_extent.
	(do_structure_copy): Use get_ref_base_and_extent_hwi rather than
	get_ref_base_and_extent.
	* var-tracking.c (track_expr_p): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255886
2017-12-20 12:55:27 +00:00
Richard Sandiford
80d0198b73 poly_int: ipa_parm_adjustment
This patch changes the type of ipa_parm_adjustment::offset from
HOST_WIDE_INT to poly_int64 and updates uses accordingly.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* ipa-param-manipulation.h (ipa_parm_adjustment::offset): Change from
	HOST_WIDE_INT to poly_int64_pod.
	* ipa-param-manipulation.c (ipa_modify_call_arguments): Track
	polynomial parameter offsets.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255885
2017-12-20 12:54:55 +00:00
Richard Sandiford
21810de45d poly_int: DWARF CFA offsets
This patch makes the DWARF code use poly_int64 rather than
HOST_WIDE_INT for CFA offsets.  The main changes are:

- to make reg_save use a DW_CFA_expression representation when
  the offset isn't constant and

- to record the CFA information alongside a def_cfa_expression
  if either offset is polynomial, since it's quite difficult
  to reconstruct the CFA information otherwise.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* gengtype.c (main): Handle poly_int64_pod.
	* dwarf2out.h (dw_cfi_oprnd_cfa_loc): New dw_cfi_oprnd_type.
	(dw_cfi_oprnd::dw_cfi_cfa_loc): New field.
	(dw_cfa_location::offset, dw_cfa_location::base_offset): Change
	from HOST_WIDE_INT to poly_int64_pod.
	* dwarf2cfi.c (queued_reg_save::cfa_offset): Likewise.
	(copy_cfa): New function.
	(lookup_cfa_1): Use the cached dw_cfi_cfa_loc, if it exists.
	(cfi_oprnd_equal_p): Handle dw_cfi_oprnd_cfa_loc.
	(cfa_equal_p, dwarf2out_frame_debug_adjust_cfa)
	(dwarf2out_frame_debug_cfa_offset, dwarf2out_frame_debug_expr)
	(initial_return_save): Treat offsets as poly_ints.
	(def_cfa_0): Likewise.  Cache the CFA in dw_cfi_cfa_loc if either
	offset is nonconstant.
	(reg_save): Take the offset as a poly_int64.  Fall back to
	DW_CFA_expression for nonconstant offsets.
	(queue_reg_save): Take the offset as a poly_int64.
	* dwarf2out.c (dw_cfi_oprnd2_desc): Handle DW_CFA_def_cfa_expression.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255884
2017-12-20 12:54:44 +00:00
Richard Sandiford
fdbfe4e552 poly_int: operand_subword
This patch makes operand_subword and operand_subword_force take
polynomial offsets.  This is a fairly old-school interface and
these days should only be used when splitting multiword operations
into word operations.  It still doesn't hurt to support polynomial
offsets and it helps make callers easier to write.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* rtl.h (operand_subword, operand_subword_force): Take the offset
	as a poly_uint64 rather than an unsigned int.
	* emit-rtl.c (operand_subword, operand_subword_force): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255883
2017-12-20 12:54:36 +00:00
Richard Sandiford
91914e56a5 poly_int: SUBREG_BYTE
This patch changes SUBREG_BYTE from an int to a poly_int.
Since valid SUBREG_BYTEs must be contained within the mode of the
SUBREG_REG, the required range is the same as for GET_MODE_SIZE,
i.e. unsigned short.  The patch therefore uses poly_uint16(_pod)
for the SUBREG_BYTE.

Using poly_uint16_pod rtx fields requires a new field code ('p').
Since there are no other uses of 'p' besides SUBREG_BYTE, the patch
doesn't add an XPOLY or whatever; all uses should go via SUBREG_BYTE
instead.

The patch doesn't bother implementing 'p' support for legacy
define_peepholes, since none of the remaining ones have subregs
in their patterns.

As it happened, the rtl documentation used SUBREG as an example of a
code with mixed field types, accessed via XEXP (x, 0) and XINT (x, 1).
Since there's no direct replacement for XINT, and since people should
never use it even if there were, the patch changes the example to use
INT_LIST instead.

The patch also changes subreg-related helper functions so that they too
take and return polynomial offsets.  This makes the patch quite big, but
it's mostly mechanical.  The patch generally sticks to existing choices
wrt signedness.
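
A sketch of what offset comparisons look like once SUBREG_BYTE is a
poly_uint16 (illustrative only; must_eq is the comparison spelling used
elsewhere in this series, and sketch_lowpart_subreg_p is a made-up name):

  static bool
  sketch_lowpart_subreg_p (rtx x)
  {
    /* subreg_lowpart_offset now returns a poly_uint64, so the check
       works even when the offset is not a compile-time constant.  */
    poly_uint64 lowpart
      = subreg_lowpart_offset (GET_MODE (x), GET_MODE (SUBREG_REG (x)));
    return must_eq (SUBREG_BYTE (x), lowpart);
  }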

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* doc/rtl.texi: Update documentation of SUBREG_BYTE.  Document the
	'p' format code.  Use INT_LIST rather than SUBREG as the example of
	a code with an XINT and an XEXP.  Remove the implication that
	accessing an rtx field using XINT is expected to work.
	* rtl.def (SUBREG): Change format from "ei" to "ep".
	* rtl.h (rtunion::rt_subreg): New field.
	(XCSUBREG): New macro.
	(SUBREG_BYTE): Use it.
	(subreg_shape): Change offset from an unsigned int to a poly_uint16.
	Update constructor accordingly.
	(subreg_shape::operator ==): Update accordingly.
	(subreg_shape::unique_id): Return an unsigned HOST_WIDE_INT rather
	than an unsigned int.
	(subreg_lsb, subreg_lowpart_offset, subreg_highpart_offset): Return
	a poly_uint64 rather than an unsigned int.
	(subreg_lsb_1): Likewise.  Take the offset as a poly_uint64 rather
	than an unsigned int.
	(subreg_size_offset_from_lsb, subreg_size_lowpart_offset)
	(subreg_size_highpart_offset): Return a poly_uint64 rather than
	an unsigned int.  Take the sizes as poly_uint64s.
	(subreg_offset_from_lsb): Return a poly_uint64 rather than
	an unsigned int.  Take the shift as a poly_uint64 rather than
	an unsigned int.
	(subreg_regno_offset, subreg_offset_representable_p): Take the offset
	as a poly_uint64 rather than an unsigned int.
	(simplify_subreg_regno): Likewise.
	(byte_lowpart_offset): Return the memory offset as a poly_int64
	rather than an int.
	(subreg_memory_offset): Likewise.  Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	(simplify_subreg, simplify_gen_subreg, subreg_get_info)
	(gen_rtx_SUBREG, validate_subreg): Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	* rtl.c (rtx_format): Describe 'p' in comment.
	(copy_rtx, rtx_equal_p_cb, rtx_equal_p): Handle 'p'.
	* emit-rtl.c (validate_subreg, gen_rtx_SUBREG): Take the subreg
	offset as a poly_uint64 rather than an unsigned int.
	(byte_lowpart_offset): Return the memory offset as a poly_int64
	rather than an int.
	(subreg_memory_offset): Likewise.  Take the subreg offset as a
	poly_uint64 rather than an unsigned int.
	(subreg_size_lowpart_offset, subreg_size_highpart_offset): Take the
	mode sizes as poly_uint64s rather than unsigned ints.  Return a
	poly_uint64 rather than an unsigned int.
	(subreg_lowpart_p): Treat subreg offsets as poly_ints.
	(copy_insn_1): Handle 'p'.
	* rtlanal.c (set_noop_p): Treat subregs offsets as poly_uint64s.
	(subreg_lsb_1): Take the subreg offset as a poly_uint64 rather than
	an unsigned int.  Return the shift in the same way.
	(subreg_lsb): Return the shift as a poly_uint64 rather than an
	unsigned int.
	(subreg_size_offset_from_lsb): Take the sizes and shift as
	poly_uint64s rather than unsigned ints.  Return the offset as
	a poly_uint64.
	(subreg_get_info, subreg_regno_offset, subreg_offset_representable_p)
	(simplify_subreg_regno): Take the offset as a poly_uint64 rather than
	an unsigned int.
	* rtlhash.c (add_rtx): Handle 'p'.
	* genemit.c (gen_exp): Likewise.
	* gengenrtl.c (type_from_format, gendef): Likewise.
	* gensupport.c (subst_pattern_match, get_alternatives_number)
	(collect_insn_data, alter_predicate_for_insn, alter_constraints)
	(subst_dup): Likewise.
	* gengtype.c (adjust_field_rtx_def): Likewise.
	* genrecog.c (find_operand, find_matching_operand, validate_pattern)
	(match_pattern_2): Likewise.
	(rtx_test::SUBREG_FIELD): New rtx_test::kind_enum.
	(rtx_test::subreg_field): New function.
	(operator ==, safe_to_hoist_p, transition_parameter_type)
	(print_nonbool_test, print_test): Handle SUBREG_FIELD.
	* genattrtab.c (attr_rtx_1): Say that 'p' is deliberately not handled.
	* genpeep.c (match_rtx): Likewise.
	* print-rtl.c (print_poly_int): Include if GENERATOR_FILE too.
	(rtx_writer::print_rtx_operand): Handle 'p'.
	(print_value): Handle SUBREG.
	* read-rtl.c (apply_int_iterator): Likewise.
	(rtx_reader::read_rtx_operand): Handle 'p'.
	* alias.c (rtx_equal_for_memref_p): Likewise.
	* cselib.c (rtx_equal_for_cselib_1, cselib_hash_rtx): Likewise.
	* caller-save.c (replace_reg_with_saved_mem): Treat subreg offsets
	as poly_ints.
	* calls.c (expand_call): Likewise.
	* combine.c (combine_simplify_rtx, expand_field_assignment): Likewise.
	(make_extraction, gen_lowpart_for_combine): Likewise.
	* loop-invariant.c (hash_invariant_expr_1, invariant_expr_equal_p):
	Likewise.
	* cse.c (remove_invalid_subreg_refs): Take the offset as a poly_uint64
	rather than an unsigned int.  Treat subreg offsets as poly_ints.
	(exp_equiv_p): Handle 'p'.
	(hash_rtx_cb): Likewise.  Treat subreg offsets as poly_ints.
	(equiv_constant, cse_insn): Treat subreg offsets as poly_ints.
	* dse.c (find_shift_sequence): Likewise.
	* dwarf2out.c (rtl_for_decl_location): Likewise.
	* expmed.c (extract_low_bits): Likewise.
	* expr.c (emit_group_store, undefined_operand_subword_p): Likewise.
	(expand_expr_real_2): Likewise.
	* final.c (alter_subreg): Likewise.
	(leaf_renumber_regs_insn): Handle 'p'.
	* function.c (assign_parm_find_stack_rtl, assign_parm_setup_stack):
	Treat subreg offsets as poly_ints.
	* fwprop.c (forward_propagate_and_simplify): Likewise.
	* ifcvt.c (noce_emit_move_insn, noce_emit_cmove): Likewise.
	* ira.c (get_subreg_tracking_sizes): Likewise.
	* ira-conflicts.c (go_through_subreg): Likewise.
	* ira-lives.c (process_single_reg_class_operands): Likewise.
	* jump.c (rtx_renumbered_equal_p): Likewise.  Handle 'p'.
	* lower-subreg.c (simplify_subreg_concatn): Take the subreg offset
	as a poly_uint64 rather than an unsigned int.
	(simplify_gen_subreg_concatn, resolve_simple_move): Treat
	subreg offsets as poly_ints.
	* lra-constraints.c (operands_match_p): Handle 'p'.
	(match_reload, curr_insn_transform): Treat subreg offsets as poly_ints.
	* lra-spills.c (assign_mem_slot): Likewise.
	* postreload.c (move2add_valid_value_p): Likewise.
	* recog.c (general_operand, indirect_operand): Likewise.
	* regcprop.c (copy_value, maybe_mode_change): Likewise.
	(copyprop_hardreg_forward_1): Likewise.
	* reginfo.c (simplifiable_subregs_hasher::hash, simplifiable_subregs)
	(record_subregs_of_mode): Likewise.
	* rtlhooks.c (gen_lowpart_general, gen_lowpart_if_possible): Likewise.
	* reload.c (operands_match_p): Handle 'p'.
	(find_reloads_subreg_address): Treat subreg offsets as poly_ints.
	* reload1.c (alter_reg, choose_reload_regs): Likewise.
	(compute_reload_subreg_offset): Likewise, and return a poly_int64.
	* simplify-rtx.c (simplify_truncation, simplify_binary_operation_1)
	(test_vector_ops_duplicate): Treat subreg offsets as poly_ints.
	(simplify_const_poly_int_tests<N>::run): Likewise.
	(simplify_subreg, simplify_gen_subreg): Take the subreg offset as
	a poly_uint64 rather than an unsigned int.
	* valtrack.c (debug_lowpart_subreg): Likewise.
	* var-tracking.c (var_lowpart): Likewise.
	(loc_cmp): Handle 'p'.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255882
2017-12-20 12:54:28 +00:00
Richard Sandiford
9dcf1f868c poly_int: ira subreg liveness tracking
Normally the IRA-reload interface tries to track the liveness of
individual bytes of an allocno if the allocno is sometimes written
to as a SUBREG.  This isn't possible for variable-sized allocnos,
but it doesn't matter because targets with variable-sized registers
should use LRA instead.

This patch adds a get_subreg_tracking_sizes function for deciding
whether it is possible to model a partial read or write.  Later
patches make it return false if anything is variable.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* ira.c (get_subreg_tracking_sizes): New function.
	(init_live_subregs): Take an integer size rather than a register.
	(build_insn_chain): Use get_subreg_tracking_sizes.  Update calls
	to init_live_subregs.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255881
2017-12-20 12:54:17 +00:00
Richard Sandiford
7f679e470b poly_int: store_field & co
This patch makes store_field and related routines use poly_ints
for bit positions and sizes.  It keeps the existing choices
between signed and unsigned types (there are a mixture of both).

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* expr.c (store_constructor_field): Change bitsize from an
	unsigned HOST_WIDE_INT to a poly_uint64 and bitpos from a
	HOST_WIDE_INT to a poly_int64.
	(store_constructor): Change size from a HOST_WIDE_INT to
	a poly_int64.
	(store_field): Likewise bitsize and bitpos.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255880
2017-12-20 12:54:10 +00:00
Richard Sandiford
8c59e5e735 poly_int: C++ bitfield regions
This patch changes C++ bitregion_start/end values from constants to
poly_ints.  Although it's unlikely that the size needs to be polynomial
in practice, the offset could be with future language extensions.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* expmed.h (store_bit_field): Change bitregion_start and
	bitregion_end from unsigned HOST_WIDE_INT to poly_uint64.
	* expmed.c (adjust_bit_field_mem_for_reg, strict_volatile_bitfield_p)
	(store_bit_field_1, store_integral_bit_field, store_bit_field)
	(store_fixed_bit_field, store_split_bit_field): Likewise.
	* expr.c (store_constructor_field, store_field): Likewise.
	(optimize_bitfield_assignment_op): Likewise.  Make the same change
	to bitsize and bitpos.
	* machmode.h (bit_field_mode_iterator): Change m_bitregion_start
	and m_bitregion_end from HOST_WIDE_INT to poly_int64.  Make the
	same change in the constructor arguments.
	(get_best_mode): Change bitregion_start and bitregion_end from
	unsigned HOST_WIDE_INT to poly_uint64.
	* stor-layout.c (bit_field_mode_iterator::bit_field_mode_iterator):
	Change bitregion_start and bitregion_end from HOST_WIDE_INT to
	poly_int64.
	(bit_field_mode_iterator::next_mode): Update for new types
	of m_bitregion_start and m_bitregion_end.
	(get_best_mode): Change bitregion_start and bitregion_end from
	unsigned HOST_WIDE_INT to poly_uint64.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255879
2017-12-20 12:54:01 +00:00
Richard Sandiford
fc60a41612 poly_int: extract_bit_field bitrange
Similar to the previous store_bit_field patch, but for extractions
rather than insertions.  The patch splits out the extraction-as-subreg
handling into a new function (extract_bit_field_as_subreg), both for
ease of writing and because a later patch will add another caller.

The simplify_gen_subreg overload is temporary; it goes away
in a later patch.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* rtl.h (simplify_gen_subreg): Add a temporary overload that
	accepts poly_uint64 offsets.
	* expmed.h (extract_bit_field): Take bitsize and bitnum as
	poly_uint64s rather than unsigned HOST_WIDE_INTs.
	* expmed.c (lowpart_bit_field_p): Likewise.
	(extract_bit_field_as_subreg): New function, split out from...
	(extract_bit_field_1): ...here.  Take bitsize and bitnum as
	poly_uint64s rather than unsigned HOST_WIDE_INTs.  For vector
	extractions, check that BITSIZE matches the size of the extracted
	value and that BITNUM is an exact multiple of that size.
	If all else fails, try forcing the value into memory if
	BITNUM is variable, and adjusting the address so that the
	offset is constant.  Split the part that can only handle constant
	bitsize and bitnum out into...
	(extract_integral_bit_field): ...this new function.
	(extract_bit_field): Take bitsize and bitnum as poly_uint64s
	rather than unsigned HOST_WIDE_INTs.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255878
2017-12-20 12:53:52 +00:00
Richard Sandiford
2d7b38df8e poly_int: store_bit_field bitrange
This patch changes the bitnum and bitsize arguments to
store_bit_field from unsigned HOST_WIDE_INTs to poly_uint64s.
The later part of store_bit_field_1 still needs to operate
on constant bit positions and sizes, so the patch splits
it out into a subfunction (store_integral_bit_field).

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* expmed.h (store_bit_field): Take bitsize and bitnum as
	poly_uint64s rather than unsigned HOST_WIDE_INTs.
	* expmed.c (simple_mem_bitfield_p): Likewise.  Add a parameter
	that returns the byte size.
	(store_bit_field_1): Take bitsize and bitnum as
	poly_uint64s rather than unsigned HOST_WIDE_INTs.  Update call
	to simple_mem_bitfield_p.  Split the part that can only handle
	constant bitsize and bitnum out into...
	(store_integral_bit_field): ...this new function.
	(store_bit_field): Take bitsize and bitnum as poly_uint64s rather
	than unsigned HOST_WIDE_INTs.
	(extract_bit_field_1): Update call to simple_mem_bitfield_p.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255877
2017-12-20 12:53:44 +00:00
Richard Sandiford
73ca989cb8 poly_int: lra frame offsets
This patch makes LRA use poly_int64s rather than HOST_WIDE_INTs
to store a frame offset (including in things like eliminations).

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* lra-int.h (lra_reg): Change offset from int to poly_int64.
	(lra_insn_recog_data): Change sp_offset from HOST_WIDE_INT
	to poly_int64.
	(lra_eliminate_regs_1, eliminate_regs_in_insn): Change
	update_sp_offset from a HOST_WIDE_INT to a poly_int64.
	(lra_update_reg_val_offset, lra_reg_val_equal_p): Take the
	offset as a poly_int64 rather than an int.
	* lra-assigns.c (find_hard_regno_for_1): Handle poly_int64 offsets.
	(setup_live_pseudos_and_spill_after_risky_transforms): Likewise.
	* lra-constraints.c (equiv_address_substitution): Track offsets
	as poly_int64s.
	(emit_inc): Check poly_int_rtx_p instead of CONST_INT_P.
	(curr_insn_transform): Handle the new form of sp_offset.
	* lra-eliminations.c (lra_elim_table): Change previous_offset
	and offset from HOST_WIDE_INT to poly_int64.
	(print_elim_table, update_reg_eliminate): Update accordingly.
	(self_elim_offsets): Change from HOST_WIDE_INT to poly_int64_pod.
	(get_elimination): Update accordingly.
	(form_sum): Check poly_int_rtx_p instead of CONST_INT_P.
	(lra_eliminate_regs_1, eliminate_regs_in_insn): Change
	update_sp_offset from a HOST_WIDE_INT to a poly_int64.  Handle
	poly_int64 offsets generally.
	(curr_sp_change): Change from HOST_WIDE_INT to poly_int64.
	(mark_not_eliminable, init_elimination): Update accordingly.
	(remove_reg_equal_offset_note): Return a bool and pass the new
	offset back by pointer as a poly_int64.
	* lra-remat.c (change_sp_offset): Take sp_offset as a poly_int64
	rather than a HOST_WIDE_INT.
	(do_remat): Track offsets as poly_int64s.
	* lra.c (lra_update_insn_recog_data, setup_sp_offset): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255876
2017-12-20 12:53:35 +00:00
Richard Sandiford
d05d755107 poly_int: MEM_OFFSET and MEM_SIZE
This patch changes the MEM_OFFSET and MEM_SIZE memory attributes
from HOST_WIDE_INT to poly_int64.  Most of it is mechanical,
but there is one nonobvious change in widen_memory_access.
Previously the main while loop broke with:

      /* Similarly for the decl.  */
      else if (DECL_P (attrs.expr)
               && DECL_SIZE_UNIT (attrs.expr)
               && TREE_CODE (DECL_SIZE_UNIT (attrs.expr)) == INTEGER_CST
               && compare_tree_int (DECL_SIZE_UNIT (attrs.expr), size) >= 0
               && (! attrs.offset_known_p || attrs.offset >= 0))
        break;

but it seemed wrong to optimistically assume the best case
when the offset isn't known (and thus might be negative).
As it happens, the "! attrs.offset_known_p" condition was
always false, because we'd already nullified attrs.expr in
that case:

  /* If we don't know what offset we were at within the expression, then
     we can't know if we've overstepped the bounds.  */
  if (! attrs.offset_known_p)
    attrs.expr = NULL_TREE;

The patch therefore drops "! attrs.offset_known_p ||" when
converting the offset check to the may/must interface.
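
A self-contained toy model of that may/must distinction (names and
representation invented for illustration; this is not GCC's poly-int.h):

  #include <cstdio>

  /* A value c0 + c1 * x, with x an unknown nonnegative integer.  */
  struct toy_poly { long c0, c1; };

  static bool may_ge0 (toy_poly p)  { return p.c1 > 0 || p.c0 >= 0; }
  static bool must_ge0 (toy_poly p) { return p.c0 >= 0 && p.c1 >= 0; }

  int main ()
  {
    toy_poly offset = { -4, 8 };  /* -4 + 8x: nonnegative only for x >= 1 */
    std::printf ("may: %d  must: %d\n", may_ge0 (offset), must_ge0 (offset));
    /* Prints "may: 1  must: 0", so a "must" query makes the widening
       loop assume the worst rather than the best case.  */
    return 0;
  }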

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* rtl.h (mem_attrs): Add a default constructor.  Change size and
	offset from HOST_WIDE_INT to poly_int64.
	* emit-rtl.h (set_mem_offset, set_mem_size, adjust_address_1)
	(adjust_automodify_address_1, set_mem_attributes_minus_bitpos)
	(widen_memory_access): Take the sizes and offsets as poly_int64s
	rather than HOST_WIDE_INTs.
	* alias.c (ao_ref_from_mem): Handle the new form of MEM_OFFSET.
	(offset_overlap_p): Take poly_int64s rather than HOST_WIDE_INTs
	and ints.
	(adjust_offset_for_component_ref): Change the offset from a
	HOST_WIDE_INT to a poly_int64.
	(nonoverlapping_memrefs_p): Track polynomial offsets and sizes.
	* cfgcleanup.c (merge_memattrs): Update after mem_attrs changes.
	* dce.c (find_call_stack_args): Likewise.
	* dse.c (record_store): Likewise.
	* dwarf2out.c (tls_mem_loc_descriptor, dw_sra_loc_expr): Likewise.
	* print-rtl.c (rtx_writer::print_rtx): Likewise.
	* read-rtl-function.c (test_loading_mem): Likewise.
	* rtlanal.c (may_trap_p_1): Likewise.
	* simplify-rtx.c (delegitimize_mem_from_attrs): Likewise.
	* var-tracking.c (int_mem_offset, track_expr_p): Likewise.
	* emit-rtl.c (mem_attrs_eq_p, get_mem_align_offset): Likewise.
	(mem_attrs::mem_attrs): New function.
	(set_mem_attributes_minus_bitpos): Change bitpos from a
	HOST_WIDE_INT to poly_int64.
	(set_mem_alias_set, set_mem_addr_space, set_mem_align, set_mem_expr)
	(clear_mem_offset, clear_mem_size, change_address)
	(get_spill_slot_decl, set_mem_attrs_for_spill): Directly
	initialize mem_attrs.
	(set_mem_offset, set_mem_size, adjust_address_1)
	(adjust_automodify_address_1, offset_address, widen_memory_access):
	Likewise.  Take poly_int64s rather than HOST_WIDE_INT.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255875
2017-12-20 12:53:23 +00:00
Richard Sandiford
a02ee6ef88 poly_int: rtx_addr_can_trap_p_1
This patch changes the offset and size arguments of
rtx_addr_can_trap_p_1 from HOST_WIDE_INT to poly_int64.  It also
uses a size of -1 rather than 0 to represent an unknown size and
BLKmode rather than VOIDmode to represent an unknown mode.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* rtlanal.c (rtx_addr_can_trap_p_1): Take the offset and size
	as poly_int64s rather than HOST_WIDE_INTs.  Use a size of -1
	rather than 0 to represent an unknown size.  Assert that the size
	is known when the mode isn't BLKmode.
	(may_trap_p_1): Use -1 for unknown sizes.
	(rtx_addr_can_trap_p): Likewise.  Pass BLKmode rather than VOIDmode.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255874
2017-12-20 12:53:12 +00:00
Richard Sandiford
02ce5d903e poly_int: dse.c
This patch makes RTL DSE use poly_int for offsets and sizes.
The local phase can optimise them normally but the global phase
treats them as wild accesses.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* dse.c (store_info): Change offset and width from HOST_WIDE_INT
	to poly_int64.  Update commentary for positions_needed.large.
	(read_info_type): Change offset and width from HOST_WIDE_INT
	to poly_int64.
	(set_usage_bits): Likewise.
	(canon_address): Return the offset as a poly_int64 rather than
	a HOST_WIDE_INT.  Use strip_offset_and_add.
	(set_all_positions_unneeded, any_positions_needed_p): Use
	positions_needed.large to track stores with non-constant widths.
	(all_positions_needed_p): Likewise.  Take the offset and width
	as poly_int64s rather than ints.  Assert that rhs is nonnull.
	(record_store): Cope with non-constant offsets and widths.
	Nullify the rhs of an earlier store if we can't tell which bytes
	of it are needed.
	(find_shift_sequence): Take the access_size and shift as poly_int64s
	rather than ints.
	(get_stored_val): Take the read_offset and read_width as poly_int64s
	rather than HOST_WIDE_INTs.
	(check_mem_read_rtx, scan_stores, scan_reads, dse_step5): Handle
	non-constant offsets and widths.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255873
2017-12-20 12:53:05 +00:00
Richard Sandiford
b9c257340b poly_int: ao_ref and vn_reference_op_t
This patch changes the offset, size and max_size fields
of ao_ref from HOST_WIDE_INT to poly_int64 and propagates
the change through the code that references it.  This includes
changing the off field of vn_reference_op_struct in the same way.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* inchash.h (inchash::hash::add_poly_int): New function.
	* tree-ssa-alias.h (ao_ref::offset, ao_ref::size, ao_ref::max_size):
	Use poly_int64 rather than HOST_WIDE_INT.
	(ao_ref::max_size_known_p): New function.
	* tree-ssa-sccvn.h (vn_reference_op_struct::off): Use poly_int64_pod
	rather than HOST_WIDE_INT.
	* tree-ssa-alias.c (ao_ref_base): Apply get_ref_base_and_extent
	to temporaries until its interface is adjusted to match.
	(ao_ref_init_from_ptr_and_size): Handle polynomial offsets and sizes.
	(aliasing_component_refs_p, decl_refs_may_alias_p)
	(indirect_ref_may_alias_decl_p, indirect_refs_may_alias_p): Take
	the offsets and max_sizes as poly_int64s instead of HOST_WIDE_INTs.
	(refs_may_alias_p_1, stmt_kills_ref_p): Adjust for changes to
	ao_ref fields.
	* alias.c (ao_ref_from_mem): Likewise.
	* tree-ssa-dce.c (mark_aliased_reaching_defs_necessary_1): Likewise.
	* tree-ssa-dse.c (valid_ao_ref_for_dse, normalize_ref)
	(clear_bytes_written_by, setup_live_bytes_from_ref, compute_trims)
	(maybe_trim_complex_store, maybe_trim_constructor_store)
	(live_bytes_read, dse_classify_store): Likewise.
	* tree-ssa-sccvn.c (vn_reference_compute_hash, vn_reference_eq):
	(copy_reference_ops_from_ref, ao_ref_init_from_vn_reference)
	(fully_constant_vn_reference_p, valueize_refs_1): Likewise.
	(vn_reference_lookup_3): Likewise.
	* tree-ssa-uninit.c (warn_uninitialized_vars): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255872
2017-12-20 12:52:58 +00:00
Richard Sandiford
5ffca72c5d poly_int: indirect_refs_may_alias_p
This patch makes indirect_refs_may_alias_p use ranges_may_overlap_p
rather than ranges_overlap_p.  Unlike the latter, the former can handle
negative offsets, so the fix for PR44852 should no longer be necessary.
It can also handle offset_int, so avoids unchecked truncations to
HOST_WIDE_INT.
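
A sketch of the call this boils down to (argument order and the use of
poly_offset_int are assumptions; sketch_may_alias is a made-up wrapper):

  static bool
  sketch_may_alias (const poly_offset_int &pos1, const poly_offset_int &size1,
                    const poly_offset_int &pos2, const poly_offset_int &size2)
  {
    /* Returns true whenever the two [pos, pos + size) ranges could
       overlap for some value of the runtime invariants, including when
       the positions are negative or wider than HOST_WIDE_INT.  */
    return ranges_may_overlap_p (pos1, size1, pos2, size2);
  }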

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-ssa-alias.c (indirect_ref_may_alias_decl_p)
	(indirect_refs_may_alias_p): Use ranges_may_overlap_p
	instead of ranges_overlap_p.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255871
2017-12-20 12:52:50 +00:00
Richard Sandiford
b506575ff6 poly_int: same_addr_size_stores_p
This patch makes tree-ssa-alias.c:same_addr_size_stores_p handle
poly_int sizes and offsets.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree-ssa-alias.c (same_addr_size_stores_p): Take the offsets and
	sizes as poly_int64s rather than HOST_WIDE_INTs.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255870
2017-12-20 12:52:43 +00:00
Richard Sandiford
30acf28296 poly_int: fold_ctor_reference
This patch changes the offset and size arguments to
fold_ctor_reference from unsigned HOST_WIDE_INT to poly_uint64.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* gimple-fold.h (fold_ctor_reference): Take the offset and size
	as poly_uint64 rather than unsigned HOST_WIDE_INT.
	* gimple-fold.c (fold_ctor_reference): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255869
2017-12-20 12:52:37 +00:00
Richard Sandiford
74c74aa05e poly_int: DWARF locations
This patch adds support for DWARF location expressions
that involve polynomial offsets.  It adds a target hook that
says how the runtime invariants used in the offsets should be
represented in DWARF.  SVE vectors have to be a multiple of
128 bits in size, so the GCC port uses the number of 128-bit
blocks minus one as the runtime invariant.  However, in DWARF,
the vector length is exposed via a pseudo "VG" register that
holds the number of 64-bit elements in a vector.  Thus:

  indeterminate 1 == (VG / 2) - 1

The hook needs to be general enough to express this.
Note that in most cases the division and subtraction fold
away into surrounding expressions.
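
As a worked example (numbers only, not taken from the patch): a 256-bit SVE
vector contains two 128-bit blocks, so indeterminate 1 is 2 - 1 = 1, while
VG, the number of 64-bit elements, is 4, and indeed

  (VG / 2) - 1 = (4 / 2) - 1 = 1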

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* target.def (dwarf_poly_indeterminate_value): New hook.
	* targhooks.h (default_dwarf_poly_indeterminate_value): Declare.
	* targhooks.c (default_dwarf_poly_indeterminate_value): New function.
	* doc/tm.texi.in (TARGET_DWARF_POLY_INDETERMINATE_VALUE): Document.
	* doc/tm.texi: Regenerate.
	* dwarf2out.h (build_cfa_loc, build_cfa_aligned_loc): Take the
	offset as a poly_int64.
	* dwarf2out.c (new_reg_loc_descr): Move later in file.  Take the
	offset as a poly_int64.
	(loc_descr_plus_const, loc_list_plus_const, build_cfa_aligned_loc):
	Take the offset as a poly_int64.
	(build_cfa_loc): Likewise.  Use loc_descr_plus_const.
	(frame_pointer_fb_offset): Change to a poly_int64.
	(int_loc_descriptor): Take the offset as a poly_int64.  Use
	targetm.dwarf_poly_indeterminate_value for polynomial offsets.
	(based_loc_descr): Take the offset as a poly_int64.
	Use strip_offset_and_add to handle (plus X (const)).
	Use new_reg_loc_descr instead of an open-coded version of the
	previous implementation.
	(mem_loc_descriptor): Handle CONST_POLY_INT.
	(compute_frame_pointer_to_fb_displacement): Take the offset as a
	poly_int64.  Use strip_offset_and_add to handle (plus X (const)).

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255868
2017-12-20 12:52:30 +00:00
Richard Sandiford
84bc717b51 poly_int: REG_OFFSET
This patch changes the type of the reg_attrs offset field
from HOST_WIDE_INT to poly_int64 and updates uses accordingly.
This includes changing reg_attr_hasher::hash to use inchash.
(Doing this has no effect on code generation since the only
use of the hasher is to avoid creating duplicate objects.)

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
            Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* rtl.h (reg_attrs::offset): Change from HOST_WIDE_INT to poly_int64.
	(gen_rtx_REG_offset): Take the offset as a poly_int64.
	* inchash.h (inchash::hash::add_poly_hwi): New function.
	* gengtype.c (main): Register poly_int64.
	* emit-rtl.c (reg_attr_hasher::hash): Use inchash.  Treat the
	offset as a poly_int.
	(reg_attr_hasher::equal): Use must_eq to compare offsets.
	(get_reg_attrs, update_reg_offset, gen_rtx_REG_offset): Take the
	offset as a poly_int64.
	(set_reg_attrs_from_value): Treat the offset as a poly_int64.
	* print-rtl.c (print_poly_int): New function.
	(rtx_writer::print_rtx_operand_code_r): Treat REG_OFFSET as
	a poly_int.
	* var-tracking.c (track_offset_p, get_tracked_reg_offset): New
	functions.
	(var_reg_set, var_reg_delete_and_set, var_reg_delete): Use them.
	(same_variable_part_p, track_loc_p): Take the offset as a poly_int64.
	(vt_get_decl_and_offset): Return the offset as a poly_int64.
	Enforce track_offset_p for parts of a PARALLEL.
	(vt_add_function_parameter): Use const_offset for the final
	offset to track.  Use get_tracked_reg_offset for the parts
	of a PARALLEL.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255867
2017-12-20 12:52:22 +00:00
Richard Sandiford
37b2b8f957 poly_int: TRULY_NOOP_TRUNCATION
This patch makes TRULY_NOOP_TRUNCATION take the mode sizes as
poly_uint64s instead of unsigned ints.  The function bodies
don't need to change.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
            Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* target.def (truly_noop_truncation): Take poly_uint64s instead of
	unsigned ints.  Change default to hook_bool_puint64_puint64_true.
	* doc/tm.texi: Regenerate.
	* hooks.h (hook_bool_uint_uint_true): Delete.
	(hook_bool_puint64_puint64_true): Declare.
	* hooks.c (hook_bool_uint_uint_true): Delete.
	(hook_bool_puint64_puint64_true): New function.
	* config/mips/mips.c (mips_truly_noop_truncation): Take poly_uint64s
	instead of unsigned ints.
	* config/spu/spu.c (spu_truly_noop_truncation): Likewise.
	* config/tilegx/tilegx.c (tilegx_truly_noop_truncation): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255866
2017-12-20 12:52:12 +00:00
Richard Sandiford
f8832fe1a7 poly_int: create_integer_operand
This patch generalises create_integer_operand so that it accepts
poly_int64s rather than HOST_WIDE_INTs.

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* optabs.h (expand_operand): Add an int_value field.
	(create_expand_operand): Add an int_value parameter and use it
	to initialize the new expand_operand field.
	(create_integer_operand): Replace with a declaration of a function
	that accepts poly_int64s.  Move the implementation to...
	* optabs.c (create_integer_operand): ...here.
	(maybe_legitimize_operand): For EXPAND_INTEGER, check whether
	the mode preserves the value of int_value, instead of calling
	const_int_operand on the rtx.  Use gen_int_mode to generate
	the new rtx.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255865
2017-12-20 12:52:04 +00:00
Richard Sandiford
dc3f380505 poly_int: dump routines
2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* dumpfile.h (dump_dec): Declare.
	* dumpfile.c (dump_dec): New function.
	* pretty-print.h (pp_wide_integer): Turn into a function and
	declare a poly_int version.
	* pretty-print.c (pp_wide_integer): New function for poly_ints.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255864
2017-12-20 12:51:57 +00:00
Richard Sandiford
36fd640865 poly_int: tree constants
This patch adds a tree representation for poly_ints.  Unlike the
rtx version, the coefficients are INTEGER_CSTs rather than plain
integers, so that we can easily access them as poly_widest_ints
and poly_offset_ints.

The patch also adjusts some places that previously
relied on "constant" meaning "INTEGER_CST".  It also makes
sure that the TYPE_SIZE agrees with the TYPE_SIZE_UNIT for
vector booleans, given the existing:

	/* Several boolean vector elements may fit in a single unit.  */
	if (VECTOR_BOOLEAN_TYPE_P (type)
	    && type->type_common.mode != BLKmode)
	  TYPE_SIZE_UNIT (type)
	    = size_int (GET_MODE_SIZE (type->type_common.mode));
	else
	  TYPE_SIZE_UNIT (type) = int_const_binop (MULT_EXPR,
						   TYPE_SIZE_UNIT (innertype),
						   size_int (nunits));
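
A sketch of reading such a size back as a poly_int (only names from the
ChangeLog below are used, with their signatures assumed;
sketch_type_size_in_bytes is a made-up helper):

  static bool
  sketch_type_size_in_bytes (tree type, poly_int64 *bytes)
  {
    tree size = TYPE_SIZE_UNIT (type);
    if (!size || !tree_fits_poly_int64_p (size))
      return false;
    /* For a POLY_INT_CST this may have a runtime-invariant component;
       a tree_fits_shwi_p / tree_to_shwi pair would have rejected it.  */
    *bytes = tree_to_poly_int64 (size);
    return true;
  }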

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* doc/generic.texi (POLY_INT_CST): Document.
	* tree.def (POLY_INT_CST): New tree code.
	* treestruct.def (TS_POLY_INT_CST): New tree layout.
	* tree-core.h (tree_poly_int_cst): New struct.
	(tree_node): Add a poly_int_cst field.
	* tree.h (POLY_INT_CST_P, POLY_INT_CST_COEFF): New macros.
	(wide_int_to_tree, force_fit_type): Take a poly_wide_int_ref
	instead of a wide_int_ref.
	(build_int_cst, build_int_cst_type): Take a poly_int64 instead
	of a HOST_WIDE_INT.
	(build_int_cstu, build_array_type_nelts): Take a poly_uint64
	instead of an unsigned HOST_WIDE_INT.
	(build_poly_int_cst, tree_fits_poly_int64_p, tree_fits_poly_uint64_p)
	(ptrdiff_tree_p): Declare.
	(tree_to_poly_int64, tree_to_poly_uint64): Likewise.  Provide
	extern inline implementations if the target doesn't use POLY_INT_CST.
	(poly_int_tree_p): New function.
	(wi::unextended_tree): New class.
	(wi::int_traits <unextended_tree>): New override.
	(wi::extended_tree): Add a default constructor.
	(wi::extended_tree::get_tree): New function.
	(wi::widest_extended_tree, wi::offset_extended_tree): New typedefs.
	(wi::tree_to_widest_ref, wi::tree_to_offset_ref): Use them.
	(wi::tree_to_poly_widest_ref, wi::tree_to_poly_offset_ref)
	(wi::tree_to_poly_wide_ref): New typedefs.
	(wi::ints_for): Provide overloads for extended_tree and
	unextended_tree.
	(poly_int_cst_value, wi::to_poly_widest, wi::to_poly_offset)
	(wi::to_wide): New functions.
	(wi::fits_to_boolean_p, wi::fits_to_tree_p): Handle poly_ints.
	* tree.c (poly_int_cst_hasher): New struct.
	(poly_int_cst_hash_table): New variable.
	(tree_node_structure_for_code, tree_code_size, simple_cst_equal)
	(valid_constant_size_p, add_expr, drop_tree_overflow): Handle
	POLY_INT_CST.
	(initialize_tree_contains_struct): Handle TS_POLY_INT_CST.
	(init_ttree): Initialize poly_int_cst_hash_table.
	(build_int_cst, build_int_cst_type, build_invariant_address): Take
	a poly_int64 instead of a HOST_WIDE_INT.
	(build_int_cstu, build_array_type_nelts): Take a poly_uint64
	instead of an unsigned HOST_WIDE_INT.
	(wide_int_to_tree): Rename to...
	(wide_int_to_tree_1): ...this.
	(build_new_poly_int_cst, build_poly_int_cst): New functions.
	(force_fit_type): Take a poly_wide_int_ref instead of a wide_int_ref.
	(wide_int_to_tree): New function that takes a poly_wide_int_ref.
	(ptrdiff_tree_p, tree_to_poly_int64, tree_to_poly_uint64)
	(tree_fits_poly_int64_p, tree_fits_poly_uint64_p): New functions.
	* lto-streamer-out.c (DFS::DFS_write_tree_body, hash_tree): Handle
	TS_POLY_INT_CST.
	* tree-streamer-in.c (lto_input_ts_poly_tree_pointers): Likewise.
	(streamer_read_tree_body): Likewise.
	* tree-streamer-out.c (write_ts_poly_tree_pointers): Likewise.
	(streamer_write_tree_body): Likewise.
	* tree-streamer.c (streamer_check_handled_ts_structures): Likewise.
	* asan.c (asan_protect_global): Require the size to be an INTEGER_CST.
	* cfgexpand.c (expand_debug_expr): Handle POLY_INT_CST.
	* expr.c (expand_expr_real_1, const_vector_from_tree): Likewise.
	* gimple-expr.h (is_gimple_constant): Likewise.
	* gimplify.c (maybe_with_size_expr): Likewise.
	* print-tree.c (print_node): Likewise.
	* tree-data-ref.c (data_ref_compare_tree): Likewise.
	* tree-pretty-print.c (dump_generic_node): Likewise.
	* tree-ssa-address.c (addr_for_mem_ref): Likewise.
	* tree-vect-data-refs.c (dr_group_sort_cmp): Likewise.
	* tree-vrp.c (compare_values_warnv): Likewise.
	* tree-ssa-loop-ivopts.c (determine_base_object, constant_multiple_of)
	(get_loop_invariant_expr, add_candidate_1, get_computation_aff_1)
	(force_expr_to_var_cost): Likewise.
	* tree-ssa-loop.c (for_each_index): Likewise.
	* fold-const.h (build_invariant_address, size_int_kind): Take a
	poly_int64 instead of a HOST_WIDE_INT.
	* fold-const.c (fold_negate_expr_1, const_binop, const_unop)
	(fold_convert_const, multiple_of_p, fold_negate_const): Handle
	POLY_INT_CST.
	(size_binop_loc): Likewise.  Allow int_const_binop_1 to fail.
	(int_const_binop_2): New function, split out from...
	(int_const_binop_1): ...here.  Handle POLY_INT_CST.
	(size_int_kind): Take a poly_int64 instead of a HOST_WIDE_INT.
	* expmed.c (make_tree): Handle CONST_POLY_INT_P.
	* gimple-ssa-strength-reduction.c (slsr_process_add)
	(slsr_process_mul): Check for INTEGER_CSTs before using them
	as candidates.
	* stor-layout.c (bits_from_bytes): New function.
	(bit_from_pos): Use it.
	(layout_type): Likewise.  For vectors, multiply the TYPE_SIZE_UNIT
	by BITS_PER_UNIT to get the TYPE_SIZE.
	* tree-cfg.c (verify_expr, verify_types_in_gimple_reference): Allow
	MEM_REF and TARGET_MEM_REF offsets to be a POLY_INT_CST.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255863
2017-12-20 12:51:50 +00:00
Richard Sandiford
0c12fc9b2d poly_int: rtx constants
This patch adds an rtl representation of poly_int values.
There were three possible ways of doing this:

(1) Add a new rtl code for the poly_ints themselves and store the
    coefficients as trailing wide_ints.  This would give constants like:

      (const_poly_int [c0 c1 ... cn])

    The runtime value would be:

      c0 + c1 * x1 + ... + cn * xn

(2) Like (1), but use rtxes for the coefficients.  This would give
    constants like:

      (const_poly_int [(const_int c0)
                       (const_int c1)
                       ...
                       (const_int cn)])

    although the coefficients could be const_wide_ints instead
    of const_ints where appropriate.

(3) Add a new rtl code for the polynomial indeterminates,
    then use them in const wrappers.  A constant like c0 + c1 * x1
    would then look like:

      (const:M (plus:M (mult:M (const_param:M x1)
                               (const_int c1))
                       (const_int c0)))

There didn't seem to be that much to choose between them.  The main
advantage of (1) is that it's a more efficient representation and
that we can refer to the coefficients directly as wide_int_storage.
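
As a rough sketch (mine, not part of the patch) of how the new predicates
might be used by target-independent code; the exact signatures are inferred
from the ChangeLog below:

      /* Sketch only: if X is a constant offset (a CONST_INT or one of
         the new CONST_POLY_INTs), add it to *OFFSET and return true.  */
      static bool
      add_constant_offset (rtx x, poly_int64 *offset)
      {
        poly_int64 value;
        if (!poly_int_rtx_p (x, &value))
          return false;
        *offset += value;
        return true;
      }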

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* doc/rtl.texi (const_poly_int): Document.  Also document the
	rtl sharing behavior.
	* gengenrtl.c (excluded_rtx): Return true for CONST_POLY_INT.
	* rtl.h (const_poly_int_def): New struct.
	(rtx_def::u): Add a cpi field.
	(CASE_CONST_UNIQUE, CASE_CONST_ANY): Add CONST_POLY_INT.
	(CONST_POLY_INT_P, CONST_POLY_INT_COEFFS): New macros.
	(wi::rtx_to_poly_wide_ref): New typedef.
	(const_poly_int_value, wi::to_poly_wide, rtx_to_poly_int64)
	(poly_int_rtx_p): New functions.
	(trunc_int_for_mode): Declare a poly_int64 version.
	(plus_constant): Take a poly_int64 instead of a HOST_WIDE_INT.
	(immed_wide_int_const): Take a poly_wide_int_ref rather than
	a wide_int_ref.
	(strip_offset): Declare.
	(strip_offset_and_add): New function.
	* rtl.def (CONST_POLY_INT): New rtx code.
	* rtl.c (rtx_size): Handle CONST_POLY_INT.
	(shared_const_p): Use poly_int_rtx_p.
	* emit-rtl.h (gen_int_mode): Take a poly_int64 instead of a
	HOST_WIDE_INT.
	(gen_int_shift_amount): Likewise.
	* emit-rtl.c (const_poly_int_hasher): New class.
	(const_poly_int_htab): New variable.
	(init_emit_once): Initialize it when NUM_POLY_INT_COEFFS > 1.
	(const_poly_int_hasher::hash): New function.
	(const_poly_int_hasher::equal): Likewise.
	(gen_int_mode): Take a poly_int64 instead of a HOST_WIDE_INT.
	(immed_wide_int_const): Rename to...
	(immed_wide_int_const_1): ...this and make static.
	(immed_wide_int_const): New function, taking a poly_wide_int_ref
	instead of a wide_int_ref.
	(gen_int_shift_amount): Take a poly_int64 instead of a HOST_WIDE_INT.
	(gen_lowpart_common): Handle CONST_POLY_INT.
	* cse.c (hash_rtx_cb, equiv_constant): Likewise.
	* cselib.c (cselib_hash_rtx): Likewise.
	* dwarf2out.c (const_ok_for_output_1): Likewise.
	* expr.c (convert_modes): Likewise.
	* print-rtl.c (rtx_writer::print_rtx, print_value): Likewise.
	* rtlhash.c (add_rtx): Likewise.
	* explow.c (trunc_int_for_mode): Add a poly_int64 version.
	(plus_constant): Take a poly_int64 instead of a HOST_WIDE_INT.
	Handle existing CONST_POLY_INT rtxes.
	* expmed.h (expand_shift): Take a poly_int64 instead of a
	HOST_WIDE_INT.
	* expmed.c (expand_shift): Likewise.
	* rtlanal.c (strip_offset): New function.
	(commutative_operand_precedence): Give CONST_POLY_INT the same
	precedence as CONST_DOUBLE and put CONST_WIDE_INT between that
	and CONST_INT.
	* rtl-tests.c (const_poly_int_tests): New struct.
	(rtl_tests_c_tests): Use it.
	* simplify-rtx.c (simplify_const_unary_operation): Handle
	CONST_POLY_INT.
	(simplify_const_binary_operation): Likewise.
	(simplify_binary_operation_1): Fold additions of symbolic constants
	and CONST_POLY_INTs.
	(simplify_subreg): Handle extensions and truncations of
	CONST_POLY_INTs.
	(simplify_const_poly_int_tests): New struct.
	(simplify_rtx_c_tests): Use it.
	* wide-int.h (storage_ref): Add default constructor.
	(wide_int_ref_storage): Likewise.
	(trailing_wide_ints): Use GTY((user)).
	(trailing_wide_ints::operator[]): Add a const version.
	(trailing_wide_ints::get_precision): New function.
	(trailing_wide_ints::extra_size): Likewise.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255862
2017-12-20 12:51:36 +00:00
Richard Sandiford
abd3c80010 Add a gen_int_shift_amount helper function
This patch adds a helper routine that constructs rtxes
for constant shift amounts, given the mode of the value
being shifted.  As well as helping with the SVE patches, this
is one step towards allowing CONST_INTs to have a real mode.

One long-standing problem has been to decide what the mode
of a shift count should be for arbitrary rtxes (as opposed to those
directly tied to a target pattern).  Realistic choices would be
the mode of the shifted elements, word_mode, QImode, a 64-bit mode,
or the same mode as the shift optabs (in which case, what should the
mode be when the target doesn't have a pattern?).

For now the patch picks a 64-bit mode, but with a ??? comment.
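
A before/after sketch (mine, not taken from the patch) of the kind of
call-site change applied in the files below; the exact signature of
gen_int_shift_amount is assumed from the ChangeLog:

      /* Sketch only: build VALUE << BITS, where VALUE has mode MODE.  */
      static rtx
      shift_left (machine_mode mode, rtx value, HOST_WIDE_INT bits)
      {
        /* Previously: rtx count = GEN_INT (bits);  the mode of the
           shift count was left implicit.  */
        rtx count = gen_int_shift_amount (mode, bits);
        return simplify_gen_binary (ASHIFT, mode, value, count);
      }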

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* emit-rtl.h (gen_int_shift_amount): Declare.
	* emit-rtl.c (gen_int_shift_amount): New function.
	* asan.c (asan_emit_stack_protection): Use gen_int_shift_amount
	instead of GEN_INT.
	* calls.c (shift_return_value): Likewise.
	* cse.c (fold_rtx): Likewise.
	* dse.c (find_shift_sequence): Likewise.
	* expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1)
	(expand_shift, expand_smod_pow2): Likewise.
	* lower-subreg.c (shift_cost): Likewise.
	* optabs.c (expand_superword_shift, expand_doubleword_mult)
	(expand_unop, expand_binop, shift_amt_for_vec_perm_mask)
	(expand_vec_perm_var): Likewise.
	* simplify-rtx.c (simplify_unary_operation_1): Likewise.
	(simplify_binary_operation_1): Likewise.
	* combine.c (try_combine, find_split_point, force_int_to_mode)
	(simplify_shift_const_1, simplify_shift_const): Likewise.
	(change_zero_ext): Likewise.  Use simplify_gen_binary.

Co-Authored-By: Alan Hayward <alan.hayward@arm.com>
Co-Authored-By: David Sherwood <david.sherwood@arm.com>

From-SVN: r255861
2017-12-20 12:51:22 +00:00
Richard Sandiford
27d229f709 Fix multiple_p for two non-poly_ints
Fix a stupid inversion.  This function is very rarely used and was added
mostly to help split patches up, which is why the bug wasn't picked
up during initial testing.
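
For reference, a sketch (mine) of the intended semantics for two
non-poly_int operands:

      /* multiple_p (a, b) tests whether A is a multiple of B; for two
         non-poly_int operands this is essentially a % b == 0.  */
      gcc_assert (multiple_p (12, 4));    /* 12 is a multiple of 4.  */
      gcc_assert (!multiple_p (4, 12));   /* ...but not the other way round.  */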

2017-12-20  Richard Sandiford  <richard.sandiford@linaro.org>

gcc/
	* poly-int.h (multiple_p): Fix handling of two non-poly_ints.

gcc/testsuite/
	* gcc.dg/plugin/poly-int-tests.h (test_nonpoly_multiple_p): New
	function.
	(test_nonpoly_type): Call it.

From-SVN: r255860
2017-12-20 12:50:35 +00:00
Kyrylo Tkachov
f4dd468f53 [arm][doc] Document accepted -march=armv8.3-a extension options
I noticed that we helpfully list the extensions that are accepted
by the -march options on arm, but we were missing that information
for 'armv8.3-a'.

This patchlet corrects that.
Built the documentation and it looked ok.
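
For example (an illustration of mine, not text from the patch), assuming
+crypto is among the extensions the table lists for armv8.3-a:

      # hypothetical cross-compiler invocation
      arm-none-linux-gnueabihf-gcc -march=armv8.3-a+crypto -c test.c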

	* doc/invoke.texi (ARM Options): Document accepted extension options
	for -march=armv8.3-a.

From-SVN: r255859
2017-12-20 10:34:37 +00:00
Richard Earnshaw
87fd6bde4c [arm] PR target/83105: Minor change of default CPU for arm-linux-gnueabi
When GCC for ARM/linux is configured with --with-float=hard or
--with-float=softfp, the compiler will now die when trying to build the
support libraries because the baseline architecture is too old to
support VFP (older versions of GCC just emitted the VFP instructions
anyway, even though they wouldn't run on that version of the
architecture; but we're now more prickly about it).

This patch fixes the problem by raising the default architecture
(actually the default CPU) to ARMv5te (ARM10e) when we need to generate
HW floating-point code.
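
In configure terms (my illustration, not from the patch), an affected
setup would look something like:

      # hypothetical configuration whose default CPU now becomes arm10e
      /path/to/gcc/configure --target=arm-linux-gnueabi --with-float=hard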

	PR target/83105
	* config.gcc (arm*-*-linux*): When configured with --with-float=hard
	or --with-float=softfp, set the default CPU to arm10e.

From-SVN: r255858
2017-12-20 10:30:00 +00:00
Kyrylo Tkachov
0e0cefc62e [aarch64][libstdc++] Use __ARM_BIG_ENDIAN instead of __AARCH64EB__ in opt_random.h
As has been spotted at https://gcc.gnu.org/ml/gcc-patches/2017-12/msg01289.html
we check the __AARCH64EB__ macro for aarch64 big-endian
detection in config/cpu/aarch64/opt/ext/opt_random.h.
That works just fine with GCC, but the standardised ACLE[1] macro
for that purpose is __ARM_BIG_ENDIAN, so there is a possibility
that non-GCC compilers that include this header are not aware
of this predefine.

So this patch changes the use of __AARCH64EB__ to
the more portable __ARM_BIG_ENDIAN.
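
The shape of the change, as a sketch of mine rather than the actual
opt_random.h hunk:

      /* was: #ifdef __AARCH64EB__ */
      #ifdef __ARM_BIG_ENDIAN   /* ACLE macro, so not GCC-specific.  */
        /* big-endian lane order */
      #else
        /* little-endian lane order */
      #endif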

Tested on aarch64-none-elf and aarch64_be-none-elf.

Preapproved by Jeff at https://gcc.gnu.org/ml/gcc-patches/2017-12/msg01326.html

	* config/cpu/aarch64/opt/ext/opt_random.h (__VEXT): Check
	__ARM_BIG_ENDIAN instead of __AARCH64EB__.

From-SVN: r255857
2017-12-20 10:28:13 +00:00
Eric Botcazou
98f8b67f22 constraints.md (J, K, L): Use IN_RANGE macro.
* config/visium/constraints.md (J, K, L): Use IN_RANGE macro.
	* config/visium/predicates.md (const_shift_operand): Likewise.
	* config/visium/visium.c (visium_legitimize_address): Fix oversight.
	(visium_legitimize_reload_address): Likewise.

From-SVN: r255856
2017-12-20 09:52:15 +00:00
Paolo Carlini
c58257d94e 2017-12-20 Paolo Carlini <paolo.carlini@oracle.com>
* Committing ChangeLog entry.

From-SVN: r255855
2017-12-20 09:47:05 +00:00
Eric Botcazou
278f422cdf trans.c (Loop_Statement_to_gnu): Use IN_RANGE macro.
* gcc-interface/trans.c (Loop_Statement_to_gnu): Use IN_RANGE macro.
	* gcc-interface/misc.c (gnat_get_array_descr_info): Likewise.
	(default_pass_by_ref): Likewise.
	* gcc-interface/decl.c (gnat_to_gnu_entity): Likewise.

From-SVN: r255854
2017-12-20 09:38:47 +00:00
Kyrylo Tkachov
378056b26a [arm] PR target/82975: Guard against reg_renumber being NULL in arm.h
Commit missing hunk to arm.h TEST_REGNO comment.

	PR target/82975
	* config/arm/arm.h (TEST_REGNO): Adjust comment as expected in
	r255830.

From-SVN: r255853
2017-12-20 09:29:13 +00:00
Jakub Jelinek
5b8b4a883d re PR c++/83490 (ICE in find_call_stack_args, at dce.c:392)
PR c++/83490
	* calls.c (compute_argument_addresses): Ignore TYPE_EMPTY_P arguments.

	* g++.dg/abi/empty29.C: New test.

From-SVN: r255852
2017-12-20 10:12:09 +01:00
Martin Liska
ee050a6e44 Add two test-cases for (PR middle-end/82404).
2017-12-20  Martin Liska  <mliska@suse.cz>

	PR middle-end/82404
	* g++.dg/pr82404.C: New test.
	* gcc.dg/pr82404.c: New test.

From-SVN: r255851
2017-12-20 08:50:56 +00:00
Julia Koval
6557be99af Enable VPCLMULQDQ support
gcc/
	* common/config/i386/i386-common.c (OPTION_MASK_ISA_VPCLMULQDQ_SET,
	OPTION_MASK_ISA_VPCLMULQDQ_UNSET): New.
	(ix86_handle_option): Handle -mvpclmulqdq, move cx16 to flags2.
	* config.gcc: Include vpclmulqdqintrin.h.
	* config/i386/cpuid.h: Handle bit_VPCLMULQDQ.
	* config/i386/driver-i386.c (host_detect_local_cpu): Handle -mvpclmulqdq.
	* config/i386/i386-builtin.def (__builtin_ia32_vpclmulqdq_v2di,
	__builtin_ia32_vpclmulqdq_v4di, __builtin_ia32_vpclmulqdq_v8di): New.
	* config/i386/i386-c.c (__VPCLMULQDQ__): New.
	* config/i386/i386.c (isa2_opts): Add -mcx16.
	(isa_opts): Add -mvpclmulqdq, remove -mcx16.
	(ix86_option_override_internal): Move mcx16 to flags2.
	(ix86_valid_target_attribute_inner_p): Add vpclmulqdq.
	(ix86_expand_builtin): Handle OPTION_MASK_ISA_VPCLMULQDQ.
	* config/i386/i386.h (TARGET_VPCLMULQDQ, TARGET_VPCLMULQDQ_P): New.
	* config/i386/i386.opt: Add mvpclmulqdq, move mcx16 to flags2.
	* config/i386/immintrin.h: Include vpclmulqdqintrin.h.
	* config/i386/sse.md (vpclmulqdq_<mode>): New pattern.
	* config/i386/vpclmulqdqintrin.h (_mm512_clmulepi64_epi128,
	_mm_clmulepi64_epi128, _mm256_clmulepi64_epi128): New intrinsics.
	* doc/invoke.texi: Add -mvpclmulqdq.

gcc/testsuite/
	* gcc.target/i386/avx-1.c: Handle new intrinsics.
	* gcc.target/i386/sse-13.c: Ditto.
	* gcc.target/i386/sse-23.c: Ditto.
	* gcc.target/i386/avx512-check.h: Handle bit_VPCLMULQDQ.
	* gcc.target/i386/avx512f-vpclmulqdq-2.c: New test.
	* gcc.target/i386/avx512vl-vpclmulqdq-2.c: Ditto.
	* gcc.target/i386/vpclmulqdq.c: Ditto.
	* gcc.target/i386/i386.exp (check_effective_target_vpclmulqdq): New.

From-SVN: r255850
2017-12-20 06:20:44 +00:00
Tom de Vries
4b522b8f33 Don't call targetm.calls.static_chain in non-static function
2017-12-20  Tom de Vries  <tom@codesourcery.com>

	PR middle-end/83423
	* config/i386/i386.c (ix86_static_chain): Move DECL_STATIC_CHAIN test ...
	* calls.c (rtx_for_static_chain): ... here.  New function.
	* calls.h (rtx_for_static_chain): Declare.
	* builtins.c (expand_builtin_setjmp_receiver): Use rtx_for_static_chain
	instead of targetm.calls.static_chain.
	* df-scan.c (df_get_entry_block_def_set): Same.

From-SVN: r255849
2017-12-20 00:46:38 +00:00
GCC Administrator
f00b0bad2a Daily bump.
From-SVN: r255848
2017-12-20 00:16:17 +00:00
Paolo Carlini
1c97d579f2 re PR c++/82593 (Internal compiler error: in process_init_constructor_array, at cp/typeck2.c:1294)
/cp
2017-12-19  Paolo Carlini  <paolo.carlini@oracle.com>

	PR c++/82593
	* decl.c (check_array_designated_initializer): Not static.
	* cp-tree.h (check_array_designated_initializer): Declare.
	* typeck2.c (process_init_constructor_array): Call the latter.
	* parser.c (cp_parser_initializer_list): Check the return value
	of require_potential_rvalue_constant_expression.

/testsuite
2017-12-19  Paolo Carlini  <paolo.carlini@oracle.com>

	PR c++/82593
	* g++.dg/cpp0x/desig2.C: New.
	* g++.dg/cpp0x/desig3.C: Likewise.
	* g++.dg/cpp0x/desig4.C: Likewise.

From-SVN: r255845
2017-12-19 22:14:59 +00:00