invoke.texi (-fvar-tracking-assignments): New.
gcc/ChangeLog:
* doc/invoke.texi (-fvar-tracking-assignments): New.
(-fvar-tracking-assignments-toggle): New.
(-fdump-final-insns=file): Mark filename as optional.
(--param min-nondebug-insn-uid): New.
(-gdwarf-@var{version}): Mention version 4.
* opts.c (common_handle_option): Accept it.
* tree-vrp.c (find_assert_locations_1): Skip debug stmts.
* regrename.c (regrename_optimize): Drop last. Don't count debug insns as uses. Don't reject change because of debug insn.
(do_replace): Reject DEBUG_INSN as chain starter. Take base_regno from the chain starter, and check for inexact matches in DEBUG_INSNS.
(scan_rtx_reg): Accept inexact matches in DEBUG_INSNs.
(build_def_use): Simplify and fix the marking of DEBUG_INSNs.
* sched-ebb.c (schedule_ebbs): Skip boundary debug insns.
* fwprop.c (forward_propagate_and_simplify): ...into debug insns.
* doc/gimple.texi (is_gimple_debug): New.
(gimple_debug_bind_p): New.
(is_gimple_call, gimple_assign_cast_p): End sentence with period.
* doc/install.texi (bootstrap-debug): More details.
(bootstrap-debug-big, bootstrap-debug-lean): Document.
(bootstrap-debug-lib): More details.
(bootstrap-debug-ckovw): Update.
(bootstrap-time): New.
* tree-into-ssa.c (mark_def_sites): Skip debug stmts.
(insert_phi_nodes_for): Insert debug stmts.
(rewrite_stmt): Take iterator. Insert debug stmts.
(rewrite_enter_block): Adjust.
(maybe_replace_use_in_debug_stmt): New.
(rewrite_update_stmt): Use it.
(mark_use_interesting): Return early for debug stmts.
* tree-ssa-loop-im.c (rewrite_bittest): Propagate DEFs into debug stmts before replacing stmt.
(move_computations_stmt): Likewise.
* ira-conflicts.c (add_copies): Skip debug insns.
* regstat.c (regstat_init_n_sets_and_refs): Discount debug insns.
(regstat_bb_compute_ri): Skip debug insns.
* tree-ssa-threadupdate.c (redirection_block_p): Skip debug stmts.
* tree-ssa-loop-manip.c (find_uses_to_rename_stmt, check_loop_closed_ssa_stmt): Skip debug stmts.
* tree-tailcall.c (find_tail_calls): Likewise.
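Many of the entries above follow one pattern: uses of a register that occur only inside debug insns must not count, or compiling with -g would change optimization decisions. A minimal sketch of that idea in plain C follows; the struct and function names are illustrative, not GCC's actual internals.

```c
/* Illustrative model, not GCC code: an insn stream where some insns
   are debug insns (modeling DEBUG_INSN_P).  Uses of a register inside
   debug insns are discounted when deciding whether a register is
   really used.  */
struct insn_model
{
  int is_debug;   /* nonzero models DEBUG_INSN_P (insn) */
  int uses_reg;   /* nonzero if this insn mentions the register */
};

/* Count real (non-debug) uses of the register in SEQ[0..N-1].  */
static int
count_nondebug_uses (const struct insn_model *seq, int n)
{
  int count = 0, i;
  for (i = 0; i < n; i++)
    if (!seq[i].is_debug && seq[i].uses_reg)
      count++;
  return count;
}
```

With this discounting, a register mentioned only by debug insns reports zero uses, so passes make the same decision with or without -g.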
* tree-ssa-loop-ch.c (should_duplicate_loop_header_p): Likewise.
* tree.h (MAY_HAVE_DEBUG_STMTS): New.
(build_var_debug_value_stat): Declare.
(build_var_debug_value): Define.
(target_for_debug_bind): Declare.
* reload.c (find_equiv_reg): Skip debug insns.
* rtlanal.c (reg_used_between_p): Skip debug insns.
(side_effects_p): Likewise.
(canonicalize_condition): Likewise.
* ddg.c (create_ddg_dep_from_intra_loop_link): Check that non-debug insns never depend on debug insns.
(create_ddg_dep_no_link): Likewise.
(add_cross_iteration_register_deps): Use ANTI_DEP for debug insns. Don't add inter-loop dependencies for debug insns.
(build_intra_loop_deps): Likewise.
(create_ddg): Count debug insns.
* ddg.h (struct ddg::num_debug): New.
(num_backargs): Pair up with previous int field.
* diagnostic.c (diagnostic_report_diagnostic): Skip notes on -fcompare-debug-second.
* final.c (get_attr_length_1): Skip debug insns.
(rest_of_clean_state): Don't dump CFA_RESTORE_STATE.
* gcc.c (invoke_as): Call compare-debug-dump-opt.
(driver_self_specs): Map -fdump-final-insns to -fdump-final-insns=..
(get_local_tick): New.
(compare_debug_dump_opt_spec_function): Test for . argument and compute output name. Compute temp output spec without flag name. Compute -frandom-seed.
(OPT): Undef after use.
* cfgloopanal.c (num_loop_insns): Skip debug insns.
(average_num_loop_insns): Likewise.
* params.h (MIN_NONDEBUG_INSN_UID): New.
* gimple.def (GIMPLE_DEBUG): New.
* ipa-reference.c (scan_stmt_for_static_refs): Skip debug stmts.
* auto-inc-dec.c (merge_in_block): Skip debug insns.
(merge_in_block): Fix whitespace.
* toplev.c (flag_var_tracking): Update comment.
(flag_var_tracking_assignments): New.
(flag_var_tracking_assignments_toggle): New.
(process_options): Don't open final insns dump file if we're not going to write to it. Compute defaults for var_tracking.
* df-scan.c (df_insn_rescan_debug_internal): New.
(df_uses_record): Handle debug insns.
* haifa-sched.c (ready): Initialize n_debug.
(contributes_to_priority): Skip debug insns.
(dep_list_size): New.
(priority): Use it.
(rank_for_schedule): Likewise. Schedule debug insns as soon as they're ready. Disregard previous debug insns to make decisions.
(queue_insn): Never queue debug insns.
(ready_add, ready_remove_first, ready_remove): Count debug insns.
(schedule_insn): Don't reject debug insns because of issue rate.
(get_ebb_head_tail, no_real_insns_p): Skip boundary debug insns.
(queue_to_ready): Skip and discount debug insns.
(choose_ready): Let debug insns through.
(schedule_block): Check boundary debug insns. Discount debug insns, schedule them early. Adjust whitespace.
(set_priorities): Check for boundary debug insns.
(add_jump_dependencies): Use dep_list_size.
(prev_non_location_insn): New.
(check_cfg): Use it.
* tree-ssa-loop-ivopts.c (find_interesting_users): Skip debug stmts.
(remove_unused_ivs): Reset debug stmts.
* modulo-sched.c (const_iteration_count): Skip debug insns.
(res_MII): Discount debug insns.
(loop_single_full_bb_p): Skip debug insns.
(sms_schedule): Likewise.
(sms_schedule_by_order): Likewise.
(ps_has_conflicts): Likewise.
* caller-save.c (refmarker_fn): New.
(save_call_clobbered_regs): Replace regs with saved mem in debug insns.
(mark_referenced_regs): Take pointer, mark and arg. Adjust. Call refmarker_fn mark for hardregnos.
(mark_reg_as_referenced): New.
(replace_reg_with_saved_mem): New.
* ipa-pure-const.c (check_stmt): Skip debug stmts.
* cse.c (cse_insn): Canonicalize debug insns. Skip them when searching back.
(cse_extended_basic_block): Skip debug insns.
(count_reg_usage): Likewise.
(is_dead_reg): New, split out of...
(set_live_p): ... here.
(insn_live_p): Use it for debug insns.
* tree-stdarg.c (check_all_va_list_escapes): Skip debug stmts.
(execute_optimize_stdarg): Likewise.
* tree-ssa-dom.c (propagate_rhs_into_lhs): Likewise.
* tree-ssa-propagate.c (substitute_and_fold): Don't regard changes in debug stmts as changes.
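The haifa-sched.c change above says debug insns are scheduled as soon as they're ready, before any non-debug insn competes with them. A hedged model of that ranking rule, with illustrative types rather than GCC's ready-list machinery:

```c
/* Illustrative model of the rank_for_schedule tweak: when two insns
   are both ready, a debug insn always sorts ahead of a non-debug
   insn; among non-debug insns, higher priority wins.  These names
   and types are hypothetical, not GCC's.  */
struct ready_insn_model
{
  int is_debug;    /* models DEBUG_INSN_P */
  int priority;    /* scheduling priority, higher is more urgent */
};

/* qsort-style comparator: negative result means A schedules first.  */
static int
rank_for_schedule_model (const struct ready_insn_model *a,
                         const struct ready_insn_model *b)
{
  if (a->is_debug != b->is_debug)
    return a->is_debug ? -1 : 1;       /* debug insns go first */
  return b->priority - a->priority;    /* then higher priority first */
}
```

Emitting debug insns first keeps them attached to the program point where their bound value is still current, without consuming issue slots that real insns need.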
* sel-sched.c (moving_insn_creates_bookkeeping_block_p): New.
(moveup_expr): Don't move across debug insns. Don't move debug insn if it would create a bookkeeping block.
(moveup_expr_cached): Don't use cache for debug insns that are heads of blocks.
(compute_av_set_inside_bb): Skip debug insns.
(sel_rank_for_schedule): Schedule debug insns first. Remove dead code.
(block_valid_for_bookkeeping_p): Support lax searches.
(create_block_for_bookkeeping): Adjust block numbers when encountering debug-only blocks.
(find_place_for_bookkeeping): Deal with debug-only blocks.
(generate_bookkeeping_insn): Accept no place to insert.
(remove_temp_moveop_nops): New argument full_tidying.
(prepare_place_to_insert): Deal with debug insns.
(advance_state_on_fence): Debug insns don't start cycles.
(update_boundaries): Take fence as argument. Deal with debug insns.
(schedule_expr_on_boundary): No full_tidying on debug insns.
(fill_insns): Deal with debug insns.
(track_scheduled_insns_and_blocks): Don't count debug insns.
(need_nop_to_preserve_insn_bb): New, split out of...
(remove_insn_from_stream): ... this.
(fur_orig_expr_not_found): Skip debug insns.
* rtl.def (VALUE): Move up.
(DEBUG_INSN): New.
* tree-ssa-sink.c (all_immediate_uses_same_place): Skip debug stmts.
(nearest_common_dominator_of_uses): Take debug_stmts argument. Set it if debug stmts are found.
(statement_sink_location): Skip debug stmts. Propagate moving defs into debug stmts.
* ifcvt.c (first_active_insn): Skip debug insns.
(last_active_insns): Likewise.
(cond_exec_process_insns): Likewise.
(noce_process_if_block): Likewise.
(check_cond_move_block): Likewise.
(cond_move_convert_if_block): Likewise.
(block_jumps_and_fallthru_p): Likewise.
(dead_or_predicable): Likewise.
* dwarf2out.c (debug_str_hash_forced): New.
(find_AT_string): Add comment.
(gen_label_for_indirect_string): New.
(get_debug_string_label): New.
(AT_string_form): Use it.
(mem_loc_descriptor): Handle non-TLS symbols.
Handle MINUS, DIV, MOD, AND, IOR, XOR, NOT, ABS, NEG, and CONST_STRING. Accept but discard COMPARE, IF_THEN_ELSE, ROTATE, ROTATERT, TRUNCATE and several operations that cannot be represented with DWARF opcodes.
(loc_descriptor): Ignore SIGN_EXTEND and ZERO_EXTEND. Require dwarf_version 4 for DW_OP_implicit_value and DW_OP_stack_value.
(dwarf2out_var_location): Take during-call mark into account.
(output_indirect_string): Update comment. Output if there are label and references.
(prune_indirect_string): New.
(prune_unused_types): Call it if debug_str_hash_forced.
More in dwarf2out.c, from Jakub Jelinek <jakub@redhat.com>:
(dw_long_long_const): Remove.
(struct dw_val_struct): Change val_long_long type to rtx.
(print_die, attr_checksum, same_dw_val_p, loc_descriptor): Adjust for val_long_long change to CONST_DOUBLE rtx from a long hi/lo pair.
(output_die): Likewise. Use HOST_BITS_PER_WIDE_INT size of each component instead of HOST_BITS_PER_LONG.
(output_loc_operands): Likewise. For const8* assert HOST_BITS_PER_WIDE_INT rather than HOST_BITS_PER_LONG is >= 64.
(output_loc_operands_raw): For const8* assert HOST_BITS_PER_WIDE_INT rather than HOST_BITS_PER_LONG is >= 64.
(add_AT_long_long): Remove val_hi and val_lo arguments, add val_const_double.
(size_of_die): Use HOST_BITS_PER_WIDE_INT size multiplier instead of HOST_BITS_PER_LONG for dw_val_class_long_long.
(add_const_value_attribute): Adjust add_AT_long_long caller. Don't handle TLS SYMBOL_REFs. If CONST wraps a constant, tail recurse.
(dwarf_stack_op_name): Handle DW_OP_implicit_value and DW_OP_stack_value.
(size_of_loc_descr, output_loc_operands, output_loc_operands_raw): Handle DW_OP_implicit_value.
(extract_int): Move prototype earlier.
(mem_loc_descriptor): For SUBREG punt if inner mode size is wider than DWARF2_ADDR_SIZE. Handle SIGN_EXTEND and ZERO_EXTEND by DW_OP_shl and DW_OP_shr{a,}. Handle EQ, NE, GT, GE, LT, LE, GTU, GEU, LTU, LEU, SMIN, SMAX, UMIN, UMAX, SIGN_EXTRACT, ZERO_EXTRACT.
(loc_descriptor): Compare mode size with DWARF2_ADDR_SIZE instead of Pmode size.
(loc_descriptor): Add MODE argument. Handle CONST_INT, CONST_DOUBLE, CONST_VECTOR, CONST, LABEL_REF and SYMBOL_REF if mode != VOIDmode, attempt to handle other expressions. Don't handle TLS SYMBOL_REFs.
(concat_loc_descriptor, concatn_loc_descriptor, loc_descriptor_from_tree_1): Adjust loc_descriptor callers.
(add_location_or_const_value_attribute): Likewise. For single location loc_lists attempt to use add_const_value_attribute for constant decls. Add DW_AT_const_value even if NOTE_VAR_LOCATION is VAR_LOCATION with CONSTANT_P or CONST_STRING in its expression.
* cfgbuild.c (inside_basic_block_p): Handle debug insns.
(control_flow_insn_p): Likewise.
* tree-parloops.c (eliminate_local_variables_stmt): Handle debug stmt.
(separate_decls_in_region_debug_bind): New.
(separate_decls_in_region): Process debug bind stmts afterwards.
* recog.c (verify_changes): Handle debug insns.
(extract_insn): Likewise.
(peephole2_optimize): Skip debug insns.
* dse.c (scan_insn): Skip debug insns.
* sel-sched-ir.c (return_nop_to_pool): Take full_tidying argument. Pass it on.
(setup_id_for_insn): Handle debug insns.
(maybe_tidy_empty_bb): Adjust whitespace.
(tidy_control_flow): Skip debug insns.
(sel_remove_insn): Adjust for debug insns.
(sel_estimate_number_of_insns): Skip debug insns.
(create_insn_rtx_from_pattern): Handle debug insns.
(create_copy_of_insn_rtx): Likewise.
* sel-sched-ir.h (sel_bb_end): Declare.
(sel_bb_empty_or_nop_p): New.
(get_all_loop_exits): Use it.
(_eligible_successor_edge_p): Likewise.
(return_nop_to_pool): Adjust.
* tree-eh.c (tree_empty_eh_handler_p): Skip debug stmts.
* ira-lives.c (process_bb_node_lives): Skip debug insns.
* gimple-pretty-print.c (dump_gimple_debug): New.
(dump_gimple_stmt): Use it.
(dump_bb_header): Skip gimple debug stmts.
* regmove.c (optimize_reg_copy_1): Discount debug insns.
(fixup_match_2): Likewise.
(regmove_backward_pass): Likewise.
Simplify combined replacement. Handle debug insns.
* function.c (instantiate_virtual_regs): Handle debug insns.
* function.h (struct emit_status): Add x_cur_debug_insn_uid.
* print-rtl.c: Include cselib.h.
(print_rtx): Print VALUEs. Split out and recurse for VAR_LOCATIONs.
* df.h (df_insn_rescan_debug_internal): Declare.
* gcse.c (alloc_hash_table): Estimate n_insns.
(cprop_insn): Don't regard debug insns as changes.
(bypass_conditional_jumps): Skip debug insns.
(one_pre_gcse_pass): Adjust.
(one_code_hoisting_pass): Likewise.
(compute_ld_motion_mems): Skip debug insns.
(one_cprop_pass): Adjust.
* tree-if-conv.c (tree_if_convert_stmt): Reset debug stmts.
(if_convertible_stmt_p): Handle debug stmts.
* init-regs.c (initialize_uninitialized_regs): Skip debug insns.
* tree-vect-loop.c (vect_is_simple_reduction): Skip debug stmts.
* ira-build.c (create_bb_allocnos): Skip debug insns.
* tree-flow-inline.h (has_zero_uses): Discount debug stmts.
(has_single_use): Likewise.
(single_imm_use): Likewise.
(num_imm_uses): Likewise.
* tree-ssa-phiopt.c (empty_block_p): Skip debug stmts.
* tree-ssa-coalesce.c (build_ssa_conflict_graph): Skip debug stmts.
(create_outofssa_var_map): Likewise.
* lower-subreg.c (adjust_decomposed_uses): New.
(resolve_debug): New.
(decompose_multiword_subregs): Use it.
* tree-dfa.c (find_referenced_vars): Skip debug stmts.
* emit-rtl.c: Include params.h.
(cur_debug_insn_uid): Define.
(set_new_first_and_last_insn): Set cur_debug_insn_uid too.
(copy_rtx_if_shared_1): Handle debug insns.
(reset_used_flags): Likewise.
(set_used_flags): Likewise.
(get_max_insn_count): New.
(next_nondebug_insn): New.
(prev_nondebug_insn): New.
(make_debug_insn_raw): New.
(emit_insn_before_noloc): Handle debug insns.
(emit_jump_insn_before_noloc): Likewise.
(emit_call_insn_before_noloc): Likewise.
(emit_debug_insn_before_noloc): New.
(emit_insn_after_noloc): Handle debug insns.
(emit_jump_insn_after_noloc): Likewise.
(emit_call_insn_after_noloc): Likewise.
(emit_debug_insn_after_noloc): Likewise.
(emit_insn_after): Take loc from earlier non-debug insn.
(emit_jump_insn_after): Likewise.
(emit_call_insn_after): Likewise.
(emit_debug_insn_after_setloc): New.
(emit_debug_insn_after): New.
(emit_insn_before): Take loc from later non-debug insn.
(emit_jump_insn_before): Likewise.
(emit_call_insn_before): Likewise.
(emit_debug_insn_before_setloc): New.
(emit_debug_insn_before): New.
(emit_insn): Handle debug insns.
(emit_debug_insn): New.
(emit_jump_insn): Handle debug insns.
(emit_call_insn): Likewise.
(emit): Likewise.
(init_emit): Take min-nondebug-insn-uid into account. Initialize cur_debug_insn_uid.
(emit_copy_of_insn_after): Handle debug insns.
* cfgexpand.c (gimple_assign_rhs_to_tree): Do not overwrite location of single rhs in place.
(maybe_dump_rtl_for_gimple_stmt): Dump lineno.
(floor_sdiv_adjust): New.
(cell_sdiv_adjust): New.
(cell_udiv_adjust): New.
(round_sdiv_adjust): New.
(round_udiv_adjust): New.
(wrap_constant): Moved from cselib.
(unwrap_constant): New.
(expand_debug_expr): New.
(expand_debug_locations): New.
(expand_gimple_basic_block): Drop hiding redeclaration. Expand debug bind stmts.
(gimple_expand_cfg): Expand debug locations.
* cselib.c: Include tree-pass.h.
(struct expand_value_data): New.
(cselib_record_sets_hook): New.
(PRESERVED_VALUE_P, LONG_TERM_PRESERVED_VALUE_P): New.
(cselib_clear_table): Move, and implement in terms of...
(cselib_reset_table_with_next_value): ... this.
(cselib_get_next_unknown_value): New.
(discard_useless_locs): Don't discard preserved values.
(cselib_preserve_value): New.
(cselib_preserved_value_p): New.
(cselib_preserve_definitely): New.
(cselib_clear_preserve): New.
(cselib_preserve_only_values): New.
(new_cselib_val): Take rtx argument. Dump it in details.
(cselib_lookup_mem): Adjust.
(expand_loc): Take regs_active in struct. Adjust. Silence dumps unless details are requested.
(cselib_expand_value_rtx_cb): New.
(cselib_expand_value_rtx): Rename and reimplement in terms of...
(cselib_expand_value_rtx_1): ... this. Adjust. Silence dumps without details. Copy more subregs. Try to resolve values using a callback. Wrap constants.
(cselib_subst_to_values): Adjust.
(cselib_log_lookup): New.
(cselib_lookup): Call it.
(cselib_invalidate_regno): Don't count preserved values as useless.
(cselib_invalidate_mem): Likewise.
(cselib_record_set): Likewise.
(struct set): Renamed to cselib_set, moved to cselib.h.
(cselib_record_sets): Adjust. Call hook.
(cselib_process_insn): Reset table when it would be cleared.
(dump_cselib_val): New.
(dump_cselib_table): New.
* tree-cfgcleanup.c (tree_forwarded_block_p): Skip debug stmts.
(remove_forwarder_block): Support moving debug stmts.
* cselib.h (cselib_record_sets_hook): Declare.
(cselib_expand_callback): New type.
(cselib_expand_value_rtx_cb): Declare.
(cselib_reset_table_with_next_value): Declare.
(cselib_get_next_unknown_value): Declare.
(cselib_preserve_value): Declare.
(cselib_preserved_value_p): Declare.
(cselib_preserve_only_values): Declare.
(dump_cselib_table): Declare.
* cfgcleanup.c (flow_find_cross_jump): Skip debug insns.
(try_crossjump_to_edge): Likewise.
(delete_unreachable_blocks): Remove dominant GIMPLE blocks after dominated blocks when debug stmts are present.
* simplify-rtx.c (delegitimize_mem_from_attrs): New.
* tree-ssa-live.c (remove_unused_locals): Skip debug stmts.
(set_var_live_on_entry): Likewise.
* loop-invariant.c (find_invariants_bb): Skip debug insns.
* cfglayout.c (curr_location, last_location): Make static.
(set_curr_insn_source_location): Don't avoid bouncing.
(get_curr_insn_source_location): New.
(get_curr_insn_block): New.
(duplicate_insn_chain): Handle debug insns.
* tree-ssa-forwprop.c (forward_propagate_addr_expr): Propagate into debug stmts.
* common.opt (fcompare-debug): Move to sort order.
(fdump-unnumbered-links): Likewise.
(fvar-tracking-assignments): New.
(fvar-tracking-assignments-toggle): New.
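The cselib entries above add value preservation: rather than discarding the whole value table at each reset, values explicitly marked as preserved survive, so var-tracking can keep referring to them. A hedged sketch of the idea, with hypothetical names and a flat array standing in for cselib's hash table:

```c
/* Sketch of the preservation idea behind cselib_preserve_value and
   cselib_preserve_only_values.  This is an illustrative model, not
   cselib's real data structure.  */
struct cselib_val_model
{
  int in_use;      /* slot currently holds a value */
  int preserved;   /* models PRESERVED_VALUE_P */
};

/* Model of cselib_preserve_only_values: drop every value that was not
   explicitly preserved; return how many values survive.  */
static int
preserve_only_values_model (struct cselib_val_model *tab, int n)
{
  int survivors = 0, i;
  for (i = 0; i < n; i++)
    {
      if (tab[i].in_use && !tab[i].preserved)
        tab[i].in_use = 0;
      survivors += tab[i].in_use;
    }
  return survivors;
}
```

Keeping preserved values across resets is what lets debug binds refer to equivalences established earlier without pinning the entire table in memory.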
* tree-ssa-dce.c (mark_stmt_necessary): Don't mark blocks because of debug stmts.
(mark_stmt_if_obviously_necessary): Mark debug stmts.
(eliminate_unnecessary_stmts): Walk dominated blocks before dominators.
* tree-ssa-ter.c (find_replaceable_in_bb): Skip debug stmts.
* ira.c (memref_used_between_p): Skip debug insns.
(update_equiv_regs): Likewise.
* sched-deps.c (sd_lists_size): Accept empty list.
(sd_init_insn): Mark debug insns.
(sd_finish_insn): Unmark them.
(sd_add_dep): Reject non-debug deps on debug insns.
(fixup_sched_groups): Give debug insns group treatment. Skip debug insns.
(sched_analyze_reg): Don't mark debug insns for sched before call.
(sched_analyze_2): Handle debug insns.
(sched_analyze_insn): Compute next non-debug insn. Handle debug insns.
(deps_analyze_insn): Handle debug insns.
(deps_start_bb): Skip debug insns.
(init_deps): Initialize last_debug_insn.
* tree-ssa.c (target_for_debug_bind): New.
(find_released_ssa_name): New.
(propagate_var_def_into_debug_stmts): New.
(propagate_defs_into_debug_stmts): New.
(verify_ssa): Skip debug bind stmts without values.
(warn_uninitialized_vars): Skip debug stmts.
* target-def.h (TARGET_DELEGITIMIZE_ADDRESS): Set default.
* rtl.c (rtx_equal_p_cb): Handle VALUEs.
(rtx_equal_p): Likewise.
* ira-costs.c (scan_one_insn): Skip debug insns.
(process_bb_node_for_hard_reg_moves): Likewise.
* rtl.h (DEBUG_INSN_P): New.
(NONDEBUG_INSN_P): New.
(MAY_HAVE_DEBUG_INSNS): New.
(INSN_P): Accept debug insns.
(RTX_FRAME_RELATED_P): Likewise.
(INSN_DELETED_P): Likewise.
(PAT_VAR_LOCATION_DECL): New.
(PAT_VAR_LOCATION_LOC): New.
(PAT_VAR_LOCATION_STATUS): New.
(NOTE_VAR_LOCATION_DECL): Reimplement.
(NOTE_VAR_LOCATION_LOC): Likewise.
(NOTE_VAR_LOCATION_STATUS): Likewise.
(INSN_VAR_LOCATION): New.
(INSN_VAR_LOCATION_DECL): New.
(INSN_VAR_LOCATION_LOC): New.
(INSN_VAR_LOCATION_STATUS): New.
(gen_rtx_UNKNOWN_VAR_LOC): New.
(VAR_LOC_UNKNOWN_P): New.
(NOTE_DURING_CALL_P): New.
(SCHED_GROUP_P): Accept debug insns.
(emit_debug_insn_before): Declare.
(emit_debug_insn_before_noloc): Declare.
(emit_debug_insn_before_setloc): Declare.
(emit_debug_insn_after): Declare.
(emit_debug_insn_after_noloc): Declare.
(emit_debug_insn_after_setloc): Declare.
(emit_debug_insn): Declare.
(make_debug_insn_raw): Declare.
(prev_nondebug_insn): Declare.
(next_nondebug_insn): Declare.
(delegitimize_mem_from_attrs): Declare.
(get_max_insn_count): Declare.
(wrap_constant): Declare.
(unwrap_constant): Declare.
(get_curr_insn_source_location): Declare.
(get_curr_insn_block): Declare.
* tree-inline.c (insert_debug_decl_map): New.
(processing_debug_stmt): New.
(remap_decl): Don't create new mappings in debug stmts.
(remap_gimple_op_r): Don't add references in debug stmts.
(copy_tree_body_r): Likewise.
(remap_gimple_stmt): Handle debug bind stmts.
(copy_bb): Skip debug stmts.
(copy_edges_for_bb): Likewise.
(copy_debug_stmt): New.
(copy_debug_stmts): New.
(copy_body): Copy debug stmts at the end.
(insert_init_debug_bind): New.
(insert_init_stmt): Take id. Skip and emit debug stmts.
(setup_one_parameter): Remap variable earlier, register debug mapping.
(estimate_num_insns): Skip debug stmts.
(expand_call_inline): Preserve debug_map.
(optimize_inline_calls): Check for no debug_stmts left-overs.
(unsave_expr_now): Preserve debug_map.
(copy_gimple_seq_and_replace_locals): Likewise.
(tree_function_versioning): Check for no debug_stmts left-overs. Init and destroy debug_map as needed. Split edges unconditionally.
(build_duplicate_type): Init and destroy debug_map as needed.
* tree-inline.h: Include gimple.h instead of pointer-set.h.
(struct copy_body_data): Add debug_stmts and debug_map.
* sched-int.h (struct ready_list): Add n_debug.
(struct deps): Add last_debug_insn.
(DEBUG_INSN_SCHED_P): New.
(BOUNDARY_DEBUG_INSN_P): New.
(SCHEDULE_DEBUG_INSN_P): New.
(sd_iterator_cond): Accept empty list.
* combine.c (create_log_links): Skip debug insns.
(combine_instructions): Likewise.
(cleanup_auto_inc_dec): New.
From Jakub Jelinek: Make sure the return value is always unshared.
(struct rtx_subst_pair): New.
(auto_adjust_pair): New.
(propagate_for_debug_subst): New.
(propagate_for_debug): New.
(try_combine): Skip debug insns. Propagate removed defs into debug insns.
(next_nonnote_nondebug_insn): New.
(distribute_notes): Use it. Skip debug insns.
(distribute_links): Skip debug insns.
* tree-outof-ssa.c (set_location_for_edge): Likewise.
* resource.c (mark_target_live_regs): Likewise.
* var-tracking.c: Include cselib.h and target.h.
(enum micro_operation_type): Add MO_VAL_USE, MO_VAL_LOC, and MO_VAL_SET.
(micro_operation_type_name): New.
(enum emit_note_where): Add EMIT_NOTE_AFTER_CALL_INSN.
(struct micro_operation_def): Update comments.
(decl_or_value): New type. Use instead of decls.
(struct emit_note_data_def): Add vars.
(struct attrs_def): Use decl_or_value.
(struct variable_tracking_info_def): Add permp, flooded.
(struct location_chain_def): Update comment.
(struct variable_part_def): Use decl_or_value.
(struct variable_def): Make var_part a variable length array.
(valvar_pool): New.
(scratch_regs): New.
(cselib_hook_called): New.
(dv_is_decl_p): New.
(dv_is_value_p): New.
(dv_as_decl): New.
(dv_as_value): New.
(dv_as_opaque): New.
(dv_onepart_p): New.
(dv_pool): New.
(IS_DECL_CODE): New.
(check_value_is_not_decl): New.
(dv_from_decl): New.
(dv_from_value): New.
(dv_htab_hash): New.
(variable_htab_hash): Use it.
(variable_htab_eq): Support values.
(variable_htab_free): Free from the right pool.
(attrs_list_member, attrs_list_insert): Use decl_or_value.
(attrs_list_union): Adjust.
(attrs_list_mpdv_union): New.
(tie_break_pointers): New.
(canon_value_cmp): New.
(unshare_variable): Return possibly-modified slot.
(vars_copy_1): Adjust.
(var_reg_decl_set): Adjust. Split out of...
(var_reg_set): ... this.
(get_init_value): Adjust.
(var_reg_delete_and_set): Adjust.
(var_reg_delete): Adjust.
(var_regno_delete): Adjust.
(var_mem_decl_set): Split out of...
(var_mem_set): ... this.
(var_mem_delete_and_set): Adjust.
(var_mem_delete): Adjust.
(val_store): New.
(val_reset): New.
(val_resolve): New.
(variable_union): Adjust. Speed up merge of 1-part vars.
(variable_canonicalize): Use unshared slot.
(VALUED_RECURSED_INTO): New.
(find_loc_in_1pdv): New.
(struct dfset_merge): New.
(insert_into_intersection): New.
(intersect_loc_chains): New.
(loc_cmp): New.
(canonicalize_loc_order_check): New.
(canonicalize_values_mark): New.
(canonicalize_values_star): New.
(variable_merge_over_cur): New.
(variable_merge_over_src): New.
(dataflow_set_merge): New.
(dataflow_set_equiv_regs): New.
(remove_duplicate_values): New.
(struct dfset_post_merge): New.
(variable_post_merge_new_vals): New.
(variable_post_merge_perm_vals): New.
(dataflow_post_merge_adjust): New.
(find_mem_expr_in_1pdv): New.
(dataflow_set_preserve_mem_locs): New.
(dataflow_set_remove_mem_locs): New.
(dataflow_set_clear_at_call): New.
(onepart_variable_different_p): New.
(variable_different_p): Use it.
(dataflow_set_different_1): Adjust. Make detailed dump more verbose.
(track_expr_p): Add need_rtl parameter. Don't generate rtl if not needed.
(track_loc_p): Pass it true.
(struct count_use_info): New.
(find_use_val): New.
(replace_expr_with_values): New.
(log_op_type): New.
(use_type): New, partially split out of...
(count_uses): ... this. Count new micro-ops.
(count_uses_1): Adjust.
(count_stores): Adjust.
(count_with_sets): New.
(VAL_NEEDS_RESOLUTION): New.
(VAL_HOLDS_TRACK_EXPR): New.
(VAL_EXPR_IS_COPIED): New.
(VAL_EXPR_IS_CLOBBERED): New.
(add_uses): Adjust. Generate new micro-ops.
(add_uses_1): Adjust.
(add_stores): Generate new micro-ops.
(add_with_sets): New.
(find_src_status): Adjust.
(find_src_set_src): Adjust.
(compute_bb_dataflow): Use dataflow_set_clear_at_call. Handle new micro-ops. Canonicalize value equivalences.
(vt_find_locations): Compute total size of hash tables for dumping. Perform merge for var-tracking-assignments. Don't disregard single-block loops.
(dump_attrs_list): Handle decl_or_value.
(dump_variable): Take variable. Deal with decl_or_value.
(dump_variable_slot): New.
(dump_vars): Use it.
(dump_dataflow_sets): Adjust.
(set_slot_part): New, extended to support one-part variables after splitting out of...
(set_variable_part): ... this.
(clobber_slot_part): New, split out of...
(clobber_variable_part): ... this.
(delete_slot_part): New, split out of...
(delete_variable_part): ... this.
(check_wrap_constant): New.
(vt_expand_loc_callback): New.
(vt_expand_loc): New.
(emit_note_insn_var_location): Adjust. Handle values. Handle EMIT_NOTE_AFTER_CALL_INSN.
(emit_notes_for_differences_1): Adjust. Handle values.
(emit_notes_for_differences_2): Likewise.
(emit_notes_for_differences): Adjust.
(emit_notes_in_bb): Take pointer to set. Emit AFTER_CALL_INSN notes. Adjust. Handle new micro-ops.
(vt_add_function_parameters): Adjust. Create and bind values.
(vt_initialize): Adjust. Initialize scratch_regs and valvar_pool, flooded and perm. Initialize and use cselib. Log operations. Move some code to count_with_sets and add_with_sets.
(delete_debug_insns): New.
(vt_debug_insns_local): New.
(vt_finalize): Release permp, valvar_pool, scratch_regs. Finish cselib.
(var_tracking_main): If var-tracking-assignments is enabled but var-tracking isn't, delete debug insns and leave. Likewise if we exceed limits or fail the stack adjustments tests, and after all var-tracking processing.
More in var-tracking, from Jakub Jelinek <jakub@redhat.com>:
(dataflow_set): Add traversed_vars.
(value_chain, const_value_chain): New typedefs.
(value_chain_pool, value_chains): New variables.
(value_chain_htab_hash, value_chain_htab_eq, add_value_chain, add_value_chains, add_cselib_value_chains, remove_value_chain, remove_value_chains, remove_cselib_value_chains): New functions.
(shared_hash_find_slot_unshare_1, shared_hash_find_slot_1, shared_hash_find_slot_noinsert_1, shared_hash_find_1): New static inlines.
(shared_hash_find_slot_unshare, shared_hash_find_slot, shared_hash_find_slot_noinsert, shared_hash_find): Update.
(dst_can_be_shared): New variable.
(unshare_variable): Unshare set->vars if shared, use shared_hash_*. Clear dst_can_be_shared. If set->traversed_vars is non-NULL and different from set->vars, look up slot again instead of using the passed in slot.
(dataflow_set_init): Initialize traversed_vars.
(variable_union): Use shared_hash_*. Use initially NO_INSERT lookup if set->vars is shared. Don't keep slot cleared before calling unshare_variable. Unshare set->vars if needed. Adjust unshare_variable callers. Clear dst_can_be_shared if needed. Even ->refcount == 1 vars must be unshared if set->vars is shared and var needs to be modified.
(dataflow_set_union): Set traversed_vars during canonicalization.
(VALUE_CHANGED, DECL_CHANGED): Define.
(set_dv_changed, dv_changed_p): New static inlines.
(track_expr_p): Clear DECL_CHANGED.
(dump_dataflow_sets): Set it.
(variable_was_changed): Call set_dv_changed.
(emit_note_insn_var_location): Likewise.
(changed_variables_stack): New variable.
(check_changed_vars_1, check_changed_vars_2): New functions.
(emit_notes_for_changes): Do nothing if changed_variables is empty. Traverse changed_variables with check_changed_vars_1, call check_changed_vars_2 on each changed_variables_stack entry.
(emit_notes_in_bb): Add SET argument. Just clear it at the beginning, use it instead of local &set, don't destroy it at the end.
(vt_emit_notes): Call dataflow_set_clear early on all VTI(bb)->out sets, never use them, instead use emit_notes_in_bb computed set, dataflow_set_clear also VTI(bb)->in when we are done with the basic block. Initialize changed_variables_stack, free it afterwards. If ENABLE_CHECKING verify that after noting differences to an empty set value_chains hash table is empty.
(vt_initialize): Initialize value_chains and value_chain_pool.
(vt_finalize): Delete value_chains htab, free value_chain_pool.
(variable_tracking_main): Call dump_dataflow_sets before calling vt_emit_notes, not after it.
* tree-flow.h (propagate_defs_into_debug_stmts): Declare.
(propagate_var_def_into_debug_stmts): Declare.
* df-problems.c (df_lr_bb_local_compute): Skip debug insns.
(df_set_note): Reject debug insns.
(df_whole_mw_reg_dead_p): Take added_notes_p argument. Don't add notes to debug insns.
(df_note_bb_compute): Adjust. Likewise.
(df_simulate_uses): Skip debug insns.
(df_simulate_initialize_backwards): Likewise.
* reg-stack.c (subst_stack_regs_in_debug_insn): New.
(subst_stack_regs_pat): Reject debug insns.
(convert_regs_1): Handle debug insns.
* Makefile.in (TREE_INLINE_H): Take pointer-set.h from GIMPLE_H.
(print-rtl.o): Depend on cselib.h.
(cselib.o): Depend on TREE_PASS_H.
(var-tracking.o): Depend on cselib.h and TARGET_H.
* sched-rgn.c (rgn_estimate_number_of_insns): Discount debug insns.
(init_ready_list): Skip boundary debug insns.
(add_branch_dependences): Skip debug insns.
(free_block_dependencies): Check for blocks with only debug insns.
(compute_priorities): Likewise.
* gimple.c (gss_for_code): Handle GIMPLE_DEBUG.
(gimple_build_with_ops_stat): Take subcode as unsigned. Adjust all callers.
(gimple_build_debug_bind_stat): New.
(empty_body_p): Skip debug stmts.
(gimple_has_side_effects): Likewise.
(gimple_rhs_has_side_effects): Likewise.
* gimple.h (enum gimple_debug_subcode, GIMPLE_DEBUG_BIND): New.
(gimple_build_debug_bind_stat): Declare.
(gimple_build_debug_bind): Define.
(is_gimple_debug): New.
(gimple_debug_bind_p): New.
(gimple_debug_bind_get_var): New.
(gimple_debug_bind_get_value): New.
(gimple_debug_bind_get_value_ptr): New.
(gimple_debug_bind_set_var): New.
(gimple_debug_bind_set_value): New.
(GIMPLE_DEBUG_BIND_NOVALUE): New internal temporary macro.
(gimple_debug_bind_reset_value): New.
(gimple_debug_bind_has_value_p): New.
(gsi_next_nondebug): New.
(gsi_prev_nondebug): New.
(gsi_start_nondebug_bb): New.
(gsi_last_nondebug_bb): New.
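The gimple.h entries above introduce iterator variants such as gsi_next_nondebug that step over GIMPLE_DEBUG statements. A minimal sketch of the skip-forward pattern those helpers embody, using an array and hypothetical types instead of GCC's statement iterators:

```c
/* Illustrative model of the gsi_next_nondebug idea: advance past
   position I in a statement sequence, skipping statements flagged as
   debug binds.  Not GCC's actual gimple API.  */
struct stmt_model
{
  int is_debug;   /* nonzero models is_gimple_debug (stmt) */
};

/* Return the index of the next non-debug stmt after I, or N if the
   rest of the sequence is debug-only.  */
static int
next_nondebug_index (const struct stmt_model *seq, int n, int i)
{
  for (i = i + 1; i < n; i++)
    if (!seq[i].is_debug)
      break;
  return i;
}
```

Passes that must treat a block "as if -g were absent" iterate with helpers like this instead of the plain iterator, so debug binds stay in the IL but never influence the pass's view of it.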
* sched-vis.c (print_pattern): Handle VAR_LOCATION.
(print_insn): Handle DEBUG_INSN.
* tree-cfg.c (remove_bb): Walk stmts backwards. Let loc of first insn prevail.
(first_stmt): Skip debug stmts.
(first_non_label_stmt): Likewise.
(last_stmt): Likewise.
(has_zero_uses_1): New.
(single_imm_use_1): New.
(verify_gimple_debug): New.
(verify_types_in_gimple_stmt): Handle debug stmts.
(verify_stmt): Likewise.
(debug_loop_num): Skip debug stmts.
(remove_edge_and_dominated_blocks): Remove dominators last.
* tree-ssa-reassoc.c (rewrite_expr_tree): Propagate into debug stmts.
(linearize_expr): Likewise.
* config/i386/i386.c (ix86_delegitimize_address): Call default implementation.
* config/ia64/ia64.c (ia64_safe_itanium_class): Handle debug insns.
(group_barrier_needed): Skip debug insns.
(emit_insn_group_barriers): Likewise.
(emit_all_insn_group_barriers): Likewise.
(ia64_variable_issue): Handle debug insns.
(ia64_dfa_new_cycle): Likewise.
(final_emit_insn_group_barriers): Skip debug insns.
(ia64_dwarf2out_def_steady_cfa): Take frame argument. Don't def cfa without frame.
(process_set): Likewise.
(process_for_unwind_directive): Pass frame on.
* config/rs6000/rs6000.c (TARGET_DELEGITIMIZE_ADDRESS): Define.
(rs6000_delegitimize_address): New.
(rs6000_debug_adjust_cost): Handle debug insns.
(is_microcoded_insn): Likewise.
(is_cracked_insn): Likewise.
(is_nonpipeline_insn): Likewise.
(insn_must_be_first_in_group): Likewise.
(insn_must_be_last_in_group): Likewise.
(force_new_group): Likewise.
* cfgrtl.c (rtl_split_block): Emit INSN_DELETED note if block contains only debug insns.
(rtl_merge_blocks): Skip debug insns.
(purge_dead_edges): Likewise.
(rtl_block_ends_with_call_p): Skip debug insns.
* dce.c (deletable_insn_p): Handle VAR_LOCATION.
(mark_reg_dependencies): Skip debug insns.
* params.def (PARAM_MIN_NONDEBUG_INSN_UID): New.
* tree-ssanames.c (release_ssa_name): Propagate def into debug stmts.
	* tree-ssa-threadedge.c
	(record_temporary_equivalences_from_stmts): Skip debug stmts.
	* regcprop.c (replace_oldest_value_addr): Skip debug insns.
	(replace_oldest_value_mem): Use ALL_REGS for debug insns.
	(copyprop_hardreg_forward_1): Handle debug insns.
	* reload1.c (reload): Skip debug insns.  Replace unassigned
	pseudos in debug insns with their equivalences.
	(eliminate_regs_in_insn): Skip debug insns.
	(emit_input_reload_insns): Skip debug insns at first, adjust them
	later.
	* tree-ssa-operands.c (add_virtual_operand): Reject debug stmts.
	(get_indirect_ref_operands): Pass opf_no_vops on.
	(get_expr_operands): Likewise.  Skip debug stmts.
	(parse_ssa_operands): Scan debug insns with opf_no_vops.

gcc/testsuite/ChangeLog:

	* gcc.dg/guality/guality.c: New.
	* gcc.dg/guality/guality.h: New.
	* gcc.dg/guality/guality.exp: New.
	* gcc.dg/guality/example.c: New.
	* lib/gcc-dg.exp (cleanup-dump): Remove .gk files.
	(cleanup-saved-temps): Likewise, .gkd files too.

gcc/cp/ChangeLog:

	* cp-tree.h (TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS): New.
	* cp-lang.c (cxx_dwarf_name): Pass it.
	* error.c (count_non_default_template_args): Take flags as
	argument.  Adjust all callers.  Skip counting of default
	arguments if the new flag is given.

ChangeLog:

	* Makefile.tpl (BUILD_CONFIG): Default to bootstrap-debug.
	* Makefile.in: Rebuilt.

contrib/ChangeLog:

	* compare-debug: Look for .gkd files and compare them.

config/ChangeLog:

	* bootstrap-debug.mk: Add comments.
	* bootstrap-debug-big.mk: New.
	* bootstrap-debug-lean.mk: New.
	* bootstrap-debug-ckovw.mk: Add comments.
	* bootstrap-debug-lib.mk: Drop CFLAGS for stages.  Use -g0 for
	TFLAGS in stage1.  Drop -fvar-tracking-assignments-toggle.

From-SVN: r151312
parent 8fc68cba09
commit b5b8b0ac64
135 changed files with 11099 additions and 1202 deletions
@@ -771,7 +771,7 @@ EXTRA_GCC_FLAGS = \
 GCC_FLAGS_TO_PASS = $(BASE_FLAGS_TO_PASS) $(EXTRA_HOST_FLAGS) $(EXTRA_GCC_FLAGS)
 
 @if gcc
-BUILD_CONFIG =
+BUILD_CONFIG = bootstrap-debug
 ifneq ($(BUILD_CONFIG),)
 include $(foreach CONFIG, $(BUILD_CONFIG), $(srcdir)/config/$(CONFIG).mk)
 endif
@@ -619,7 +619,7 @@ EXTRA_GCC_FLAGS = \
 GCC_FLAGS_TO_PASS = $(BASE_FLAGS_TO_PASS) $(EXTRA_HOST_FLAGS) $(EXTRA_GCC_FLAGS)
 
 @if gcc
-BUILD_CONFIG =
+BUILD_CONFIG = bootstrap-debug
 ifneq ($(BUILD_CONFIG),)
 include $(foreach CONFIG, $(BUILD_CONFIG), $(srcdir)/config/$(CONFIG).mk)
 endif
config/bootstrap-debug-big.mk (new file, 8 lines)
@@ -0,0 +1,8 @@
+# This BUILD_CONFIG option is a bit like bootstrap-debug-lean, but it
+# trades space for speed: instead of recompiling programs during
+# stage3, it generates dumps during stage2 and stage3, saving them all
+# until the final compare.
+
+STAGE2_CFLAGS += -gtoggle -fdump-final-insns
+STAGE3_CFLAGS += -fdump-final-insns
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
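For context, a BUILD_CONFIG like this one is selected when invoking the top-level make; the invocation below is only an illustration of the usual pattern, not part of this commit:

```
make BUILD_CONFIG=bootstrap-debug-big bootstrap
```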
config/bootstrap-debug-ckovw.mk (new file, 16 lines)
@@ -0,0 +1,16 @@
+# This BUILD_CONFIG option is to be used along with
+# bootstrap-debug-lean and bootstrap-debug-lib in a full bootstrap, to
+# check that all host and target files are built with -fcompare-debug.
+
+# These arrange for a simple warning to be issued if -fcompare-debug
+# is not given.
+# BOOT_CFLAGS += -fcompare-debug="-w%n-fcompare-debug not overridden"
+# TFLAGS += -fcompare-debug="-w%n-fcompare-debug not overridden"
+
+# GCC_COMPARE_DEBUG="-w%n-fcompare-debug not overridden";
+
+FORCE_COMPARE_DEBUG = \
+	GCC_COMPARE_DEBUG=$${GCC_COMPARE_DEBUG--fcompare-debug-not-overridden}; \
+	export GCC_COMPARE_DEBUG;
+POSTSTAGE1_HOST_EXPORTS += $(FORCE_COMPARE_DEBUG)
+BASE_TARGET_EXPORTS += $(FORCE_COMPARE_DEBUG)
config/bootstrap-debug-lean.mk (new file, 11 lines)
@@ -0,0 +1,11 @@
+# This BUILD_CONFIG option is a bit like bootstrap-debug, but in
+# addition to comparing stripped object files, it also compares
+# compiler internal state during stage3.
+
+# This makes it slower than bootstrap-debug, for there's additional
+# dumping and recompilation during stage3.  bootstrap-debug-big can
+# avoid the recompilation, if plenty of disk space is available.
+
+STAGE2_CFLAGS += -gtoggle -fcompare-debug=
+STAGE3_CFLAGS += -fcompare-debug
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
config/bootstrap-debug-lib.mk (new file, 12 lines)
@@ -0,0 +1,12 @@
+# This BUILD_CONFIG option tests that target libraries built during
+# stage3 would have generated the same executable code if they were
+# compiled with -g0.
+
+# It uses -g0 rather than -gtoggle because -g is default on target
+# library builds, and toggling it where it's supposed to be disabled
+# breaks e.g. crtstuff on ppc.
+
+STAGE1_TFLAGS += -g0 -fcompare-debug=
+STAGE2_TFLAGS += -fcompare-debug=
+STAGE3_TFLAGS += -fcompare-debug=-g0
+do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
config/bootstrap-debug.mk
@@ -1,2 +1,12 @@
-STAGE2_CFLAGS += -g0
+# This BUILD_CONFIG option builds checks that toggling debug
+# information generation doesn't affect the generated object code.
+
+# It is very lightweight: in addition to not performing any additional
+# compilation (unlike bootstrap-debug-lean), it actually speeds up
+# stage2, for no debug information is generated when compiling with
+# the unoptimized stage1.
+
+# For more thorough testing, see bootstrap-debug-lean.mk
+
+STAGE2_CFLAGS += -gtoggle
 do-compare = $(SHELL) $(srcdir)/contrib/compare-debug $$f1 $$f2
config/bootstrap-time.mk (new file, 2 lines)
@@ -0,0 +1,2 @@
+BOOT_CFLAGS += -time=$(shell pwd)/time.log
+TFLAGS += -time=$(shell pwd)/time.log
contrib/compare-debug
@@ -141,4 +141,12 @@ $rm "$1.$suf1" "$2.$suf2"
 
 trap "exit $status; exit" 0 1 2 15
 
+if test -f "$1".gkd || test -f "$2".gkd; then
+  if cmp "$1".gkd "$2".gkd; then
+    :
+  else
+    status=$?
+  fi
+fi
+
 exit $status
gcc/Makefile.in
@@ -911,7 +911,7 @@ SCEV_H = tree-scalar-evolution.h $(GGC_H) tree-chrec.h $(PARAMS_H)
 LAMBDA_H = lambda.h $(TREE_H) vec.h $(GGC_H)
 TREE_DATA_REF_H = tree-data-ref.h $(LAMBDA_H) omega.h graphds.h $(SCEV_H)
 VARRAY_H = varray.h $(MACHMODE_H) $(SYSTEM_H) coretypes.h $(TM_H)
-TREE_INLINE_H = tree-inline.h pointer-set.h
+TREE_INLINE_H = tree-inline.h $(GIMPLE_H)
 REAL_H = real.h $(MACHMODE_H)
 IRA_INT_H = ira.h ira-int.h $(CFGLOOP_H) alloc-pool.h
 DBGCNT_H = dbgcnt.h dbgcnt.def
@@ -2653,7 +2653,7 @@ rtl.o : rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
 
 print-rtl.o : print-rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
     $(RTL_H) $(TREE_H) hard-reg-set.h $(BASIC_BLOCK_H) $(FLAGS_H) \
-    $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H)
+    $(BCONFIG_H) $(REAL_H) $(DIAGNOSTIC_H) cselib.h
 rtlanal.o : rtlanal.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TOPLEV_H) \
     $(RTL_H) hard-reg-set.h $(TM_P_H) insn-config.h $(RECOG_H) $(REAL_H) \
     $(FLAGS_H) $(REGS_H) output.h $(TARGET_H) $(FUNCTION_H) $(TREE_H) \
@@ -2832,8 +2832,9 @@ coverage.o : coverage.c $(GCOV_IO_H) $(CONFIG_H) $(SYSTEM_H) coretypes.h \
     $(HASHTAB_H) tree-iterator.h $(CGRAPH_H) $(TREE_PASS_H) gcov-io.c $(TM_P_H)
 cselib.o : cselib.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
     $(REGS_H) hard-reg-set.h $(FLAGS_H) $(REAL_H) insn-config.h $(RECOG_H) \
-    $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) cselib.h $(GGC_H) $(TM_P_H) \
-    gt-cselib.h $(PARAMS_H) alloc-pool.h $(HASHTAB_H) $(TARGET_H)
+    $(EMIT_RTL_H) $(TOPLEV_H) output.h $(FUNCTION_H) $(TREE_PASS_H) \
+    cselib.h gt-cselib.h $(GGC_H) $(TM_P_H) $(PARAMS_H) alloc-pool.h \
+    $(HASHTAB_H) $(TARGET_H)
 cse.o : cse.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) $(REGS_H) \
     hard-reg-set.h $(FLAGS_H) insn-config.h $(RECOG_H) $(EXPR_H) $(TOPLEV_H) \
     output.h $(FUNCTION_H) $(BASIC_BLOCK_H) $(GGC_H) $(TM_P_H) $(TIMEVAR_H) \
@@ -2923,7 +2924,7 @@ regstat.o : regstat.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
 var-tracking.o : var-tracking.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
     $(RTL_H) $(TREE_H) hard-reg-set.h insn-config.h reload.h $(FLAGS_H) \
     $(BASIC_BLOCK_H) output.h sbitmap.h alloc-pool.h $(FIBHEAP_H) $(HASHTAB_H) \
-    $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H)
+    $(REGS_H) $(EXPR_H) $(TIMEVAR_H) $(TREE_PASS_H) cselib.h $(TARGET_H)
 profile.o : profile.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
     $(TREE_H) $(FLAGS_H) output.h $(REGS_H) $(EXPR_H) $(FUNCTION_H) \
     $(TOPLEV_H) $(COVERAGE_H) $(TREE_FLOW_H) value-prof.h cfghooks.h \
|
|
@ -1341,7 +1341,7 @@ merge_in_block (int max_reg, basic_block bb)
|
|||
unsigned int uid = INSN_UID (insn);
|
||||
bool insn_is_add_or_inc = true;
|
||||
|
||||
if (!INSN_P (insn))
|
||||
if (!NONDEBUG_INSN_P (insn))
|
||||
continue;
|
||||
|
||||
/* This continue is deliberate. We do not want the uses of the
|
||||
|
@@ -1414,7 +1414,7 @@ merge_in_block (int max_reg, basic_block bb)
 
       /* If the inc insn was merged with a mem, the inc insn is gone
	 and there is nothing to update.  */
-      if (DF_INSN_UID_GET(uid))
+      if (DF_INSN_UID_GET (uid))
 	{
	  df_ref *def_rec;
	  df_ref *use_rec;
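The `INSN_P` to `NONDEBUG_INSN_P` switch above is the recurring pattern of this commit: debug insns still count as insns, but passes that affect code generation must skip them. A hypothetical stand-in (these are not GCC's real macros, which operate on rtx objects) illustrates the relationship between the two predicates:

```python
# Illustrative mock of the two predicates: debug insns satisfy INSN_P
# but are excluded by NONDEBUG_INSN_P, so optimizers ignore them.
INSN, JUMP_INSN, CALL_INSN, DEBUG_INSN, NOTE = range(5)

def insn_p(code):
    return code in (INSN, JUMP_INSN, CALL_INSN, DEBUG_INSN)

def nondebug_insn_p(code):
    return insn_p(code) and code != DEBUG_INSN

assert insn_p(DEBUG_INSN)                # debug insns are insns...
assert not nondebug_insn_p(DEBUG_INSN)   # ...but codegen passes skip them
assert nondebug_insn_p(CALL_INSN)
assert not nondebug_insn_p(NOTE)
```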
@@ -98,6 +98,9 @@ static int n_regs_saved;
 static HARD_REG_SET referenced_regs;
 
 
+typedef void refmarker_fn (rtx *loc, enum machine_mode mode, int hardregno,
+			   void *mark_arg);
+
 static int reg_save_code (int, enum machine_mode);
 static int reg_restore_code (int, enum machine_mode);
 
@@ -108,8 +111,9 @@ static void finish_saved_hard_regs (void);
 static int saved_hard_reg_compare_func (const void *, const void *);
 
 static void mark_set_regs (rtx, const_rtx, void *);
 static void add_stored_regs (rtx, const_rtx, void *);
-static void mark_referenced_regs (rtx);
+static void mark_referenced_regs (rtx *, refmarker_fn *mark, void *mark_arg);
+static refmarker_fn mark_reg_as_referenced;
+static refmarker_fn replace_reg_with_saved_mem;
 static int insert_save (struct insn_chain *, int, int, HARD_REG_SET *,
			 enum machine_mode *);
 static int insert_restore (struct insn_chain *, int, int, int,
@@ -770,7 +774,7 @@ save_call_clobbered_regs (void)
 
	  gcc_assert (!chain->is_caller_save_insn);
 
-	  if (INSN_P (insn))
+	  if (NONDEBUG_INSN_P (insn))
	    {
	      /* If some registers have been saved, see if INSN references
		 any of them.  We must restore them before the insn if so.  */
@@ -785,7 +789,8 @@ save_call_clobbered_regs (void)
	      else
		{
		  CLEAR_HARD_REG_SET (referenced_regs);
-		  mark_referenced_regs (PATTERN (insn));
+		  mark_referenced_regs (&PATTERN (insn),
+					mark_reg_as_referenced, NULL);
		  AND_HARD_REG_SET (referenced_regs, hard_regs_saved);
		}
 
@@ -858,6 +863,10 @@ save_call_clobbered_regs (void)
		    n_regs_saved++;
		}
	    }
+	  else if (DEBUG_INSN_P (insn) && n_regs_saved)
+	    mark_referenced_regs (&PATTERN (insn),
+				  replace_reg_with_saved_mem,
+				  save_mode);
 
	  if (chain->next == 0 || chain->next->block != chain->block)
	    {
@@ -947,52 +956,57 @@ add_stored_regs (rtx reg, const_rtx setter, void *data)
 
 /* Walk X and record all referenced registers in REFERENCED_REGS.  */
 static void
-mark_referenced_regs (rtx x)
+mark_referenced_regs (rtx *loc, refmarker_fn *mark, void *arg)
 {
-  enum rtx_code code = GET_CODE (x);
+  enum rtx_code code = GET_CODE (*loc);
   const char *fmt;
   int i, j;
 
   if (code == SET)
-    mark_referenced_regs (SET_SRC (x));
+    mark_referenced_regs (&SET_SRC (*loc), mark, arg);
   if (code == SET || code == CLOBBER)
     {
-      x = SET_DEST (x);
-      code = GET_CODE (x);
-      if ((code == REG && REGNO (x) < FIRST_PSEUDO_REGISTER)
+      loc = &SET_DEST (*loc);
+      code = GET_CODE (*loc);
+      if ((code == REG && REGNO (*loc) < FIRST_PSEUDO_REGISTER)
	  || code == PC || code == CC0
-	  || (code == SUBREG && REG_P (SUBREG_REG (x))
-	      && REGNO (SUBREG_REG (x)) < FIRST_PSEUDO_REGISTER
+	  || (code == SUBREG && REG_P (SUBREG_REG (*loc))
+	      && REGNO (SUBREG_REG (*loc)) < FIRST_PSEUDO_REGISTER
	      /* If we're setting only part of a multi-word register,
		 we shall mark it as referenced, because the words
		 that are not being set should be restored.  */
-	      && ((GET_MODE_SIZE (GET_MODE (x))
-		   >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))))
-		  || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (x)))
+	      && ((GET_MODE_SIZE (GET_MODE (*loc))
+		   >= GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc))))
+		  || (GET_MODE_SIZE (GET_MODE (SUBREG_REG (*loc)))
		      <= UNITS_PER_WORD))))
	return;
     }
   if (code == MEM || code == SUBREG)
     {
-      x = XEXP (x, 0);
-      code = GET_CODE (x);
+      loc = &XEXP (*loc, 0);
+      code = GET_CODE (*loc);
     }
 
   if (code == REG)
     {
-      int regno = REGNO (x);
+      int regno = REGNO (*loc);
       int hardregno = (regno < FIRST_PSEUDO_REGISTER ? regno
		       : reg_renumber[regno]);
 
       if (hardregno >= 0)
-	add_to_hard_reg_set (&referenced_regs, GET_MODE (x), hardregno);
+	mark (loc, GET_MODE (*loc), hardregno, arg);
+      else if (arg)
+	/* ??? Will we ever end up with an equiv expression in a debug
+	   insn, that would have required restoring a reg, or will
+	   reload take care of it for us?  */
+	return;
       /* If this is a pseudo that did not get a hard register, scan its
	 memory location, since it might involve the use of another
	 register, which might be saved.  */
       else if (reg_equiv_mem[regno] != 0)
-	mark_referenced_regs (XEXP (reg_equiv_mem[regno], 0));
+	mark_referenced_regs (&XEXP (reg_equiv_mem[regno], 0), mark, arg);
       else if (reg_equiv_address[regno] != 0)
-	mark_referenced_regs (reg_equiv_address[regno]);
+	mark_referenced_regs (&reg_equiv_address[regno], mark, arg);
       return;
     }
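The rewrite above turns a fixed-purpose walker into one parameterized by a callback that receives the *location* of each reference, so the same traversal can either record registers or rewrite them in place. A hypothetical miniature (indices stand in for rtx locations; none of these names are GCC's):

```python
# Miniature of the callback-style walker: passing a location lets one
# callback collect references and another rewrite them in place, as
# mark_reg_as_referenced and replace_reg_with_saved_mem do.
def walk_refs(refs, mark, arg):
    for i in range(len(refs)):
        mark(refs, i, arg)

def collect(refs, i, seen):
    seen.add(refs[i])

def replace(refs, i, repl):
    old, new = repl
    if refs[i] == old:
        refs[i] = new          # in-place rewrite through the location

refs = ['r3', 'r5', 'r3']
seen = set()
walk_refs(refs, collect, seen)
walk_refs(refs, replace, ('r3', 'save_slot'))
assert seen == {'r3', 'r5'}
assert refs == ['save_slot', 'r5', 'save_slot']
```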
@@ -1000,12 +1014,100 @@ mark_referenced_regs (rtx x)
   for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
     {
       if (fmt[i] == 'e')
-	mark_referenced_regs (XEXP (x, i));
+	mark_referenced_regs (&XEXP (*loc, i), mark, arg);
       else if (fmt[i] == 'E')
-	for (j = XVECLEN (x, i) - 1; j >= 0; j--)
-	  mark_referenced_regs (XVECEXP (x, i, j));
+	for (j = XVECLEN (*loc, i) - 1; j >= 0; j--)
+	  mark_referenced_regs (&XVECEXP (*loc, i, j), mark, arg);
     }
 }
 
+/* Parameter function for mark_referenced_regs() that adds registers
+   present in the insn and in equivalent mems and addresses to
+   referenced_regs.  */
+
+static void
+mark_reg_as_referenced (rtx *loc ATTRIBUTE_UNUSED,
+			enum machine_mode mode,
+			int hardregno,
+			void *arg ATTRIBUTE_UNUSED)
+{
+  add_to_hard_reg_set (&referenced_regs, mode, hardregno);
+}
+
+/* Parameter function for mark_referenced_regs() that replaces
+   registers referenced in a debug_insn that would have been restored,
+   should it be a non-debug_insn, with their save locations.  */
+
+static void
+replace_reg_with_saved_mem (rtx *loc,
+			    enum machine_mode mode,
+			    int regno,
+			    void *arg)
+{
+  unsigned int i, nregs = hard_regno_nregs [regno][mode];
+  rtx mem;
+  enum machine_mode *save_mode = (enum machine_mode *)arg;
+
+  for (i = 0; i < nregs; i++)
+    if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+      break;
+
+  /* If none of the registers in the range would need restoring, we're
+     all set.  */
+  if (i == nregs)
+    return;
+
+  while (++i < nregs)
+    if (!TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+      break;
+
+  if (i == nregs
+      && regno_save_mem[regno][nregs])
+    {
+      mem = copy_rtx (regno_save_mem[regno][nregs]);
+
+      if (nregs == (unsigned int) hard_regno_nregs[regno][save_mode[regno]])
+	mem = adjust_address_nv (mem, save_mode[regno], 0);
+
+      if (GET_MODE (mem) != mode)
+	{
+	  /* This is gen_lowpart_if_possible(), but without validating
+	     the newly-formed address.  */
+	  int offset = 0;
+
+	  if (WORDS_BIG_ENDIAN)
+	    offset = (MAX (GET_MODE_SIZE (GET_MODE (mem)), UNITS_PER_WORD)
+		      - MAX (GET_MODE_SIZE (mode), UNITS_PER_WORD));
+	  if (BYTES_BIG_ENDIAN)
+	    /* Adjust the address so that the address-after-the-data is
+	       unchanged.  */
+	    offset -= (MIN (UNITS_PER_WORD, GET_MODE_SIZE (mode))
+		       - MIN (UNITS_PER_WORD, GET_MODE_SIZE (GET_MODE (mem))));
+
+	  mem = adjust_address_nv (mem, mode, offset);
+	}
+    }
+  else
+    {
+      mem = gen_rtx_CONCATN (mode, rtvec_alloc (nregs));
+      for (i = 0; i < nregs; i++)
+	if (TEST_HARD_REG_BIT (hard_regs_saved, regno + i))
+	  {
+	    gcc_assert (regno_save_mem[regno + i][1]);
+	    XVECEXP (mem, 0, i) = copy_rtx (regno_save_mem[regno + i][1]);
+	  }
+	else
+	  {
+	    gcc_assert (save_mode[regno] != VOIDmode);
+	    XVECEXP (mem, 0, i) = gen_rtx_REG (save_mode [regno],
+					       regno + i);
+	  }
+    }
+
+  gcc_assert (GET_MODE (mem) == mode);
+  *loc = mem;
+}
+
+
 /* Insert a sequence of insns to restore.  Place these insns in front of
    CHAIN if BEFORE_P is nonzero, behind the insn otherwise.  MAXRESTORE is
@@ -62,6 +62,7 @@ inside_basic_block_p (const_rtx insn)
 
     case CALL_INSN:
     case INSN:
+    case DEBUG_INSN:
      return true;
 
    case BARRIER:
@@ -85,6 +86,7 @@ control_flow_insn_p (const_rtx insn)
    {
    case NOTE:
    case CODE_LABEL:
+    case DEBUG_INSN:
      return false;
 
    case JUMP_INSN:
@@ -1057,10 +1057,10 @@ flow_find_cross_jump (int mode ATTRIBUTE_UNUSED, basic_block bb1,
   while (true)
     {
       /* Ignore notes.  */
-      while (!INSN_P (i1) && i1 != BB_HEAD (bb1))
+      while (!NONDEBUG_INSN_P (i1) && i1 != BB_HEAD (bb1))
	i1 = PREV_INSN (i1);
 
-      while (!INSN_P (i2) && i2 != BB_HEAD (bb2))
+      while (!NONDEBUG_INSN_P (i2) && i2 != BB_HEAD (bb2))
	i2 = PREV_INSN (i2);
 
      if (i1 == BB_HEAD (bb1) || i2 == BB_HEAD (bb2))
@@ -1111,13 +1111,13 @@ flow_find_cross_jump (int mode ATTRIBUTE_UNUSED, basic_block bb1,
      Two, it keeps line number notes as matched as may be.  */
   if (ninsns)
     {
-      while (last1 != BB_HEAD (bb1) && !INSN_P (PREV_INSN (last1)))
+      while (last1 != BB_HEAD (bb1) && !NONDEBUG_INSN_P (PREV_INSN (last1)))
	last1 = PREV_INSN (last1);
 
      if (last1 != BB_HEAD (bb1) && LABEL_P (PREV_INSN (last1)))
	last1 = PREV_INSN (last1);
 
-      while (last2 != BB_HEAD (bb2) && !INSN_P (PREV_INSN (last2)))
+      while (last2 != BB_HEAD (bb2) && !NONDEBUG_INSN_P (PREV_INSN (last2)))
	last2 = PREV_INSN (last2);
 
      if (last2 != BB_HEAD (bb2) && LABEL_P (PREV_INSN (last2)))
@@ -1557,8 +1557,12 @@ try_crossjump_to_edge (int mode, edge e1, edge e2)
       /* Skip possible basic block header.  */
       if (LABEL_P (newpos2))
	newpos2 = NEXT_INSN (newpos2);
+      while (DEBUG_INSN_P (newpos2))
+	newpos2 = NEXT_INSN (newpos2);
       if (NOTE_P (newpos2))
	newpos2 = NEXT_INSN (newpos2);
+      while (DEBUG_INSN_P (newpos2))
+	newpos2 = NEXT_INSN (newpos2);
     }
 
   if (dump_file)
@@ -1643,9 +1647,16 @@ try_crossjump_to_edge (int mode, edge e1, edge e2)
   /* Skip possible basic block header.  */
   if (LABEL_P (newpos1))
     newpos1 = NEXT_INSN (newpos1);
+
+  while (DEBUG_INSN_P (newpos1))
+    newpos1 = NEXT_INSN (newpos1);
+
   if (NOTE_INSN_BASIC_BLOCK_P (newpos1))
     newpos1 = NEXT_INSN (newpos1);
+
+  while (DEBUG_INSN_P (newpos1))
+    newpos1 = NEXT_INSN (newpos1);
 
   redirect_from = split_block (src1, PREV_INSN (newpos1))->src;
   to_remove = single_succ (redirect_from);
@@ -2032,20 +2043,64 @@ bool
 delete_unreachable_blocks (void)
 {
   bool changed = false;
-  basic_block b, next_bb;
+  basic_block b, prev_bb;
 
   find_unreachable_blocks ();
 
-  /* Delete all unreachable basic blocks.  */
-
-  for (b = ENTRY_BLOCK_PTR->next_bb; b != EXIT_BLOCK_PTR; b = next_bb)
+  /* When we're in GIMPLE mode and there may be debug insns, we should
+     delete blocks in reverse dominator order, so as to get a chance
+     to substitute all released DEFs into debug stmts.  If we don't
+     have dominators information, walking blocks backward gets us a
+     better chance of retaining most debug information than
+     otherwise.  */
+  if (MAY_HAVE_DEBUG_STMTS && current_ir_type () == IR_GIMPLE
+      && dom_info_available_p (CDI_DOMINATORS))
     {
-      next_bb = b->next_bb;
-
-      if (!(b->flags & BB_REACHABLE))
+      for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
	{
-	  delete_basic_block (b);
-	  changed = true;
+	  prev_bb = b->prev_bb;
+
+	  if (!(b->flags & BB_REACHABLE))
+	    {
+	      /* Speed up the removal of blocks that don't dominate
+		 others.  Walking backwards, this should be the common
+		 case.  */
+	      if (!first_dom_son (CDI_DOMINATORS, b))
+		delete_basic_block (b);
+	      else
+		{
+		  VEC (basic_block, heap) *h
+		    = get_all_dominated_blocks (CDI_DOMINATORS, b);
+
+		  while (VEC_length (basic_block, h))
+		    {
+		      b = VEC_pop (basic_block, h);
+
+		      prev_bb = b->prev_bb;
+
+		      gcc_assert (!(b->flags & BB_REACHABLE));
+
+		      delete_basic_block (b);
+		    }
+
+		  VEC_free (basic_block, heap, h);
+		}
+
+	      changed = true;
+	    }
+	}
+    }
+  else
+    {
+      for (b = EXIT_BLOCK_PTR->prev_bb; b != ENTRY_BLOCK_PTR; b = prev_bb)
+	{
+	  prev_bb = b->prev_bb;
+
+	  if (!(b->flags & BB_REACHABLE))
+	    {
+	      delete_basic_block (b);
+	      changed = true;
+	    }
	}
     }
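The comment in the hunk above gives the rationale: deleting dominated blocks before their dominators means a block's released SSA defs can still be substituted into debug stmts of not-yet-deleted users. A hypothetical sketch of that ordering (names and the dict-based dominator tree are invented for illustration):

```python
# Sketch: unreachable blocks are removed children-first in the
# dominator tree, so every block is deleted before its dominator.
def delete_unreachable(dom_children, root, unreachable):
    deleted = []
    def walk(b):
        for child in dom_children.get(b, []):
            walk(child)                # recurse into dominated blocks first
        if b in unreachable:
            deleted.append(b)
    walk(root)
    return deleted

order = delete_unreachable({'entry': ['A'], 'A': ['B']}, 'entry', {'A', 'B'})
assert order == ['B', 'A']   # B is deleted before its dominator A
```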
gcc/cfgexpand.c (873 lines changed)
@@ -70,7 +70,13 @@ gimple_assign_rhs_to_tree (gimple stmt)
		    TREE_TYPE (gimple_assign_lhs (stmt)),
		    gimple_assign_rhs1 (stmt));
   else if (grhs_class == GIMPLE_SINGLE_RHS)
-    t = gimple_assign_rhs1 (stmt);
+    {
+      t = gimple_assign_rhs1 (stmt);
+      /* Avoid modifying this tree in place below.  */
+      if (gimple_has_location (stmt) && CAN_HAVE_LOCATION_P (t)
+	  && gimple_location (stmt) != EXPR_LOCATION (t))
+	t = copy_node (t);
+    }
   else
     gcc_unreachable ();
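The `copy_node` call above exists because tree nodes can be shared between statements; a sketch of the hazard it avoids, using a plain dict as a stand-in for a shared tree node:

```python
# Why copy before mutating: a shared node mutated in place would leak
# the new location to every other statement that references it.
import copy

node = {"loc": 1}
stmts = [node, node]        # two statements sharing one tree node

patched = copy.copy(node)   # copy_node-style shallow copy
patched["loc"] = 99         # safe: the shared original is untouched

assert stmts[1]["loc"] == 1
assert patched["loc"] == 99
```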
@@ -1834,7 +1840,8 @@ maybe_dump_rtl_for_gimple_stmt (gimple stmt, rtx since)
   if (dump_file && (dump_flags & TDF_DETAILS))
     {
       fprintf (dump_file, "\n;; ");
-      print_gimple_stmt (dump_file, stmt, 0, TDF_SLIM);
+      print_gimple_stmt (dump_file, stmt, 0,
+			 TDF_SLIM | (dump_flags & TDF_LINENO));
       fprintf (dump_file, "\n");
 
       print_rtl (dump_file, since ? NEXT_INSN (since) : since);
@@ -2147,6 +2154,808 @@ expand_gimple_tailcall (basic_block bb, gimple stmt, bool *can_fallthru)
   return bb;
 }
 
+/* Return the difference between the floor and the truncated result of
+   a signed division by OP1 with remainder MOD.  */
+static rtx
+floor_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+  /* (mod != 0 ? (op1 / mod < 0 ? -1 : 0) : 0) */
+  return gen_rtx_IF_THEN_ELSE
+    (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+     gen_rtx_IF_THEN_ELSE
+     (mode, gen_rtx_LT (BImode,
+			gen_rtx_DIV (mode, op1, mod),
+			const0_rtx),
+      constm1_rtx, const0_rtx),
+     const0_rtx);
+}
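The RTL built above encodes a pure arithmetic identity, which can be checked independently of GCC. A Python verification of the formula in the comment (`trunc_div` here models C truncated division; these helpers are illustrative, not GCC code):

```python
# Check: floor(a/b) == trunc(a/b) + (mod != 0 ? (op1/mod < 0 ? -1 : 0) : 0),
# where mod is the remainder of the truncated division.
def trunc_div(a, b):
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def floor_sdiv_adjust(mod, op1):
    if mod == 0:
        return 0
    return -1 if trunc_div(op1, mod) < 0 else 0

for a in range(-7, 8):
    for b in (-3, -2, 2, 3):
        q = trunc_div(a, b)
        mod = a - b * q
        assert q + floor_sdiv_adjust(mod, b) == a // b  # Python // is floor
```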
+/* Return the difference between the ceil and the truncated result of
+   a signed division by OP1 with remainder MOD.  */
+static rtx
+ceil_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+  /* (mod != 0 ? (op1 / mod > 0 ? 1 : 0) : 0) */
+  return gen_rtx_IF_THEN_ELSE
+    (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+     gen_rtx_IF_THEN_ELSE
+     (mode, gen_rtx_GT (BImode,
+			gen_rtx_DIV (mode, op1, mod),
+			const0_rtx),
+      const1_rtx, const0_rtx),
+     const0_rtx);
+}
+
+/* Return the difference between the ceil and the truncated result of
+   an unsigned division by OP1 with remainder MOD.  */
+static rtx
+ceil_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1 ATTRIBUTE_UNUSED)
+{
+  /* (mod != 0 ? 1 : 0) */
+  return gen_rtx_IF_THEN_ELSE
+    (mode, gen_rtx_NE (BImode, mod, const0_rtx),
+     const1_rtx, const0_rtx);
+}
+
+/* Return the difference between the rounded and the truncated result
+   of a signed division by OP1 with remainder MOD.  Halfway cases are
+   rounded away from zero, rather than to the nearest even number.  */
+static rtx
+round_sdiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+  /* (abs (mod) >= abs (op1) - abs (mod)
+	? (op1 / mod > 0 ? 1 : -1)
+	: 0) */
+  return gen_rtx_IF_THEN_ELSE
+    (mode, gen_rtx_GE (BImode, gen_rtx_ABS (mode, mod),
+		       gen_rtx_MINUS (mode,
+				      gen_rtx_ABS (mode, op1),
+				      gen_rtx_ABS (mode, mod))),
+     gen_rtx_IF_THEN_ELSE
+     (mode, gen_rtx_GT (BImode,
+			gen_rtx_DIV (mode, op1, mod),
+			const0_rtx),
+      const1_rtx, constm1_rtx),
+     const0_rtx);
+}
+
+/* Return the difference between the rounded and the truncated result
+   of an unsigned division by OP1 with remainder MOD.  Halfway cases
+   are rounded away from zero, rather than to the nearest even
+   number.  */
+static rtx
+round_udiv_adjust (enum machine_mode mode, rtx mod, rtx op1)
+{
+  /* (mod >= op1 - mod ? 1 : 0) */
+  return gen_rtx_IF_THEN_ELSE
+    (mode, gen_rtx_GE (BImode, mod,
+		       gen_rtx_MINUS (mode, op1, mod)),
+     const1_rtx, const0_rtx);
+}
+
+/* Wrap modeless constants in CONST:MODE.  */
+rtx
+wrap_constant (enum machine_mode mode, rtx x)
+{
+  if (GET_MODE (x) != VOIDmode)
+    return x;
+
+  if (CONST_INT_P (x)
+      || GET_CODE (x) == CONST_FIXED
+      || GET_CODE (x) == CONST_DOUBLE
+      || GET_CODE (x) == LABEL_REF)
+    {
+      gcc_assert (mode != VOIDmode);
+
+      x = gen_rtx_CONST (mode, x);
+    }
+
+  return x;
+}
+
+/* Remove CONST wrapper added by wrap_constant().  */
+rtx
+unwrap_constant (rtx x)
+{
+  rtx ret = x;
+
+  if (GET_CODE (x) != CONST)
+    return x;
+
+  x = XEXP (x, 0);
+
+  if (CONST_INT_P (x)
+      || GET_CODE (x) == CONST_FIXED
+      || GET_CODE (x) == CONST_DOUBLE
+      || GET_CODE (x) == LABEL_REF)
+    ret = x;
+
+  return ret;
+}
+
+/* Return an RTX equivalent to the value of the tree expression
+   EXP.  */
+
+static rtx
+expand_debug_expr (tree exp)
+{
+  rtx op0 = NULL_RTX, op1 = NULL_RTX, op2 = NULL_RTX;
+  enum machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  int unsignedp = TYPE_UNSIGNED (TREE_TYPE (exp));
+
+  switch (TREE_CODE_CLASS (TREE_CODE (exp)))
+    {
+    case tcc_expression:
+      switch (TREE_CODE (exp))
+	{
+	case COND_EXPR:
+	  goto ternary;
+
+	case TRUTH_ANDIF_EXPR:
+	case TRUTH_ORIF_EXPR:
+	case TRUTH_AND_EXPR:
+	case TRUTH_OR_EXPR:
+	case TRUTH_XOR_EXPR:
+	  goto binary;
+
+	case TRUTH_NOT_EXPR:
+	  goto unary;
+
+	default:
+	  break;
+	}
+      break;
+
+    ternary:
+      op2 = expand_debug_expr (TREE_OPERAND (exp, 2));
+      if (!op2)
+	return NULL_RTX;
+      /* Fall through.  */
+
+    binary:
+    case tcc_binary:
+    case tcc_comparison:
+      op1 = expand_debug_expr (TREE_OPERAND (exp, 1));
+      if (!op1)
+	return NULL_RTX;
+      /* Fall through.  */
+
+    unary:
+    case tcc_unary:
+      op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
+      if (!op0)
+	return NULL_RTX;
+      break;
+
+    case tcc_type:
+    case tcc_statement:
+      gcc_unreachable ();
+
+    case tcc_constant:
+    case tcc_exceptional:
+    case tcc_declaration:
+    case tcc_reference:
+    case tcc_vl_exp:
+      break;
+    }
+
+  switch (TREE_CODE (exp))
+    {
+    case STRING_CST:
+      if (!lookup_constant_def (exp))
+	{
+	  op0 = gen_rtx_CONST_STRING (Pmode, TREE_STRING_POINTER (exp));
+	  op0 = gen_rtx_MEM (BLKmode, op0);
+	  set_mem_attributes (op0, exp, 0);
+	  return op0;
+	}
+      /* Fall through...  */
+
+    case INTEGER_CST:
+    case REAL_CST:
+    case FIXED_CST:
+      op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER);
+      return op0;
+
+    case COMPLEX_CST:
+      gcc_assert (COMPLEX_MODE_P (mode));
+      op0 = expand_debug_expr (TREE_REALPART (exp));
+      op0 = wrap_constant (GET_MODE_INNER (mode), op0);
+      op1 = expand_debug_expr (TREE_IMAGPART (exp));
+      op1 = wrap_constant (GET_MODE_INNER (mode), op1);
+      return gen_rtx_CONCAT (mode, op0, op1);
+
+    case VAR_DECL:
+    case PARM_DECL:
+    case FUNCTION_DECL:
+    case LABEL_DECL:
+    case CONST_DECL:
+    case RESULT_DECL:
+      op0 = DECL_RTL_IF_SET (exp);
+
+      /* This decl was probably optimized away.  */
+      if (!op0)
+	return NULL;
+
+      op0 = copy_rtx (op0);
+
+      if (GET_MODE (op0) == BLKmode)
+	{
+	  gcc_assert (MEM_P (op0));
+	  op0 = adjust_address_nv (op0, mode, 0);
+	  return op0;
+	}
+
+      /* Fall through.  */
+
+    adjust_mode:
+    case PAREN_EXPR:
+    case NOP_EXPR:
+    case CONVERT_EXPR:
+      {
+	enum machine_mode inner_mode = GET_MODE (op0);
+
+	if (mode == inner_mode)
+	  return op0;
+
+	if (inner_mode == VOIDmode)
+	  {
+	    inner_mode = TYPE_MODE (TREE_TYPE (TREE_OPERAND (exp, 0)));
+	    if (mode == inner_mode)
+	      return op0;
+	  }
+
+	if (FLOAT_MODE_P (mode) && FLOAT_MODE_P (inner_mode))
+	  {
+	    if (GET_MODE_BITSIZE (mode) == GET_MODE_BITSIZE (inner_mode))
+	      op0 = simplify_gen_subreg (mode, op0, inner_mode, 0);
+	    else if (GET_MODE_BITSIZE (mode) < GET_MODE_BITSIZE (inner_mode))
+	      op0 = simplify_gen_unary (FLOAT_TRUNCATE, mode, op0, inner_mode);
+	    else
+	      op0 = simplify_gen_unary (FLOAT_EXTEND, mode, op0, inner_mode);
+	  }
+	else if (FLOAT_MODE_P (mode))
+	  {
+	    if (TYPE_UNSIGNED (TREE_TYPE (TREE_OPERAND (exp, 0))))
+	      op0 = simplify_gen_unary (UNSIGNED_FLOAT, mode, op0, inner_mode);
+	    else
+	      op0 = simplify_gen_unary (FLOAT, mode, op0, inner_mode);
+	  }
+	else if (FLOAT_MODE_P (inner_mode))
+	  {
+	    if (unsignedp)
+	      op0 = simplify_gen_unary (UNSIGNED_FIX, mode, op0, inner_mode);
+	    else
+	      op0 = simplify_gen_unary (FIX, mode, op0, inner_mode);
+	  }
+	else if (CONSTANT_P (op0)
+		 || GET_MODE_BITSIZE (mode) <= GET_MODE_BITSIZE (inner_mode))
+	  op0 = simplify_gen_subreg (mode, op0, inner_mode,
+				     subreg_lowpart_offset (mode,
+							    inner_mode));
+	else if (unsignedp)
+	  op0 = gen_rtx_ZERO_EXTEND (mode, op0);
+	else
+	  op0 = gen_rtx_SIGN_EXTEND (mode, op0);
+
+	return op0;
+      }
+
+    case INDIRECT_REF:
+    case ALIGN_INDIRECT_REF:
+    case MISALIGNED_INDIRECT_REF:
+      op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
+      if (!op0)
+	return NULL;
+
+      gcc_assert (GET_MODE (op0) == Pmode
+		  || GET_CODE (op0) == CONST_INT
+		  || GET_CODE (op0) == CONST_DOUBLE);
+
+      if (TREE_CODE (exp) == ALIGN_INDIRECT_REF)
+	{
+	  int align = TYPE_ALIGN_UNIT (TREE_TYPE (exp));
+	  op0 = gen_rtx_AND (Pmode, op0, GEN_INT (-align));
+	}
+
+      op0 = gen_rtx_MEM (mode, op0);
+
+      set_mem_attributes (op0, exp, 0);
+
+      return op0;
+
+    case TARGET_MEM_REF:
+      if (TMR_SYMBOL (exp) && !DECL_RTL_SET_P (TMR_SYMBOL (exp)))
+	return NULL;
+
+      op0 = expand_debug_expr
+	(tree_mem_ref_addr (build_pointer_type (TREE_TYPE (exp)),
+			    exp));
+      if (!op0)
+	return NULL;
+
+      gcc_assert (GET_MODE (op0) == Pmode
+		  || GET_CODE (op0) == CONST_INT
+		  || GET_CODE (op0) == CONST_DOUBLE);
+
+      op0 = gen_rtx_MEM (mode, op0);
+
+      set_mem_attributes (op0, exp, 0);
+
+      return op0;
+
+    case ARRAY_REF:
+    case ARRAY_RANGE_REF:
+    case COMPONENT_REF:
+    case BIT_FIELD_REF:
+    case REALPART_EXPR:
+    case IMAGPART_EXPR:
+    case VIEW_CONVERT_EXPR:
+      {
+	enum machine_mode mode1;
|
||||
HOST_WIDE_INT bitsize, bitpos;
|
||||
tree offset;
|
||||
int volatilep = 0;
|
||||
tree tem = get_inner_reference (exp, &bitsize, &bitpos, &offset,
|
||||
&mode1, &unsignedp, &volatilep, false);
|
||||
rtx orig_op0;
|
||||
|
||||
orig_op0 = op0 = expand_debug_expr (tem);
|
||||
|
||||
if (!op0)
|
||||
return NULL;
|
||||
|
||||
if (offset)
|
||||
{
|
||||
gcc_assert (MEM_P (op0));
|
||||
|
||||
op1 = expand_debug_expr (offset);
|
||||
if (!op1)
|
||||
return NULL;
|
||||
|
||||
op0 = gen_rtx_MEM (mode, gen_rtx_PLUS (Pmode, XEXP (op0, 0), op1));
|
||||
}
|
||||
|
||||
if (MEM_P (op0))
|
||||
{
|
||||
if (bitpos >= BITS_PER_UNIT)
|
||||
{
|
||||
op0 = adjust_address_nv (op0, mode1, bitpos / BITS_PER_UNIT);
|
||||
bitpos %= BITS_PER_UNIT;
|
||||
}
|
||||
else if (bitpos < 0)
|
||||
{
|
||||
int units = (-bitpos + BITS_PER_UNIT - 1) / BITS_PER_UNIT;
|
||||
op0 = adjust_address_nv (op0, mode1, units);
|
||||
bitpos += units * BITS_PER_UNIT;
|
||||
}
|
||||
else if (bitpos == 0 && bitsize == GET_MODE_BITSIZE (mode))
|
||||
op0 = adjust_address_nv (op0, mode, 0);
|
||||
else if (GET_MODE (op0) != mode1)
|
||||
op0 = adjust_address_nv (op0, mode1, 0);
|
||||
else
|
||||
op0 = copy_rtx (op0);
|
||||
if (op0 == orig_op0)
|
||||
op0 = shallow_copy_rtx (op0);
|
||||
set_mem_attributes (op0, exp, 0);
|
||||
}
|
||||
|
||||
if (bitpos == 0 && mode == GET_MODE (op0))
|
||||
return op0;
|
||||
|
||||
if ((bitpos % BITS_PER_UNIT) == 0
|
||||
&& bitsize == GET_MODE_BITSIZE (mode1))
|
||||
{
|
||||
enum machine_mode opmode = GET_MODE (op0);
|
||||
|
||||
gcc_assert (opmode != BLKmode);
|
||||
|
||||
if (opmode == VOIDmode)
|
||||
opmode = mode1;
|
||||
|
||||
/* This condition may hold if we're expanding the address
|
||||
right past the end of an array that turned out not to
|
||||
be addressable (i.e., the address was only computed in
|
||||
debug stmts). The gen_subreg below would rightfully
|
||||
crash, and the address doesn't really exist, so just
|
||||
drop it. */
|
||||
if (bitpos >= GET_MODE_BITSIZE (opmode))
|
||||
return NULL;
|
||||
|
||||
return simplify_gen_subreg (mode, op0, opmode,
|
||||
bitpos / BITS_PER_UNIT);
|
||||
}
|
||||
|
||||
return simplify_gen_ternary (SCALAR_INT_MODE_P (GET_MODE (op0))
|
||||
&& TYPE_UNSIGNED (TREE_TYPE (exp))
|
||||
? SIGN_EXTRACT
|
||||
: ZERO_EXTRACT, mode,
|
||||
GET_MODE (op0) != VOIDmode
|
||||
? GET_MODE (op0) : mode1,
|
||||
op0, GEN_INT (bitsize), GEN_INT (bitpos));
|
||||
}
|
||||
|
||||
case EXC_PTR_EXPR:
|
||||
/* ??? Do not call get_exception_pointer(), we don't want to gen
|
||||
it if it hasn't been created yet. */
|
||||
return get_exception_pointer ();
|
||||
|
||||
case FILTER_EXPR:
|
||||
/* Likewise get_exception_filter(). */
|
||||
return get_exception_filter ();
|
||||
|
||||
case ABS_EXPR:
|
||||
return gen_rtx_ABS (mode, op0);
|
||||
|
||||
case NEGATE_EXPR:
|
||||
return gen_rtx_NEG (mode, op0);
|
||||
|
||||
case BIT_NOT_EXPR:
|
||||
return gen_rtx_NOT (mode, op0);
|
||||
|
||||
case FLOAT_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UNSIGNED_FLOAT (mode, op0);
|
||||
else
|
||||
return gen_rtx_FLOAT (mode, op0);
|
||||
|
||||
case FIX_TRUNC_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UNSIGNED_FIX (mode, op0);
|
||||
else
|
||||
return gen_rtx_FIX (mode, op0);
|
||||
|
||||
case POINTER_PLUS_EXPR:
|
||||
case PLUS_EXPR:
|
||||
return gen_rtx_PLUS (mode, op0, op1);
|
||||
|
||||
case MINUS_EXPR:
|
||||
return gen_rtx_MINUS (mode, op0, op1);
|
||||
|
||||
case MULT_EXPR:
|
||||
return gen_rtx_MULT (mode, op0, op1);
|
||||
|
||||
case RDIV_EXPR:
|
||||
case TRUNC_DIV_EXPR:
|
||||
case EXACT_DIV_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UDIV (mode, op0, op1);
|
||||
else
|
||||
return gen_rtx_DIV (mode, op0, op1);
|
||||
|
||||
case TRUNC_MOD_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UMOD (mode, op0, op1);
|
||||
else
|
||||
return gen_rtx_MOD (mode, op0, op1);
|
||||
|
||||
case FLOOR_DIV_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UDIV (mode, op0, op1);
|
||||
else
|
||||
{
|
||||
rtx div = gen_rtx_DIV (mode, op0, op1);
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = floor_sdiv_adjust (mode, mod, op1);
|
||||
return gen_rtx_PLUS (mode, div, adj);
|
||||
}
|
||||
|
||||
case FLOOR_MOD_EXPR:
|
||||
if (unsignedp)
|
||||
return gen_rtx_UMOD (mode, op0, op1);
|
||||
else
|
||||
{
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = floor_sdiv_adjust (mode, mod, op1);
|
||||
adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
|
||||
return gen_rtx_PLUS (mode, mod, adj);
|
||||
}
|
||||
|
||||
case CEIL_DIV_EXPR:
|
||||
if (unsignedp)
|
||||
{
|
||||
rtx div = gen_rtx_UDIV (mode, op0, op1);
|
||||
rtx mod = gen_rtx_UMOD (mode, op0, op1);
|
||||
rtx adj = ceil_udiv_adjust (mode, mod, op1);
|
||||
return gen_rtx_PLUS (mode, div, adj);
|
||||
}
|
||||
else
|
||||
{
|
||||
rtx div = gen_rtx_DIV (mode, op0, op1);
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = ceil_sdiv_adjust (mode, mod, op1);
|
||||
return gen_rtx_PLUS (mode, div, adj);
|
||||
}
|
||||
|
||||
case CEIL_MOD_EXPR:
|
||||
if (unsignedp)
|
||||
{
|
||||
rtx mod = gen_rtx_UMOD (mode, op0, op1);
|
||||
rtx adj = ceil_udiv_adjust (mode, mod, op1);
|
||||
adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
|
||||
return gen_rtx_PLUS (mode, mod, adj);
|
||||
}
|
||||
else
|
||||
{
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = ceil_sdiv_adjust (mode, mod, op1);
|
||||
adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
|
||||
return gen_rtx_PLUS (mode, mod, adj);
|
||||
}
|
||||
|
||||
case ROUND_DIV_EXPR:
|
||||
if (unsignedp)
|
||||
{
|
||||
rtx div = gen_rtx_UDIV (mode, op0, op1);
|
||||
rtx mod = gen_rtx_UMOD (mode, op0, op1);
|
||||
rtx adj = round_udiv_adjust (mode, mod, op1);
|
||||
return gen_rtx_PLUS (mode, div, adj);
|
||||
}
|
||||
else
|
||||
{
|
||||
rtx div = gen_rtx_DIV (mode, op0, op1);
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = round_sdiv_adjust (mode, mod, op1);
|
||||
return gen_rtx_PLUS (mode, div, adj);
|
||||
}
|
||||
|
||||
case ROUND_MOD_EXPR:
|
||||
if (unsignedp)
|
||||
{
|
||||
rtx mod = gen_rtx_UMOD (mode, op0, op1);
|
||||
rtx adj = round_udiv_adjust (mode, mod, op1);
|
||||
adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
|
||||
return gen_rtx_PLUS (mode, mod, adj);
|
||||
}
|
||||
else
|
||||
{
|
||||
rtx mod = gen_rtx_MOD (mode, op0, op1);
|
||||
rtx adj = round_sdiv_adjust (mode, mod, op1);
|
||||
adj = gen_rtx_NEG (mode, gen_rtx_MULT (mode, adj, op1));
|
||||
return gen_rtx_PLUS (mode, mod, adj);
|
||||
}
|
||||
|
||||
    case LSHIFT_EXPR:
      return gen_rtx_ASHIFT (mode, op0, op1);

    case RSHIFT_EXPR:
      if (unsignedp)
        return gen_rtx_LSHIFTRT (mode, op0, op1);
      else
        return gen_rtx_ASHIFTRT (mode, op0, op1);

    case LROTATE_EXPR:
      return gen_rtx_ROTATE (mode, op0, op1);

    case RROTATE_EXPR:
      return gen_rtx_ROTATERT (mode, op0, op1);

    case MIN_EXPR:
      if (unsignedp)
        return gen_rtx_UMIN (mode, op0, op1);
      else
        return gen_rtx_SMIN (mode, op0, op1);

    case MAX_EXPR:
      if (unsignedp)
        return gen_rtx_UMAX (mode, op0, op1);
      else
        return gen_rtx_SMAX (mode, op0, op1);

    case BIT_AND_EXPR:
    case TRUTH_AND_EXPR:
      return gen_rtx_AND (mode, op0, op1);

    case BIT_IOR_EXPR:
    case TRUTH_OR_EXPR:
      return gen_rtx_IOR (mode, op0, op1);

    case BIT_XOR_EXPR:
    case TRUTH_XOR_EXPR:
      return gen_rtx_XOR (mode, op0, op1);

    case TRUTH_ANDIF_EXPR:
      return gen_rtx_IF_THEN_ELSE (mode, op0, op1, const0_rtx);

    case TRUTH_ORIF_EXPR:
      return gen_rtx_IF_THEN_ELSE (mode, op0, const_true_rtx, op1);

    case TRUTH_NOT_EXPR:
      return gen_rtx_EQ (mode, op0, const0_rtx);

    case LT_EXPR:
      if (unsignedp)
        return gen_rtx_LTU (mode, op0, op1);
      else
        return gen_rtx_LT (mode, op0, op1);

    case LE_EXPR:
      if (unsignedp)
        return gen_rtx_LEU (mode, op0, op1);
      else
        return gen_rtx_LE (mode, op0, op1);

    case GT_EXPR:
      if (unsignedp)
        return gen_rtx_GTU (mode, op0, op1);
      else
        return gen_rtx_GT (mode, op0, op1);

    case GE_EXPR:
      if (unsignedp)
        return gen_rtx_GEU (mode, op0, op1);
      else
        return gen_rtx_GE (mode, op0, op1);

    case EQ_EXPR:
      return gen_rtx_EQ (mode, op0, op1);

    case NE_EXPR:
      return gen_rtx_NE (mode, op0, op1);

    case UNORDERED_EXPR:
      return gen_rtx_UNORDERED (mode, op0, op1);

    case ORDERED_EXPR:
      return gen_rtx_ORDERED (mode, op0, op1);

    case UNLT_EXPR:
      return gen_rtx_UNLT (mode, op0, op1);

    case UNLE_EXPR:
      return gen_rtx_UNLE (mode, op0, op1);

    case UNGT_EXPR:
      return gen_rtx_UNGT (mode, op0, op1);

    case UNGE_EXPR:
      return gen_rtx_UNGE (mode, op0, op1);

    case UNEQ_EXPR:
      return gen_rtx_UNEQ (mode, op0, op1);

    case LTGT_EXPR:
      return gen_rtx_LTGT (mode, op0, op1);

    case COND_EXPR:
      return gen_rtx_IF_THEN_ELSE (mode, op0, op1, op2);

    case COMPLEX_EXPR:
      gcc_assert (COMPLEX_MODE_P (mode));
      if (GET_MODE (op0) == VOIDmode)
        op0 = gen_rtx_CONST (GET_MODE_INNER (mode), op0);
      if (GET_MODE (op1) == VOIDmode)
        op1 = gen_rtx_CONST (GET_MODE_INNER (mode), op1);
      return gen_rtx_CONCAT (mode, op0, op1);

    case ADDR_EXPR:
      op0 = expand_debug_expr (TREE_OPERAND (exp, 0));
      if (!op0 || !MEM_P (op0))
        return NULL;

      return XEXP (op0, 0);

    case VECTOR_CST:
      exp = build_constructor_from_list (TREE_TYPE (exp),
                                         TREE_VECTOR_CST_ELTS (exp));
      /* Fall through.  */

    case CONSTRUCTOR:
      if (TREE_CODE (TREE_TYPE (exp)) == VECTOR_TYPE)
        {
          unsigned i;
          tree val;

          op0 = gen_rtx_CONCATN
                  (mode, rtvec_alloc (TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp))));

          FOR_EACH_CONSTRUCTOR_VALUE (CONSTRUCTOR_ELTS (exp), i, val)
            {
              op1 = expand_debug_expr (val);
              if (!op1)
                return NULL;
              XVECEXP (op0, 0, i) = op1;
            }

          if (i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)))
            {
              op1 = expand_debug_expr
                      (fold_convert (TREE_TYPE (TREE_TYPE (exp)),
                                     integer_zero_node));

              if (!op1)
                return NULL;

              for (; i < TYPE_VECTOR_SUBPARTS (TREE_TYPE (exp)); i++)
                XVECEXP (op0, 0, i) = op1;
            }

          return op0;
        }
      else
        goto flag_unsupported;

    case CALL_EXPR:
      /* ??? Maybe handle some builtins?  */
      return NULL;

    case SSA_NAME:
      {
        int part = var_to_partition (SA.map, exp);

        if (part == NO_PARTITION)
          return NULL;

        gcc_assert (part >= 0 && (unsigned)part < SA.map->num_partitions);

        op0 = SA.partition_to_pseudo[part];
        goto adjust_mode;
      }

    case ERROR_MARK:
      return NULL;

    default:
    flag_unsupported:
#ifdef ENABLE_CHECKING
      debug_tree (exp);
      gcc_unreachable ();
#else
      return NULL;
#endif
    }
}

/* Expand the _LOCs in debug insns.  We run this after expanding all
   regular insns, so that any variables referenced in the function
   will have their DECL_RTLs set.  */

static void
expand_debug_locations (void)
{
  rtx insn;
  rtx last = get_last_insn ();
  int save_strict_alias = flag_strict_aliasing;

  /* New alias sets while setting up memory attributes cause
     -fcompare-debug failures, even though they don't bring about any
     codegen changes.  */
  flag_strict_aliasing = 0;

  for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
    if (DEBUG_INSN_P (insn))
      {
        tree value = (tree)INSN_VAR_LOCATION_LOC (insn);
        rtx val;
        enum machine_mode mode;

        if (value == NULL_TREE)
          val = NULL_RTX;
        else
          {
            val = expand_debug_expr (value);
            gcc_assert (last == get_last_insn ());
          }

        if (!val)
          val = gen_rtx_UNKNOWN_VAR_LOC ();
        else
          {
            mode = GET_MODE (INSN_VAR_LOCATION (insn));

            gcc_assert (mode == GET_MODE (val)
                        || (GET_MODE (val) == VOIDmode
                            && (CONST_INT_P (val)
                                || GET_CODE (val) == CONST_FIXED
                                || GET_CODE (val) == CONST_DOUBLE
                                || GET_CODE (val) == LABEL_REF)));
          }

        INSN_VAR_LOCATION_LOC (insn) = val;
      }

  flag_strict_aliasing = save_strict_alias;
}
/* Expand basic block BB from GIMPLE trees to RTL.  */

static basic_block

@@ -2234,9 +3043,10 @@ expand_gimple_basic_block (basic_block bb)

  for (; !gsi_end_p (gsi); gsi_next (&gsi))
    {
      gimple stmt = gsi_stmt (gsi);
      basic_block new_bb;

      stmt = gsi_stmt (gsi);

      /* Expand this statement, then evaluate the resulting RTL and
         fixup the CFG accordingly.  */
      if (gimple_code (stmt) == GIMPLE_COND)

@@ -2245,6 +3055,60 @@ expand_gimple_basic_block (basic_block bb)
          if (new_bb)
            return new_bb;
        }
      else if (gimple_debug_bind_p (stmt))
        {
          location_t sloc = get_curr_insn_source_location ();
          tree sblock = get_curr_insn_block ();
          gimple_stmt_iterator nsi = gsi;

          for (;;)
            {
              tree var = gimple_debug_bind_get_var (stmt);
              tree value;
              rtx val;
              enum machine_mode mode;

              if (gimple_debug_bind_has_value_p (stmt))
                value = gimple_debug_bind_get_value (stmt);
              else
                value = NULL_TREE;

              last = get_last_insn ();

              set_curr_insn_source_location (gimple_location (stmt));
              set_curr_insn_block (gimple_block (stmt));

              if (DECL_P (var))
                mode = DECL_MODE (var);
              else
                mode = TYPE_MODE (TREE_TYPE (var));

              val = gen_rtx_VAR_LOCATION
                      (mode, var, (rtx)value, VAR_INIT_STATUS_INITIALIZED);

              val = emit_debug_insn (val);

              if (dump_file && (dump_flags & TDF_DETAILS))
                {
                  /* We can't dump the insn with a TREE where an RTX
                     is expected.  */
                  INSN_VAR_LOCATION_LOC (val) = const0_rtx;
                  maybe_dump_rtl_for_gimple_stmt (stmt, last);
                  INSN_VAR_LOCATION_LOC (val) = (rtx)value;
                }

              gsi = nsi;
              gsi_next (&nsi);
              if (gsi_end_p (nsi))
                break;
              stmt = gsi_stmt (nsi);
              if (!gimple_debug_bind_p (stmt))
                break;
            }

          set_curr_insn_source_location (sloc);
          set_curr_insn_block (sblock);
        }
      else
        {
          if (is_gimple_call (stmt) && gimple_call_tail_p (stmt))

@@ -2718,6 +3582,9 @@ gimple_expand_cfg (void)
  FOR_BB_BETWEEN (bb, init_block->next_bb, EXIT_BLOCK_PTR, next_bb)
    bb = expand_gimple_basic_block (bb);

  if (MAY_HAVE_DEBUG_INSNS)
    expand_debug_locations ();

  execute_free_datastructures ();
  finish_out_of_ssa (&SA);
@@ -238,7 +238,7 @@ int epilogue_locator;
/* Hold current location information and last location information, so the
   datastructures are built lazily only when some instructions in given
   place are needed.  */
location_t curr_location, last_location;
static location_t curr_location, last_location;
static tree curr_block, last_block;
static int curr_rtl_loc = -1;

@@ -290,12 +290,17 @@ set_curr_insn_source_location (location_t location)
     time locators are not initialized.  */
  if (curr_rtl_loc == -1)
    return;
  if (location == last_location)
    return;
  curr_location = location;
}

/* Set current scope block.  */
/* Get current location.  */
location_t
get_curr_insn_source_location (void)
{
  return curr_location;
}

/* Set current scope block.  */
void
set_curr_insn_block (tree b)
{

@@ -307,6 +312,13 @@ set_curr_insn_block (tree b)
  curr_block = b;
}

/* Get current scope block.  */
tree
get_curr_insn_block (void)
{
  return curr_block;
}

/* Return current insn locator.  */
int
curr_insn_locator (void)

@@ -1120,6 +1132,7 @@ duplicate_insn_chain (rtx from, rtx to)
    {
      switch (GET_CODE (insn))
        {
        case DEBUG_INSN:
        case INSN:
        case CALL_INSN:
        case JUMP_INSN:

@@ -176,8 +176,8 @@ num_loop_insns (const struct loop *loop)
    {
      bb = bbs[i];
      ninsns++;
      for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn))
        if (INSN_P (insn))
      FOR_BB_INSNS (bb, insn)
        if (NONDEBUG_INSN_P (insn))
          ninsns++;
    }
  free(bbs);

@@ -199,9 +199,9 @@ average_num_loop_insns (const struct loop *loop)
    {
      bb = bbs[i];

      binsns = 1;
      for (insn = BB_HEAD (bb); insn != BB_END (bb); insn = NEXT_INSN (insn))
        if (INSN_P (insn))
      binsns = 0;
      FOR_BB_INSNS (bb, insn)
        if (NONDEBUG_INSN_P (insn))
          binsns++;

      ratio = loop->header->frequency == 0
gcc/cfgrtl.c
@@ -531,7 +531,26 @@ rtl_split_block (basic_block bb, void *insnp)
  insn = first_insn_after_basic_block_note (bb);

  if (insn)
    insn = PREV_INSN (insn);
    {
      rtx next = insn;

      insn = PREV_INSN (insn);

      /* If the block contains only debug insns, insn would have
         been NULL in a non-debug compilation, and then we'd end
         up emitting a DELETED note.  For -fcompare-debug
         stability, emit the note too.  */
      if (insn != BB_END (bb)
          && DEBUG_INSN_P (next)
          && DEBUG_INSN_P (BB_END (bb)))
        {
          while (next != BB_END (bb) && DEBUG_INSN_P (next))
            next = NEXT_INSN (next);

          if (next == BB_END (bb))
            emit_note_after (NOTE_INSN_DELETED, next);
        }
    }
  else
    insn = get_last_insn ();
}

@@ -566,11 +585,15 @@ rtl_merge_blocks (basic_block a, basic_block b)
{
  rtx b_head = BB_HEAD (b), b_end = BB_END (b), a_end = BB_END (a);
  rtx del_first = NULL_RTX, del_last = NULL_RTX;
  rtx b_debug_start = b_end, b_debug_end = b_end;
  int b_empty = 0;

  if (dump_file)
    fprintf (dump_file, "merging block %d into block %d\n", b->index, a->index);

  while (DEBUG_INSN_P (b_end))
    b_end = PREV_INSN (b_debug_start = b_end);

  /* If there was a CODE_LABEL beginning B, delete it.  */
  if (LABEL_P (b_head))
    {

@@ -636,9 +659,21 @@ rtl_merge_blocks (basic_block a, basic_block b)
  /* Reassociate the insns of B with A.  */
  if (!b_empty)
    {
      update_bb_for_insn_chain (a_end, b_end, a);
      update_bb_for_insn_chain (a_end, b_debug_end, a);

      a_end = b_end;
      a_end = b_debug_end;
    }
  else if (b_end != b_debug_end)
    {
      /* Move any deleted labels and other notes between the end of A
         and the debug insns that make up B after the debug insns,
         bringing the debug insns into A while keeping the notes after
         the end of A.  */
      if (NEXT_INSN (a_end) != b_debug_start)
        reorder_insns_nobb (NEXT_INSN (a_end), PREV_INSN (b_debug_start),
                            b_debug_end);
      update_bb_for_insn_chain (b_debug_start, b_debug_end, a);
      a_end = b_debug_end;
    }

  df_bb_delete (b->index);

@@ -2162,6 +2197,11 @@ purge_dead_edges (basic_block bb)
  bool found;
  edge_iterator ei;

  if (DEBUG_INSN_P (insn) && insn != BB_HEAD (bb))
    do
      insn = PREV_INSN (insn);
    while ((DEBUG_INSN_P (insn) || NOTE_P (insn)) && insn != BB_HEAD (bb));

  /* If this instruction cannot trap, remove REG_EH_REGION notes.  */
  if (NONJUMP_INSN_P (insn)
      && (note = find_reg_note (insn, REG_EH_REGION, NULL)))

@@ -2182,10 +2222,10 @@ purge_dead_edges (basic_block bb)
     latter can appear when nonlocal gotos are used.  */
  if (e->flags & EDGE_EH)
    {
      if (can_throw_internal (BB_END (bb))
      if (can_throw_internal (insn)
          /* If this is a call edge, verify that this is a call insn.  */
          && (! (e->flags & EDGE_ABNORMAL_CALL)
              || CALL_P (BB_END (bb))))
              || CALL_P (insn)))
        {
          ei_next (&ei);
          continue;

@@ -2193,7 +2233,7 @@ purge_dead_edges (basic_block bb)
    }
  else if (e->flags & EDGE_ABNORMAL_CALL)
    {
      if (CALL_P (BB_END (bb))
      if (CALL_P (insn)
          && (! (note = find_reg_note (insn, REG_EH_REGION, NULL))
              || INTVAL (XEXP (note, 0)) >= 0))
        {

@@ -2771,7 +2811,8 @@ rtl_block_ends_with_call_p (basic_block bb)
  while (!CALL_P (insn)
         && insn != BB_HEAD (bb)
         && (keep_with_call_p (insn)
             || NOTE_P (insn)))
             || NOTE_P (insn)
             || DEBUG_INSN_P (insn)))
    insn = PREV_INSN (insn);
  return (CALL_P (insn));
}
gcc/combine.c
@@ -921,7 +921,7 @@ create_log_links (void)
{
  FOR_BB_INSNS_REVERSE (bb, insn)
    {
      if (!INSN_P (insn))
      if (!NONDEBUG_INSN_P (insn))
        continue;

      /* Log links are created only once.  */

@@ -1129,7 +1129,7 @@ combine_instructions (rtx f, unsigned int nregs)
       insn = next ? next : NEXT_INSN (insn))
    {
      next = 0;
      if (INSN_P (insn))
      if (NONDEBUG_INSN_P (insn))
        {
          /* See if we know about function return values before this
             insn based upon SUBREG flags.  */

@@ -2161,6 +2161,209 @@ reg_subword_p (rtx x, rtx reg)
         && GET_MODE_CLASS (GET_MODE (x)) == MODE_INT;
}

#ifdef AUTO_INC_DEC
/* Replace auto-increment addressing modes with explicit operations to
   access the same addresses without modifying the corresponding
   registers.  If AFTER holds, SRC is meant to be reused after the
   side effect, otherwise it is to be reused before that.  */

static rtx
cleanup_auto_inc_dec (rtx src, bool after, enum machine_mode mem_mode)
{
  rtx x = src;
  const RTX_CODE code = GET_CODE (x);
  int i;
  const char *fmt;

  switch (code)
    {
    case REG:
    case CONST_INT:
    case CONST_DOUBLE:
    case CONST_FIXED:
    case CONST_VECTOR:
    case SYMBOL_REF:
    case CODE_LABEL:
    case PC:
    case CC0:
    case SCRATCH:
      /* SCRATCH must be shared because they represent distinct values.  */
      return x;
    case CLOBBER:
      if (REG_P (XEXP (x, 0)) && REGNO (XEXP (x, 0)) < FIRST_PSEUDO_REGISTER)
        return x;
      break;

    case CONST:
      if (shared_const_p (x))
        return x;
      break;

    case MEM:
      mem_mode = GET_MODE (x);
      break;

    case PRE_INC:
    case PRE_DEC:
    case POST_INC:
    case POST_DEC:
      gcc_assert (mem_mode != VOIDmode && mem_mode != BLKmode);
      if (after == (code == PRE_INC || code == PRE_DEC))
        x = cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode);
      else
        x = gen_rtx_PLUS (GET_MODE (x),
                          cleanup_auto_inc_dec (XEXP (x, 0), after, mem_mode),
                          GEN_INT ((code == PRE_INC || code == POST_INC)
                                   ? GET_MODE_SIZE (mem_mode)
                                   : -GET_MODE_SIZE (mem_mode)));
      return x;

    case PRE_MODIFY:
    case POST_MODIFY:
      if (after == (code == PRE_MODIFY))
        x = XEXP (x, 0);
      else
        x = XEXP (x, 1);
      return cleanup_auto_inc_dec (x, after, mem_mode);

    default:
      break;
    }

  /* Copy the various flags, fields, and other information.  We assume
     that all fields need copying, and then clear the fields that should
     not be copied.  That is the sensible default behavior, and forces
     us to explicitly document why we are *not* copying a flag.  */
  x = shallow_copy_rtx (x);

  /* We do not copy the USED flag, which is used as a mark bit during
     walks over the RTL.  */
  RTX_FLAG (x, used) = 0;

  /* We do not copy FRAME_RELATED for INSNs.  */
  if (INSN_P (x))
    RTX_FLAG (x, frame_related) = 0;

  fmt = GET_RTX_FORMAT (code);
  for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--)
    if (fmt[i] == 'e')
      XEXP (x, i) = cleanup_auto_inc_dec (XEXP (x, i), after, mem_mode);
    else if (fmt[i] == 'E' || fmt[i] == 'V')
      {
        int j;
        XVEC (x, i) = rtvec_alloc (XVECLEN (x, i));
        for (j = 0; j < XVECLEN (x, i); j++)
          XVECEXP (x, i, j)
            = cleanup_auto_inc_dec (XVECEXP (src, i, j), after, mem_mode);
      }

  return x;
}
#endif
/* Auxiliary data structure for propagate_for_debug_stmt.  */

struct rtx_subst_pair
{
  rtx from, to;
  bool changed;
#ifdef AUTO_INC_DEC
  bool adjusted;
  bool after;
#endif
};

/* Clean up any auto-updates in PAIR->to the first time this is called
   for a given PAIR.  PAIR->adjusted tells whether we've cleaned up
   before.  */

static void
auto_adjust_pair (struct rtx_subst_pair *pair ATTRIBUTE_UNUSED)
{
#ifdef AUTO_INC_DEC
  if (!pair->adjusted)
    {
      pair->adjusted = true;
      pair->to = cleanup_auto_inc_dec (pair->to, pair->after, VOIDmode);
    }
#endif
}

/* If *LOC is the same as FROM in the struct rtx_subst_pair passed as
   DATA, replace it with a copy of TO.  Handle SUBREGs of *LOC as
   well.  */

static int
propagate_for_debug_subst (rtx *loc, void *data)
{
  struct rtx_subst_pair *pair = (struct rtx_subst_pair *)data;
  rtx from = pair->from, to = pair->to;
  rtx x = *loc, s = x;

  if (rtx_equal_p (x, from)
      || (GET_CODE (x) == SUBREG && rtx_equal_p ((s = SUBREG_REG (x)), from)))
    {
      auto_adjust_pair (pair);
      if (pair->to != to)
        to = pair->to;
      else
        to = copy_rtx (to);
      if (s != x)
        {
          gcc_assert (GET_CODE (x) == SUBREG && SUBREG_REG (x) == s);
          to = simplify_gen_subreg (GET_MODE (x), to,
                                    GET_MODE (from), SUBREG_BYTE (x));
        }
      *loc = to;
      pair->changed = true;
      return -1;
    }

  return 0;
}

/* Replace occurrences of DEST with SRC in DEBUG_INSNs between INSN
   and LAST.  If MOVE holds, debug insns must also be moved past
   LAST.  */

static void
propagate_for_debug (rtx insn, rtx last, rtx dest, rtx src, bool move)
{
  struct rtx_subst_pair p;
  rtx next, move_pos = move ? last : NULL_RTX;

  p.from = dest;
  p.to = src;
  p.changed = false;

#ifdef AUTO_INC_DEC
  p.adjusted = false;
  p.after = move;
#endif

  next = NEXT_INSN (insn);
  while (next != last)
    {
      insn = next;
      next = NEXT_INSN (insn);
      if (DEBUG_INSN_P (insn))
        {
          for_each_rtx (&INSN_VAR_LOCATION_LOC (insn),
                        propagate_for_debug_subst, &p);
          if (!p.changed)
            continue;
          p.changed = false;
          if (move_pos)
            {
              remove_insn (insn);
              PREV_INSN (insn) = NEXT_INSN (insn) = NULL_RTX;
              move_pos = emit_debug_insn_after (insn, move_pos);
            }
          else
            df_insn_rescan (insn);
        }
    }
}
||||
/* Delete the conditional jump INSN and adjust the CFG correspondingly.
|
||||
Note that the INSN should be deleted *after* removing dead edges, so
|
||||
|
@@ -2217,7 +2420,9 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
     I2 and not in I3, a REG_DEAD note must be made.  */
  rtx i3dest_killed = 0;
  /* SET_DEST and SET_SRC of I2 and I1.  */
- rtx i2dest, i2src, i1dest = 0, i1src = 0;
+ rtx i2dest = 0, i2src = 0, i1dest = 0, i1src = 0;
+ /* Set if I2DEST was reused as a scratch register.  */
+ bool i2scratch = false;
  /* PATTERN (I1) and PATTERN (I2), or a copy of it in certain cases.  */
  rtx i1pat = 0, i2pat = 0;
  /* Indicates if I2DEST or I1DEST is in I2SRC or I1_SRC.  */

@@ -2301,7 +2506,7 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
  && GET_CODE (SET_DEST (PATTERN (i3))) != STRICT_LOW_PART
  && ! reg_overlap_mentioned_p (SET_SRC (PATTERN (i3)),
                                SET_DEST (PATTERN (i3)))
- && next_real_insn (i2) == i3)
+ && next_active_insn (i2) == i3)
  {
    rtx p2 = PATTERN (i2);

@@ -2334,6 +2539,7 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
  subst_low_luid = DF_INSN_LUID (i2);

  added_sets_2 = added_sets_1 = 0;
  i2src = SET_DEST (PATTERN (i3));
  i2dest = SET_SRC (PATTERN (i3));
  i2dest_killed = dead_or_set_p (i2, i2dest);

@@ -3006,6 +3212,8 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
      undobuf.frees = buf;
    }
  }

+ i2scratch = m_split != 0;
  }

  /* If recog_for_combine has discarded clobbers, try to use them

@@ -3100,6 +3308,8 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
  bool subst_done = false;
  newi2pat = NULL_RTX;

+ i2scratch = true;
+
  /* Get NEWDEST as a register in the proper mode.  We have already
     validated that we can do this.  */
  if (GET_MODE (i2dest) != split_mode && split_mode != VOIDmode)

@@ -3402,6 +3612,67 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)
    return 0;
  }

+ if (MAY_HAVE_DEBUG_INSNS)
+   {
+     struct undo *undo;
+
+     for (undo = undobuf.undos; undo; undo = undo->next)
+       if (undo->kind == UNDO_MODE)
+         {
+           rtx reg = *undo->where.r;
+           enum machine_mode new_mode = GET_MODE (reg);
+           enum machine_mode old_mode = undo->old_contents.m;
+
+           /* Temporarily revert mode back.  */
+           adjust_reg_mode (reg, old_mode);
+
+           if (reg == i2dest && i2scratch)
+             {
+               /* If we used i2dest as a scratch register with a
+                  different mode, substitute it for the original
+                  i2src while its original mode is temporarily
+                  restored, and then clear i2scratch so that we don't
+                  do it again later.  */
+               propagate_for_debug (i2, i3, reg, i2src, false);
+               i2scratch = false;
+               /* Put back the new mode.  */
+               adjust_reg_mode (reg, new_mode);
+             }
+           else
+             {
+               rtx tempreg = gen_raw_REG (old_mode, REGNO (reg));
+               rtx first, last;
+
+               if (reg == i2dest)
+                 {
+                   first = i2;
+                   last = i3;
+                 }
+               else
+                 {
+                   first = i3;
+                   last = undobuf.other_insn;
+                   gcc_assert (last);
+                 }
+
+               /* We're dealing with a reg that changed mode but not
+                  meaning, so we want to turn it into a subreg for
+                  the new mode.  However, because of REG sharing and
+                  because its mode had already changed, we have to do
+                  it in two steps.  First, replace any debug uses of
+                  reg, with its original mode temporarily restored,
+                  with this copy we have created; then, replace the
+                  copy with the SUBREG of the original shared reg,
+                  once again changed to the new mode.  */
+               propagate_for_debug (first, last, reg, tempreg, false);
+               adjust_reg_mode (reg, new_mode);
+               propagate_for_debug (first, last, tempreg,
+                                    lowpart_subreg (old_mode, reg, new_mode),
+                                    false);
+             }
+         }
+   }

  /* If we will be able to accept this, we have made a
     change to the destination of I3.  This requires us to
     do a few adjustments.  */

@@ -3592,16 +3863,24 @@ try_combine (rtx i3, rtx i2, rtx i1, int *new_direct_jump_p)

  if (newi2pat)
    {
+     if (MAY_HAVE_DEBUG_INSNS && i2scratch)
+       propagate_for_debug (i2, i3, i2dest, i2src, false);
      INSN_CODE (i2) = i2_code_number;
      PATTERN (i2) = newi2pat;
    }
  else
-   SET_INSN_DELETED (i2);
+   {
+     if (MAY_HAVE_DEBUG_INSNS && i2src)
+       propagate_for_debug (i2, i3, i2dest, i2src, i3_subst_into_i2);
+     SET_INSN_DELETED (i2);
+   }

  if (i1)
    {
      LOG_LINKS (i1) = 0;
      REG_NOTES (i1) = 0;
+     if (MAY_HAVE_DEBUG_INSNS)
+       propagate_for_debug (i1, i3, i1dest, i1src, false);
      SET_INSN_DELETED (i1);
    }

@@ -12396,6 +12675,29 @@ reg_bitfield_target_p (rtx x, rtx body)

  return 0;
}

+/* Return the next insn after INSN that is neither a NOTE nor a
+   DEBUG_INSN.  This routine does not look inside SEQUENCEs.  */
+
+static rtx
+next_nonnote_nondebug_insn (rtx insn)
+{
+  while (insn)
+    {
+      insn = NEXT_INSN (insn);
+      if (insn == 0)
+        break;
+      if (NOTE_P (insn))
+        continue;
+      if (DEBUG_INSN_P (insn))
+        continue;
+      break;
+    }
+
+  return insn;
+}
+

/* Given a chain of REG_NOTES originally from FROM_INSN, try to place them
   as appropriate.  I3 and I2 are the insns resulting from the combination

@@ -12649,7 +12951,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2,
    place = from_insn;
  else if (reg_referenced_p (XEXP (note, 0), PATTERN (i3)))
    place = i3;
- else if (i2 != 0 && next_nonnote_insn (i2) == i3
+ else if (i2 != 0 && next_nonnote_nondebug_insn (i2) == i3
           && reg_referenced_p (XEXP (note, 0), PATTERN (i2)))
    place = i2;
  else if ((rtx_equal_p (XEXP (note, 0), elim_i2)

@@ -12667,7 +12969,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2,

  for (tem = PREV_INSN (tem); place == 0; tem = PREV_INSN (tem))
    {
-     if (! INSN_P (tem))
+     if (!NONDEBUG_INSN_P (tem))
        {
          if (tem == BB_HEAD (bb))
            break;

@@ -12868,7 +13170,7 @@ distribute_notes (rtx notes, rtx from_insn, rtx i3, rtx i2, rtx elim_i2,
  for (tem = PREV_INSN (place); ;
       tem = PREV_INSN (tem))
    {
-     if (! INSN_P (tem))
+     if (!NONDEBUG_INSN_P (tem))
        {
          if (tem == BB_HEAD (bb))
            break;

@@ -12958,7 +13260,9 @@ distribute_links (rtx links)
       (insn && (this_basic_block->next_bb == EXIT_BLOCK_PTR
                 || BB_HEAD (this_basic_block->next_bb) != insn));
       insn = NEXT_INSN (insn))
-   if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn)))
+   if (DEBUG_INSN_P (insn))
+     continue;
+   else if (INSN_P (insn) && reg_overlap_mentioned_p (reg, PATTERN (insn)))
    {
      if (reg_referenced_p (reg, PATTERN (insn)))
        place = insn;

@@ -380,10 +380,6 @@ fcommon
 Common Report Var(flag_no_common,0) Optimization
 Do not put uninitialized globals in the common section

-fconserve-stack
-Common Var(flag_conserve_stack) Optimization
-Do not perform optimizations increasing noticeably stack usage
-
 fcompare-debug=
 Common JoinedOrMissing RejectNegative Var(flag_compare_debug_opt)
 -fcompare-debug[=<opts>]	Compile with and without e.g. -gtoggle, and compare the final-insns dump

@@ -392,6 +388,10 @@ fcompare-debug-second
 Common RejectNegative Var(flag_compare_debug)
 Run only the second compilation of -fcompare-debug

+fconserve-stack
+Common Var(flag_conserve_stack) Optimization
+Do not perform optimizations increasing noticeably stack usage
+
 fcprop-registers
 Common Report Var(flag_cprop_registers) Optimization
 Perform a register copy-propagation optimization pass

@@ -470,14 +470,14 @@ fdump-unnumbered
 Common Report Var(flag_dump_unnumbered) VarExists
 Suppress output of instruction numbers, line number notes and addresses in debugging dumps

-fdwarf2-cfi-asm
-Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE)
-Enable CFI tables via GAS assembler directives.
-
+fdump-unnumbered-links
+Common Report Var(flag_dump_unnumbered_links) VarExists
+Suppress output of previous and next insn numbers in debugging dumps
+
 fdwarf2-cfi-asm
 Common Report Var(flag_dwarf2_cfi_asm) Init(HAVE_GAS_CFI_DIRECTIVE)
 Enable CFI tables via GAS assembler directives.

 fearly-inlining
 Common Report Var(flag_early_inlining) Init(1) Optimization
 Perform early inlining

@@ -1369,6 +1369,14 @@ fvar-tracking
 Common Report Var(flag_var_tracking) VarExists Optimization
 Perform variable tracking

+fvar-tracking-assignments
+Common Report Var(flag_var_tracking_assignments) VarExists Optimization
+Perform variable tracking by annotating assignments
+
+fvar-tracking-assignments-toggle
+Common Report Var(flag_var_tracking_assignments_toggle) VarExists Optimization
+Toggle -fvar-tracking-assignments
+
 fvar-tracking-uninit
 Common Report Var(flag_var_tracking_uninit) Optimization
 Perform variable tracking and also tag variables that are uninitialized

@@ -10752,9 +10752,9 @@ ix86_pic_register_p (rtx x)
   the DWARF output code.  */

 static rtx
-ix86_delegitimize_address (rtx orig_x)
+ix86_delegitimize_address (rtx x)
 {
-  rtx x = orig_x;
+  rtx orig_x = delegitimize_mem_from_attrs (x);
   /* reg_addend is NULL or a multiple of some register.  */
   rtx reg_addend = NULL_RTX;
   /* const_addend is NULL or a const_int.  */

@@ -10762,6 +10762,8 @@ ix86_delegitimize_address (rtx orig_x)
   /* This is the result, or NULL.  */
   rtx result = NULL_RTX;

+  x = orig_x;
+
   if (MEM_P (x))
     x = XEXP (x, 0);

@@ -5521,6 +5521,8 @@ ia64_safe_itanium_class (rtx insn)
 {
   if (recog_memoized (insn) >= 0)
     return get_attr_itanium_class (insn);
+  else if (DEBUG_INSN_P (insn))
+    return ITANIUM_CLASS_IGNORE;
   else
     return ITANIUM_CLASS_UNKNOWN;
 }

@@ -6277,6 +6279,7 @@ group_barrier_needed (rtx insn)
   switch (GET_CODE (insn))
     {
     case NOTE:
+    case DEBUG_INSN:
       break;

     case BARRIER:

@@ -6434,7 +6437,7 @@ emit_insn_group_barriers (FILE *dump)
       init_insn_group_barriers ();
       last_label = 0;
     }
-  else if (INSN_P (insn))
+  else if (NONDEBUG_INSN_P (insn))
    {
      insns_since_last_label = 1;

@@ -6482,7 +6485,7 @@ emit_all_insn_group_barriers (FILE *dump ATTRIBUTE_UNUSED)

       init_insn_group_barriers ();
     }
-  else if (INSN_P (insn))
+  else if (NONDEBUG_INSN_P (insn))
    {
      if (recog_memoized (insn) == CODE_FOR_insn_group_barrier)
        init_insn_group_barriers ();

@@ -6975,6 +6978,9 @@ ia64_variable_issue (FILE *dump ATTRIBUTE_UNUSED,
       pending_data_specs--;
     }

+  if (DEBUG_INSN_P (insn))
+    return 1;
+
   last_scheduled_insn = insn;
   memcpy (prev_cycle_state, curr_state, dfa_state_size);
   if (reload_completed)

@@ -7057,6 +7063,10 @@ ia64_dfa_new_cycle (FILE *dump, int verbose, rtx insn, int last_clock,
   int setup_clocks_p = FALSE;

   gcc_assert (insn && INSN_P (insn));

+  if (DEBUG_INSN_P (insn))
+    return 0;
+
   /* When a group barrier is needed for insn, last_scheduled_insn
      should be set.  */
   gcc_assert (!(reload_completed && safe_group_barrier_needed (insn))

@@ -9043,7 +9053,7 @@ final_emit_insn_group_barriers (FILE *dump ATTRIBUTE_UNUSED)
       need_barrier_p = 0;
       prev_insn = NULL_RTX;
     }
-  else if (INSN_P (insn))
+  else if (NONDEBUG_INSN_P (insn))
    {
      if (recog_memoized (insn) == CODE_FOR_insn_group_barrier)
        {

@@ -9605,15 +9615,18 @@ ia64_emit_deleted_label_after_insn (rtx insn)
 /* Define the CFA after INSN with the steady-state definition.  */

 static void
-ia64_dwarf2out_def_steady_cfa (rtx insn)
+ia64_dwarf2out_def_steady_cfa (rtx insn, bool frame)
 {
   rtx fp = frame_pointer_needed
     ? hard_frame_pointer_rtx
     : stack_pointer_rtx;
+  const char *label = ia64_emit_deleted_label_after_insn (insn);
+
+  if (!frame)
+    return;

   dwarf2out_def_cfa
-    (ia64_emit_deleted_label_after_insn (insn),
-     REGNO (fp),
+    (label, REGNO (fp),
     ia64_initial_elimination_offset
       (REGNO (arg_pointer_rtx), REGNO (fp))
     + ARG_POINTER_CFA_OFFSET (current_function_decl));

@@ -9706,8 +9719,7 @@ process_set (FILE *asm_out_file, rtx pat, rtx insn, bool unwind, bool frame)
       if (unwind)
        fprintf (asm_out_file, "\t.fframe "HOST_WIDE_INT_PRINT_DEC"\n",
                 -INTVAL (op1));
-      if (frame)
-       ia64_dwarf2out_def_steady_cfa (insn);
+      ia64_dwarf2out_def_steady_cfa (insn, frame);
     }
   else
     process_epilogue (asm_out_file, insn, unwind, frame);

@@ -9765,8 +9777,7 @@ process_set (FILE *asm_out_file, rtx pat, rtx insn, bool unwind, bool frame)
       if (unwind)
        fprintf (asm_out_file, "\t.vframe r%d\n",
                 ia64_dbx_register_number (dest_regno));
-      if (frame)
-       ia64_dwarf2out_def_steady_cfa (insn);
+      ia64_dwarf2out_def_steady_cfa (insn, frame);
       return 1;

     default:

@@ -9911,8 +9922,8 @@ process_for_unwind_directive (FILE *asm_out_file, rtx insn)
        fprintf (asm_out_file, "\t.copy_state %d\n",
                 cfun->machine->state_num);
       }
-     if (IA64_CHANGE_CFA_IN_EPILOGUE && frame)
-       ia64_dwarf2out_def_steady_cfa (insn);
+     if (IA64_CHANGE_CFA_IN_EPILOGUE)
+       ia64_dwarf2out_def_steady_cfa (insn, frame);
      need_copy_state = false;
    }
 }

@@ -979,6 +979,7 @@ static void rs6000_init_dwarf_reg_sizes_extra (tree);
 static rtx rs6000_legitimize_address (rtx, rtx, enum machine_mode);
 static rtx rs6000_debug_legitimize_address (rtx, rtx, enum machine_mode);
 static rtx rs6000_legitimize_tls_address (rtx, enum tls_model);
+static rtx rs6000_delegitimize_address (rtx);
 static void rs6000_output_dwarf_dtprel (FILE *, int, rtx) ATTRIBUTE_UNUSED;
 static rtx rs6000_tls_get_addr (void);
 static rtx rs6000_got_sym (void);

@@ -1436,6 +1437,9 @@ static const struct attribute_spec rs6000_attribute_table[] =
 #undef TARGET_USE_BLOCKS_FOR_CONSTANT_P
 #define TARGET_USE_BLOCKS_FOR_CONSTANT_P rs6000_use_blocks_for_constant_p

+#undef TARGET_DELEGITIMIZE_ADDRESS
+#define TARGET_DELEGITIMIZE_ADDRESS rs6000_delegitimize_address
+
 #undef TARGET_BUILTIN_RECIPROCAL
 #define TARGET_BUILTIN_RECIPROCAL rs6000_builtin_reciprocal

@@ -5080,6 +5084,33 @@ rs6000_debug_legitimize_address (rtx x, rtx oldx, enum machine_mode mode)
   return ret;
 }

+/* If ORIG_X is a constant pool reference, return its known value,
+   otherwise ORIG_X.  */
+
+static rtx
+rs6000_delegitimize_address (rtx x)
+{
+  rtx orig_x = delegitimize_mem_from_attrs (x);
+
+  x = orig_x;
+
+  if (!MEM_P (x))
+    return orig_x;
+
+  x = XEXP (x, 0);
+
+  if (legitimate_constant_pool_address_p (x)
+      && GET_CODE (XEXP (x, 1)) == CONST
+      && GET_CODE (XEXP (XEXP (x, 1), 0)) == MINUS
+      && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 0)) == SYMBOL_REF
+      && constant_pool_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 0))
+      && GET_CODE (XEXP (XEXP (XEXP (x, 1), 0), 1)) == SYMBOL_REF
+      && toc_relative_expr_p (XEXP (XEXP (XEXP (x, 1), 0), 1)))
+    return get_pool_constant (XEXP (XEXP (XEXP (x, 1), 0), 0));
+
+  return orig_x;
+}
+
 /* This is called from dwarf2out.c via TARGET_ASM_OUTPUT_DWARF_DTPREL.
    We need to emit DTP-relative relocations.  */

@@ -21304,7 +21335,7 @@ rs6000_debug_adjust_cost (rtx insn, rtx link, rtx dep_insn, int cost)
 static bool
 is_microcoded_insn (rtx insn)
 {
-  if (!insn || !INSN_P (insn)
+  if (!insn || !NONDEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -21332,7 +21363,7 @@ is_microcoded_insn (rtx insn)
 static bool
 is_cracked_insn (rtx insn)
 {
-  if (!insn || !INSN_P (insn)
+  if (!insn || !NONDEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -21360,7 +21391,7 @@ is_cracked_insn (rtx insn)
 static bool
 is_branch_slot_insn (rtx insn)
 {
-  if (!insn || !INSN_P (insn)
+  if (!insn || !NONDEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -21519,7 +21550,7 @@ static bool
 is_nonpipeline_insn (rtx insn)
 {
   enum attr_type type;
-  if (!insn || !INSN_P (insn)
+  if (!insn || !NONDEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -22098,8 +22129,8 @@ insn_must_be_first_in_group (rtx insn)
   enum attr_type type;

   if (!insn
-      || insn == NULL_RTX
-      || GET_CODE (insn) == NOTE
+      || GET_CODE (insn) == NOTE
+      || DEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -22229,8 +22260,8 @@ insn_must_be_last_in_group (rtx insn)
   enum attr_type type;

   if (!insn
-      || insn == NULL_RTX
-      || GET_CODE (insn) == NOTE
+      || GET_CODE (insn) == NOTE
+      || DEBUG_INSN_P (insn)
      || GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (insn)) == CLOBBER)
    return false;

@@ -22356,7 +22387,7 @@ force_new_group (int sched_verbose, FILE *dump, rtx *group_insns,
   bool end = *group_end;
   int i;

-  if (next_insn == NULL_RTX)
+  if (next_insn == NULL_RTX || DEBUG_INSN_P (next_insn))
    return can_issue_more;

   if (rs6000_sched_insert_nops > sched_finish_regroup_exact)

@@ -1,5 +1,5 @@
 /* Language-dependent hooks for C++.
-   Copyright 2001, 2002, 2004, 2007, 2008 Free Software Foundation, Inc.
+   Copyright 2001, 2002, 2004, 2007, 2008, 2009 Free Software Foundation, Inc.
   Contributed by Alexandre Oliva <aoliva@redhat.com>

 This file is part of GCC.

@@ -124,7 +124,9 @@ cxx_dwarf_name (tree t, int verbosity)
   gcc_assert (DECL_P (t));

   if (verbosity >= 2)
-    return decl_as_string (t, TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME);
+    return decl_as_string (t,
+                           TFF_DECL_SPECIFIERS | TFF_UNQUALIFIED_NAME
+                           | TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS);

   return cxx_printable_name (t, verbosity);
 }

@@ -3987,7 +3987,9 @@ enum overload_flags { NO_SPECIAL = 0, DTOR_FLAG, OP_FLAG, TYPENAME_FLAG };
   TFF_EXPR_IN_PARENS: parenthesize expressions.
   TFF_NO_FUNCTION_ARGUMENTS: don't show function arguments.
   TFF_UNQUALIFIED_NAME: do not print the qualifying scope of the
-     top-level entity.  */
+     top-level entity.
+   TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS: do not omit template arguments
+     identical to their defaults.  */

 #define TFF_PLAIN_IDENTIFIER (0)
 #define TFF_SCOPE (1)

@@ -4002,6 +4004,7 @@ enum overload_flags { NO_SPECIAL = 0, DTOR_FLAG, OP_FLAG, TYPENAME_FLAG };
 #define TFF_EXPR_IN_PARENS (1 << 9)
 #define TFF_NO_FUNCTION_ARGUMENTS (1 << 10)
 #define TFF_UNQUALIFIED_NAME (1 << 11)
+#define TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS (1 << 12)

 /* Returns the TEMPLATE_DECL associated to a TEMPLATE_TEMPLATE_PARM
    node.  */

@@ -84,7 +84,7 @@ static void dump_template_bindings (tree, tree, VEC(tree,gc) *);
 static void dump_scope (tree, int);
 static void dump_template_parms (tree, int, int);

-static int count_non_default_template_args (tree, tree);
+static int count_non_default_template_args (tree, tree, int);

 static const char *function_category (tree);
 static void maybe_print_instantiation_context (diagnostic_context *);

@@ -163,13 +163,20 @@ dump_template_argument (tree arg, int flags)
   match the (optional) default template parameter in PARAMS */

 static int
-count_non_default_template_args (tree args, tree params)
+count_non_default_template_args (tree args, tree params, int flags)
 {
   tree inner_args = INNERMOST_TEMPLATE_ARGS (args);
   int n = TREE_VEC_LENGTH (inner_args);
   int last;

-  if (params == NULL_TREE || !flag_pretty_templates)
+  if (params == NULL_TREE
+      /* We use this flag when generating debug information.  We don't
+         want to expand templates at this point, for this may generate
+         new decls, which gets decl counts out of sync, which may in
+         turn cause codegen differences between compilations with and
+         without -g.  */
+      || (flags & TFF_NO_OMIT_DEFAULT_TEMPLATE_ARGUMENTS) != 0
+      || !flag_pretty_templates)
    return n;

   for (last = n - 1; last >= 0; --last)

@@ -201,7 +208,7 @@ count_non_default_template_args (tree args, tree params)
 static void
 dump_template_argument_list (tree args, tree parms, int flags)
 {
-  int n = count_non_default_template_args (args, parms);
+  int n = count_non_default_template_args (args, parms, flags);
   int need_comma = 0;
   int i;

@@ -1448,7 +1455,7 @@ dump_template_parms (tree info, int primary, int flags)
            ? DECL_INNERMOST_TEMPLATE_PARMS (TI_TEMPLATE (info))
            : NULL_TREE);

-  len = count_non_default_template_args (args, params);
+  len = count_non_default_template_args (args, params, flags);

   args = INNERMOST_TEMPLATE_ARGS (args);
   for (ix = 0; ix != len; ix++)

gcc/cse.c
@@ -4358,6 +4358,8 @@ cse_insn (rtx insn)
       apply_change_group ();
       fold_rtx (x, insn);
     }
+  else if (DEBUG_INSN_P (insn))
+    canon_reg (PATTERN (insn), insn);

   /* Store the equivalent value in SRC_EQV, if different, or if the DEST
      is a STRICT_LOW_PART.  The latter condition is necessary because SRC_EQV

@@ -5788,7 +5790,7 @@ cse_insn (rtx insn)
     {
       prev = PREV_INSN (prev);
     }
-  while (prev != bb_head && NOTE_P (prev));
+  while (prev != bb_head && (NOTE_P (prev) || DEBUG_INSN_P (prev)));

   /* Do not swap the registers around if the previous instruction
      attaches a REG_EQUIV note to REG1.

@@ -6244,7 +6246,7 @@ cse_extended_basic_block (struct cse_basic_block_data *ebb_data)

        FIXME: This is a real kludge and needs to be done some other
        way.  */
-      if (INSN_P (insn)
+      if (NONDEBUG_INSN_P (insn)
          && num_insns++ > PARAM_VALUE (PARAM_MAX_CSE_INSNS))
        {
          flush_hash_table ();

@@ -6536,6 +6538,9 @@ count_reg_usage (rtx x, int *counts, rtx dest, int incr)
                      incr);
      return;

+    case DEBUG_INSN:
+      return;
+
    case CALL_INSN:
    case INSN:
    case JUMP_INSN:

@@ -6608,6 +6613,19 @@ count_reg_usage (rtx x, int *counts, rtx dest, int incr)
     }
 }

+/* Return true if a register is dead.  Can be used in for_each_rtx.  */
+
+static int
+is_dead_reg (rtx *loc, void *data)
+{
+  rtx x = *loc;
+  int *counts = (int *)data;
+
+  return (REG_P (x)
+          && REGNO (x) >= FIRST_PSEUDO_REGISTER
+          && counts[REGNO (x)] == 0);
+}
+
 /* Return true if set is live.  */
 static bool
 set_live_p (rtx set, rtx insn ATTRIBUTE_UNUSED, /* Only used with HAVE_cc0.  */

@@ -6628,9 +6646,7 @@ set_live_p (rtx set, rtx insn ATTRIBUTE_UNUSED, /* Only used with HAVE_cc0.  */
           || !reg_referenced_p (cc0_rtx, PATTERN (tem))))
    return false;
 #endif
-  else if (!REG_P (SET_DEST (set))
-           || REGNO (SET_DEST (set)) < FIRST_PSEUDO_REGISTER
-           || counts[REGNO (SET_DEST (set))] != 0
+  else if (!is_dead_reg (&SET_DEST (set), counts)
           || side_effects_p (SET_SRC (set)))
    return true;
  return false;

@@ -6662,6 +6678,29 @@ insn_live_p (rtx insn, int *counts)
        }
      return false;
    }
+  else if (DEBUG_INSN_P (insn))
+    {
+      rtx next;
+
+      for (next = NEXT_INSN (insn); next; next = NEXT_INSN (next))
+        if (NOTE_P (next))
+          continue;
+        else if (!DEBUG_INSN_P (next))
+          return true;
+        else if (INSN_VAR_LOCATION_DECL (insn) == INSN_VAR_LOCATION_DECL (next))
+          return false;
+
+      /* If this debug insn references a dead register, drop the
+         location expression for now.  ??? We could try to find the
+         def and see if propagation is possible.  */
+      if (for_each_rtx (&INSN_VAR_LOCATION_LOC (insn), is_dead_reg, counts))
+        {
+          INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
+          df_insn_rescan (insn);
+        }
+
+      return true;
+    }
   else
     return true;
 }

gcc/cselib.c
@ -38,6 +38,7 @@ along with GCC; see the file COPYING3. If not see
|
|||
#include "output.h"
|
||||
#include "ggc.h"
|
||||
#include "hashtab.h"
|
||||
#include "tree-pass.h"
|
||||
#include "cselib.h"
|
||||
#include "params.h"
|
||||
#include "alloc-pool.h"
|
||||
|
@ -54,9 +55,8 @@ static void unchain_one_elt_loc_list (struct elt_loc_list **);
|
|||
static int discard_useless_locs (void **, void *);
|
||||
static int discard_useless_values (void **, void *);
|
||||
static void remove_useless_values (void);
|
||||
static rtx wrap_constant (enum machine_mode, rtx);
|
||||
static unsigned int cselib_hash_rtx (rtx, int);
|
||||
static cselib_val *new_cselib_val (unsigned int, enum machine_mode);
|
||||
static cselib_val *new_cselib_val (unsigned int, enum machine_mode, rtx);
|
||||
static void add_mem_for_addr (cselib_val *, cselib_val *, rtx);
|
||||
static cselib_val *cselib_lookup_mem (rtx, int);
|
||||
static void cselib_invalidate_regno (unsigned int, enum machine_mode);
|
||||
|
@ -64,6 +64,15 @@ static void cselib_invalidate_mem (rtx);
|
|||
static void cselib_record_set (rtx, cselib_val *, cselib_val *);
|
||||
static void cselib_record_sets (rtx);
|
||||
|
||||
struct expand_value_data
|
||||
{
|
||||
bitmap regs_active;
|
||||
cselib_expand_callback callback;
|
||||
void *callback_arg;
|
||||
};
|
||||
|
||||
static rtx cselib_expand_value_rtx_1 (rtx, struct expand_value_data *, int);
|
||||
|
||||
/* There are three ways in which cselib can look up an rtx:
|
||||
- for a REG, the reg_values table (which is indexed by regno) is used
|
||||
- for a MEM, we recursively look up its address and then follow the
|
||||
|
@ -134,6 +143,20 @@ static alloc_pool elt_loc_list_pool, elt_list_pool, cselib_val_pool, value_pool;
|
|||
/* If nonnull, cselib will call this function before freeing useless
|
||||
VALUEs. A VALUE is deemed useless if its "locs" field is null. */
|
||||
void (*cselib_discard_hook) (cselib_val *);
|
||||
|
||||
/* If nonnull, cselib will call this function before recording sets or
|
||||
even clobbering outputs of INSN. All the recorded sets will be
|
||||
represented in the array sets[n_sets]. new_val_min can be used to
|
||||
tell whether values present in sets are introduced by this
|
||||
instruction. */
|
||||
void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets,
|
||||
int n_sets);
|
||||
|
||||
#define PRESERVED_VALUE_P(RTX) \
|
||||
(RTL_FLAG_CHECK1("PRESERVED_VALUE_P", (RTX), VALUE)->unchanging)
|
||||
#define LONG_TERM_PRESERVED_VALUE_P(RTX) \
|
||||
(RTL_FLAG_CHECK1("LONG_TERM_PRESERVED_VALUE_P", (RTX), VALUE)->in_struct)
|
||||
|
||||
|
||||
|
||||
/* Allocate a struct elt_list and fill in its two elements with the
|
||||
|
@ -199,11 +222,19 @@ unchain_one_value (cselib_val *v)
|
|||
}
|
||||
|
||||
/* Remove all entries from the hash table. Also used during
|
||||
initialization. If CLEAR_ALL isn't set, then only clear the entries
|
||||
which are known to have been used. */
|
||||
initialization. */
|
||||
|
||||
void
|
||||
cselib_clear_table (void)
|
||||
{
|
||||
cselib_reset_table_with_next_value (0);
|
||||
}
|
||||
|
||||
/* Remove all entries from the hash table, arranging for the next
|
||||
value to be numbered NUM. */
|
||||
|
||||
void
|
||||
cselib_reset_table_with_next_value (unsigned int num)
|
||||
{
|
||||
unsigned int i;
|
||||
|
||||
|
@ -214,15 +245,24 @@ cselib_clear_table (void)
|
|||
|
||||
n_used_regs = 0;
|
||||
|
||||
/* ??? Preserve constants? */
|
||||
htab_empty (cselib_hash_table);
|
||||
|
||||
n_useless_values = 0;
|
||||
|
||||
next_unknown_value = 0;
|
||||
next_unknown_value = num;
|
||||
|
||||
first_containing_mem = &dummy_val;
|
||||
}
|
||||
|
||||
/* Return the number of the next value that will be generated. */
|
||||
|
||||
unsigned int
|
||||
cselib_get_next_unknown_value (void)
|
||||
{
|
||||
return next_unknown_value;
|
||||
}
|
||||
|
||||
/* The equality test for our hash table. The first argument ENTRY is a table
|
||||
element (i.e. a cselib_val), while the second arg X is an rtx. We know
|
||||
that all callers of htab_find_slot_with_hash will wrap CONST_INTs into a
|
||||
|
@ -317,7 +357,7 @@ discard_useless_locs (void **x, void *info ATTRIBUTE_UNUSED)
|
|||
p = &(*p)->next;
|
||||
}
|
||||
|
||||
if (had_locs && v->locs == 0)
|
||||
if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
|
||||
{
|
||||
n_useless_values++;
|
||||
values_became_useless = 1;
|
||||
|
@ -332,7 +372,7 @@ discard_useless_values (void **x, void *info ATTRIBUTE_UNUSED)
|
|||
{
|
||||
cselib_val *v = (cselib_val *)*x;
|
||||
|
||||
if (v->locs == 0)
|
||||
if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
|
||||
{
|
||||
if (cselib_discard_hook)
|
||||
cselib_discard_hook (v);
|
||||
|
@ -378,6 +418,78 @@ remove_useless_values (void)
|
|||
gcc_assert (!n_useless_values);
|
||||
}
|
||||
|
||||
/* Arrange for a value to not be removed from the hash table even if
|
||||
it becomes useless. */
|
||||
|
||||
void
|
||||
cselib_preserve_value (cselib_val *v)
|
||||
{
|
||||
PRESERVED_VALUE_P (v->val_rtx) = 1;
|
||||
}
|
||||
|
||||
/* Test whether a value is preserved. */
|
||||
|
||||
bool
|
||||
cselib_preserved_value_p (cselib_val *v)
|
||||
{
|
||||
return PRESERVED_VALUE_P (v->val_rtx);
|
||||
}
|
||||
|
||||
/* Mark preserved values as preserved for the long term. */
|
||||
|
||||
static int
|
||||
cselib_preserve_definitely (void **slot, void *info ATTRIBUTE_UNUSED)
|
||||
{
|
||||
cselib_val *v = (cselib_val *)*slot;
|
||||
|
||||
if (PRESERVED_VALUE_P (v->val_rtx)
|
||||
&& !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx))
|
||||
LONG_TERM_PRESERVED_VALUE_P (v->val_rtx) = true;
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Clear the preserve marks for values not preserved for the long
|
||||
term. */
|
||||
|
||||
static int
|
||||
cselib_clear_preserve (void **slot, void *info ATTRIBUTE_UNUSED)
|
||||
{
|
||||
cselib_val *v = (cselib_val *)*slot;
|
||||
|
||||
if (PRESERVED_VALUE_P (v->val_rtx)
|
||||
&& !LONG_TERM_PRESERVED_VALUE_P (v->val_rtx))
|
||||
{
|
||||
PRESERVED_VALUE_P (v->val_rtx) = false;
|
||||
if (!v->locs)
|
||||
n_useless_values++;
|
||||
}
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* Clean all non-constant expressions in the hash table, but retain
|
||||
their values. */
|
||||
|
||||
void
|
||||
cselib_preserve_only_values (bool retain)
|
||||
{
|
||||
int i;
|
||||
|
||||
htab_traverse (cselib_hash_table,
|
||||
retain ? cselib_preserve_definitely : cselib_clear_preserve,
|
||||
NULL);
|
||||
|
||||
for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
|
||||
cselib_invalidate_regno (i, reg_raw_mode[i]);
|
||||
|
||||
cselib_invalidate_mem (callmem);
|
||||
|
||||
remove_useless_values ();
|
||||
|
||||
gcc_assert (first_containing_mem == &dummy_val);
|
||||
}
|
||||
|
||||
/* Return the mode in which a register was last set. If X is not a
|
||||
register, return its mode. If the mode in which the register was
|
||||
set is not known, or the value was already clobbered, return
|
||||
|
@@ -549,19 +661,6 @@ rtx_equal_for_cselib_p (rtx x, rtx y)
   return 1;
 }
 
-/* We need to pass down the mode of constants through the hash table
-   functions.  For that purpose, wrap them in a CONST of the appropriate
-   mode.  */
-static rtx
-wrap_constant (enum machine_mode mode, rtx x)
-{
-  if (!CONST_INT_P (x) && GET_CODE (x) != CONST_FIXED
-      && (GET_CODE (x) != CONST_DOUBLE || GET_MODE (x) != VOIDmode))
-    return x;
-  gcc_assert (mode != VOIDmode);
-  return gen_rtx_CONST (mode, x);
-}
-
 /* Hash an rtx.  Return 0 if we couldn't hash the rtx.
    For registers and memory locations, we look up their cselib_val structure
    and return its VALUE element.
@@ -748,7 +847,7 @@ cselib_hash_rtx (rtx x, int create)
    value is MODE.  */
 
 static inline cselib_val *
-new_cselib_val (unsigned int value, enum machine_mode mode)
+new_cselib_val (unsigned int value, enum machine_mode mode, rtx x)
 {
   cselib_val *e = (cselib_val *) pool_alloc (cselib_val_pool);
 
@@ -768,6 +867,18 @@ new_cselib_val (unsigned int value, enum machine_mode mode)
   e->addr_list = 0;
   e->locs = 0;
   e->next_containing_mem = 0;
 
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fprintf (dump_file, "cselib value %u ", value);
+      if (flag_dump_noaddr || flag_dump_unnumbered)
+        fputs ("# ", dump_file);
+      else
+        fprintf (dump_file, "%p ", (void*)e);
+      print_rtl_single (dump_file, x);
+      fputc ('\n', dump_file);
+    }
+
   return e;
 }
@@ -827,7 +938,7 @@ cselib_lookup_mem (rtx x, int create)
   if (! create)
     return 0;
 
-  mem_elt = new_cselib_val (++next_unknown_value, mode);
+  mem_elt = new_cselib_val (++next_unknown_value, mode, x);
   add_mem_for_addr (addr, mem_elt, x);
   slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x),
                                    mem_elt->value, INSERT);
@@ -842,7 +953,8 @@ cselib_lookup_mem (rtx x, int create)
    expand to the same place.  */
 
 static rtx
-expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
+expand_loc (struct elt_loc_list *p, struct expand_value_data *evd,
+            int max_depth)
 {
   rtx reg_result = NULL;
   unsigned int regno = UINT_MAX;
@@ -854,7 +966,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
          the same reg.  */
       if ((REG_P (p->loc))
           && (REGNO (p->loc) < regno)
-          && !bitmap_bit_p (regs_active, REGNO (p->loc)))
+          && !bitmap_bit_p (evd->regs_active, REGNO (p->loc)))
         {
           reg_result = p->loc;
           regno = REGNO (p->loc);
@@ -867,7 +979,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
       else if (!REG_P (p->loc))
         {
           rtx result, note;
-          if (dump_file)
+          if (dump_file && (dump_flags & TDF_DETAILS))
             {
               print_inline_rtx (dump_file, p->loc, 0);
               fprintf (dump_file, "\n");
@@ -878,7 +990,7 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
              && (note = find_reg_note (p->setting_insn, REG_EQUAL, NULL_RTX))
              && XEXP (note, 0) == XEXP (p->loc, 1))
            return XEXP (p->loc, 1);
-          result = cselib_expand_value_rtx (p->loc, regs_active, max_depth - 1);
+          result = cselib_expand_value_rtx_1 (p->loc, evd, max_depth - 1);
          if (result)
            return result;
        }
@@ -888,15 +1000,15 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
   if (regno != UINT_MAX)
     {
       rtx result;
-      if (dump_file)
+      if (dump_file && (dump_flags & TDF_DETAILS))
         fprintf (dump_file, "r%d\n", regno);
 
-      result = cselib_expand_value_rtx (reg_result, regs_active, max_depth - 1);
+      result = cselib_expand_value_rtx_1 (reg_result, evd, max_depth - 1);
       if (result)
         return result;
     }
 
-  if (dump_file)
+  if (dump_file && (dump_flags & TDF_DETAILS))
     {
       if (reg_result)
         {
@@ -930,6 +1042,35 @@ expand_loc (struct elt_loc_list *p, bitmap regs_active, int max_depth)
 
 rtx
 cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
 {
+  struct expand_value_data evd;
+
+  evd.regs_active = regs_active;
+  evd.callback = NULL;
+  evd.callback_arg = NULL;
+
+  return cselib_expand_value_rtx_1 (orig, &evd, max_depth);
+}
+
+/* Same as cselib_expand_value_rtx, but using a callback to try to
+   resolve VALUEs that expand to nothing.  */
+
+rtx
+cselib_expand_value_rtx_cb (rtx orig, bitmap regs_active, int max_depth,
+                            cselib_expand_callback cb, void *data)
+{
+  struct expand_value_data evd;
+
+  evd.regs_active = regs_active;
+  evd.callback = cb;
+  evd.callback_arg = data;
+
+  return cselib_expand_value_rtx_1 (orig, &evd, max_depth);
+}
+
+static rtx
+cselib_expand_value_rtx_1 (rtx orig, struct expand_value_data *evd,
+                           int max_depth)
+{
   rtx copy, scopy;
   int i, j;
 
@@ -980,13 +1121,13 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
            || regno == HARD_FRAME_POINTER_REGNUM)
          return orig;
 
-        bitmap_set_bit (regs_active, regno);
+        bitmap_set_bit (evd->regs_active, regno);
 
-        if (dump_file)
+        if (dump_file && (dump_flags & TDF_DETAILS))
          fprintf (dump_file, "expanding: r%d into: ", regno);
 
-        result = expand_loc (l->elt->locs, regs_active, max_depth);
-        bitmap_clear_bit (regs_active, regno);
+        result = expand_loc (l->elt->locs, evd, max_depth);
+        bitmap_clear_bit (evd->regs_active, regno);
 
        if (result)
          return result;
@@ -1017,8 +1158,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
 
     case SUBREG:
       {
-        rtx subreg = cselib_expand_value_rtx (SUBREG_REG (orig), regs_active,
-                                              max_depth - 1);
+        rtx subreg = cselib_expand_value_rtx_1 (SUBREG_REG (orig), evd,
+                                                max_depth - 1);
        if (!subreg)
          return NULL;
        scopy = simplify_gen_subreg (GET_MODE (orig), subreg,
@@ -1027,18 +1168,39 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
        if (scopy == NULL
            || (GET_CODE (scopy) == SUBREG
                && !REG_P (SUBREG_REG (scopy))
-                && !MEM_P (SUBREG_REG (scopy))))
+                && !MEM_P (SUBREG_REG (scopy))
+                && (REG_P (SUBREG_REG (orig))
+                    || MEM_P (SUBREG_REG (orig)))))
          return shallow_copy_rtx (orig);
        return scopy;
       }
 
     case VALUE:
-      if (dump_file)
-        fprintf (dump_file, "expanding value %s into: ",
-                 GET_MODE_NAME (GET_MODE (orig)));
+      {
+        rtx result;
+        if (dump_file && (dump_flags & TDF_DETAILS))
+          {
+            fputs ("\nexpanding ", dump_file);
+            print_rtl_single (dump_file, orig);
+            fputs (" into...", dump_file);
+          }
 
-      return expand_loc (CSELIB_VAL_PTR (orig)->locs, regs_active, max_depth);
+        if (!evd->callback)
+          result = NULL;
+        else
+          {
+            result = evd->callback (orig, evd->regs_active, max_depth,
+                                    evd->callback_arg);
+            if (result == orig)
+              result = NULL;
+            else if (result)
+              result = cselib_expand_value_rtx_1 (result, evd, max_depth);
+          }
+
+        if (!result)
+          result = expand_loc (CSELIB_VAL_PTR (orig)->locs, evd, max_depth);
+        return result;
+      }
     default:
       break;
     }
@@ -1057,7 +1219,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
        case 'e':
          if (XEXP (orig, i) != NULL)
            {
-              rtx result = cselib_expand_value_rtx (XEXP (orig, i), regs_active, max_depth - 1);
+              rtx result = cselib_expand_value_rtx_1 (XEXP (orig, i), evd,
+                                                      max_depth - 1);
              if (!result)
                return NULL;
              XEXP (copy, i) = result;
@@ -1071,7 +1234,8 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
          XVEC (copy, i) = rtvec_alloc (XVECLEN (orig, i));
          for (j = 0; j < XVECLEN (copy, i); j++)
            {
-              rtx result = cselib_expand_value_rtx (XVECEXP (orig, i, j), regs_active, max_depth - 1);
+              rtx result = cselib_expand_value_rtx_1 (XVECEXP (orig, i, j),
+                                                      evd, max_depth - 1);
              if (!result)
                return NULL;
              XVECEXP (copy, i, j) = result;
@@ -1155,13 +1319,17 @@ cselib_expand_value_rtx (rtx orig, bitmap regs_active, int max_depth)
     {
       XEXP (copy, 0)
        = gen_rtx_CONST (GET_MODE (XEXP (orig, 0)), XEXP (copy, 0));
-      if (dump_file)
+      if (dump_file && (dump_flags & TDF_DETAILS))
        fprintf (dump_file, "  wrapping const_int result in const to preserve mode %s\n",
                 GET_MODE_NAME (GET_MODE (XEXP (copy, 0))));
     }
   scopy = simplify_rtx (copy);
   if (scopy)
-    return scopy;
+    {
+      if (GET_MODE (copy) != GET_MODE (scopy))
+        scopy = wrap_constant (GET_MODE (copy), scopy);
+      return scopy;
+    }
   return copy;
 }
@@ -1199,7 +1367,7 @@ cselib_subst_to_values (rtx x)
        {
          /* This happens for autoincrements.  Assign a value that doesn't
             match any other.  */
-          e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+          e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
        }
      return e->val_rtx;
@@ -1215,7 +1383,7 @@ cselib_subst_to_values (rtx x)
    case PRE_DEC:
    case POST_MODIFY:
    case PRE_MODIFY:
-      e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+      e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
      return e->val_rtx;
 
    default:
@@ -1259,6 +1427,21 @@ cselib_subst_to_values (rtx x)
   return copy;
 }
 
+/* Log a lookup of X to the cselib table along with the result RET.  */
+
+static cselib_val *
+cselib_log_lookup (rtx x, cselib_val *ret)
+{
+  if (dump_file && (dump_flags & TDF_DETAILS))
+    {
+      fputs ("cselib lookup ", dump_file);
+      print_inline_rtx (dump_file, x, 2);
+      fprintf (dump_file, " => %u\n", ret ? ret->value : 0);
+    }
+
+  return ret;
+}
+
 /* Look up the rtl expression X in our tables and return the value it has.
    If CREATE is zero, we return NULL if we don't know the value.  Otherwise,
    we create a new one if possible, using mode MODE if X doesn't have a mode
@@ -1287,10 +1470,10 @@ cselib_lookup (rtx x, enum machine_mode mode, int create)
        l = l->next;
      for (; l; l = l->next)
        if (mode == GET_MODE (l->elt->val_rtx))
-          return l->elt;
+          return cselib_log_lookup (x, l->elt);
 
      if (! create)
-        return 0;
+        return cselib_log_lookup (x, 0);
 
      if (i < FIRST_PSEUDO_REGISTER)
        {
@@ -1300,7 +1483,7 @@ cselib_lookup (rtx x, enum machine_mode mode, int create)
            max_value_regs = n;
        }
 
-      e = new_cselib_val (++next_unknown_value, GET_MODE (x));
+      e = new_cselib_val (++next_unknown_value, GET_MODE (x), x);
      e->locs = new_elt_loc_list (e->locs, x);
      if (REG_VALUES (i) == 0)
        {
@@ -1313,34 +1496,34 @@ cselib_lookup (rtx x, enum machine_mode mode, int create)
      REG_VALUES (i)->next = new_elt_list (REG_VALUES (i)->next, e);
      slot = htab_find_slot_with_hash (cselib_hash_table, x, e->value, INSERT);
      *slot = e;
-      return e;
+      return cselib_log_lookup (x, e);
    }
 
  if (MEM_P (x))
-    return cselib_lookup_mem (x, create);
+    return cselib_log_lookup (x, cselib_lookup_mem (x, create));
 
  hashval = cselib_hash_rtx (x, create);
  /* Can't even create if hashing is not possible.  */
  if (! hashval)
-    return 0;
+    return cselib_log_lookup (x, 0);
 
  slot = htab_find_slot_with_hash (cselib_hash_table, wrap_constant (mode, x),
                                   hashval, create ? INSERT : NO_INSERT);
  if (slot == 0)
-    return 0;
+    return cselib_log_lookup (x, 0);
 
  e = (cselib_val *) *slot;
  if (e)
-    return e;
+    return cselib_log_lookup (x, e);
 
-  e = new_cselib_val (hashval, mode);
+  e = new_cselib_val (hashval, mode, x);
 
  /* We have to fill the slot before calling cselib_subst_to_values:
     the hash table is inconsistent until we do so, and
     cselib_subst_to_values will need to do lookups.  */
  *slot = (void *) e;
  e->locs = new_elt_loc_list (e->locs, cselib_subst_to_values (x));
-  return e;
+  return cselib_log_lookup (x, e);
 }
@@ -1427,7 +1610,7 @@ cselib_invalidate_regno (unsigned int regno, enum machine_mode mode)
                break;
            }
        }
-      if (v->locs == 0)
+      if (v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
        n_useless_values++;
    }
 }
@@ -1510,7 +1693,7 @@ cselib_invalidate_mem (rtx mem_rtx)
          unchain_one_elt_loc_list (p);
        }
 
-      if (had_locs && v->locs == 0)
+      if (had_locs && v->locs == 0 && !PRESERVED_VALUE_P (v->val_rtx))
        n_useless_values++;
 
      next = v->next_containing_mem;
@@ -1591,28 +1774,19 @@ cselib_record_set (rtx dest, cselib_val *src_elt, cselib_val *dest_addr_elt)
          REG_VALUES (dreg)->elt = src_elt;
        }
 
-      if (src_elt->locs == 0)
+      if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx))
        n_useless_values--;
      src_elt->locs = new_elt_loc_list (src_elt->locs, dest);
    }
  else if (MEM_P (dest) && dest_addr_elt != 0
           && cselib_record_memory)
    {
-      if (src_elt->locs == 0)
+      if (src_elt->locs == 0 && !PRESERVED_VALUE_P (src_elt->val_rtx))
        n_useless_values--;
      add_mem_for_addr (dest_addr_elt, src_elt, dest);
    }
 }
 
-/* Describe a single set that is part of an insn.  */
-struct set
-{
-  rtx src;
-  rtx dest;
-  cselib_val *src_elt;
-  cselib_val *dest_addr_elt;
-};
-
 /* There is no good way to determine how many elements there can be
    in a PARALLEL.  Since it's fairly cheap, use a really large number.  */
 #define MAX_SETS (FIRST_PSEUDO_REGISTER * 2)
@@ -1623,7 +1797,7 @@ cselib_record_sets (rtx insn)
 {
   int n_sets = 0;
   int i;
-  struct set sets[MAX_SETS];
+  struct cselib_set sets[MAX_SETS];
   rtx body = PATTERN (insn);
   rtx cond = 0;
 
@@ -1695,6 +1869,9 @@ cselib_record_sets (rtx insn)
        }
     }
 
+  if (cselib_record_sets_hook)
+    cselib_record_sets_hook (insn, sets, n_sets);
+
   /* Invalidate all locations written by this insn.  Note that the elts we
      looked up in the previous loop aren't affected, just some of their
      locations may go away.  */
@@ -1751,7 +1928,7 @@ cselib_process_insn (rtx insn)
          && GET_CODE (PATTERN (insn)) == ASM_OPERANDS
          && MEM_VOLATILE_P (PATTERN (insn))))
     {
-      cselib_clear_table ();
+      cselib_reset_table_with_next_value (next_unknown_value);
       return;
     }
 
@@ -1868,4 +2045,92 @@ cselib_finish (void)
   next_unknown_value = 0;
 }
 
+/* Dump the cselib_val *X to FILE *info.  */
+
+static int
+dump_cselib_val (void **x, void *info)
+{
+  cselib_val *v = (cselib_val *)*x;
+  FILE *out = (FILE *)info;
+  bool need_lf = true;
+
+  print_inline_rtx (out, v->val_rtx, 0);
+
+  if (v->locs)
+    {
+      struct elt_loc_list *l = v->locs;
+      if (need_lf)
+        {
+          fputc ('\n', out);
+          need_lf = false;
+        }
+      fputs (" locs:", out);
+      do
+        {
+          fprintf (out, "\n  from insn %i ",
+                   INSN_UID (l->setting_insn));
+          print_inline_rtx (out, l->loc, 4);
+        }
+      while ((l = l->next));
+      fputc ('\n', out);
+    }
+  else
+    {
+      fputs (" no locs", out);
+      need_lf = true;
+    }
+
+  if (v->addr_list)
+    {
+      struct elt_list *e = v->addr_list;
+      if (need_lf)
+        {
+          fputc ('\n', out);
+          need_lf = false;
+        }
+      fputs (" addr list:", out);
+      do
+        {
+          fputs ("\n  ", out);
+          print_inline_rtx (out, e->elt->val_rtx, 2);
+        }
+      while ((e = e->next));
+      fputc ('\n', out);
+    }
+  else
+    {
+      fputs (" no addrs", out);
+      need_lf = true;
+    }
+
+  if (v->next_containing_mem == &dummy_val)
+    fputs (" last mem\n", out);
+  else if (v->next_containing_mem)
+    {
+      fputs (" next mem ", out);
+      print_inline_rtx (out, v->next_containing_mem->val_rtx, 2);
+      fputc ('\n', out);
+    }
+  else if (need_lf)
+    fputc ('\n', out);
+
+  return 1;
+}
+
+/* Dump to OUT everything in the CSELIB table.  */
+
+void
+dump_cselib_table (FILE *out)
+{
+  fprintf (out, "cselib hash table:\n");
+  htab_traverse (cselib_hash_table, dump_cselib_val, out);
+  if (first_containing_mem != &dummy_val)
+    {
+      fputs ("first mem ", out);
+      print_inline_rtx (out, first_containing_mem->val_rtx, 2);
+      fputc ('\n', out);
+    }
+  fprintf (out, "last unknown value %i\n", next_unknown_value);
+}
+
 #include "gt-cselib.h"

gcc/cselib.h
@@ -53,7 +53,18 @@ struct GTY(()) elt_list {
   cselib_val *elt;
 };
 
+/* Describe a single set that is part of an insn.  */
+struct cselib_set
+{
+  rtx src;
+  rtx dest;
+  cselib_val *src_elt;
+  cselib_val *dest_addr_elt;
+};
+
 extern void (*cselib_discard_hook) (cselib_val *);
+extern void (*cselib_record_sets_hook) (rtx insn, struct cselib_set *sets,
+                                        int n_sets);
 
 extern cselib_val *cselib_lookup (rtx, enum machine_mode, int);
 extern void cselib_init (bool record_memory);
@@ -64,5 +75,16 @@ extern enum machine_mode cselib_reg_set_mode (const_rtx);
 extern int rtx_equal_for_cselib_p (rtx, rtx);
 extern int references_value_p (const_rtx, int);
 extern rtx cselib_expand_value_rtx (rtx, bitmap, int);
+typedef rtx (*cselib_expand_callback)(rtx, bitmap, int, void *);
+extern rtx cselib_expand_value_rtx_cb (rtx, bitmap, int,
+                                       cselib_expand_callback, void*);
 extern rtx cselib_subst_to_values (rtx);
 extern void cselib_invalidate_rtx (rtx);
 
+extern void cselib_reset_table_with_next_value (unsigned int);
+extern unsigned int cselib_get_next_unknown_value (void);
+extern void cselib_preserve_value (cselib_val *);
+extern bool cselib_preserved_value_p (cselib_val *);
+extern void cselib_preserve_only_values (bool);
+
+extern void dump_cselib_table (FILE *);
@@ -124,6 +124,7 @@ deletable_insn_p (rtx insn, bool fast, bitmap arg_stores)
   switch (GET_CODE (body))
     {
     case USE:
+    case VAR_LOCATION:
       return false;
 
     case CLOBBER:
@@ -643,6 +644,9 @@ mark_reg_dependencies (rtx insn)
   struct df_link *defs;
   df_ref *use_rec;
 
+  if (DEBUG_INSN_P (insn))
+    return;
+
   for (use_rec = DF_INSN_USES (insn); *use_rec; use_rec++)
     {
       df_ref use = *use_rec;

gcc/ddg.c
@@ -166,6 +166,9 @@ create_ddg_dep_from_intra_loop_link (ddg_ptr g, ddg_node_ptr src_node,
   else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
     t = OUTPUT_DEP;
 
+  gcc_assert (!DEBUG_INSN_P (dest_node->insn) || t == ANTI_DEP);
+  gcc_assert (!DEBUG_INSN_P (src_node->insn) || DEBUG_INSN_P (dest_node->insn));
+
   /* We currently choose not to create certain anti-deps edges and
      compensate for that by generating reg-moves based on the life-range
      analysis.  The anti-deps that will be deleted are the ones which
@@ -209,6 +212,9 @@ create_ddg_dep_no_link (ddg_ptr g, ddg_node_ptr from, ddg_node_ptr to,
   enum reg_note dep_kind;
   struct _dep _dep, *dep = &_dep;
 
+  gcc_assert (!DEBUG_INSN_P (to->insn) || d_t == ANTI_DEP);
+  gcc_assert (!DEBUG_INSN_P (from->insn) || DEBUG_INSN_P (to->insn));
+
   if (d_t == ANTI_DEP)
     dep_kind = REG_DEP_ANTI;
   else if (d_t == OUTPUT_DEP)
@@ -277,10 +283,11 @@ add_cross_iteration_register_deps (ddg_ptr g, df_ref last_def)
          /* Add true deps from last_def to it's uses in the next
             iteration.  Any such upwards exposed use appears before
             the last_def def.  */
-          create_ddg_dep_no_link (g, last_def_node, use_node, TRUE_DEP,
+          create_ddg_dep_no_link (g, last_def_node, use_node,
+                                  DEBUG_INSN_P (use_insn) ? ANTI_DEP : TRUE_DEP,
                                  REG_DEP, 1);
        }
-      else
+      else if (!DEBUG_INSN_P (use_insn))
        {
          /* Add anti deps from last_def's uses in the current iteration
             to the first def in the next iteration.  We do not add ANTI
@@ -417,6 +424,8 @@ build_intra_loop_deps (ddg_ptr g)
       for (j = 0; j <= i; j++)
        {
          ddg_node_ptr j_node = &g->nodes[j];
+          if (DEBUG_INSN_P (j_node->insn))
+            continue;
          if (mem_access_insn_p (j_node->insn))
            /* Don't bother calculating inter-loop dep if an intra-loop dep
               already exists.  */
@@ -458,10 +467,15 @@ create_ddg (basic_block bb, int closing_branch_deps)
       if (! INSN_P (insn) || GET_CODE (PATTERN (insn)) == USE)
        continue;
 
-      if (mem_read_insn_p (insn))
-        g->num_loads++;
-      if (mem_write_insn_p (insn))
-        g->num_stores++;
+      if (DEBUG_INSN_P (insn))
+        g->num_debug++;
+      else
+        {
+          if (mem_read_insn_p (insn))
+            g->num_loads++;
+          if (mem_write_insn_p (insn))
+            g->num_stores++;
+        }
       num_nodes++;
     }
 
@@ -1,5 +1,5 @@
 /* DDG - Data Dependence Graph - interface.
-   Copyright (C) 2004, 2005, 2006, 2007
+   Copyright (C) 2004, 2005, 2006, 2007, 2008
    Free Software Foundation, Inc.
    Contributed by Ayal Zaks and Mustafa Hagog <zaks,mustafa@il.ibm.com>
 
@@ -121,6 +121,9 @@ struct ddg
   int num_loads;
   int num_stores;
 
+  /* Number of debug instructions in the BB.  */
+  int num_debug;
+
   /* This array holds the nodes in the graph; it is indexed by the node
      cuid, which follows the order of the instructions in the BB.  */
   ddg_node_ptr nodes;
@@ -134,8 +137,8 @@ struct ddg
   int closing_branch_deps;
 
   /* Array and number of backarcs (edges with distance > 0) in the DDG.  */
-  ddg_edge_ptr *backarcs;
   int num_backarcs;
+  ddg_edge_ptr *backarcs;
 };
 
 
@@ -858,7 +858,7 @@ df_lr_bb_local_compute (unsigned int bb_index)
     {
       unsigned int uid = INSN_UID (insn);
 
-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
        continue;
 
       for (def_rec = DF_INSN_UID_DEFS (uid); *def_rec; def_rec++)
@@ -3182,6 +3182,8 @@ df_set_note (enum reg_note note_type, rtx insn, rtx old, rtx reg)
   rtx curr = old;
   rtx prev = NULL;
 
+  gcc_assert (!DEBUG_INSN_P (insn));
+
   while (curr)
     if (XEXP (curr, 0) == reg)
       {
@@ -3314,9 +3316,12 @@ df_whole_mw_reg_dead_p (struct df_mw_hardreg *mws,
 static rtx
 df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws,
                          bitmap live, bitmap do_not_gen,
-                          bitmap artificial_uses)
+                          bitmap artificial_uses, bool *added_notes_p)
 {
   unsigned int r;
+  bool is_debug = *added_notes_p;
+
+  *added_notes_p = false;
 
 #ifdef REG_DEAD_DEBUGGING
   if (dump_file)
@@ -3334,6 +3339,11 @@ df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws,
   if (df_whole_mw_reg_dead_p (mws, live, artificial_uses, do_not_gen))
     {
       /* Add a dead note for the entire multi word register.  */
+      if (is_debug)
+        {
+          *added_notes_p = true;
+          return old;
+        }
       old = df_set_note (REG_DEAD, insn, old, mws->mw_reg);
 #ifdef REG_DEAD_DEBUGGING
       df_print_note ("adding 1: ", insn, REG_NOTES (insn));
@@ -3346,6 +3356,11 @@ df_set_dead_notes_for_mw (rtx insn, rtx old, struct df_mw_hardreg *mws,
            && !bitmap_bit_p (artificial_uses, r)
            && !bitmap_bit_p (do_not_gen, r))
          {
+            if (is_debug)
+              {
+                *added_notes_p = true;
+                return old;
+              }
            old = df_set_note (REG_DEAD, insn, old, regno_reg_rtx[r]);
 #ifdef REG_DEAD_DEBUGGING
            df_print_note ("adding 2: ", insn, REG_NOTES (insn));
@@ -3456,10 +3471,13 @@ df_note_bb_compute (unsigned int bb_index,
       struct df_mw_hardreg **mws_rec;
       rtx old_dead_notes;
       rtx old_unused_notes;
+      int debug_insn;
 
       if (!INSN_P (insn))
        continue;
 
+      debug_insn = DEBUG_INSN_P (insn);
+
       bitmap_clear (do_not_gen);
       df_kill_notes (insn, &old_dead_notes, &old_unused_notes);
 
@@ -3544,10 +3562,18 @@ df_note_bb_compute (unsigned int bb_index,
          struct df_mw_hardreg *mws = *mws_rec;
          if ((DF_MWS_REG_DEF_P (mws))
              && !df_ignore_stack_reg (mws->start_regno))
-            old_dead_notes
-              = df_set_dead_notes_for_mw (insn, old_dead_notes,
-                                          mws, live, do_not_gen,
-                                          artificial_uses);
+            {
+              bool really_add_notes = debug_insn != 0;
+
+              old_dead_notes
+                = df_set_dead_notes_for_mw (insn, old_dead_notes,
+                                            mws, live, do_not_gen,
+                                            artificial_uses,
+                                            &really_add_notes);
+
+              if (really_add_notes)
+                debug_insn = -1;
+            }
          mws_rec++;
        }
 
@@ -3557,7 +3583,7 @@ df_note_bb_compute (unsigned int bb_index,
          unsigned int uregno = DF_REF_REGNO (use);
 
 #ifdef REG_DEAD_DEBUGGING
-          if (dump_file)
+          if (dump_file && !debug_insn)
            {
              fprintf (dump_file, "  regular looking at use ");
              df_ref_debug (use, dump_file);
@@ -3565,6 +3591,12 @@ df_note_bb_compute (unsigned int bb_index,
 #endif
          if (!bitmap_bit_p (live, uregno))
            {
+              if (debug_insn)
+                {
+                  debug_insn = -1;
+                  break;
+                }
+
              if ( (!(DF_REF_FLAGS (use) & DF_REF_MW_HARDREG))
                   && (!bitmap_bit_p (do_not_gen, uregno))
                   && (!bitmap_bit_p (artificial_uses, uregno))
@@ -3596,6 +3628,14 @@ df_note_bb_compute (unsigned int bb_index,
          free_EXPR_LIST_node (old_dead_notes);
          old_dead_notes = next;
        }
+
+      if (debug_insn == -1)
+        {
+          /* ??? We could probably do better here, replacing dead
+             registers with their definitions.  */
+          INSN_VAR_LOCATION_LOC (insn) = gen_rtx_UNKNOWN_VAR_LOC ();
+          df_insn_rescan_debug_internal (insn);
+        }
     }
 }
 
@@ -3741,6 +3781,9 @@ df_simulate_uses (rtx insn, bitmap live)
   df_ref *use_rec;
   unsigned int uid = INSN_UID (insn);
 
+  if (DEBUG_INSN_P (insn))
+    return;
+
   for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
     {
       df_ref use = *use_rec;
@@ -3807,7 +3850,7 @@ df_simulate_initialize_backwards (basic_block bb, bitmap live)
 void
 df_simulate_one_insn_backwards (basic_block bb, rtx insn, bitmap live)
 {
-  if (! INSN_P (insn))
+  if (!NONDEBUG_INSN_P (insn))
     return;
 
   df_simulate_defs (insn, live);
@@ -1310,6 +1310,62 @@ df_insn_rescan (rtx insn)
   return true;
 }
 
+/* Same as df_insn_rescan, but don't mark the basic block as
+   dirty.  */
+
+bool
+df_insn_rescan_debug_internal (rtx insn)
+{
+  unsigned int uid = INSN_UID (insn);
+  struct df_insn_info *insn_info;
+
+  gcc_assert (DEBUG_INSN_P (insn));
+  gcc_assert (VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn)));
+
+  if (!df)
+    return false;
+
+  insn_info = DF_INSN_UID_SAFE_GET (INSN_UID (insn));
+  if (!insn_info)
+    return false;
+
+  if (dump_file)
+    fprintf (dump_file, "deleting debug_insn with uid = %d.\n", uid);
+
+  bitmap_clear_bit (df->insns_to_delete, uid);
+  bitmap_clear_bit (df->insns_to_rescan, uid);
+  bitmap_clear_bit (df->insns_to_notes_rescan, uid);
+
+  if (!insn_info->defs)
+    return false;
+
+  if (insn_info->defs == df_null_ref_rec
+      && insn_info->uses == df_null_ref_rec
+      && insn_info->eq_uses == df_null_ref_rec
+      && insn_info->mw_hardregs == df_null_mw_rec)
+    return false;
+
+  df_mw_hardreg_chain_delete (insn_info->mw_hardregs);
+
+  if (df_chain)
+    {
+      df_ref_chain_delete_du_chain (insn_info->defs);
+      df_ref_chain_delete_du_chain (insn_info->uses);
+      df_ref_chain_delete_du_chain (insn_info->eq_uses);
+    }
+
+  df_ref_chain_delete (insn_info->defs);
+  df_ref_chain_delete (insn_info->uses);
+  df_ref_chain_delete (insn_info->eq_uses);
+
+  insn_info->defs = df_null_ref_rec;
+  insn_info->uses = df_null_ref_rec;
+  insn_info->eq_uses = df_null_ref_rec;
+  insn_info->mw_hardregs = df_null_mw_rec;
+
+  return true;
+}
+
 
 /* Rescan all of the insns in the function.  Note that the artificial
    uses and defs are not touched.  This function will destroy def-use
@@ -3267,12 +3323,20 @@ df_uses_record (enum df_ref_class cl, struct df_collection_rec *collection_rec,
          break;
      }
 
+    case VAR_LOCATION:
+      df_uses_record (cl, collection_rec,
+                      &PAT_VAR_LOCATION_LOC (x),
+                      DF_REF_REG_USE, bb, insn_info,
+                      flags, width, offset, mode);
+      return;
+
    case PRE_DEC:
    case POST_DEC:
    case PRE_INC:
    case POST_INC:
    case PRE_MODIFY:
    case POST_MODIFY:
+      gcc_assert (!DEBUG_INSN_P (insn_info->insn));
      /* Catch the def of the register being modified.  */
      df_ref_record (cl, collection_rec, XEXP (x, 0), &XEXP (x, 0),
                     bb, insn_info,

gcc/df.h
@@ -1002,6 +1002,7 @@ extern struct df_insn_info * df_insn_create_insn_record (rtx);
 extern void df_insn_delete (basic_block, unsigned int);
 extern void df_bb_refs_record (int, bool);
 extern bool df_insn_rescan (rtx);
+extern bool df_insn_rescan_debug_internal (rtx);
 extern void df_insn_rescan_all (void);
 extern void df_process_deferred_rescans (void);
 extern void df_recompute_luids (basic_block);
@@ -322,6 +322,9 @@ diagnostic_report_diagnostic (diagnostic_context *context,
       && !diagnostic_report_warnings_p (location))
     return false;
 
+  if (diagnostic->kind == DK_NOTE && flag_compare_debug)
+    return false;
+
   if (diagnostic->kind == DK_PEDWARN)
     diagnostic->kind = pedantic_warning_kind ();
 
@ -691,12 +691,21 @@ Return true if the code of g is @code{GIMPLE_ASSIGN}.
|
|||
@end deftypefn
|
||||
|
||||
@deftypefn {GIMPLE function} is_gimple_call (gimple g)
|
||||
Return true if the code of g is @code{GIMPLE_CALL}
|
||||
Return true if the code of g is @code{GIMPLE_CALL}.
|
||||
@end deftypefn
|
||||
|
||||
@deftypefn {GIMPLE function} is_gimple_debug (gimple g)
|
||||
Return true if the code of g is @code{GIMPLE_DEBUG}.
|
||||
@end deftypefn
|
||||
|
||||
@deftypefn {GIMPLE function} gimple_assign_cast_p (gimple g)
|
||||
Return true if g is a @code{GIMPLE_ASSIGN} that performs a type cast
|
||||
operation
|
||||
operation.
|
||||
@end deftypefn
|
||||
|
||||
@deftypefn {GIMPLE function} gimple_debug_bind_p (gimple g)
|
||||
Return true if g is a @code{GIMPLE_DEBUG} that binds the value of an
|
||||
expression to a variable.
|
||||
@end deftypefn
|
||||
|
||||
@node Manipulating GIMPLE statements
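A minimal illustration of how a pass might use these predicates to skip debug statements follows. This is a self-contained toy model, not the real gimple.h API: the `toy_` names, the struct, and the statement codes are stand-ins invented for the sketch; only the usage pattern mirrors the patch.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-ins for GIMPLE statement codes and the predicates above.
   Real passes call the gimple.h versions; only the usage pattern is
   shown here.  */

enum toy_code { TOY_ASSIGN, TOY_CALL, TOY_DEBUG };

struct toy_stmt
{
  enum toy_code code;
  bool debug_bind; /* set for debug stmts that bind a value to a var */
};

static bool
toy_is_gimple_debug (const struct toy_stmt *g)
{
  return g->code == TOY_DEBUG;
}

static bool
toy_gimple_debug_bind_p (const struct toy_stmt *g)
{
  return g->code == TOY_DEBUG && g->debug_bind;
}

/* Optimization passes must not let debug stmts influence codegen
   decisions, so any counting of "real" work skips them.  */
static int
toy_count_nondebug (const struct toy_stmt *stmts, int n)
{
  int count = 0;
  for (int i = 0; i < n; i++)
    if (!toy_is_gimple_debug (&stmts[i]))
      count++;
  return count;
}
```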
@ -2096,8 +2096,53 @@ Removes any @option{-O}-started option from @code{BOOT_CFLAGS}, and adds
Analogous to @code{bootstrap-O1}.

@item @samp{bootstrap-debug}
Verifies that the compiler generates the same executable code, whether
or not it is asked to emit debug information.  To this end, this option
builds stage2 host programs without debug information, and uses
@file{contrib/compare-debug} to compare them with the stripped stage3
object files.  If @code{BOOT_CFLAGS} is overridden so as to not enable
debug information, stage2 will have it, and stage3 won't.  This option
is enabled by default when GCC bootstrapping is enabled: in addition to
better test coverage, it makes default bootstraps faster and leaner.

@item @samp{bootstrap-debug-big}
In addition to the checking performed by @code{bootstrap-debug}, this
option saves internal compiler dumps during stage2 and stage3 and
compares them as well, which helps catch additional potential problems,
but at a great cost in terms of disk space.

@item @samp{bootstrap-debug-lean}
This option saves disk space compared with @code{bootstrap-debug-big},
but at the expense of some recompilation.  Instead of saving the dumps
of stage2 and stage3 until the final compare, it uses
@option{-fcompare-debug} to generate, compare and remove the dumps
during stage3, repeating the compilation that already took place in
stage2, whose dumps were not saved.

@item @samp{bootstrap-debug-lib}
This option tests executable code invariance over debug information
generation on target libraries, just like @code{bootstrap-debug-lean}
tests it on host programs.  It builds stage3 libraries with
@option{-fcompare-debug}, and it can be used along with any of the
@code{bootstrap-debug} options above.

There aren't @code{-lean} or @code{-big} counterparts to this option
because most libraries are only built in stage3, so bootstrap compares
would not get significant coverage.  Moreover, the few libraries built
in stage2 are used in stage3 host programs, so we wouldn't want to
compile stage2 libraries with different options for comparison purposes.

@item @samp{bootstrap-debug-ckovw}
Arranges for error messages to be issued if the compiler built on any
stage is run without the option @option{-fcompare-debug}.  This is
useful to verify the full @option{-fcompare-debug} testing coverage.  It
must be used along with @code{bootstrap-debug-lean} and
@code{bootstrap-debug-lib}.

@item @samp{bootstrap-time}
Arranges for the run time of each program started by the GCC driver,
built in any stage, to be logged to @file{time.log}, in the top level of
the build tree.

@end table
@ -311,6 +311,7 @@ Objective-C and Objective-C++ Dialects}.
-frandom-seed=@var{string} -fsched-verbose=@var{n} @gol
-fsel-sched-verbose -fsel-sched-dump-cfg -fsel-sched-pipelining-verbose @gol
-ftest-coverage -ftime-report -fvar-tracking @gol
-fvar-tracking-assignments -fvar-tracking-assignments-toggle @gol
-g -g@var{level} -gtoggle -gcoff -gdwarf-@var{version} @gol
-ggdb -gstabs -gstabs+ -gvms -gxcoff -gxcoff+ @gol
-fno-merge-debug-strings -fno-dwarf2-cfi-asm @gol
@ -4397,11 +4398,14 @@ assembler (GAS) to fail with an error.
@opindex gdwarf-@var{version}
Produce debugging information in DWARF format (if that is
supported).  This is the format used by DBX on IRIX 6.  The value
of @var{version} may be either 2, 3 or 4; the default version is 2.

Note that with DWARF version 2 some ports require, and will always
use, some non-conflicting DWARF 3 extensions in the unwind tables.

Version 4 may require GDB 7.0 and @option{-fvar-tracking-assignments}
for maximum benefit.

@item -gvms
@opindex gvms
Produce debugging information in VMS debug format (if that is
@ -4445,9 +4449,12 @@ other options are processed, and it does so only once, no matter how
many times it is given.  This is mainly intended to be used with
@option{-fcompare-debug}.

@item -fdump-final-insns@r{[}=@var{file}@r{]}
@opindex fdump-final-insns
Dump the final internal representation (RTL) to @var{file}.  If the
optional argument is omitted (or if @var{file} is @code{.}), the name
of the dump file will be determined by appending @code{.gkd} to the
compilation output file name.

@item -fcompare-debug@r{[}=@var{opts}@r{]}
@opindex fcompare-debug
@ -5446,6 +5453,23 @@ It is enabled by default when compiling with optimization (@option{-Os},
@option{-O}, @option{-O2}, @dots{}), debugging information (@option{-g}) and
the debug info format supports it.

@item -fvar-tracking-assignments
@opindex fvar-tracking-assignments
@opindex fno-var-tracking-assignments
Annotate assignments to user variables early in the compilation and
attempt to carry the annotations over throughout the compilation all the
way to the end, in an attempt to improve debug information while
optimizing.  Use of @option{-gdwarf-4} is recommended along with it.

It can be enabled even if var-tracking is disabled, in which case
annotations will be created and maintained, but discarded at the end.

@item -fvar-tracking-assignments-toggle
@opindex fvar-tracking-assignments-toggle
@opindex fno-var-tracking-assignments-toggle
Toggle @option{-fvar-tracking-assignments}, in the same way that
@option{-gtoggle} toggles @option{-g}.

@item -print-file-name=@var{library}
@opindex print-file-name
Print the full absolute name of the library file @var{library} that
@ -8094,6 +8118,12 @@ with more basic blocks than this parameter won't have loop invariant
motion optimization performed on them.  The default value of the
parameter is 1000 for -O1 and 10000 for -O2 and above.

@item min-nondebug-insn-uid
Use uids starting at this parameter for nondebug insns.  The range below
the parameter is reserved exclusively for debug insns created by
@option{-fvar-tracking-assignments}, but debug insns may get
(non-overlapping) uids above it if the reserved range is exhausted.

@end table
@end table
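The uid partitioning this parameter describes can be sketched as a standalone model. This is plain C invented for illustration, not GCC code: `min_nondebug_insn_uid` and the `model_` helpers are stand-ins, though the overflow rule mirrors the `make_debug_insn_raw` logic shown later in this patch.

```c
#include <assert.h>

/* Standalone model of the uid scheme behind --param min-nondebug-insn-uid
   (illustrative only; not GCC's implementation).  Debug insns draw uids
   from [1, min_nondebug_insn_uid); nondebug insns start at
   min_nondebug_insn_uid; once the reserved range is exhausted, further
   debug insns fall through to the shared nondebug counter, so the two
   ranges never overlap.  */

static int min_nondebug_insn_uid = 8;
static int cur_insn_uid;       /* next uid for nondebug insns */
static int cur_debug_insn_uid; /* next uid for debug insns */

static void
model_init_emit (void)
{
  cur_insn_uid = min_nondebug_insn_uid;
  cur_debug_insn_uid = 1;
}

static int
model_nondebug_uid (void)
{
  return cur_insn_uid++;
}

static int
model_debug_uid (void)
{
  int uid = cur_debug_insn_uid++;
  if (cur_debug_insn_uid > min_nondebug_insn_uid)
    uid = cur_insn_uid++;  /* reserved range exhausted */
  return uid;
}
```

With `min_nondebug_insn_uid` of 8, debug insns get uids 1 through 7; the first nondebug insn gets 8, and once the low range runs out, debug insns continue from the shared counter.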
@ -2387,6 +2387,11 @@ scan_insn (bb_info_t bb_info, rtx insn)
  insn_info->insn = insn;
  bb_info->last_insn = insn_info;

  if (DEBUG_INSN_P (insn))
    {
      insn_info->cannot_delete = true;
      return;
    }

  /* Cselib clears the table for this case, so we have to essentially
     do the same.  */
gcc/dwarf2out.c
(File diff suppressed because it is too large)

gcc/emit-rtl.c
|
@ -58,6 +58,7 @@ along with GCC; see the file COPYING3. If not see
|
|||
#include "langhooks.h"
|
||||
#include "tree-pass.h"
|
||||
#include "df.h"
|
||||
#include "params.h"
|
||||
|
||||
/* Commonly used modes. */
|
||||
|
||||
|
@ -175,6 +176,7 @@ static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def)))
|
|||
#define first_insn (crtl->emit.x_first_insn)
|
||||
#define last_insn (crtl->emit.x_last_insn)
|
||||
#define cur_insn_uid (crtl->emit.x_cur_insn_uid)
|
||||
#define cur_debug_insn_uid (crtl->emit.x_cur_debug_insn_uid)
|
||||
#define last_location (crtl->emit.x_last_location)
|
||||
#define first_label_num (crtl->emit.x_first_label_num)
|
||||
|
||||
|
@ -2268,8 +2270,31 @@ set_new_first_and_last_insn (rtx first, rtx last)
  last_insn = last;
  cur_insn_uid = 0;

  if (MIN_NONDEBUG_INSN_UID || MAY_HAVE_DEBUG_INSNS)
    {
      int debug_count = 0;

      cur_insn_uid = MIN_NONDEBUG_INSN_UID - 1;
      cur_debug_insn_uid = 0;

      for (insn = first; insn; insn = NEXT_INSN (insn))
	if (INSN_UID (insn) < MIN_NONDEBUG_INSN_UID)
	  cur_debug_insn_uid = MAX (cur_debug_insn_uid, INSN_UID (insn));
	else
	  {
	    cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn));
	    if (DEBUG_INSN_P (insn))
	      debug_count++;
	  }

      if (debug_count)
	cur_debug_insn_uid = MIN_NONDEBUG_INSN_UID + debug_count;
      else
	cur_debug_insn_uid++;
    }
  else
    for (insn = first; insn; insn = NEXT_INSN (insn))
      cur_insn_uid = MAX (cur_insn_uid, INSN_UID (insn));

  cur_insn_uid++;
}
|
@ -2592,6 +2617,7 @@ repeat:
|
|||
return;
|
||||
break;
|
||||
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -2698,6 +2724,7 @@ repeat:
|
|||
case CC0:
|
||||
return;
|
||||
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -2768,6 +2795,7 @@ set_used_flags (rtx x)
|
|||
case CC0:
|
||||
return;
|
||||
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -2947,6 +2975,27 @@ get_max_uid (void)
|
|||
{
|
||||
return cur_insn_uid;
|
||||
}
|
||||
|
||||
/* Return the number of actual (non-debug) insns emitted in this
|
||||
function. */
|
||||
|
||||
int
|
||||
get_max_insn_count (void)
|
||||
{
|
||||
int n = cur_insn_uid;
|
||||
|
||||
  /* The table size must be stable across -g, to avoid codegen
     differences due to debug insns, and not be affected by
     --param min-nondebug-insn-uid, to avoid excessive table size
     and to simplify debugging of -fcompare-debug failures.  */
|
||||
if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID)
|
||||
n -= cur_debug_insn_uid;
|
||||
else
|
||||
n -= MIN_NONDEBUG_INSN_UID;
|
||||
|
||||
return n;
|
||||
}
|
||||
|
||||
|
||||
/* Return the next insn. If it is a SEQUENCE, return the first insn
|
||||
of the sequence. */
|
||||
|
@ -3033,6 +3082,38 @@ prev_nonnote_insn (rtx insn)
|
|||
return insn;
|
||||
}
|
||||
|
||||
/* Return the next insn after INSN that is not a DEBUG_INSN. This
|
||||
routine does not look inside SEQUENCEs. */
|
||||
|
||||
rtx
|
||||
next_nondebug_insn (rtx insn)
|
||||
{
|
||||
while (insn)
|
||||
{
|
||||
insn = NEXT_INSN (insn);
|
||||
if (insn == 0 || !DEBUG_INSN_P (insn))
|
||||
break;
|
||||
}
|
||||
|
||||
return insn;
|
||||
}
|
||||
|
||||
/* Return the previous insn before INSN that is not a DEBUG_INSN.
|
||||
This routine does not look inside SEQUENCEs. */
|
||||
|
||||
rtx
|
||||
prev_nondebug_insn (rtx insn)
|
||||
{
|
||||
while (insn)
|
||||
{
|
||||
insn = PREV_INSN (insn);
|
||||
if (insn == 0 || !DEBUG_INSN_P (insn))
|
||||
break;
|
||||
}
|
||||
|
||||
return insn;
|
||||
}
|
||||
|
||||
/* Return the next INSN, CALL_INSN or JUMP_INSN after INSN;
|
||||
or 0, if there is none. This routine does not look inside
|
||||
SEQUENCEs. */
|
||||
|
@ -3504,6 +3585,27 @@ make_insn_raw (rtx pattern)
|
|||
return insn;
|
||||
}
|
||||
|
||||
/* Like `make_insn_raw' but make a DEBUG_INSN instead of an insn. */
|
||||
|
||||
rtx
|
||||
make_debug_insn_raw (rtx pattern)
|
||||
{
|
||||
rtx insn;
|
||||
|
||||
insn = rtx_alloc (DEBUG_INSN);
|
||||
INSN_UID (insn) = cur_debug_insn_uid++;
|
||||
if (cur_debug_insn_uid > MIN_NONDEBUG_INSN_UID)
|
||||
INSN_UID (insn) = cur_insn_uid++;
|
||||
|
||||
PATTERN (insn) = pattern;
|
||||
INSN_CODE (insn) = -1;
|
||||
REG_NOTES (insn) = NULL;
|
||||
INSN_LOCATOR (insn) = curr_insn_locator ();
|
||||
BLOCK_FOR_INSN (insn) = NULL;
|
||||
|
||||
return insn;
|
||||
}
|
||||
|
||||
/* Like `make_insn_raw' but make a JUMP_INSN instead of an insn. */
|
||||
|
||||
rtx
|
||||
|
@ -3917,6 +4019,7 @@ emit_insn_before_noloc (rtx x, rtx before, basic_block bb)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -3960,6 +4063,7 @@ emit_jump_insn_before_noloc (rtx x, rtx before)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4003,6 +4107,7 @@ emit_call_insn_before_noloc (rtx x, rtx before)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4034,6 +4139,50 @@ emit_call_insn_before_noloc (rtx x, rtx before)
|
|||
return last;
|
||||
}
|
||||
|
||||
/* Make an instruction with body X and code DEBUG_INSN
|
||||
and output it before the instruction BEFORE. */
|
||||
|
||||
rtx
|
||||
emit_debug_insn_before_noloc (rtx x, rtx before)
|
||||
{
|
||||
rtx last = NULL_RTX, insn;
|
||||
|
||||
gcc_assert (before);
|
||||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
case CODE_LABEL:
|
||||
case BARRIER:
|
||||
case NOTE:
|
||||
insn = x;
|
||||
while (insn)
|
||||
{
|
||||
rtx next = NEXT_INSN (insn);
|
||||
add_insn_before (insn, before, NULL);
|
||||
last = insn;
|
||||
insn = next;
|
||||
}
|
||||
break;
|
||||
|
||||
#ifdef ENABLE_RTL_CHECKING
|
||||
case SEQUENCE:
|
||||
gcc_unreachable ();
|
||||
break;
|
||||
#endif
|
||||
|
||||
default:
|
||||
last = make_debug_insn_raw (x);
|
||||
add_insn_before (last, before, NULL);
|
||||
break;
|
||||
}
|
||||
|
||||
return last;
|
||||
}
|
||||
|
||||
/* Make an insn of code BARRIER
|
||||
and output it before the insn BEFORE. */
|
||||
|
||||
|
@ -4140,6 +4289,7 @@ emit_insn_after_noloc (rtx x, rtx after, basic_block bb)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4177,6 +4327,7 @@ emit_jump_insn_after_noloc (rtx x, rtx after)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4213,6 +4364,7 @@ emit_call_insn_after_noloc (rtx x, rtx after)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4237,6 +4389,43 @@ emit_call_insn_after_noloc (rtx x, rtx after)
|
|||
return last;
|
||||
}
|
||||
|
||||
/* Make an instruction with body X and code DEBUG_INSN
   and output it after the instruction AFTER.  */
|
||||
|
||||
rtx
|
||||
emit_debug_insn_after_noloc (rtx x, rtx after)
|
||||
{
|
||||
rtx last;
|
||||
|
||||
gcc_assert (after);
|
||||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
case CODE_LABEL:
|
||||
case BARRIER:
|
||||
case NOTE:
|
||||
last = emit_insn_after_1 (x, after, NULL);
|
||||
break;
|
||||
|
||||
#ifdef ENABLE_RTL_CHECKING
|
||||
case SEQUENCE:
|
||||
gcc_unreachable ();
|
||||
break;
|
||||
#endif
|
||||
|
||||
default:
|
||||
last = make_debug_insn_raw (x);
|
||||
add_insn_after (last, after, NULL);
|
||||
break;
|
||||
}
|
||||
|
||||
return last;
|
||||
}
|
||||
|
||||
/* Make an insn of code BARRIER
|
||||
and output it after the insn AFTER. */
|
||||
|
||||
|
@ -4307,8 +4496,13 @@ emit_insn_after_setloc (rtx pattern, rtx after, int loc)
|
|||
rtx
|
||||
emit_insn_after (rtx pattern, rtx after)
|
||||
{
|
||||
if (INSN_P (after))
|
||||
return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
|
||||
rtx prev = after;
|
||||
|
||||
while (DEBUG_INSN_P (prev))
|
||||
prev = PREV_INSN (prev);
|
||||
|
||||
if (INSN_P (prev))
|
||||
return emit_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
|
||||
else
|
||||
return emit_insn_after_noloc (pattern, after, NULL);
|
||||
}
|
||||
|
@ -4338,8 +4532,13 @@ emit_jump_insn_after_setloc (rtx pattern, rtx after, int loc)
|
|||
rtx
|
||||
emit_jump_insn_after (rtx pattern, rtx after)
|
||||
{
|
||||
if (INSN_P (after))
|
||||
return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
|
||||
rtx prev = after;
|
||||
|
||||
while (DEBUG_INSN_P (prev))
|
||||
prev = PREV_INSN (prev);
|
||||
|
||||
if (INSN_P (prev))
|
||||
return emit_jump_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
|
||||
else
|
||||
return emit_jump_insn_after_noloc (pattern, after);
|
||||
}
|
||||
|
@ -4369,12 +4568,48 @@ emit_call_insn_after_setloc (rtx pattern, rtx after, int loc)
|
|||
rtx
|
||||
emit_call_insn_after (rtx pattern, rtx after)
|
||||
{
|
||||
if (INSN_P (after))
|
||||
return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
|
||||
rtx prev = after;
|
||||
|
||||
while (DEBUG_INSN_P (prev))
|
||||
prev = PREV_INSN (prev);
|
||||
|
||||
if (INSN_P (prev))
|
||||
return emit_call_insn_after_setloc (pattern, after, INSN_LOCATOR (prev));
|
||||
else
|
||||
return emit_call_insn_after_noloc (pattern, after);
|
||||
}
|
||||
|
||||
/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to SCOPE. */
|
||||
rtx
|
||||
emit_debug_insn_after_setloc (rtx pattern, rtx after, int loc)
|
||||
{
|
||||
rtx last = emit_debug_insn_after_noloc (pattern, after);
|
||||
|
||||
if (pattern == NULL_RTX || !loc)
|
||||
return last;
|
||||
|
||||
after = NEXT_INSN (after);
|
||||
while (1)
|
||||
{
|
||||
if (active_insn_p (after) && !INSN_LOCATOR (after))
|
||||
INSN_LOCATOR (after) = loc;
|
||||
if (after == last)
|
||||
break;
|
||||
after = NEXT_INSN (after);
|
||||
}
|
||||
return last;
|
||||
}
|
||||
|
||||
/* Like emit_debug_insn_after_noloc, but set INSN_LOCATOR according to AFTER. */
|
||||
rtx
|
||||
emit_debug_insn_after (rtx pattern, rtx after)
|
||||
{
|
||||
if (INSN_P (after))
|
||||
return emit_debug_insn_after_setloc (pattern, after, INSN_LOCATOR (after));
|
||||
else
|
||||
return emit_debug_insn_after_noloc (pattern, after);
|
||||
}
|
||||
|
||||
/* Like emit_insn_before_noloc, but set INSN_LOCATOR according to SCOPE. */
|
||||
rtx
|
||||
emit_insn_before_setloc (rtx pattern, rtx before, int loc)
|
||||
|
@ -4404,8 +4639,13 @@ emit_insn_before_setloc (rtx pattern, rtx before, int loc)
|
|||
rtx
|
||||
emit_insn_before (rtx pattern, rtx before)
|
||||
{
|
||||
if (INSN_P (before))
|
||||
return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
|
||||
rtx next = before;
|
||||
|
||||
while (DEBUG_INSN_P (next))
|
||||
next = PREV_INSN (next);
|
||||
|
||||
if (INSN_P (next))
|
||||
return emit_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
|
||||
else
|
||||
return emit_insn_before_noloc (pattern, before, NULL);
|
||||
}
|
||||
|
@ -4436,8 +4676,13 @@ emit_jump_insn_before_setloc (rtx pattern, rtx before, int loc)
|
|||
rtx
|
||||
emit_jump_insn_before (rtx pattern, rtx before)
|
||||
{
|
||||
if (INSN_P (before))
|
||||
return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
|
||||
rtx next = before;
|
||||
|
||||
while (DEBUG_INSN_P (next))
|
||||
next = PREV_INSN (next);
|
||||
|
||||
if (INSN_P (next))
|
||||
return emit_jump_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
|
||||
else
|
||||
return emit_jump_insn_before_noloc (pattern, before);
|
||||
}
|
||||
|
@ -4469,11 +4714,49 @@ emit_call_insn_before_setloc (rtx pattern, rtx before, int loc)
|
|||
rtx
|
||||
emit_call_insn_before (rtx pattern, rtx before)
|
||||
{
|
||||
if (INSN_P (before))
|
||||
return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
|
||||
rtx next = before;
|
||||
|
||||
while (DEBUG_INSN_P (next))
|
||||
next = PREV_INSN (next);
|
||||
|
||||
if (INSN_P (next))
|
||||
return emit_call_insn_before_setloc (pattern, before, INSN_LOCATOR (next));
|
||||
else
|
||||
return emit_call_insn_before_noloc (pattern, before);
|
||||
}
|
||||
|
||||
/* Like emit_debug_insn_before_noloc, but set INSN_LOCATOR according to SCOPE.  */
|
||||
rtx
|
||||
emit_debug_insn_before_setloc (rtx pattern, rtx before, int loc)
|
||||
{
|
||||
rtx first = PREV_INSN (before);
|
||||
rtx last = emit_debug_insn_before_noloc (pattern, before);
|
||||
|
||||
if (pattern == NULL_RTX)
|
||||
return last;
|
||||
|
||||
first = NEXT_INSN (first);
|
||||
while (1)
|
||||
{
|
||||
if (active_insn_p (first) && !INSN_LOCATOR (first))
|
||||
INSN_LOCATOR (first) = loc;
|
||||
if (first == last)
|
||||
break;
|
||||
first = NEXT_INSN (first);
|
||||
}
|
||||
return last;
|
||||
}
|
||||
|
||||
/* Like emit_debug_insn_before_noloc,
   but set INSN_LOCATOR according to BEFORE.  */
|
||||
rtx
|
||||
emit_debug_insn_before (rtx pattern, rtx before)
|
||||
{
|
||||
if (INSN_P (before))
|
||||
return emit_debug_insn_before_setloc (pattern, before, INSN_LOCATOR (before));
|
||||
else
|
||||
return emit_debug_insn_before_noloc (pattern, before);
|
||||
}
|
||||
|
||||
/* Take X and emit it at the end of the doubly-linked
|
||||
INSN list.
|
||||
|
@ -4491,6 +4774,7 @@ emit_insn (rtx x)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4522,6 +4806,52 @@ emit_insn (rtx x)
|
|||
return last;
|
||||
}
|
||||
|
||||
/* Make an insn of code DEBUG_INSN with pattern X
|
||||
and add it to the end of the doubly-linked list. */
|
||||
|
||||
rtx
|
||||
emit_debug_insn (rtx x)
|
||||
{
|
||||
rtx last = last_insn;
|
||||
rtx insn;
|
||||
|
||||
if (x == NULL_RTX)
|
||||
return last;
|
||||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
case CODE_LABEL:
|
||||
case BARRIER:
|
||||
case NOTE:
|
||||
insn = x;
|
||||
while (insn)
|
||||
{
|
||||
rtx next = NEXT_INSN (insn);
|
||||
add_insn (insn);
|
||||
last = insn;
|
||||
insn = next;
|
||||
}
|
||||
break;
|
||||
|
||||
#ifdef ENABLE_RTL_CHECKING
|
||||
case SEQUENCE:
|
||||
gcc_unreachable ();
|
||||
break;
|
||||
#endif
|
||||
|
||||
default:
|
||||
last = make_debug_insn_raw (x);
|
||||
add_insn (last);
|
||||
break;
|
||||
}
|
||||
|
||||
return last;
|
||||
}
|
||||
|
||||
/* Make an insn of code JUMP_INSN with pattern X
|
||||
and add it to the end of the doubly-linked list. */
|
||||
|
||||
|
@ -4532,6 +4862,7 @@ emit_jump_insn (rtx x)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4573,6 +4904,7 @@ emit_call_insn (rtx x)
|
|||
|
||||
switch (GET_CODE (x))
|
||||
{
|
||||
case DEBUG_INSN:
|
||||
case INSN:
|
||||
case JUMP_INSN:
|
||||
case CALL_INSN:
|
||||
|
@ -4844,6 +5176,8 @@ emit (rtx x)
|
|||
}
|
||||
case CALL_INSN:
|
||||
return emit_call_insn (x);
|
||||
case DEBUG_INSN:
|
||||
return emit_debug_insn (x);
|
||||
default:
|
||||
gcc_unreachable ();
|
||||
}
|
||||
|
@ -5168,7 +5502,11 @@ init_emit (void)
|
|||
{
|
||||
first_insn = NULL;
|
||||
last_insn = NULL;
|
||||
if (MIN_NONDEBUG_INSN_UID)
|
||||
cur_insn_uid = MIN_NONDEBUG_INSN_UID;
|
||||
else
|
||||
cur_insn_uid = 1;
|
||||
cur_debug_insn_uid = 1;
|
||||
reg_rtx_no = LAST_VIRTUAL_REGISTER + 1;
|
||||
last_location = UNKNOWN_LOCATION;
|
||||
first_label_num = label_num;
|
||||
|
@ -5625,6 +5963,10 @@ emit_copy_of_insn_after (rtx insn, rtx after)
|
|||
new_rtx = emit_jump_insn_after (copy_insn (PATTERN (insn)), after);
|
||||
break;
|
||||
|
||||
case DEBUG_INSN:
|
||||
new_rtx = emit_debug_insn_after (copy_insn (PATTERN (insn)), after);
|
||||
break;
|
||||
|
||||
case CALL_INSN:
|
||||
new_rtx = emit_call_insn_after (copy_insn (PATTERN (insn)), after);
|
||||
if (CALL_INSN_FUNCTION_USAGE (insn))
|
||||
|
|
|
@ -391,6 +391,7 @@ get_attr_length_1 (rtx insn ATTRIBUTE_UNUSED,
|
|||
case NOTE:
|
||||
case BARRIER:
|
||||
case CODE_LABEL:
|
||||
case DEBUG_INSN:
|
||||
return 0;
|
||||
|
||||
case CALL_INSN:
|
||||
|
@ -4381,7 +4382,8 @@ rest_of_clean_state (void)
|
|||
&& (!NOTE_P (insn) ||
|
||||
(NOTE_KIND (insn) != NOTE_INSN_VAR_LOCATION
|
||||
&& NOTE_KIND (insn) != NOTE_INSN_BLOCK_BEG
|
||||
	       && NOTE_KIND (insn) != NOTE_INSN_BLOCK_END
	       && NOTE_KIND (insn) != NOTE_INSN_CFA_RESTORE_STATE)))
|
||||
print_rtl_single (final_output, insn);
|
||||
|
||||
}
|
||||
|
|
|
@ -1775,8 +1775,11 @@ instantiate_virtual_regs (void)
|
|||
|| GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC
|
||||
|| GET_CODE (PATTERN (insn)) == ASM_INPUT)
|
||||
continue;
|
||||
|
||||
else if (DEBUG_INSN_P (insn))
|
||||
for_each_rtx (&INSN_VAR_LOCATION (insn),
|
||||
instantiate_virtual_regs_in_rtx, NULL);
|
||||
else
|
||||
instantiate_virtual_regs_in_insn (insn);
|
||||
|
||||
if (INSN_DELETED_P (insn))
|
||||
continue;
|
||||
|
|
|
@ -64,6 +64,10 @@ struct GTY(()) emit_status {
|
|||
Reset to 1 for each function compiled. */
|
||||
int x_cur_insn_uid;
|
||||
|
||||
/* INSN_UID for next debug insn emitted. Only used if
|
||||
--param min-nondebug-insn-uid=<value> is given with nonzero value. */
|
||||
int x_cur_debug_insn_uid;
|
||||
|
||||
/* Location the last line-number NOTE emitted.
|
||||
This is used to avoid generating duplicates. */
|
||||
location_t x_last_location;
|
||||
|
|
|
@ -1208,7 +1208,7 @@ forward_propagate_and_simplify (df_ref use, rtx def_insn, rtx def_set)
|
|||
if (INSN_CODE (use_insn) < 0)
|
||||
asm_use = asm_noperands (PATTERN (use_insn));
|
||||
|
||||
  if (!use_set && asm_use < 0 && !DEBUG_INSN_P (use_insn))
|
||||
return false;
|
||||
|
||||
/* Do not propagate into PC, CC0, etc. */
|
||||
|
@ -1265,6 +1265,11 @@ forward_propagate_and_simplify (df_ref use, rtx def_insn, rtx def_set)
|
|||
loc = &SET_DEST (use_set);
|
||||
set_reg_equal = false;
|
||||
}
|
||||
else if (!use_set)
|
||||
{
|
||||
loc = &INSN_VAR_LOCATION_LOC (use_insn);
|
||||
set_reg_equal = false;
|
||||
}
|
||||
else
|
||||
{
|
||||
rtx note = find_reg_note (use_insn, REG_EQUAL, NULL_RTX);
|
||||
|
|
78
gcc/gcc.c
78
gcc/gcc.c
|
@ -891,10 +891,10 @@ static const char *asm_options =

static const char *invoke_as =
#ifdef AS_NEEDS_DASH_FOR_PIPED_INPUT
"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\
%{!S:-o %|.s |\n as %(asm_options) %|.s %A }";
#else
"%{fcompare-debug=*|fdump-final-insns=*:%:compare-debug-dump-opt()}\
%{!S:-o %|.s |\n as %(asm_options) %m.s %A }";
#endif
|
||||
|
||||
|
@ -926,6 +926,7 @@ static const char *const multilib_defaults_raw[] = MULTILIB_DEFAULTS;
|
|||
#endif
|
||||
|
||||
static const char *const driver_self_specs[] = {
|
||||
"%{fdump-final-insns:-fdump-final-insns=.} %<fdump-final-insns",
|
||||
DRIVER_SELF_SPECS, GOMP_SELF_SPECS
|
||||
};
|
||||
|
||||
|
@ -8672,6 +8673,33 @@ print_asm_header_spec_function (int arg ATTRIBUTE_UNUSED,
|
|||
return NULL;
|
||||
}
|
||||
|
||||
/* Compute a timestamp to initialize flag_random_seed. */
|
||||
|
||||
static unsigned
|
||||
get_local_tick (void)
|
||||
{
|
||||
unsigned ret = 0;
|
||||
|
||||
/* Get some more or less random data. */
|
||||
#ifdef HAVE_GETTIMEOFDAY
|
||||
{
|
||||
struct timeval tv;
|
||||
|
||||
gettimeofday (&tv, NULL);
|
||||
ret = tv.tv_sec * 1000 + tv.tv_usec / 1000;
|
||||
}
|
||||
#else
|
||||
{
|
||||
time_t now = time (NULL);
|
||||
|
||||
if (now != (time_t)-1)
|
||||
ret = (unsigned) now;
|
||||
}
|
||||
#endif
|
||||
|
||||
return ret;
|
||||
}
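The seed construction above (a millisecond tick XORed with the pid, printed as hex) can be modeled in isolation. This is a sketch under assumptions: `format_random_seed` is an invented name, and a 64-bit `unsigned long long` stands in for `HOST_WIDE_INT` with `"0x%llx"` standing in for `HOST_WIDE_INT_PRINT_HEX`.

```c
#include <stdio.h>

/* Illustrative model of how the driver derives its -frandom-seed
   value: a timestamp XORed with the process id, formatted as a
   0x-prefixed hex string.  Names and types are stand-ins, not the
   gcc.c implementation.  */
static void
format_random_seed (char *buf, unsigned long long tick,
		    unsigned long long pid)
{
  unsigned long long value = tick ^ pid;
  sprintf (buf, "0x%llx", value);
}
```

The XOR simply mixes two cheap, loosely independent sources; the point is only that both compilations of a -fcompare-debug pair reuse the same seed string.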
|
||||
|
||||
/* %:compare-debug-dump-opt spec function. Save the last argument,
|
||||
expected to be the last -fdump-final-insns option, or generate a
|
||||
temporary. */
|
||||
|
@ -8683,41 +8711,61 @@ compare_debug_dump_opt_spec_function (int arg,
|
|||
const char *ret;
|
||||
char *name;
|
||||
int which;
|
||||
static char random_seed[HOST_BITS_PER_WIDE_INT / 4 + 3];
|
||||
|
||||
if (arg != 0)
|
||||
fatal ("too many arguments to %%:compare-debug-dump-opt");
|
||||
|
||||
if (!compare_debug)
|
||||
return NULL;
|
||||
|
||||
do_spec_2 ("%{fdump-final-insns=*:%*}");
|
||||
do_spec_1 (" ", 0, NULL);
|
||||
|
||||
if (argbuf_index > 0)
|
||||
if (argbuf_index > 0 && strcmp (argv[argbuf_index - 1], "."))
|
||||
{
|
||||
if (!compare_debug)
|
||||
return NULL;
|
||||
|
||||
name = xstrdup (argv[argbuf_index - 1]);
|
||||
ret = NULL;
|
||||
}
|
||||
else
|
||||
{
|
||||
#define OPT "-fdump-final-insns="
|
||||
ret = "-fdump-final-insns=%g.gkd";
|
||||
const char *ext = NULL;
|
||||
|
||||
if (argbuf_index > 0)
|
||||
{
|
||||
do_spec_2 ("%{o*:%*}%{!o:%{!S:%b%O}%{S:%b.s}}");
|
||||
ext = ".gkd";
|
||||
}
|
||||
else if (!compare_debug)
|
||||
return NULL;
|
||||
else
|
||||
do_spec_2 ("%g.gkd");
|
||||
|
||||
do_spec_2 (ret + sizeof (OPT) - 1);
|
||||
do_spec_1 (" ", 0, NULL);
|
||||
#undef OPT
|
||||
|
||||
gcc_assert (argbuf_index > 0);
|
||||
|
||||
name = xstrdup (argbuf[argbuf_index - 1]);
|
||||
name = concat (argbuf[argbuf_index - 1], ext, NULL);
|
||||
|
||||
ret = concat ("-fdump-final-insns=", name, NULL);
|
||||
}
|
||||
|
||||
which = compare_debug < 0;
|
||||
debug_check_temp_file[which] = name;
|
||||
|
||||
#if 0
|
||||
error ("compare-debug: [%i]=\"%s\", ret %s", which, name, ret);
|
||||
#endif
|
||||
if (!which)
|
||||
{
|
||||
unsigned HOST_WIDE_INT value = get_local_tick () ^ getpid ();
|
||||
|
||||
sprintf (random_seed, HOST_WIDE_INT_PRINT_HEX, value);
|
||||
}
|
||||
|
||||
if (*random_seed)
|
||||
ret = concat ("%{!frandom-seed=*:-frandom-seed=", random_seed, "} ",
|
||||
ret, NULL);
|
||||
|
||||
if (which)
|
||||
*random_seed = 0;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
@ -8791,5 +8839,7 @@ compare_debug_auxbase_opt_spec_function (int arg,
|
|||
memcpy (name + sizeof (OPT) - 1, argv[0], len);
|
||||
name[sizeof (OPT) - 1 + len] = '\0';
|
||||
|
||||
#undef OPT
|
||||
|
||||
return name;
|
||||
}
|
||||
|
|
24
gcc/gcse.c
24
gcc/gcse.c
|
@ -465,7 +465,7 @@ static void record_last_reg_set_info (rtx, int);
|
|||
static void record_last_mem_set_info (rtx);
|
||||
static void record_last_set_info (rtx, const_rtx, void *);
|
||||
static void compute_hash_table (struct hash_table_d *);
|
||||
static void alloc_hash_table (struct hash_table_d *, int);
|
||||
static void free_hash_table (struct hash_table_d *);
|
||||
static void compute_hash_table_work (struct hash_table_d *);
|
||||
static void dump_hash_table (FILE *, const char *, struct hash_table_d *);
|
||||
|
@ -1716,17 +1716,18 @@ compute_hash_table_work (struct hash_table_d *table)
|
|||
}
|
||||
|
||||
/* Allocate space for the set/expr hash TABLE.
|
||||
N_INSNS is the number of instructions in the function.
|
||||
It is used to determine the number of buckets to use.
|
||||
SET_P determines whether set or expression table will
|
||||
be created. */
|
||||
|
||||
static void
|
||||
alloc_hash_table (struct hash_table_d *table, int set_p)
|
||||
{
|
||||
int n;
|
||||
|
||||
|
||||
n = get_max_insn_count ();
|
||||
|
||||
table->size = n / 4;
|
||||
if (table->size < 11)
|
||||
table->size = 11;
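The new sizing rule can be stated as a tiny pure function. This mirrors the two lines above; `nondebug_insn_count` stands for the value returned by get_max_insn_count (), and the function name is illustrative.

```c
/* Mirror of the sizing logic in alloc_hash_table: one bucket per four
   nondebug insns, with a floor of 11 so small functions still get a
   usable table.  Sizing by the nondebug count keeps the table size,
   and therefore codegen, identical with and without -g.  */
static int
hash_table_size (int nondebug_insn_count)
{
  int size = nondebug_insn_count / 4;
  if (size < 11)
    size = 11;
  return size;
}
```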
|
||||
|
||||
|
@ -2610,6 +2611,9 @@ cprop_insn (rtx insn)
|
|||
}
|
||||
}
|
||||
|
||||
if (changed && DEBUG_INSN_P (insn))
|
||||
return 0;
|
||||
|
||||
return changed;
|
||||
}
|
||||
|
||||
|
@ -3137,7 +3141,9 @@ bypass_conditional_jumps (void)
|
|||
{
|
||||
setcc = NULL_RTX;
|
||||
FOR_BB_INSNS (bb, insn)
|
||||
if (NONJUMP_INSN_P (insn))
|
||||
if (DEBUG_INSN_P (insn))
|
||||
continue;
|
||||
else if (NONJUMP_INSN_P (insn))
|
||||
{
|
||||
if (setcc)
|
||||
break;
|
||||
|
@ -3967,7 +3973,7 @@ one_pre_gcse_pass (void)
|
|||
gcc_obstack_init (&gcse_obstack);
|
||||
alloc_gcse_mem ();
|
||||
|
||||
alloc_hash_table (get_max_uid (), &expr_hash_table, 0);
|
||||
alloc_hash_table (&expr_hash_table, 0);
|
||||
add_noreturn_fake_exit_edges ();
|
||||
if (flag_gcse_lm)
|
||||
compute_ld_motion_mems ();
|
||||
|
@ -4448,7 +4454,7 @@ one_code_hoisting_pass (void)
|
|||
gcc_obstack_init (&gcse_obstack);
|
||||
alloc_gcse_mem ();
|
||||
|
||||
alloc_hash_table (get_max_uid (), &expr_hash_table, 0);
|
||||
alloc_hash_table (&expr_hash_table, 0);
|
||||
compute_hash_table (&expr_hash_table);
|
||||
if (dump_file)
|
||||
dump_hash_table (dump_file, "Code Hosting Expressions", &expr_hash_table);
|
||||
|
@ -4752,7 +4758,7 @@ compute_ld_motion_mems (void)
|
|||
{
|
||||
FOR_BB_INSNS (bb, insn)
|
||||
{
|
||||
if (INSN_P (insn))
|
||||
if (NONDEBUG_INSN_P (insn))
|
||||
{
|
||||
if (GET_CODE (PATTERN (insn)) == SET)
|
||||
{
|
||||
|
@ -4988,7 +4994,7 @@ one_cprop_pass (void)
|
|||
implicit_sets = XCNEWVEC (rtx, last_basic_block);
|
||||
find_implicit_sets ();
|
||||
|
||||
alloc_hash_table (get_max_uid (), &set_hash_table, 1);
|
||||
alloc_hash_table (&set_hash_table, 1);
|
||||
compute_hash_table (&set_hash_table);
|
||||
|
||||
/* Free implicit_sets before peak usage. */
|
||||
|
gcc/gimple-pretty-print.c

@@ -780,6 +780,31 @@ dump_gimple_resx (pretty_printer *buffer, gimple gs, int spc, int flags)
   dump_gimple_fmt (buffer, spc, flags, "resx %d", gimple_resx_region (gs));
 }

+/* Dump a GIMPLE_DEBUG tuple on the pretty_printer BUFFER, SPC spaces
+   of indent.  FLAGS specifies details to show in the dump (see TDF_*
+   in tree-pass.h).  */
+
+static void
+dump_gimple_debug (pretty_printer *buffer, gimple gs, int spc, int flags)
+{
+  switch (gs->gsbase.subcode)
+    {
+    case GIMPLE_DEBUG_BIND:
+      if (flags & TDF_RAW)
+	dump_gimple_fmt (buffer, spc, flags, "%G BIND <%T, %T>", gs,
+			 gimple_debug_bind_get_var (gs),
+			 gimple_debug_bind_get_value (gs));
+      else
+	dump_gimple_fmt (buffer, spc, flags, "# DEBUG %T => %T",
+			 gimple_debug_bind_get_var (gs),
+			 gimple_debug_bind_get_value (gs));
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+}
+
 /* Dump a GIMPLE_OMP_FOR tuple on the pretty_printer BUFFER.  */
 static void
 dump_gimple_omp_for (pretty_printer *buffer, gimple gs, int spc, int flags)

@@ -1524,6 +1549,10 @@ dump_gimple_stmt (pretty_printer *buffer, gimple gs, int spc, int flags)
       dump_gimple_resx (buffer, gs, spc, flags);
       break;

+    case GIMPLE_DEBUG:
+      dump_gimple_debug (buffer, gs, spc, flags);
+      break;
+
     case GIMPLE_PREDICT:
       pp_string (buffer, "// predicted ");
       if (gimple_predict_outcome (gs))

@@ -1577,7 +1606,8 @@ dump_bb_header (pretty_printer *buffer, basic_block bb, int indent, int flags)
       gimple_stmt_iterator gsi;

       for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
-	if (get_lineno (gsi_stmt (gsi)) != -1)
+	if (!is_gimple_debug (gsi_stmt (gsi))
+	    && get_lineno (gsi_stmt (gsi)) != UNKNOWN_LOCATION)
 	  {
 	    pp_string (buffer, ", starting at line ");
 	    pp_decimal_int (buffer, get_lineno (gsi_stmt (gsi)));
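In the non-raw form, the dumper above renders a bind statement as `# DEBUG var => value`. A miniature of the subcode dispatch, using hypothetical plain-C types rather than GCC's pretty-printer machinery:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Only one debug subcode exists in the patch; others are envisioned.  */
enum debug_subcode { DEBUG_BIND = 0 };

/* Hypothetical miniature of dump_gimple_debug: dispatch on the
   subcode and format the "# DEBUG var => value" textual form.  */
static void
dump_debug_stmt (char *buf, size_t len, enum debug_subcode subcode,
                 const char *var, const char *value)
{
  switch (subcode)
    {
    case DEBUG_BIND:
      snprintf (buf, len, "# DEBUG %s => %s", var, value);
      break;
    default:
      buf[0] = '\0';   /* the real code calls gcc_unreachable () here */
      break;
    }
}
```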
gcc/gimple.c
@@ -102,6 +102,7 @@ gss_for_code (enum gimple_code code)
     case GIMPLE_COND:
     case GIMPLE_GOTO:
     case GIMPLE_LABEL:
+    case GIMPLE_DEBUG:
     case GIMPLE_SWITCH:			return GSS_WITH_OPS;
     case GIMPLE_ASM:			return GSS_ASM;
     case GIMPLE_BIND:			return GSS_BIND;

@@ -253,7 +254,7 @@ gimple_set_subcode (gimple g, unsigned subcode)
   gimple_build_with_ops_stat (c, s, n MEM_STAT_INFO)

 static gimple
-gimple_build_with_ops_stat (enum gimple_code code, enum tree_code subcode,
+gimple_build_with_ops_stat (enum gimple_code code, unsigned subcode,
			    unsigned num_ops MEM_STAT_DECL)
 {
   gimple s = gimple_alloc_stat (code, num_ops PASS_MEM_STAT);

@@ -427,7 +428,7 @@ gimple_build_assign_with_ops_stat (enum tree_code subcode, tree lhs, tree op1,
      code).  */
   num_ops = get_gimple_rhs_num_ops (subcode) + 1;

-  p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, subcode, num_ops
+  p = gimple_build_with_ops_stat (GIMPLE_ASSIGN, (unsigned)subcode, num_ops
				  PASS_MEM_STAT);
   gimple_assign_set_lhs (p, lhs);
   gimple_assign_set_rhs1 (p, op1);

@@ -831,6 +832,29 @@ gimple_build_switch_vec (tree index, tree default_label, VEC(tree, heap) *args)
 }

+
+/* Build a new GIMPLE_DEBUG_BIND statement.
+
+   VAR is bound to VALUE; block and location are taken from STMT.  */
+
+gimple
+gimple_build_debug_bind_stat (tree var, tree value, gimple stmt MEM_STAT_DECL)
+{
+  gimple p = gimple_build_with_ops_stat (GIMPLE_DEBUG,
+					 (unsigned)GIMPLE_DEBUG_BIND, 2
+					 PASS_MEM_STAT);
+
+  gimple_debug_bind_set_var (p, var);
+  gimple_debug_bind_set_value (p, value);
+  if (stmt)
+    {
+      gimple_set_block (p, gimple_block (stmt));
+      gimple_set_location (p, gimple_location (stmt));
+    }
+
+  return p;
+}
+
+
 /* Build a GIMPLE_OMP_CRITICAL statement.

    BODY is the sequence of statements for which only one thread can execute.

@@ -1213,11 +1237,11 @@ empty_body_p (gimple_seq body)
 {
   gimple_stmt_iterator i;

-
   if (gimple_seq_empty_p (body))
     return true;
   for (i = gsi_start (body); !gsi_end_p (i); gsi_next (&i))
-    if (!empty_stmt_p (gsi_stmt (i)))
+    if (!empty_stmt_p (gsi_stmt (i))
+	&& !is_gimple_debug (gsi_stmt (i)))
       return false;

   return true;

@@ -2224,6 +2248,9 @@ gimple_has_side_effects (const_gimple s)
 {
   unsigned i;

+  if (is_gimple_debug (s))
+    return false;
+
   /* We don't have to scan the arguments to check for
      volatile arguments, though, at present, we still
      do a scan to check for TREE_SIDE_EFFECTS.  */

@@ -2317,6 +2344,8 @@ gimple_rhs_has_side_effects (const_gimple s)
	  return true;
	}
     }
+  else if (is_gimple_debug (s))
+    return false;
   else
     {
       /* For statements without an LHS, examine all arguments.  */
gcc/gimple.def

@@ -53,6 +53,9 @@ DEFGSCODE(GIMPLE_ERROR_MARK, "gimple_error_mark", NULL)
    jump target for the comparison.  */
 DEFGSCODE(GIMPLE_COND, "gimple_cond", struct gimple_statement_with_ops)

+/* GIMPLE_DEBUG represents a debug statement.  */
+DEFGSCODE(GIMPLE_DEBUG, "gimple_debug", struct gimple_statement_with_ops)
+
 /* GIMPLE_GOTO <TARGET> represents unconditional jumps.
    TARGET is a LABEL_DECL or an expression node for computed GOTOs.  */
 DEFGSCODE(GIMPLE_GOTO, "gimple_goto", struct gimple_statement_with_ops)
gcc/gimple.h
@@ -117,6 +117,14 @@ enum gf_mask {
   GF_PREDICT_TAKEN		= 1 << 15
 };

+/* Currently, there's only one type of gimple debug stmt.  Others are
+   envisioned, for example, to enable the generation of is_stmt notes
+   in line number information, to mark sequence points, etc.  This
+   subcode is to be used to tell them apart.  */
+enum gimple_debug_subcode {
+  GIMPLE_DEBUG_BIND = 0
+};
+
 /* Masks for selecting a pass local flag (PLF) to work on.  These
    masks are used by gimple_set_plf and gimple_plf.  */
 enum plf_mask {
@@ -754,6 +762,10 @@ gimple gimple_build_assign_with_ops_stat (enum tree_code, tree, tree,
 #define gimple_build_assign_with_ops(c,o1,o2,o3) \
   gimple_build_assign_with_ops_stat (c, o1, o2, o3 MEM_STAT_INFO)

+gimple gimple_build_debug_bind_stat (tree, tree, gimple MEM_STAT_DECL);
+#define gimple_build_debug_bind(var,val,stmt) \
+  gimple_build_debug_bind_stat ((var), (val), (stmt) MEM_STAT_INFO)
+
 gimple gimple_build_call_vec (tree, VEC(tree, heap) *);
 gimple gimple_build_call (tree, unsigned, ...);
 gimple gimple_build_call_from_tree (tree);
@@ -3158,6 +3170,105 @@ gimple_switch_set_default_label (gimple gs, tree label)
   gimple_switch_set_label (gs, 0, label);
 }

+/* Return true if GS is a GIMPLE_DEBUG statement.  */
+
+static inline bool
+is_gimple_debug (const_gimple gs)
+{
+  return gimple_code (gs) == GIMPLE_DEBUG;
+}
+
+/* Return true if S is a GIMPLE_DEBUG BIND statement.  */
+
+static inline bool
+gimple_debug_bind_p (const_gimple s)
+{
+  if (is_gimple_debug (s))
+    return s->gsbase.subcode == GIMPLE_DEBUG_BIND;
+
+  return false;
+}
+
+/* Return the variable bound in a GIMPLE_DEBUG bind statement.  */
+
+static inline tree
+gimple_debug_bind_get_var (gimple dbg)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  return gimple_op (dbg, 0);
+}
+
+/* Return the value bound to the variable in a GIMPLE_DEBUG bind
+   statement.  */
+
+static inline tree
+gimple_debug_bind_get_value (gimple dbg)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  return gimple_op (dbg, 1);
+}
+
+/* Return a pointer to the value bound to the variable in a
+   GIMPLE_DEBUG bind statement.  */
+
+static inline tree *
+gimple_debug_bind_get_value_ptr (gimple dbg)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  return gimple_op_ptr (dbg, 1);
+}
+
+/* Set the variable bound in a GIMPLE_DEBUG bind statement.  */
+
+static inline void
+gimple_debug_bind_set_var (gimple dbg, tree var)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  gimple_set_op (dbg, 0, var);
+}
+
+/* Set the value bound to the variable in a GIMPLE_DEBUG bind
+   statement.  */
+
+static inline void
+gimple_debug_bind_set_value (gimple dbg, tree value)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  gimple_set_op (dbg, 1, value);
+}
+
+/* The second operand of a GIMPLE_DEBUG_BIND, when the value was
+   optimized away.  */
+#define GIMPLE_DEBUG_BIND_NOVALUE NULL_TREE /* error_mark_node */
+
+/* Remove the value bound to the variable in a GIMPLE_DEBUG bind
+   statement.  */
+
+static inline void
+gimple_debug_bind_reset_value (gimple dbg)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  gimple_set_op (dbg, 1, GIMPLE_DEBUG_BIND_NOVALUE);
+}
+
+/* Return true if the GIMPLE_DEBUG bind statement is bound to a
+   value.  */
+
+static inline bool
+gimple_debug_bind_has_value_p (gimple dbg)
+{
+  GIMPLE_CHECK (dbg, GIMPLE_DEBUG);
+  gcc_assert (gimple_debug_bind_p (dbg));
+  return gimple_op (dbg, 1) != GIMPLE_DEBUG_BIND_NOVALUE;
+}
+
+#undef GIMPLE_DEBUG_BIND_NOVALUE
+
 /* Return the body for the OMP statement GS.  */
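The reset/has-value accessors above encode "value optimized away" as a NULL second operand (`GIMPLE_DEBUG_BIND_NOVALUE`). A standalone sketch of that convention, on a hypothetical two-operand struct rather than GCC's gimple tuples:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical two-operand debug bind: op[0] is the variable,
   op[1] the value, with NULL meaning "value optimized away".  */
struct debug_bind { const char *op[2]; };

/* Mirror of gimple_debug_bind_reset_value.  */
static void
bind_reset_value (struct debug_bind *b)
{
  b->op[1] = NULL;               /* GIMPLE_DEBUG_BIND_NOVALUE */
}

/* Mirror of gimple_debug_bind_has_value_p.  */
static int
bind_has_value_p (const struct debug_bind *b)
{
  return b->op[1] != NULL;
}
```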
@@ -4308,6 +4419,58 @@ gsi_after_labels (basic_block bb)
   return gsi;
 }

+/* Advance the iterator to the next non-debug gimple statement.  */
+
+static inline void
+gsi_next_nondebug (gimple_stmt_iterator *i)
+{
+  do
+    {
+      gsi_next (i);
+    }
+  while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i)));
+}
+
+/* Advance the iterator to the previous non-debug gimple statement.  */
+
+static inline void
+gsi_prev_nondebug (gimple_stmt_iterator *i)
+{
+  do
+    {
+      gsi_prev (i);
+    }
+  while (!gsi_end_p (*i) && is_gimple_debug (gsi_stmt (*i)));
+}
+
+/* Return a new iterator pointing to the first non-debug statement in
+   basic block BB.  */
+
+static inline gimple_stmt_iterator
+gsi_start_nondebug_bb (basic_block bb)
+{
+  gimple_stmt_iterator i = gsi_start_bb (bb);
+
+  if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i)))
+    gsi_next_nondebug (&i);
+
+  return i;
+}
+
+/* Return a new iterator pointing to the last non-debug statement in
+   basic block BB.  */
+
+static inline gimple_stmt_iterator
+gsi_last_nondebug_bb (basic_block bb)
+{
+  gimple_stmt_iterator i = gsi_last_bb (bb);
+
+  if (!gsi_end_p (i) && is_gimple_debug (gsi_stmt (i)))
+    gsi_prev_nondebug (&i);
+
+  return i;
+}
+
 /* Return a pointer to the current stmt.

    NOTE: You may want to use gsi_replace on the iterator itself,
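The `gsi_next_nondebug`/`gsi_start_nondebug_bb` pattern above — step once, then keep stepping while the current element is a debug stmt — can be exercised on a plain array. The flag array and index-based iteration below are hypothetical, not GCC's iterators:

```c
#include <assert.h>

/* Hypothetical statement stream: 1 marks a debug stmt, 0 a real one.
   next_nondebug mirrors gsi_next_nondebug: advance once, then skip
   debug stmts, stopping at the end (index == n).  */
static int
next_nondebug (const int *is_debug, int n, int i)
{
  do
    i++;
  while (i < n && is_debug[i]);
  return i;
}

/* Mirrors gsi_start_nondebug_bb: start at 0, skip leading debug
   stmts if any.  */
static int
first_nondebug (const int *is_debug, int n)
{
  int i = 0;
  if (i < n && is_debug[i])
    i = next_nondebug (is_debug, n, i);
  return i;
}
```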
gcc/haifa-sched.c

@@ -310,7 +310,7 @@ size_t dfa_state_size;
 char *ready_try = NULL;

 /* The ready list.  */
-struct ready_list ready = {NULL, 0, 0, 0};
+struct ready_list ready = {NULL, 0, 0, 0, 0};

 /* The pointer to the ready list (to be removed).  */
 static struct ready_list *readyp = &ready;
@@ -748,6 +748,10 @@ increase_insn_priority (rtx insn, int amount)
 static bool
 contributes_to_priority_p (dep_t dep)
 {
+  if (DEBUG_INSN_P (DEP_CON (dep))
+      || DEBUG_INSN_P (DEP_PRO (dep)))
+    return false;
+
   /* Critical path is meaningful in block boundaries only.  */
   if (!current_sched_info->contributes_to_priority (DEP_CON (dep),
						    DEP_PRO (dep)))

@@ -767,6 +771,31 @@ contributes_to_priority_p (dep_t dep)
   return true;
 }

+/* Compute the number of nondebug forward deps of an insn.  */
+
+static int
+dep_list_size (rtx insn)
+{
+  sd_iterator_def sd_it;
+  dep_t dep;
+  int dbgcount = 0, nodbgcount = 0;
+
+  if (!MAY_HAVE_DEBUG_INSNS)
+    return sd_lists_size (insn, SD_LIST_FORW);
+
+  FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
+    {
+      if (DEBUG_INSN_P (DEP_CON (dep)))
+	dbgcount++;
+      else
+	nodbgcount++;
+    }
+
+  gcc_assert (dbgcount + nodbgcount == sd_lists_size (insn, SD_LIST_FORW));
+
+  return nodbgcount;
+}
+
 /* Compute the priority number for INSN.  */
 static int
 priority (rtx insn)
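`dep_list_size` above counts only non-debug forward dependencies, with a fast path when debug insns cannot exist. The same shape on an array, with hypothetical flags standing in for the scheduler's dependency lists:

```c
#include <assert.h>

/* Hypothetical mirror of dep_list_size: each forward dep's consumer
   is flagged 1 if it is a debug insn; only the others count.  */
static int
nondebug_dep_count (const int *con_is_debug, int n, int may_have_debug)
{
  int dbg = 0, nodbg = 0, i;

  if (!may_have_debug)
    return n;                    /* fast path: every dep is real */

  for (i = 0; i < n; i++)
    {
      if (con_is_debug[i])
        dbg++;
      else
        nodbg++;
    }

  assert (dbg + nodbg == n);     /* mirrors the gcc_assert */
  return nodbg;
}
```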
@@ -781,7 +810,7 @@ priority (rtx insn)
     {
       int this_priority = -1;

-      if (sd_lists_empty_p (insn, SD_LIST_FORW))
+      if (dep_list_size (insn) == 0)
	/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
	   some forward deps but all of them are ignored by
	   contributes_to_priority hook.  At the moment we set priority of
@@ -886,9 +915,19 @@ rank_for_schedule (const void *x, const void *y)
 {
   rtx tmp = *(const rtx *) y;
   rtx tmp2 = *(const rtx *) x;
+  rtx last;
   int tmp_class, tmp2_class;
   int val, priority_val, weight_val, info_val;

+  if (MAY_HAVE_DEBUG_INSNS)
+    {
+      /* Schedule debug insns as early as possible.  */
+      if (DEBUG_INSN_P (tmp) && !DEBUG_INSN_P (tmp2))
+	return -1;
+      else if (DEBUG_INSN_P (tmp2))
+	return 1;
+    }
+
   /* The insn in a schedule group should be issued the first.  */
   if (flag_sched_group_heuristic &&
       SCHED_GROUP_P (tmp) != SCHED_GROUP_P (tmp2))

@@ -936,8 +975,20 @@ rank_for_schedule (const void *x, const void *y)
   if(flag_sched_rank_heuristic && info_val)
     return info_val;

-  /* Compare insns based on their relation to the last-scheduled-insn.  */
-  if (flag_sched_last_insn_heuristic && INSN_P (last_scheduled_insn))
+  if (flag_sched_last_insn_heuristic)
+    {
+      last = last_scheduled_insn;
+
+      if (DEBUG_INSN_P (last) && last != current_sched_info->prev_head)
+	do
+	  last = PREV_INSN (last);
+	while (!NONDEBUG_INSN_P (last)
+	       && last != current_sched_info->prev_head);
+    }
+
+  /* Compare insns based on their relation to the last scheduled
+     non-debug insn.  */
+  if (flag_sched_last_insn_heuristic && NONDEBUG_INSN_P (last))
     {
       dep_t dep1;
       dep_t dep2;

@@ -947,7 +998,7 @@ rank_for_schedule (const void *x, const void *y)
	 2) Anti/Output dependent on last scheduled insn.
	 3) Independent of last scheduled insn, or has latency of one.
	 Choose the insn from the highest numbered class if different.  */
-      dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true);
+      dep1 = sd_find_dep_between (last, tmp, true);

       if (dep1 == NULL || dep_cost (dep1) == 1)
	tmp_class = 3;

@@ -957,7 +1008,7 @@ rank_for_schedule (const void *x, const void *y)
       else
	tmp_class = 2;

-      dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true);
+      dep2 = sd_find_dep_between (last, tmp2, true);

       if (dep2 == NULL || dep_cost (dep2) == 1)
	tmp2_class = 3;

@@ -975,8 +1026,7 @@ rank_for_schedule (const void *x, const void *y)
     This gives the scheduler more freedom when scheduling later
     instructions at the expense of added register pressure.  */

-  val = (sd_lists_size (tmp2, SD_LIST_FORW)
-	 - sd_lists_size (tmp, SD_LIST_FORW));
+  val = (dep_list_size (tmp2) - dep_list_size (tmp));

   if (flag_sched_dep_count_heuristic && val != 0)
     return val;
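The `rank_for_schedule` change above adds a rule that sorts debug insns ahead of everything else. A minimal comparator with the same debug-first rule, in plain `qsort` convention (the real comparator's argument order and the ready list's internal ordering differ; the insn records here are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct insn { int is_debug; int priority; };

/* Hypothetical rank_for_schedule reduced to two rules: debug insns
   sort first, then higher priority sorts first.  */
static int
rank (const void *px, const void *py)
{
  const struct insn *a = (const struct insn *) px;
  const struct insn *b = (const struct insn *) py;

  if (a->is_debug && !b->is_debug)
    return -1;
  if (b->is_debug && !a->is_debug)
    return 1;
  return b->priority - a->priority;
}
```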
@@ -1014,6 +1064,7 @@ queue_insn (rtx insn, int n_cycles)
   rtx link = alloc_INSN_LIST (insn, insn_queue[next_q]);

   gcc_assert (n_cycles <= max_insn_queue_index);
+  gcc_assert (!DEBUG_INSN_P (insn));

   insn_queue[next_q] = link;
   q_size += 1;

@@ -1081,6 +1132,8 @@ ready_add (struct ready_list *ready, rtx insn, bool first_p)
     }

   ready->n_ready++;
+  if (DEBUG_INSN_P (insn))
+    ready->n_debug++;

   gcc_assert (QUEUE_INDEX (insn) != QUEUE_READY);
   QUEUE_INDEX (insn) = QUEUE_READY;

@@ -1097,6 +1150,8 @@ ready_remove_first (struct ready_list *ready)
   gcc_assert (ready->n_ready);
   t = ready->vec[ready->first--];
   ready->n_ready--;
+  if (DEBUG_INSN_P (t))
+    ready->n_debug--;
   /* If the queue becomes empty, reset it.  */
   if (ready->n_ready == 0)
     ready->first = ready->veclen - 1;

@@ -1138,6 +1193,8 @@ ready_remove (struct ready_list *ready, int index)
   gcc_assert (ready->n_ready && index < ready->n_ready);
   t = ready->vec[ready->first - index];
   ready->n_ready--;
+  if (DEBUG_INSN_P (t))
+    ready->n_debug--;
   for (i = index; i < ready->n_ready; i++)
     ready->vec[ready->first - i] = ready->vec[ready->first - i - 1];
   QUEUE_INDEX (t) = QUEUE_NOWHERE;
@@ -1316,7 +1373,8 @@ schedule_insn (rtx insn)
      be aligned.  */
   if (issue_rate > 1
       && GET_CODE (PATTERN (insn)) != USE
-      && GET_CODE (PATTERN (insn)) != CLOBBER)
+      && GET_CODE (PATTERN (insn)) != CLOBBER
+      && !DEBUG_INSN_P (insn))
     {
       if (reload_completed)
	PUT_MODE (insn, clock_var > last_clock_var ? TImode : VOIDmode);

@@ -1428,7 +1486,7 @@ get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
     beg_head = NEXT_INSN (beg_head);

   while (beg_head != beg_tail)
-    if (NOTE_P (beg_head))
+    if (NOTE_P (beg_head) || BOUNDARY_DEBUG_INSN_P (beg_head))
       beg_head = NEXT_INSN (beg_head);
     else
       break;

@@ -1441,7 +1499,7 @@ get_ebb_head_tail (basic_block beg, basic_block end, rtx *headp, rtx *tailp)
     end_head = NEXT_INSN (end_head);

   while (end_head != end_tail)
-    if (NOTE_P (end_tail))
+    if (NOTE_P (end_tail) || BOUNDARY_DEBUG_INSN_P (end_tail))
       end_tail = PREV_INSN (end_tail);
     else
       break;

@@ -1456,7 +1514,8 @@ no_real_insns_p (const_rtx head, const_rtx tail)
 {
   while (head != NEXT_INSN (tail))
     {
-      if (!NOTE_P (head) && !LABEL_P (head))
+      if (!NOTE_P (head) && !LABEL_P (head)
+	  && !BOUNDARY_DEBUG_INSN_P (head))
	return 0;
       head = NEXT_INSN (head);
     }

@@ -1627,9 +1686,13 @@ queue_to_ready (struct ready_list *ready)
   q_ptr = NEXT_Q (q_ptr);

   if (dbg_cnt (sched_insn) == false)
-    /* If debug counter is activated do not requeue insn next after
-       last_scheduled_insn.  */
-    skip_insn = next_nonnote_insn (last_scheduled_insn);
+    {
+      /* If debug counter is activated do not requeue insn next after
+	 last_scheduled_insn.  */
+      skip_insn = next_nonnote_insn (last_scheduled_insn);
+      while (skip_insn && DEBUG_INSN_P (skip_insn))
+	skip_insn = next_nonnote_insn (skip_insn);
+    }
   else
     skip_insn = NULL_RTX;
@@ -1647,7 +1710,7 @@ queue_to_ready (struct ready_list *ready)
       /* If the ready list is full, delay the insn for 1 cycle.
	 See the comment in schedule_block for the rationale.  */
       if (!reload_completed
-	  && ready->n_ready > MAX_SCHED_READY_INSNS
+	  && ready->n_ready - ready->n_debug > MAX_SCHED_READY_INSNS
	  && !SCHED_GROUP_P (insn)
	  && insn != skip_insn)
	{

@@ -2255,7 +2318,8 @@ choose_ready (struct ready_list *ready, rtx *insn_ptr)

   if (targetm.sched.first_cycle_multipass_dfa_lookahead)
     lookahead = targetm.sched.first_cycle_multipass_dfa_lookahead ();
-  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0)))
+  if (lookahead <= 0 || SCHED_GROUP_P (ready_element (ready, 0))
+      || DEBUG_INSN_P (ready_element (ready, 0)))
     {
       *insn_ptr = ready_remove_first (ready);
       return 0;

@@ -2414,6 +2478,7 @@ schedule_block (basic_block *target_bb)
   /* Clear the ready list.  */
   ready.first = ready.veclen - 1;
   ready.n_ready = 0;
+  ready.n_debug = 0;

   /* It is used for first cycle multipass scheduling.  */
   temp_state = alloca (dfa_state_size);

@@ -2424,7 +2489,8 @@ schedule_block (basic_block *target_bb)
   /* We start inserting insns after PREV_HEAD.  */
   last_scheduled_insn = prev_head;

-  gcc_assert (NOTE_P (last_scheduled_insn)
+  gcc_assert ((NOTE_P (last_scheduled_insn)
+	       || BOUNDARY_DEBUG_INSN_P (last_scheduled_insn))
	      && BLOCK_FOR_INSN (last_scheduled_insn) == *target_bb);

   /* Initialize INSN_QUEUE.  Q_SIZE is the total number of insns in the

@@ -2445,12 +2511,14 @@ schedule_block (basic_block *target_bb)
   /* The algorithm is O(n^2) in the number of ready insns at any given
      time in the worst case.  Before reload we are more likely to have
      big lists so truncate them to a reasonable size.  */
-  if (!reload_completed && ready.n_ready > MAX_SCHED_READY_INSNS)
+  if (!reload_completed
+      && ready.n_ready - ready.n_debug > MAX_SCHED_READY_INSNS)
     {
       ready_sort (&ready);

-      /* Find first free-standing insn past MAX_SCHED_READY_INSNS.  */
-      for (i = MAX_SCHED_READY_INSNS; i < ready.n_ready; i++)
+      /* Find first free-standing insn past MAX_SCHED_READY_INSNS.
+	 If there are debug insns, we know they're first.  */
+      for (i = MAX_SCHED_READY_INSNS + ready.n_debug; i < ready.n_ready; i++)
	if (!SCHED_GROUP_P (ready_element (&ready, i)))
	  break;

@@ -2533,6 +2601,46 @@ schedule_block (basic_block *target_bb)
	    }
	}

+      /* We don't want md sched reorder to even see debug isns, so put
+	 them out right away.  */
+      if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+	{
+	  if (control_flow_insn_p (last_scheduled_insn))
+	    {
+	      *target_bb = current_sched_info->advance_target_bb
+		(*target_bb, 0);
+
+	      if (sched_verbose)
+		{
+		  rtx x;
+
+		  x = next_real_insn (last_scheduled_insn);
+		  gcc_assert (x);
+		  dump_new_block_header (1, *target_bb, x, tail);
+		}
+
+	      last_scheduled_insn = bb_note (*target_bb);
+	    }
+
+	  while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+	    {
+	      rtx insn = ready_remove_first (&ready);
+	      gcc_assert (DEBUG_INSN_P (insn));
+	      (*current_sched_info->begin_schedule_ready) (insn,
+							   last_scheduled_insn);
+	      move_insn (insn, last_scheduled_insn,
+			 current_sched_info->next_tail);
+	      last_scheduled_insn = insn;
+	      advance = schedule_insn (insn);
+	      gcc_assert (advance == 0);
+	      if (ready.n_ready > 0)
+		ready_sort (&ready);
+	    }
+
+	  if (!ready.n_ready)
+	    continue;
+	}
+
       /* Allow the target to reorder the list, typically for
	 better instruction bundling.  */
       if (sort_p && targetm.sched.reorder

@@ -2574,7 +2682,8 @@ schedule_block (basic_block *target_bb)
	      ready_sort (&ready);
	    }

-	  if (ready.n_ready == 0 || !can_issue_more
+	  if (ready.n_ready == 0
+	      || !can_issue_more
	      || state_dead_lock_p (curr_state)
	      || !(*current_sched_info->schedule_more_p) ())
	    break;

@@ -2711,7 +2820,7 @@ schedule_block (basic_block *target_bb)
	  if (targetm.sched.variable_issue)
	    can_issue_more =
	      targetm.sched.variable_issue (sched_dump, sched_verbose,
-					    insn, can_issue_more);
+					    insn, can_issue_more);
	  /* A naked CLOBBER or USE generates no instruction, so do
	     not count them against the issue rate.  */
	  else if (GET_CODE (PATTERN (insn)) != USE

@@ -2734,6 +2843,44 @@ schedule_block (basic_block *target_bb)
	  if (ready.n_ready > 0)
	    ready_sort (&ready);

+	  /* Quickly go through debug insns such that md sched
+	     reorder2 doesn't have to deal with debug insns.  */
+	  if (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0))
+	      && (*current_sched_info->schedule_more_p) ())
+	    {
+	      if (control_flow_insn_p (last_scheduled_insn))
+		{
+		  *target_bb = current_sched_info->advance_target_bb
+		    (*target_bb, 0);
+
+		  if (sched_verbose)
+		    {
+		      rtx x;
+
+		      x = next_real_insn (last_scheduled_insn);
+		      gcc_assert (x);
+		      dump_new_block_header (1, *target_bb, x, tail);
+		    }
+
+		  last_scheduled_insn = bb_note (*target_bb);
+		}
+
+	      while (ready.n_ready && DEBUG_INSN_P (ready_element (&ready, 0)))
+		{
+		  insn = ready_remove_first (&ready);
+		  gcc_assert (DEBUG_INSN_P (insn));
+		  (*current_sched_info->begin_schedule_ready)
+		    (insn, last_scheduled_insn);
+		  move_insn (insn, last_scheduled_insn,
+			     current_sched_info->next_tail);
+		  advance = schedule_insn (insn);
+		  last_scheduled_insn = insn;
+		  gcc_assert (advance == 0);
+		  if (ready.n_ready > 0)
+		    ready_sort (&ready);
+		}
+	    }
+
	  if (targetm.sched.reorder2
	      && (ready.n_ready == 0
		  || !SCHED_GROUP_P (ready_element (&ready, 0))))

@@ -2757,7 +2904,7 @@ schedule_block (basic_block *target_bb)
   if (current_sched_info->queue_must_finish_empty)
     /* Sanity check -- queue must be empty now.  Meaningless if region has
	multiple bbs.  */
-    gcc_assert (!q_size && !ready.n_ready);
+    gcc_assert (!q_size && !ready.n_ready && !ready.n_debug);
   else
     {
       /* We must maintain QUEUE_INDEX between blocks in region.  */

@@ -2836,8 +2983,8 @@ set_priorities (rtx head, rtx tail)
     current_sched_info->sched_max_insns_priority;
   rtx prev_head;

-  if (head == tail && (! INSN_P (head)))
-    return 0;
+  if (head == tail && (! INSN_P (head) || BOUNDARY_DEBUG_INSN_P (head)))
+    gcc_unreachable ();

   n_insn = 0;

@@ -4605,7 +4752,7 @@ add_jump_dependencies (rtx insn, rtx jump)
       if (insn == jump)
	break;

-      if (sd_lists_empty_p (insn, SD_LIST_FORW))
+      if (dep_list_size (insn) == 0)
	{
	  dep_def _new_dep, *new_dep = &_new_dep;

@@ -4648,6 +4795,19 @@ has_edge_p (VEC(edge,gc) *el, int type)
   return 0;
 }

+/* Search back, starting at INSN, for an insn that is not a
+   NOTE_INSN_VAR_LOCATION.  Don't search beyond HEAD, and return it if
+   no such insn can be found.  */
+static inline rtx
+prev_non_location_insn (rtx insn, rtx head)
+{
+  while (insn != head && NOTE_P (insn)
+	 && NOTE_KIND (insn) == NOTE_INSN_VAR_LOCATION)
+    insn = PREV_INSN (insn);
+
+  return insn;
+}
+
 /* Check few properties of CFG between HEAD and TAIL.
    If HEAD (TAIL) is NULL check from the beginning (till the end) of the
    instruction stream.  */

@@ -4707,8 +4867,9 @@ check_cfg (rtx head, rtx tail)
	    {
	      if (control_flow_insn_p (head))
		{
-		  gcc_assert (BB_END (bb) == head);
+		  gcc_assert (prev_non_location_insn (BB_END (bb), head)
+			      == head);

		  if (any_uncondjump_p (head))
		    gcc_assert (EDGE_COUNT (bb->succs) == 1
				&& BARRIER_P (NEXT_INSN (head)));

@@ -4724,11 +4885,12 @@ check_cfg (rtx head, rtx tail)
	      if (BB_END (bb) == head)
		{
		  if (EDGE_COUNT (bb->succs) > 1)
-		    gcc_assert (control_flow_insn_p (head)
+		    gcc_assert (control_flow_insn_p (prev_non_location_insn
+						     (head, BB_HEAD (bb)))
				|| has_edge_p (bb->succs, EDGE_COMPLEX));
		  bb = 0;
		}

	      head = NEXT_INSN (head);
	    }
	}
gcc/ifcvt.c
@@ -194,7 +194,7 @@ first_active_insn (basic_block bb)
       insn = NEXT_INSN (insn);
     }

-  while (NOTE_P (insn))
+  while (NOTE_P (insn) || DEBUG_INSN_P (insn))
     {
       if (insn == BB_END (bb))
	return NULL_RTX;
@@ -217,6 +217,7 @@ last_active_insn (basic_block bb, int skip_use_p)

   while (NOTE_P (insn)
	 || JUMP_P (insn)
+	 || DEBUG_INSN_P (insn)
	 || (skip_use_p
	     && NONJUMP_INSN_P (insn)
	     && GET_CODE (PATTERN (insn)) == USE))
@@ -269,7 +270,7 @@ cond_exec_process_insns (ce_if_block_t *ce_info ATTRIBUTE_UNUSED,

   for (insn = start; ; insn = NEXT_INSN (insn))
     {
-      if (NOTE_P (insn))
+      if (NOTE_P (insn) || DEBUG_INSN_P (insn))
	goto insn_done;

       gcc_assert(NONJUMP_INSN_P (insn) || CALL_P (insn));
@@ -2256,6 +2257,8 @@ noce_process_if_block (struct noce_if_info *if_info)
       else
	{
	  insn_b = prev_nonnote_insn (if_info->cond_earliest);
+	  while (insn_b && DEBUG_INSN_P (insn_b))
+	    insn_b = prev_nonnote_insn (insn_b);
	  /* We're going to be moving the evaluation of B down from above
	     COND_EARLIEST to JUMP.  Make sure the relevant data is still
	     intact.  */
@@ -2266,14 +2269,13 @@ noce_process_if_block (struct noce_if_info *if_info)
	      || ! rtx_equal_p (x, SET_DEST (set_b))
	      || ! noce_operand_ok (SET_SRC (set_b))
	      || reg_overlap_mentioned_p (x, SET_SRC (set_b))
-	      || modified_between_p (SET_SRC (set_b),
-				     PREV_INSN (if_info->cond_earliest), jump)
+	      || modified_between_p (SET_SRC (set_b), insn_b, jump)
	      /* Likewise with X.  In particular this can happen when
		 noce_get_condition looks farther back in the instruction
		 stream than one might expect.  */
	      || reg_overlap_mentioned_p (x, cond)
	      || reg_overlap_mentioned_p (x, a)
-	      || modified_between_p (x, PREV_INSN (if_info->cond_earliest), jump))
+	      || modified_between_p (x, insn_b, jump))
	    insn_b = set_b = NULL_RTX;
	}

@@ -2481,7 +2483,7 @@ check_cond_move_block (basic_block bb, rtx *vals, VEC (int, heap) **regs, rtx cond)
     {
       rtx set, dest, src;

-      if (!INSN_P (insn) || JUMP_P (insn))
+      if (!NONDEBUG_INSN_P (insn) || JUMP_P (insn))
	continue;
       set = single_set (insn);
       if (!set)
@@ -2559,7 +2561,8 @@ cond_move_convert_if_block (struct noce_if_info *if_infop,
       rtx set, target, dest, t, e;
       unsigned int regno;

-      if (!INSN_P (insn) || JUMP_P (insn))
+      /* ??? Maybe emit conditional debug insn?  */
+      if (!NONDEBUG_INSN_P (insn) || JUMP_P (insn))
	continue;
       set = single_set (insn);
       gcc_assert (set && REG_P (SET_DEST (set)));
@@ -3120,6 +3123,7 @@ block_jumps_and_fallthru_p (basic_block cur_bb, basic_block target_bb)

       if (INSN_P (insn)
	  && !JUMP_P (insn)
+	  && !DEBUG_INSN_P (insn)
	  && GET_CODE (PATTERN (insn)) != USE
	  && GET_CODE (PATTERN (insn)) != CLOBBER)
	n_insns++;
@@ -3789,6 +3793,9 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
   head = BB_HEAD (merge_bb);
   end = BB_END (merge_bb);

+  while (DEBUG_INSN_P (end) && end != head)
+    end = PREV_INSN (end);
+
   /* If merge_bb ends with a tablejump, predicating/moving insn's
      into test_bb and then deleting merge_bb will result in the jumptable
      that follows merge_bb being removed along with merge_bb and then we
@@ -3798,6 +3805,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,

   if (LABEL_P (head))
     head = NEXT_INSN (head);
+  while (DEBUG_INSN_P (head) && head != end)
+    head = NEXT_INSN (head);
   if (NOTE_P (head))
     {
       if (head == end)
@@ -3806,6 +3815,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
	  goto no_body;
	}
       head = NEXT_INSN (head);
+      while (DEBUG_INSN_P (head) && head != end)
+	head = NEXT_INSN (head);
     }

   if (JUMP_P (end))
@@ -3816,6 +3827,8 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
	  goto no_body;
	}
       end = PREV_INSN (end);
+      while (DEBUG_INSN_P (end) && end != head)
+	end = PREV_INSN (end);
     }

   /* Disable handling dead code by conditional execution if the machine needs
@@ -3876,7 +3889,7 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,
     {
       if (CALL_P (insn))
	return FALSE;
-      if (INSN_P (insn))
+      if (NONDEBUG_INSN_P (insn))
	{
	  if (may_trap_p (PATTERN (insn)))
	    return FALSE;
@@ -3922,7 +3935,7 @@ dead_or_predicable (basic_block test_bb, basic_block merge_bb,

   FOR_BB_INSNS (merge_bb, insn)
     {
-      if (INSN_P (insn))
+      if (NONDEBUG_INSN_P (insn))
	{
	  unsigned int uid = INSN_UID (insn);
	  df_ref *def_rec;
@@ -70,7 +70,7 @@ initialize_uninitialized_regs (void)
     {
       unsigned int uid = INSN_UID (insn);
       df_ref *use_rec;
-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;

       for (use_rec = DF_INSN_UID_USES (uid); *use_rec; use_rec++)
@@ -411,6 +411,9 @@ check_stmt (gimple_stmt_iterator *gsip, funct_state local, bool ipa)
   gimple stmt = gsi_stmt (*gsip);
   unsigned int i = 0;

+  if (is_gimple_debug (stmt))
+    return;
+
   if (dump_file)
     {
       fprintf (dump_file, "  scanning: ");
@@ -442,6 +442,9 @@ scan_stmt_for_static_refs (gimple_stmt_iterator *gsip,
   gimple stmt = gsi_stmt (*gsip);
   ipa_reference_local_vars_info_t local = NULL;

+  if (is_gimple_debug (stmt))
+    return NULL;
+
   if (fn)
     local = get_reference_vars_info (fn)->local;
@@ -1491,7 +1491,7 @@ create_bb_allocnos (ira_loop_tree_node_t bb_node)
   curr_bb = bb = bb_node->bb;
   ira_assert (bb != NULL);
   FOR_BB_INSNS_REVERSE (bb, insn)
-    if (INSN_P (insn))
+    if (NONDEBUG_INSN_P (insn))
       create_insn_allocnos (PATTERN (insn), false);
   /* It might be a allocno living through from one subloop to
      another.  */
@@ -522,7 +522,7 @@ add_copies (ira_loop_tree_node_t loop_tree_node)
       if (bb == NULL)
	return;
       FOR_BB_INSNS (bb, insn)
-	if (INSN_P (insn))
+	if (NONDEBUG_INSN_P (insn))
	  add_insn_allocno_copies (insn);
     }
@@ -995,7 +995,7 @@ scan_one_insn (rtx insn)
   rtx set, note;
   int i, k;

-  if (!INSN_P (insn))
+  if (!NONDEBUG_INSN_P (insn))
     return insn;

   pat_code = GET_CODE (PATTERN (insn));
@@ -1384,7 +1384,7 @@ process_bb_node_for_hard_reg_moves (ira_loop_tree_node_t loop_tree_node)
     freq = 1;
   FOR_BB_INSNS (bb, insn)
     {
-      if (! INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;
       set = single_set (insn);
       if (set == NULL_RTX)
@@ -910,7 +910,7 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
	  df_ref *def_rec, *use_rec;
	  bool call_p;

-	  if (! INSN_P (insn))
+	  if (!NONDEBUG_INSN_P (insn))
	    continue;

	  if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
@@ -2234,7 +2234,7 @@ memref_used_between_p (rtx memref, rtx start, rtx end)
   for (insn = NEXT_INSN (start); insn != NEXT_INSN (end);
	insn = NEXT_INSN (insn))
     {
-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;

       if (memref_referenced_p (memref, PATTERN (insn)))
@@ -2678,7 +2678,7 @@ update_equiv_regs (void)
		}
	      /* Move the initialization of the register to just before
		 INSN.  Update the flow information.  */
-	      else if (PREV_INSN (insn) != equiv_insn)
+	      else if (prev_nondebug_insn (insn) != equiv_insn)
		{
		  rtx new_insn;
@@ -912,7 +912,7 @@ find_invariants_bb (basic_block bb, bool always_reached, bool always_executed)

   FOR_BB_INSNS (bb, insn)
     {
-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;

       find_invariants_insn (insn, always_reached, always_executed);
@@ -531,6 +531,34 @@ resolve_subreg_use (rtx *px, void *data)
   return 0;
 }

+/* This is called via for_each_rtx.  Look for SUBREGs which can be
+   decomposed and decomposed REGs that need copying.  */
+
+static int
+adjust_decomposed_uses (rtx *px, void *data ATTRIBUTE_UNUSED)
+{
+  rtx x = *px;
+
+  if (x == NULL_RTX)
+    return 0;
+
+  if (resolve_subreg_p (x))
+    {
+      x = simplify_subreg_concatn (GET_MODE (x), SUBREG_REG (x),
+				   SUBREG_BYTE (x));
+
+      if (x)
+	*px = x;
+      else
+	x = copy_rtx (*px);
+    }
+
+  if (resolve_reg_p (x))
+    *px = copy_rtx (x);
+
+  return 0;
+}
+
 /* We are deleting INSN.  Move any EH_REGION notes to INSNS.  */

 static void
@@ -886,6 +914,18 @@ resolve_use (rtx pat, rtx insn)
   return false;
 }

+/* A VAR_LOCATION can be simplified.  */
+
+static void
+resolve_debug (rtx insn)
+{
+  for_each_rtx (&PATTERN (insn), adjust_decomposed_uses, NULL_RTX);
+
+  df_insn_rescan (insn);
+
+  resolve_reg_notes (insn);
+}
+
 /* Checks if INSN is a decomposable multiword-shift or zero-extend and
    sets the decomposable_context bitmap accordingly.  A non-zero value
    is returned if a decomposable insn has been found.  */
@@ -1170,6 +1210,8 @@ decompose_multiword_subregs (void)
	    resolve_clobber (pat, insn);
	  else if (GET_CODE (pat) == USE)
	    resolve_use (pat, insn);
+	  else if (DEBUG_INSN_P (insn))
+	    resolve_debug (insn);
	  else
	    {
	      rtx set;
@@ -349,7 +349,7 @@ const_iteration_count (rtx count_reg, basic_block pre_header,
   get_ebb_head_tail (pre_header, pre_header, &head, &tail);

   for (insn = tail; insn != PREV_INSN (head); insn = PREV_INSN (insn))
-    if (INSN_P (insn) && single_set (insn) &&
+    if (NONDEBUG_INSN_P (insn) && single_set (insn) &&
	rtx_equal_p (count_reg, SET_DEST (single_set (insn))))
      {
	rtx pat = single_set (insn);
@@ -375,7 +375,7 @@ res_MII (ddg_ptr g)
   if (targetm.sched.sms_res_mii)
     return targetm.sched.sms_res_mii (g);

-  return (g->num_nodes / issue_rate);
+  return ((g->num_nodes - g->num_debug) / issue_rate);
 }

@@ -769,7 +769,7 @@ loop_single_full_bb_p (struct loop *loop)
       for (; head != NEXT_INSN (tail); head = NEXT_INSN (head))
	{
	  if (NOTE_P (head) || LABEL_P (head)
-	      || (INSN_P (head) && JUMP_P (head)))
+	      || (INSN_P (head) && (DEBUG_INSN_P (head) || JUMP_P (head))))
	    continue;
	  empty_bb = false;
	  break;
@@ -1020,7 +1020,7 @@ sms_schedule (void)

       if (CALL_P (insn)
	  || BARRIER_P (insn)
-	  || (INSN_P (insn) && !JUMP_P (insn)
+	  || (NONDEBUG_INSN_P (insn) && !JUMP_P (insn)
	      && !single_set (insn) && GET_CODE (PATTERN (insn)) != USE)
	  || (FIND_REG_INC_NOTE (insn, NULL_RTX) != 0)
	  || (INSN_P (insn) && (set = single_set (insn))
@@ -1038,7 +1038,7 @@ sms_schedule (void)
	    fprintf (dump_file, "SMS loop-with-barrier\n");
	  else if (FIND_REG_INC_NOTE (insn, NULL_RTX) != 0)
	    fprintf (dump_file, "SMS reg inc\n");
-	  else if ((INSN_P (insn) && !JUMP_P (insn)
+	  else if ((NONDEBUG_INSN_P (insn) && !JUMP_P (insn)
	       && !single_set (insn) && GET_CODE (PATTERN (insn)) != USE))
	    fprintf (dump_file, "SMS loop-with-not-single-set\n");
	  else
@@ -1754,7 +1754,7 @@ sms_schedule_by_order (ddg_ptr g, int mii, int maxii, int *nodes_order)
       ddg_node_ptr u_node = &ps->g->nodes[u];
       rtx insn = u_node->insn;

-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	{
	  RESET_BIT (tobe_scheduled, u);
	  continue;
@@ -2743,7 +2743,7 @@ ps_has_conflicts (partial_schedule_ptr ps, int from, int to)
     {
       rtx insn = crr_insn->node->insn;

-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;

       /* Check if there is room for the current insn.  */
@@ -2054,7 +2054,7 @@ common_handle_option (size_t scode, const char *arg, int value,
       break;

     case OPT_gdwarf_:
-      if (value < 2 || value > 3)
+      if (value < 2 || value > 4)
	error ("dwarf version %d is not supported", value);
       else
	dwarf_version = value;
@@ -752,6 +752,13 @@ DEFPARAM (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO,
	  "min. ratio of insns to mem ops to enable prefetching in a loop",
	  3, 0, 0)

+/* Set minimum insn uid for non-debug insns.  */
+
+DEFPARAM (PARAM_MIN_NONDEBUG_INSN_UID,
+	  "min-nondebug-insn-uid",
+	  "The minimum UID to be used for a nondebug insn",
+	  0, 1, 0)
+
 /*
 Local variables:
 mode:c
@@ -170,4 +170,6 @@ typedef enum compiler_param
   PARAM_VALUE (PARAM_MIN_INSN_TO_PREFETCH_RATIO)
 #define PREFETCH_MIN_INSN_TO_MEM_RATIO \
   PARAM_VALUE (PARAM_PREFETCH_MIN_INSN_TO_MEM_RATIO)
+#define MIN_NONDEBUG_INSN_UID \
+  PARAM_VALUE (PARAM_MIN_NONDEBUG_INSN_UID)
 #endif /* ! GCC_PARAMS_H */
@@ -41,6 +41,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "hard-reg-set.h"
 #include "basic-block.h"
 #include "diagnostic.h"
+#include "cselib.h"
 #endif

 static FILE *outfile;
@@ -165,6 +166,23 @@ print_rtx (const_rtx in_rtx)
       /* For other rtl, print the mode if it's not VOID.  */
       else if (GET_MODE (in_rtx) != VOIDmode)
	fprintf (outfile, ":%s", GET_MODE_NAME (GET_MODE (in_rtx)));
+
+#ifndef GENERATOR_FILE
+      if (GET_CODE (in_rtx) == VAR_LOCATION)
+	{
+	  if (TREE_CODE (PAT_VAR_LOCATION_DECL (in_rtx)) == STRING_CST)
+	    fputs (" <debug string placeholder>", outfile);
+	  else
+	    print_mem_expr (outfile, PAT_VAR_LOCATION_DECL (in_rtx));
+	  fputc (' ', outfile);
+	  print_rtx (PAT_VAR_LOCATION_LOC (in_rtx));
+	  if (PAT_VAR_LOCATION_STATUS (in_rtx)
+	      == VAR_INIT_STATUS_UNINITIALIZED)
+	    fprintf (outfile, " [uninit]");
+	  sawclose = 1;
+	  i = GET_RTX_LENGTH (VAR_LOCATION);
+	}
+#endif
     }
 }
@@ -278,14 +296,8 @@ print_rtx (const_rtx in_rtx)

	case NOTE_INSN_VAR_LOCATION:
 #ifndef GENERATOR_FILE
-	  fprintf (outfile, " (");
-	  print_mem_expr (outfile, NOTE_VAR_LOCATION_DECL (in_rtx));
-	  fprintf (outfile, " ");
-	  print_rtx (NOTE_VAR_LOCATION_LOC (in_rtx));
-	  if (NOTE_VAR_LOCATION_STATUS (in_rtx) ==
-	      VAR_INIT_STATUS_UNINITIALIZED)
-	    fprintf (outfile, " [uninit]");
-	  fprintf (outfile, ")");
+	  fputc (' ', outfile);
+	  print_rtx (NOTE_VAR_LOCATION (in_rtx));
 #endif
	  break;

@@ -296,6 +308,16 @@ print_rtx (const_rtx in_rtx)
       else if (i == 9 && JUMP_P (in_rtx) && XEXP (in_rtx, i) != NULL)
	/* Output the JUMP_LABEL reference.  */
	fprintf (outfile, "\n -> %d", INSN_UID (XEXP (in_rtx, i)));
+      else if (i == 0 && GET_CODE (in_rtx) == VALUE)
+	{
+#ifndef GENERATOR_FILE
+	  cselib_val *val = CSELIB_VAL_PTR (in_rtx);
+
+	  fprintf (outfile, " %i", val->value);
+	  dump_addr (outfile, " @", in_rtx);
+	  dump_addr (outfile, "/", (void*)val);
+#endif
+	}
       break;

     case 'e':
@@ -389,6 +389,8 @@ verify_changes (int num)
	     assemblies if they have been defined as register asm ("x").  */
	  break;
	}
+      else if (DEBUG_INSN_P (object))
+	continue;
       else if (insn_invalid_p (object))
	{
	  rtx pat = PATTERN (object);
@@ -429,7 +431,8 @@ verify_changes (int num)
	      validate_change (object, &PATTERN (object), newpat, 1);
	      continue;
	    }
-	  else if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
+	  else if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER
+		   || GET_CODE (pat) == VAR_LOCATION)
	    /* If this insn is a CLOBBER or USE, it is always valid, but is
	       never recognized.  */
	    continue;
@@ -2039,6 +2042,7 @@ extract_insn (rtx insn)
     case ASM_INPUT:
     case ADDR_VEC:
     case ADDR_DIFF_VEC:
+    case VAR_LOCATION:
       return;

     case SET:
@@ -3119,7 +3123,7 @@ peephole2_optimize (void)
       for (insn = BB_END (bb); ; insn = prev)
	{
	  prev = PREV_INSN (insn);
-	  if (INSN_P (insn))
+	  if (NONDEBUG_INSN_P (insn))
	    {
	      rtx attempt, before_try, x;
	      int match_len;
@@ -1327,6 +1327,30 @@ compare_for_stack_reg (rtx insn, stack regstack, rtx pat_src)
     }
 }

+/* Substitute new registers in LOC, which is part of a debug insn.
+   REGSTACK is the current register layout.  */
+
+static int
+subst_stack_regs_in_debug_insn (rtx *loc, void *data)
+{
+  rtx *tloc = get_true_reg (loc);
+  stack regstack = (stack)data;
+  int hard_regno;
+
+  if (!STACK_REG_P (*tloc))
+    return 0;
+
+  if (tloc != loc)
+    return 0;
+
+  hard_regno = get_hard_regnum (regstack, *loc);
+  gcc_assert (hard_regno >= FIRST_STACK_REG);
+
+  replace_reg (loc, hard_regno);
+
+  return -1;
+}
+
 /* Substitute new registers in PAT, which is part of INSN.  REGSTACK
    is the current register layout.  Return whether a control flow insn
    was deleted in the process.  */
@@ -1360,6 +1384,9 @@ subst_stack_regs_pat (rtx insn, stack regstack, rtx pat)
	   since the REG_DEAD notes are not issued.)  */
	break;

+      case VAR_LOCATION:
+	gcc_unreachable ();
+
       case CLOBBER:
	{
	  rtx note;
@@ -2871,6 +2898,7 @@ convert_regs_1 (basic_block block)
   int reg;
   rtx insn, next;
   bool control_flow_insn_deleted = false;
+  int debug_insns_with_starting_stack = 0;

   any_malformed_asm = false;

@@ -2923,8 +2951,25 @@ convert_regs_1 (basic_block block)

       /* Don't bother processing unless there is a stack reg
	 mentioned or if it's a CALL_INSN.  */
-      if (stack_regs_mentioned (insn)
-	  || CALL_P (insn))
+      if (DEBUG_INSN_P (insn))
+	{
+	  if (starting_stack_p)
+	    debug_insns_with_starting_stack++;
+	  else
+	    {
+	      for_each_rtx (&PATTERN (insn), subst_stack_regs_in_debug_insn,
+			    &regstack);
+
+	      /* Nothing must ever die at a debug insn.  If something
+		 is referenced in it that becomes dead, it should have
+		 died before and the reference in the debug insn
+		 should have been removed so as to avoid changing code
+		 generation.  */
+	      gcc_assert (!find_reg_note (insn, REG_DEAD, NULL));
+	    }
+	}
+      else if (stack_regs_mentioned (insn)
+	       || CALL_P (insn))
	{
	  if (dump_file)
	    {
@@ -2938,6 +2983,24 @@ convert_regs_1 (basic_block block)
	}
       while (next);

+  if (debug_insns_with_starting_stack)
+    {
+      /* Since it's the first non-debug instruction that determines
+	 the stack requirements of the current basic block, we refrain
+	 from updating debug insns before it in the loop above, and
+	 fix them up here.  */
+      for (insn = BB_HEAD (block); debug_insns_with_starting_stack;
+	   insn = NEXT_INSN (insn))
+	{
+	  if (!DEBUG_INSN_P (insn))
+	    continue;
+
+	  debug_insns_with_starting_stack--;
+	  for_each_rtx (&PATTERN (insn), subst_stack_regs_in_debug_insn,
+			&bi->stack_in);
+	}
+    }
+
   if (dump_file)
     {
       fprintf (dump_file, "Expected live registers [");
@@ -474,6 +474,9 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl,
   switch (code)
     {
     case PLUS:
+      if (DEBUG_INSN_P (insn))
+	break;
+
       {
	rtx orig_op0 = XEXP (x, 0);
	rtx orig_op1 = XEXP (x, 1);
@@ -608,9 +611,14 @@ replace_oldest_value_addr (rtx *loc, enum reg_class cl,
 static bool
 replace_oldest_value_mem (rtx x, rtx insn, struct value_data *vd)
 {
-  return replace_oldest_value_addr (&XEXP (x, 0),
-				    base_reg_class (GET_MODE (x), MEM,
-						    SCRATCH),
+  enum reg_class cl;
+
+  if (DEBUG_INSN_P (insn))
+    cl = ALL_REGS;
+  else
+    cl = base_reg_class (GET_MODE (x), MEM, SCRATCH);
+
+  return replace_oldest_value_addr (&XEXP (x, 0), cl,
				    GET_MODE (x), insn, vd);
 }

@@ -619,7 +627,7 @@ replace_oldest_value_mem (rtx x, rtx insn, struct value_data *vd)
 static bool
 copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd)
 {
-  bool changed = false;
+  bool anything_changed = false;
   rtx insn;

   for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
@@ -628,9 +636,25 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd)
       bool is_asm, any_replacements;
       rtx set;
       bool replaced[MAX_RECOG_OPERANDS];
+      bool changed = false;

-      if (! INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	{
+	  if (DEBUG_INSN_P (insn))
+	    {
+	      rtx loc = INSN_VAR_LOCATION_LOC (insn);
+	      if (!VAR_LOC_UNKNOWN_P (loc)
+		  && replace_oldest_value_addr (&INSN_VAR_LOCATION_LOC (insn),
+						ALL_REGS, GET_MODE (loc),
+						insn, vd))
+		{
+		  changed = apply_change_group ();
+		  gcc_assert (changed);
+		  df_insn_rescan (insn);
+		  anything_changed = true;
+		}
+	    }
+
	  if (insn == BB_END (bb))
	    break;
	  else
@@ -817,6 +841,12 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd)
	}

     did_replacement:
+      if (changed)
+	{
+	  df_insn_rescan (insn);
+	  anything_changed = true;
+	}
+
       /* Clobber call-clobbered registers.  */
       if (CALL_P (insn))
	for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
@@ -834,7 +864,7 @@ copyprop_hardreg_forward_1 (basic_block bb, struct value_data *vd)
	break;
     }

-  return changed;
+  return anything_changed;
 }

 /* Main entry point for the forward copy propagation optimization.  */
@@ -321,9 +321,12 @@ optimize_reg_copy_1 (rtx insn, rtx dest, rtx src)
       /* For SREGNO, count the total number of insns scanned.
	 For DREGNO, count the total number of insns scanned after
	 passing the death note for DREGNO.  */
-      s_length++;
-      if (dest_death)
-	d_length++;
+      if (!DEBUG_INSN_P (p))
+	{
+	  s_length++;
+	  if (dest_death)
+	    d_length++;
+	}

       /* If the insn in which SRC dies is a CALL_INSN, don't count it
	 as a call that has been crossed.  Otherwise, count it.  */
@@ -767,7 +770,7 @@ fixup_match_2 (rtx insn, rtx dst, rtx src, rtx offset)

       if (find_regno_note (p, REG_DEAD, REGNO (dst)))
	dst_death = p;
-      if (! dst_death)
+      if (! dst_death && !DEBUG_INSN_P (p))
	length++;

       pset = single_set (p);
@@ -1095,7 +1098,8 @@ regmove_backward_pass (void)
	      if (BLOCK_FOR_INSN (p) != bb)
		break;

-	      length++;
+	      if (!DEBUG_INSN_P (p))
+		length++;

	      /* ??? See if all of SRC is set in P.  This test is much
		 more conservative than it needs to be.  */
@@ -1103,24 +1107,13 @@ regmove_backward_pass (void)
	      if (pset && SET_DEST (pset) == src)
		{
		  /* We use validate_replace_rtx, in case there
-		     are multiple identical source operands.  All of
-		     them have to be changed at the same time.  */
+		     are multiple identical source operands.  All
+		     of them have to be changed at the same time:
+		     when validate_replace_rtx() calls
+		     apply_change_group().  */
+		  validate_change (p, &SET_DEST (pset), dst, 1);
		  if (validate_replace_rtx (src, dst, insn))
-		    {
-		      if (validate_change (p, &SET_DEST (pset),
-					   dst, 0))
-			success = 1;
-		      else
-			{
-			  /* Change all source operands back.
-			     This modifies the dst as a side-effect.  */
-			  validate_replace_rtx (dst, src, insn);
-			  /* Now make sure the dst is right.  */
-			  validate_change (insn,
-					   recog_data.operand_loc[match_no],
-					   dst, 0);
-			}
-		    }
+		    success = 1;
		  break;
		}

@@ -1129,9 +1122,21 @@ regmove_backward_pass (void)
		 eliminate SRC.  We can't make this change
		 if DST is mentioned at all in P,
		 since we are going to change its value.  */
-	      if (reg_overlap_mentioned_p (src, PATTERN (p))
-		  || reg_mentioned_p (dst, PATTERN (p)))
-		break;
+	      if (reg_overlap_mentioned_p (src, PATTERN (p)))
+		{
+		  if (DEBUG_INSN_P (p))
+		    validate_replace_rtx_group (dst, src, insn);
+		  else
+		    break;
+		}
+	      if (reg_mentioned_p (dst, PATTERN (p)))
+		{
+		  if (DEBUG_INSN_P (p))
+		    validate_change (p, &INSN_VAR_LOCATION_LOC (p),
+				     gen_rtx_UNKNOWN_VAR_LOC (), 1);
+		  else
+		    break;
+		}

	      /* If we have passed a call instruction, and the
		 pseudo-reg DST is not already live across a call,
@@ -1193,6 +1198,8 @@ regmove_backward_pass (void)

		  break;
		}
+	      else if (num_changes_pending () > 0)
+		cancel_changes (0);
	    }

	  /* If we weren't able to replace any of the alternatives, try an
|
@@ -230,7 +230,7 @@ regrename_optimize (void)
       int new_reg, best_new_reg;
       int n_uses;
       struct du_chain *this_du = all_chains;
-      struct du_chain *tmp, *last;
+      struct du_chain *tmp;
       HARD_REG_SET this_unavailable;
       int reg = REGNO (*this_du->loc);
       int i;
@@ -259,21 +259,20 @@ regrename_optimize (void)

       COPY_HARD_REG_SET (this_unavailable, unavailable);

-      /* Find last entry on chain (which has the need_caller_save bit),
-	 count number of uses, and narrow the set of registers we can
+      /* Count number of uses, and narrow the set of registers we can
	 use for renaming.  */
       n_uses = 0;
-      for (last = this_du; last->next_use; last = last->next_use)
+      for (tmp = this_du; tmp; tmp = tmp->next_use)
	{
+	  if (DEBUG_INSN_P (tmp->insn))
+	    continue;
	  n_uses++;
	  IOR_COMPL_HARD_REG_SET (this_unavailable,
-				  reg_class_contents[last->cl]);
+				  reg_class_contents[tmp->cl]);
	}
-      if (n_uses < 1)
-	continue;

-      IOR_COMPL_HARD_REG_SET (this_unavailable,
-			      reg_class_contents[last->cl]);
+      if (n_uses < 2)
+	continue;

       if (this_du->need_caller_save_reg)
	IOR_HARD_REG_SET (this_unavailable, call_used_reg_set);
@@ -310,7 +309,8 @@ regrename_optimize (void)
	  /* See whether it accepts all modes that occur in
	     definition and uses.  */
	  for (tmp = this_du; tmp; tmp = tmp->next_use)
-	    if (! HARD_REGNO_MODE_OK (new_reg, GET_MODE (*tmp->loc))
+	    if ((! HARD_REGNO_MODE_OK (new_reg, GET_MODE (*tmp->loc))
+		 && ! DEBUG_INSN_P (tmp->insn))
		|| (tmp->need_caller_save_reg
		    && ! (HARD_REGNO_CALL_PART_CLOBBERED
			  (reg, GET_MODE (*tmp->loc)))
@@ -327,8 +327,8 @@ regrename_optimize (void)
       if (dump_file)
	{
	  fprintf (dump_file, "Register %s in insn %d",
-		   reg_names[reg], INSN_UID (last->insn));
-	  if (last->need_caller_save_reg)
+		   reg_names[reg], INSN_UID (this_du->insn));
+	  if (this_du->need_caller_save_reg)
	    fprintf (dump_file, " crosses a call");
	}

@@ -362,17 +362,27 @@ regrename_optimize (void)
 static void
 do_replace (struct du_chain *chain, int reg)
 {
+  unsigned int base_regno = REGNO (*chain->loc);
+
+  gcc_assert (! DEBUG_INSN_P (chain->insn));
+
   while (chain)
     {
       unsigned int regno = ORIGINAL_REGNO (*chain->loc);
       struct reg_attrs * attr = REG_ATTRS (*chain->loc);
       int reg_ptr = REG_POINTER (*chain->loc);

-      *chain->loc = gen_raw_REG (GET_MODE (*chain->loc), reg);
-      if (regno >= FIRST_PSEUDO_REGISTER)
-	ORIGINAL_REGNO (*chain->loc) = regno;
-      REG_ATTRS (*chain->loc) = attr;
-      REG_POINTER (*chain->loc) = reg_ptr;
+      if (DEBUG_INSN_P (chain->insn) && REGNO (*chain->loc) != base_regno)
+	INSN_VAR_LOCATION_LOC (chain->insn) = gen_rtx_UNKNOWN_VAR_LOC ();
+      else
+	{
+	  *chain->loc = gen_raw_REG (GET_MODE (*chain->loc), reg);
+	  if (regno >= FIRST_PSEUDO_REGISTER)
+	    ORIGINAL_REGNO (*chain->loc) = regno;
+	  REG_ATTRS (*chain->loc) = attr;
+	  REG_POINTER (*chain->loc) = reg_ptr;
+	}
+
       df_insn_rescan (chain->insn);
       chain = chain->next_use;
     }
@@ -440,7 +450,7 @@ scan_rtx_reg (rtx insn, rtx *loc, enum reg_class cl,

   if (action == mark_read || action == mark_access)
     {
-      gcc_assert (exact_match);
+      gcc_assert (exact_match || DEBUG_INSN_P (insn));

       /* ??? Class NO_REGS can happen if the md file makes use of
	 EXTRA_CONSTRAINTS to match registers.  Which is arguably
@@ -744,7 +754,7 @@ build_def_use (basic_block bb)

   for (insn = BB_HEAD (bb); ; insn = NEXT_INSN (insn))
     {
-      if (INSN_P (insn))
+      if (NONDEBUG_INSN_P (insn))
	{
	  int n_ops;
	  rtx note;
@@ -970,6 +980,12 @@ build_def_use (basic_block bb)
	      scan_rtx (insn, &XEXP (note, 0), NO_REGS, terminate_dead,
			OP_IN, 0);
	}
+      else if (DEBUG_INSN_P (insn)
+	       && !VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn)))
+	{
+	  scan_rtx (insn, &INSN_VAR_LOCATION_LOC (insn),
+		    ALL_REGS, mark_read, OP_IN, 0);
+	}
       if (insn == BB_END (bb))
	break;
     }
|
@@ -61,11 +61,27 @@ regstat_init_n_sets_and_refs (void)

   regstat_n_sets_and_refs = XNEWVEC (struct regstat_n_sets_and_refs_t, max_regno);

-  for (i = 0; i < max_regno; i++)
-    {
-      SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i));
-      SET_REG_N_REFS (i, DF_REG_USE_COUNT (i) + REG_N_SETS (i));
-    }
+  if (MAY_HAVE_DEBUG_INSNS)
+    for (i = 0; i < max_regno; i++)
+      {
+	int use_count;
+	df_ref use;
+
+	use_count = DF_REG_USE_COUNT (i);
+	for (use = DF_REG_USE_CHAIN (i); use; use = DF_REF_NEXT_REG (use))
+	  if (DF_REF_INSN_INFO (use) && DEBUG_INSN_P (DF_REF_INSN (use)))
+	    use_count--;
+
+	SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i));
+	SET_REG_N_REFS (i, use_count + REG_N_SETS (i));
+      }
+  else
+    for (i = 0; i < max_regno; i++)
+      {
+	SET_REG_N_SETS (i, DF_REG_DEF_COUNT (i));
+	SET_REG_N_REFS (i, DF_REG_USE_COUNT (i) + REG_N_SETS (i));
+      }
   timevar_pop (TV_REG_STATS);
 }

@@ -149,7 +165,7 @@ regstat_bb_compute_ri (unsigned int bb_index,
       struct df_mw_hardreg **mws_rec;
       rtx link;

-      if (!INSN_P (insn))
+      if (!NONDEBUG_INSN_P (insn))
	continue;

       /* Increment the live_length for all of the registers that
@@ -6736,6 +6736,8 @@ find_equiv_reg (rtx goal, rtx insn, enum reg_class rclass, int other,
   while (1)
     {
       p = PREV_INSN (p);
+      if (p && DEBUG_INSN_P (p))
+	continue;
       num++;
       if (p == 0 || LABEL_P (p)
	  || num > PARAM_VALUE (PARAM_MAX_RELOAD_SEARCH_INSNS))
@ -801,7 +801,7 @@ reload (rtx first, int global)
|
|||
&& GET_MODE (insn) != VOIDmode)
|
||||
PUT_MODE (insn, VOIDmode);
|
||||
|
||||
if (INSN_P (insn))
|
||||
if (NONDEBUG_INSN_P (insn))
|
||||
scan_paradoxical_subregs (PATTERN (insn));
|
||||
|
||||
if (set != 0 && REG_P (SET_DEST (set)))
|
||||
|
@@ -1234,6 +1234,48 @@ reload (rtx first, int global)
 	  else if (reg_equiv_mem[i])
 	    XEXP (reg_equiv_mem[i], 0) = addr;
 	}
+
+      /* We don't want complex addressing modes in debug insns
+	 if simpler ones will do, so delegitimize equivalences
+	 in debug insns.  */
+      if (MAY_HAVE_DEBUG_INSNS && reg_renumber[i] < 0)
+	{
+	  rtx reg = regno_reg_rtx[i];
+	  rtx equiv = 0;
+	  df_ref use;
+
+	  if (reg_equiv_constant[i])
+	    equiv = reg_equiv_constant[i];
+	  else if (reg_equiv_invariant[i])
+	    equiv = reg_equiv_invariant[i];
+	  else if (reg && MEM_P (reg))
+	    {
+	      equiv = targetm.delegitimize_address (reg);
+	      if (equiv == reg)
+		equiv = 0;
+	    }
+	  else if (reg && REG_P (reg) && (int)REGNO (reg) != i)
+	    equiv = reg;
+
+	  if (equiv)
+	    for (use = DF_REG_USE_CHAIN (i); use;
+		 use = DF_REF_NEXT_REG (use))
+	      if (DEBUG_INSN_P (DF_REF_INSN (use)))
+		{
+		  rtx *loc = DF_REF_LOC (use);
+		  rtx x = *loc;
+
+		  if (x == reg)
+		    *loc = copy_rtx (equiv);
+		  else if (GET_CODE (x) == SUBREG
+			   && SUBREG_REG (x) == reg)
+		    *loc = simplify_gen_subreg (GET_MODE (x), equiv,
+						GET_MODE (reg),
+						SUBREG_BYTE (x));
+		  else
+		    gcc_unreachable ();
+		}
+	}
     }

   /* We must set reload_completed now since the cleanup_subreg_operands call

@@ -3151,7 +3193,8 @@ eliminate_regs_in_insn (rtx insn, int replace)
 	      || GET_CODE (PATTERN (insn)) == CLOBBER
 	      || GET_CODE (PATTERN (insn)) == ADDR_VEC
 	      || GET_CODE (PATTERN (insn)) == ADDR_DIFF_VEC
-	      || GET_CODE (PATTERN (insn)) == ASM_INPUT);
+	      || GET_CODE (PATTERN (insn)) == ASM_INPUT
+	      || DEBUG_INSN_P (insn));
       return 0;
     }

@@ -6941,7 +6984,7 @@ emit_input_reload_insns (struct insn_chain *chain, struct reload *rl,
 			     rl->when_needed, old, rl->out, j, 0))
 	{
 	  rtx temp = PREV_INSN (insn);
-	  while (temp && NOTE_P (temp))
+	  while (temp && (NOTE_P (temp) || DEBUG_INSN_P (temp)))
 	    temp = PREV_INSN (temp);
 	  if (temp
 	      && NONJUMP_INSN_P (temp)

@@ -6984,6 +7027,13 @@ emit_input_reload_insns (struct insn_chain *chain, struct reload *rl,
 		  alter_reg (REGNO (old), -1, false);
 		}
 	      special = 1;
+
+	      /* Adjust any debug insns between temp and insn.  */
+	      while ((temp = NEXT_INSN (temp)) != insn)
+		if (DEBUG_INSN_P (temp))
+		  replace_rtx (PATTERN (temp), old, reloadreg);
+		else
+		  gcc_assert (NOTE_P (temp));
 	    }
 	  else
 	    {

@@ -976,6 +976,9 @@ mark_target_live_regs (rtx insns, rtx target, struct resources *res)
       rtx real_insn = insn;
       enum rtx_code code = GET_CODE (insn);

+      if (DEBUG_INSN_P (insn))
+	continue;
+
       /* If this insn is from the target of a branch, it isn't going to
 	 be used in the sequel.  If it is used in both cases, this
 	 test will not be true.  */

@@ -1,6 +1,6 @@
 /* RTL utility routines.
    Copyright (C) 1987, 1988, 1991, 1994, 1997, 1998, 1999, 2000, 2001, 2002,
-   2003, 2004, 2005, 2006, 2007, 2008 Free Software Foundation, Inc.
+   2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation, Inc.

 This file is part of GCC.

@@ -381,6 +381,7 @@ rtx_equal_p_cb (const_rtx x, const_rtx y, rtx_equal_p_callback_function cb)
     case SYMBOL_REF:
       return XSTR (x, 0) == XSTR (y, 0);

+    case VALUE:
     case SCRATCH:
     case CONST_DOUBLE:
     case CONST_INT:

@@ -495,6 +496,7 @@ rtx_equal_p (const_rtx x, const_rtx y)
     case SYMBOL_REF:
       return XSTR (x, 0) == XSTR (y, 0);

+    case VALUE:
     case SCRATCH:
     case CONST_DOUBLE:
     case CONST_INT:

17	gcc/rtl.def

@@ -2,7 +2,7 @@
    Register Transfer Expressions (rtx's) that make up the
    Register Transfer Language (rtl) used in the Back End of the GNU compiler.
    Copyright (C) 1987, 1988, 1992, 1994, 1995, 1997, 1998, 1999, 2000, 2004,
-   2005, 2006, 2007, 2008
+   2005, 2006, 2007, 2008, 2009
    Free Software Foundation, Inc.

 This file is part of GCC.

@@ -81,6 +81,13 @@ along with GCC; see the file COPYING3.  If not see
    value zero.  */
 DEF_RTL_EXPR(UNKNOWN, "UnKnown", "*", RTX_EXTRA)

+/* Used in the cselib routines to describe a value.  Objects of this
+   kind are only allocated in cselib.c, in an alloc pool instead of in
+   GC memory.  The only operand of a VALUE is a cselib_val_struct.
+   var-tracking requires this to have a distinct integral value from
+   DECL codes in trees.  */
+DEF_RTL_EXPR(VALUE, "value", "0", RTX_OBJ)
+
 /* ---------------------------------------------------------------------
    Expressions used in constructing lists.
    --------------------------------------------------------------------- */

@@ -111,6 +118,9 @@ DEF_RTL_EXPR(ADDRESS, "address", "e", RTX_MATCH)
    ---------------------------------------------------------------------- */

+/* An annotation for variable assignment tracking.  */
+DEF_RTL_EXPR(DEBUG_INSN, "debug_insn", "iuuBieie", RTX_INSN)
+
 /* An instruction that cannot jump.  */
 DEF_RTL_EXPR(INSN, "insn", "iuuBieie", RTX_INSN)

@@ -329,11 +339,6 @@ DEF_RTL_EXPR(CONST, "const", "e", RTX_CONST_OBJ)
    by a SET whose first operand is (PC).  */
 DEF_RTL_EXPR(PC, "pc", "", RTX_OBJ)

-/* Used in the cselib routines to describe a value.  Objects of this
-   kind are only allocated in cselib.c, in an alloc pool instead of
-   in GC memory.  The only operand of a VALUE is a cselib_val_struct.  */
-DEF_RTL_EXPR(VALUE, "value", "0", RTX_OBJ)
-
 /* A register.  The "operand" is the register number, accessed with
    the REGNO macro.  If this number is less than FIRST_PSEUDO_REGISTER
    than a hardware register is being referred to.  The second operand

81	gcc/rtl.h

@@ -385,9 +385,18 @@ struct GTY(()) rtvec_def {
 /* Predicate yielding nonzero iff X is an insn that cannot jump.  */
 #define NONJUMP_INSN_P(X) (GET_CODE (X) == INSN)

+/* Predicate yielding nonzero iff X is a debug note/insn.  */
+#define DEBUG_INSN_P(X) (GET_CODE (X) == DEBUG_INSN)
+
+/* Predicate yielding nonzero iff X is an insn that is not a debug insn.  */
+#define NONDEBUG_INSN_P(X) (INSN_P (X) && !DEBUG_INSN_P (X))
+
+/* Nonzero if DEBUG_INSN_P may possibly hold.  */
+#define MAY_HAVE_DEBUG_INSNS MAY_HAVE_DEBUG_STMTS
+
 /* Predicate yielding nonzero iff X is a real insn.  */
 #define INSN_P(X) \
-  (NONJUMP_INSN_P (X) || JUMP_P (X) || CALL_P (X))
+  (NONJUMP_INSN_P (X) || DEBUG_INSN_P (X) || JUMP_P (X) || CALL_P (X))

 /* Predicate yielding nonzero iff X is a note insn.  */
 #define NOTE_P(X) (GET_CODE (X) == NOTE)

@@ -764,12 +773,13 @@ extern void rtl_check_failed_flag (const char *, const_rtx, const char *,
 #define INSN_CODE(INSN) XINT (INSN, 6)

 #define RTX_FRAME_RELATED_P(RTX)					\
-  (RTL_FLAG_CHECK5("RTX_FRAME_RELATED_P", (RTX), INSN, CALL_INSN,	\
-		   JUMP_INSN, BARRIER, SET)->frame_related)
+  (RTL_FLAG_CHECK6("RTX_FRAME_RELATED_P", (RTX), DEBUG_INSN, INSN,	\
+		   CALL_INSN, JUMP_INSN, BARRIER, SET)->frame_related)

 /* 1 if RTX is an insn that has been deleted.  */
 #define INSN_DELETED_P(RTX)						\
-  (RTL_FLAG_CHECK6("INSN_DELETED_P", (RTX), INSN, CALL_INSN, JUMP_INSN,	\
+  (RTL_FLAG_CHECK7("INSN_DELETED_P", (RTX), DEBUG_INSN, INSN,		\
+		   CALL_INSN, JUMP_INSN,				\
 		   CODE_LABEL, BARRIER, NOTE)->volatil)

 /* 1 if RTX is a call to a const function.  Built from ECF_CONST and

@@ -878,16 +888,46 @@ extern const char * const reg_note_name[];
 	  && NOTE_KIND (INSN) == NOTE_INSN_BASIC_BLOCK)

 /* Variable declaration and the location of a variable.  */
-#define NOTE_VAR_LOCATION_DECL(INSN)	(XCTREE (XCEXP (INSN, 4, NOTE), \
-						 0, VAR_LOCATION))
-#define NOTE_VAR_LOCATION_LOC(INSN)	(XCEXP (XCEXP (INSN, 4, NOTE),	\
-						1, VAR_LOCATION))
+#define PAT_VAR_LOCATION_DECL(PAT) (XCTREE ((PAT), 0, VAR_LOCATION))
+#define PAT_VAR_LOCATION_LOC(PAT) (XCEXP ((PAT), 1, VAR_LOCATION))

 /* Initialization status of the variable in the location.  Status
    can be unknown, uninitialized or initialized.  See enumeration
    type below.  */
-#define NOTE_VAR_LOCATION_STATUS(INSN) \
-  ((enum var_init_status) (XCINT (XCEXP (INSN, 4, NOTE), 2, VAR_LOCATION)))
+#define PAT_VAR_LOCATION_STATUS(PAT) \
+  ((enum var_init_status) (XCINT ((PAT), 2, VAR_LOCATION)))
+
+/* Accessors for a NOTE_INSN_VAR_LOCATION.  */
+#define NOTE_VAR_LOCATION_DECL(NOTE) \
+  PAT_VAR_LOCATION_DECL (NOTE_VAR_LOCATION (NOTE))
+#define NOTE_VAR_LOCATION_LOC(NOTE) \
+  PAT_VAR_LOCATION_LOC (NOTE_VAR_LOCATION (NOTE))
+#define NOTE_VAR_LOCATION_STATUS(NOTE) \
+  PAT_VAR_LOCATION_STATUS (NOTE_VAR_LOCATION (NOTE))
+
+/* The VAR_LOCATION rtx in a DEBUG_INSN.  */
+#define INSN_VAR_LOCATION(INSN) PATTERN (INSN)
+
+/* Accessors for a tree-expanded var location debug insn.  */
+#define INSN_VAR_LOCATION_DECL(INSN) \
+  PAT_VAR_LOCATION_DECL (INSN_VAR_LOCATION (INSN))
+#define INSN_VAR_LOCATION_LOC(INSN) \
+  PAT_VAR_LOCATION_LOC (INSN_VAR_LOCATION (INSN))
+#define INSN_VAR_LOCATION_STATUS(INSN) \
+  PAT_VAR_LOCATION_STATUS (INSN_VAR_LOCATION (INSN))
+
+/* Expand to the RTL that denotes an unknown variable location in a
+   DEBUG_INSN.  */
+#define gen_rtx_UNKNOWN_VAR_LOC() (gen_rtx_CLOBBER (VOIDmode, const0_rtx))
+
+/* Determine whether X is such an unknown location.  */
+#define VAR_LOC_UNKNOWN_P(X) \
+  (GET_CODE (X) == CLOBBER && XEXP ((X), 0) == const0_rtx)

 /* 1 if RTX is emitted after a call, but it should take effect before
    the call returns.  */
 #define NOTE_DURING_CALL_P(RTX) \
   (RTL_FLAG_CHECK1("NOTE_VAR_LOCATION_DURING_CALL_P", (RTX), NOTE)->call)

 /* Possible initialization status of a variable.  When requested
    by the user, this information is tracked and recorded in the DWARF

@@ -1259,8 +1299,9 @@ do { \
 /* During sched, 1 if RTX is an insn that must be scheduled together
    with the preceding insn.  */
 #define SCHED_GROUP_P(RTX)						\
-  (RTL_FLAG_CHECK3("SCHED_GROUP_P", (RTX), INSN, JUMP_INSN, CALL_INSN	\
-		   )->in_struct)
+  (RTL_FLAG_CHECK4("SCHED_GROUP_P", (RTX), DEBUG_INSN, INSN,		\
+		   JUMP_INSN, CALL_INSN					\
+		   )->in_struct)

 /* For a SET rtx, SET_DEST is the place that is set
    and SET_SRC is the value it is set to.  */

@@ -1593,6 +1634,9 @@ extern rtx emit_jump_insn_before_setloc (rtx, rtx, int);
 extern rtx emit_call_insn_before (rtx, rtx);
 extern rtx emit_call_insn_before_noloc (rtx, rtx);
 extern rtx emit_call_insn_before_setloc (rtx, rtx, int);
+extern rtx emit_debug_insn_before (rtx, rtx);
+extern rtx emit_debug_insn_before_noloc (rtx, rtx);
+extern rtx emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx emit_barrier_before (rtx);
 extern rtx emit_label_before (rtx, rtx);
 extern rtx emit_note_before (enum insn_note, rtx);

@@ -1605,10 +1649,14 @@ extern rtx emit_jump_insn_after_setloc (rtx, rtx, int);
 extern rtx emit_call_insn_after (rtx, rtx);
 extern rtx emit_call_insn_after_noloc (rtx, rtx);
 extern rtx emit_call_insn_after_setloc (rtx, rtx, int);
+extern rtx emit_debug_insn_after (rtx, rtx);
+extern rtx emit_debug_insn_after_noloc (rtx, rtx);
+extern rtx emit_debug_insn_after_setloc (rtx, rtx, int);
 extern rtx emit_barrier_after (rtx);
 extern rtx emit_label_after (rtx, rtx);
 extern rtx emit_note_after (enum insn_note, rtx);
 extern rtx emit_insn (rtx);
+extern rtx emit_debug_insn (rtx);
 extern rtx emit_jump_insn (rtx);
 extern rtx emit_call_insn (rtx);
 extern rtx emit_label (rtx);

@@ -1620,6 +1668,7 @@ extern rtx emit_clobber (rtx);
 extern rtx gen_use (rtx);
 extern rtx emit_use (rtx);
 extern rtx make_insn_raw (rtx);
+extern rtx make_debug_insn_raw (rtx);
 extern rtx make_jump_insn_raw (rtx);
 extern void add_function_usage_to (rtx, rtx);
 extern rtx last_call_insn (void);

@@ -1628,6 +1677,8 @@ extern rtx next_insn (rtx);
 extern rtx prev_nonnote_insn (rtx);
 extern rtx next_nonnote_insn (rtx);
 extern rtx next_nonnote_insn_bb (rtx);
+extern rtx prev_nondebug_insn (rtx);
+extern rtx next_nondebug_insn (rtx);
 extern rtx prev_real_insn (rtx);
 extern rtx next_real_insn (rtx);
 extern rtx prev_active_insn (rtx);

@@ -1699,6 +1750,7 @@ extern rtx simplify_gen_subreg (enum machine_mode, rtx, enum machine_mode,
 extern rtx simplify_replace_rtx (rtx, const_rtx, rtx);
 extern rtx simplify_rtx (const_rtx);
 extern rtx avoid_constant_pool_reference (rtx);
+extern rtx delegitimize_mem_from_attrs (rtx);
 extern bool mode_signbit_p (enum machine_mode, const_rtx);

 /* In reginfo.c */

@@ -2127,6 +2179,7 @@ extern void set_used_flags (rtx);
 extern void reorder_insns (rtx, rtx, rtx);
 extern void reorder_insns_nobb (rtx, rtx, rtx);
 extern int get_max_uid (void);
+extern int get_max_insn_count (void);
 extern int in_sequence_p (void);
 extern void force_next_line_note (void);
 extern void init_emit (void);

@@ -2327,6 +2380,8 @@ extern void invert_br_probabilities (rtx);
 extern bool expensive_function_p (int);
 /* In cfgexpand.c */
 extern void add_reg_br_prob_note (rtx last, int probability);
+extern rtx wrap_constant (enum machine_mode, rtx);
+extern rtx unwrap_constant (rtx);

 /* In var-tracking.c */
 extern unsigned int variable_tracking_main (void);

@@ -2372,7 +2427,9 @@ extern void insn_locators_alloc (void);
+extern void insn_locators_free (void);
 extern void insn_locators_finalize (void);
 extern void set_curr_insn_source_location (location_t);
+extern location_t get_curr_insn_source_location (void);
 extern void set_curr_insn_block (tree);
 extern tree get_curr_insn_block (void);
 extern int curr_insn_locator (void);
 extern bool optimize_insn_for_size_p (void);
 extern bool optimize_insn_for_speed_p (void);

@@ -741,7 +741,7 @@ reg_used_between_p (const_rtx reg, const_rtx from_insn, const_rtx to_insn)
     return 0;

   for (insn = NEXT_INSN (from_insn); insn != to_insn; insn = NEXT_INSN (insn))
-    if (INSN_P (insn)
+    if (NONDEBUG_INSN_P (insn)
 	&& (reg_overlap_mentioned_p (reg, PATTERN (insn))
 	    || (CALL_P (insn) && find_reg_fusage (insn, USE, reg))))
       return 1;

@@ -2148,6 +2148,7 @@ side_effects_p (const_rtx x)
     case SCRATCH:
     case ADDR_VEC:
     case ADDR_DIFF_VEC:
+    case VAR_LOCATION:
       return 0;

     case CLOBBER:

@@ -4725,7 +4726,11 @@ canonicalize_condition (rtx insn, rtx cond, int reverse, rtx *earliest,
      stop if it isn't a single set or if it has a REG_INC note because
      we don't want to bother dealing with it.  */

-  if ((prev = prev_nonnote_insn (prev)) == 0
+  do
+    prev = prev_nonnote_insn (prev);
+  while (prev && DEBUG_INSN_P (prev));
+
+  if (prev == 0
       || !NONJUMP_INSN_P (prev)
       || FIND_REG_INC_NOTE (prev, NULL_RTX)
       /* In cfglayout mode, there do not have to be labels at the

103	gcc/sched-deps.c

@@ -1,7 +1,7 @@
 /* Instruction scheduling pass.  This file computes dependencies between
    instructions.
    Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998,
-   1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008
+   1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009
    Free Software Foundation, Inc.
    Contributed by Michael Tiemann (tiemann@cygnus.com) Enhanced by,
    and currently maintained by, Jim Wilson (wilson@cygnus.com)

@@ -650,7 +650,8 @@ sd_lists_size (const_rtx insn, sd_list_types_def list_types)
       bool resolved_p;

       sd_next_list (insn, &list_types, &list, &resolved_p);
-      size += DEPS_LIST_N_LINKS (list);
+      if (list)
+	size += DEPS_LIST_N_LINKS (list);
     }

   return size;

@@ -673,6 +674,9 @@ sd_init_insn (rtx insn)
   INSN_FORW_DEPS (insn) = create_deps_list ();
   INSN_RESOLVED_FORW_DEPS (insn) = create_deps_list ();

+  if (DEBUG_INSN_P (insn))
+    DEBUG_INSN_SCHED_P (insn) = TRUE;
+
   /* ??? It would be nice to allocate dependency caches here.  */
 }

@@ -682,6 +686,12 @@ sd_finish_insn (rtx insn)
 {
   /* ??? It would be nice to deallocate dependency caches here.  */

+  if (DEBUG_INSN_P (insn))
+    {
+      gcc_assert (DEBUG_INSN_SCHED_P (insn));
+      DEBUG_INSN_SCHED_P (insn) = FALSE;
+    }
+
   free_deps_list (INSN_HARD_BACK_DEPS (insn));
   INSN_HARD_BACK_DEPS (insn) = NULL;

@@ -1181,6 +1191,7 @@ sd_add_dep (dep_t dep, bool resolved_p)
   rtx insn = DEP_CON (dep);

   gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+  gcc_assert (!DEBUG_INSN_P (elem) || DEBUG_INSN_P (insn));

   if ((current_sched_info->flags & DO_SPECULATION)
       && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep)))

@@ -1462,7 +1473,7 @@ fixup_sched_groups (rtx insn)

 	  if (pro == i)
 	    goto next_link;
-	} while (SCHED_GROUP_P (i));
+	} while (SCHED_GROUP_P (i) || DEBUG_INSN_P (i));

       if (! sched_insns_conditions_mutex_p (i, pro))
 	add_dependence (i, pro, DEP_TYPE (dep));

@@ -1472,6 +1483,8 @@ fixup_sched_groups (rtx insn)
   delete_all_dependences (insn);

   prev_nonnote = prev_nonnote_insn (insn);
+  while (DEBUG_INSN_P (prev_nonnote))
+    prev_nonnote = prev_nonnote_insn (prev_nonnote);
   if (BLOCK_FOR_INSN (insn) == BLOCK_FOR_INSN (prev_nonnote)
       && ! sched_insns_conditions_mutex_p (insn, prev_nonnote))
     add_dependence (insn, prev_nonnote, REG_DEP_ANTI);

@@ -1801,8 +1814,7 @@ sched_analyze_reg (struct deps *deps, int regno, enum machine_mode mode,
 	 already cross one.  */
       if (REG_N_CALLS_CROSSED (regno) == 0)
 	{
-	  if (!deps->readonly
-	      && ref == USE)
+	  if (!deps->readonly && ref == USE && !DEBUG_INSN_P (insn))
 	    deps->sched_before_next_call
 	      = alloc_INSN_LIST (insn, deps->sched_before_next_call);
 	  else

@@ -2059,6 +2071,12 @@ sched_analyze_2 (struct deps *deps, rtx x, rtx insn)
       rtx pending, pending_mem;
       rtx t = x;

+      if (DEBUG_INSN_P (insn))
+	{
+	  sched_analyze_2 (deps, XEXP (x, 0), insn);
+	  return;
+	}
+
       if (sched_deps_info->use_cselib)
 	{
 	  t = shallow_copy_rtx (t);

@@ -2287,6 +2305,8 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
 	{
 	  rtx next;
 	  next = next_nonnote_insn (insn);
+	  while (next && DEBUG_INSN_P (next))
+	    next = next_nonnote_insn (next);
 	  if (next && BARRIER_P (next))
 	    reg_pending_barrier = MOVE_BARRIER;
 	  else

@@ -2361,9 +2381,49 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
       || (NONJUMP_INSN_P (insn) && control_flow_insn_p (insn)))
     reg_pending_barrier = MOVE_BARRIER;

   /* Add register dependencies for insn.  */
+  if (DEBUG_INSN_P (insn))
+    {
+      rtx prev = deps->last_debug_insn;
+      rtx u;
+
+      if (!deps->readonly)
+	deps->last_debug_insn = insn;
+
+      if (prev)
+	add_dependence (insn, prev, REG_DEP_ANTI);
+
+      add_dependence_list (insn, deps->last_function_call, 1,
+			   REG_DEP_ANTI);
+
+      for (u = deps->last_pending_memory_flush; u; u = XEXP (u, 1))
+	if (! JUMP_P (XEXP (u, 0))
+	    || !sel_sched_p ())
+	  add_dependence (insn, XEXP (u, 0), REG_DEP_ANTI);
+
+      EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
+	{
+	  struct deps_reg *reg_last = &deps->reg_last[i];
+	  add_dependence_list (insn, reg_last->sets, 1, REG_DEP_ANTI);
+	  add_dependence_list (insn, reg_last->clobbers, 1, REG_DEP_ANTI);
+	}
+      CLEAR_REG_SET (reg_pending_uses);
+
+      /* Quite often, a debug insn will refer to stuff in the
+	 previous instruction, but the reason we want this
+	 dependency here is to make sure the scheduler doesn't
+	 gratuitously move a debug insn ahead.  This could dirty
+	 DF flags and cause additional analysis that wouldn't have
+	 occurred in compilation without debug insns, and such
+	 additional analysis can modify the generated code.  */
+      prev = PREV_INSN (insn);
+
+      if (prev && NONDEBUG_INSN_P (prev))
+	add_dependence (insn, prev, REG_DEP_ANTI);
+    }
   /* If the current insn is conditional, we can't free any
      of the lists.  */
-  if (sched_has_condition_p (insn))
+  else if (sched_has_condition_p (insn))
     {
       EXECUTE_IF_SET_IN_REG_SET (reg_pending_uses, 0, i, rsi)
 	{

@@ -2557,7 +2617,30 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
 	  int src_regno, dest_regno;

 	  if (set == NULL)
-	    goto end_call_group;
+	    {
+	      if (DEBUG_INSN_P (insn))
+		/* We don't want to mark debug insns as part of the same
+		   sched group.  We know they really aren't, but if we use
+		   debug insns to tell that a call group is over, we'll
+		   get different code if debug insns are not there and
+		   instructions that follow seem like they should be part
+		   of the call group.
+
+		   Also, if we did, fixup_sched_groups() would move the
+		   deps of the debug insn to the call insn, modifying
+		   non-debug post-dependency counts of the debug insn
+		   dependencies and otherwise messing with the scheduling
+		   order.
+
+		   Instead, let such debug insns be scheduled freely, but
+		   keep the call group open in case there are insns that
+		   should be part of it afterwards.  Since we grant debug
+		   insns higher priority than even sched group insns, it
+		   will all turn out all right.  */
+		goto debug_dont_end_call_group;
+	      else
+		goto end_call_group;
+	    }

 	  tmp = SET_DEST (set);
 	  if (GET_CODE (tmp) == SUBREG)

@@ -2602,6 +2685,7 @@ sched_analyze_insn (struct deps *deps, rtx x, rtx insn)
 	    }
 	}

+ debug_dont_end_call_group:
   if ((current_sched_info->flags & DO_SPECULATION)
       && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
     /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot

@@ -2628,7 +2712,7 @@ deps_analyze_insn (struct deps *deps, rtx insn)
   if (sched_deps_info->start_insn)
     sched_deps_info->start_insn (insn);

-  if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
+  if (NONJUMP_INSN_P (insn) || DEBUG_INSN_P (insn) || JUMP_P (insn))
     {
       /* Make each JUMP_INSN (but not a speculative check)
 	 a scheduling barrier for memory references.  */

@@ -2758,6 +2842,8 @@ deps_start_bb (struct deps *deps, rtx head)
     {
       rtx insn = prev_nonnote_insn (head);

+      while (insn && DEBUG_INSN_P (insn))
+	insn = prev_nonnote_insn (insn);
       if (insn && CALL_P (insn))
 	deps->in_post_call_group_p = post_call_initial;
     }

@@ -2873,6 +2959,7 @@ init_deps (struct deps *deps)
   deps->last_function_call = 0;
   deps->sched_before_next_call = 0;
   deps->in_post_call_group_p = not_post_call;
+  deps->last_debug_insn = 0;
   deps->last_reg_pending_barrier = NOT_A_BARRIER;
   deps->readonly = 0;
 }

@@ -607,9 +607,9 @@ schedule_ebbs (void)
 	 a note or two.  */
       while (head != tail)
 	{
-	  if (NOTE_P (head))
+	  if (NOTE_P (head) || BOUNDARY_DEBUG_INSN_P (head))
 	    head = NEXT_INSN (head);
-	  else if (NOTE_P (tail))
+	  else if (NOTE_P (tail) || BOUNDARY_DEBUG_INSN_P (tail))
 	    tail = PREV_INSN (tail);
 	  else if (LABEL_P (head))
 	    head = NEXT_INSN (head);

@@ -181,13 +181,15 @@ extern bool sel_insn_is_speculation_check (rtx);
    FIRST is the index of the element with the highest priority; i.e. the
    last one in the ready list, since elements are ordered by ascending
    priority.
-   N_READY determines how many insns are on the ready list.  */
+   N_READY determines how many insns are on the ready list.
+   N_DEBUG determines how many debug insns are on the ready list.  */
 struct ready_list
 {
   rtx *vec;
   int veclen;
   int first;
   int n_ready;
+  int n_debug;
 };

 extern char *ready_try;

@@ -509,6 +511,9 @@ struct deps
      the call.  */
   enum post_call_group in_post_call_group_p;

+  /* The last debug insn we've seen.  */
+  rtx last_debug_insn;
+
   /* The maximum register number for the following arrays.  Before reload
      this is max_reg_num; after reload it is FIRST_PSEUDO_REGISTER.  */
   int max_reg;

@@ -800,6 +805,23 @@ extern VEC(haifa_deps_insn_data_def, heap) *h_d_i_d;
 #define IS_SPECULATION_BRANCHY_CHECK_P(INSN) \
   (RECOVERY_BLOCK (INSN) != NULL && RECOVERY_BLOCK (INSN) != EXIT_BLOCK_PTR)

+/* The unchanging bit tracks whether a debug insn is to be handled
+   like an insn (i.e., schedule it) or like a note (e.g., it is next
+   to a basic block boundary).  */
+#define DEBUG_INSN_SCHED_P(insn) \
+  (RTL_FLAG_CHECK1("DEBUG_INSN_SCHED_P", (insn), DEBUG_INSN)->unchanging)
+
+/* True if INSN is a debug insn that is next to a basic block
+   boundary, i.e., it is to be handled by the scheduler like a
+   note.  */
+#define BOUNDARY_DEBUG_INSN_P(insn) \
+  (DEBUG_INSN_P (insn) && !DEBUG_INSN_SCHED_P (insn))
+
+/* True if INSN is a debug insn that is not next to a basic block
+   boundary, i.e., it is to be handled by the scheduler like an
+   insn.  */
+#define SCHEDULE_DEBUG_INSN_P(insn) \
+  (DEBUG_INSN_P (insn) && DEBUG_INSN_SCHED_P (insn))
+
 /* Dep status (aka ds_t) of the link encapsulates information, that is needed
    for speculative scheduling.  Namely, it is 4 integers in the range
    [0, MAX_DEP_WEAK] and 3 bits.

@@ -1342,7 +1364,8 @@ sd_iterator_cond (sd_iterator_def *it_ptr, dep_t *dep_ptr)

 	  it_ptr->linkp = &DEPS_LIST_FIRST (list);

-	  return sd_iterator_cond (it_ptr, dep_ptr);
+	  if (list)
+	    return sd_iterator_cond (it_ptr, dep_ptr);
 	}

       *dep_ptr = NULL;

@@ -530,7 +530,20 @@ find_single_block_region (bool ebbs_p)
 static int
 rgn_estimate_number_of_insns (basic_block bb)
 {
-  return INSN_LUID (BB_END (bb)) - INSN_LUID (BB_HEAD (bb));
+  int count;
+
+  count = INSN_LUID (BB_END (bb)) - INSN_LUID (BB_HEAD (bb));
+
+  if (MAY_HAVE_DEBUG_INSNS)
+    {
+      rtx insn;
+
+      FOR_BB_INSNS (bb, insn)
+	if (DEBUG_INSN_P (insn))
+	  count--;
+    }
+
+  return count;
 }

 /* Update number of blocks and the estimate for number of insns

@@ -2129,7 +2142,7 @@ init_ready_list (void)
 	  src_head = head;

 	  for (insn = src_head; insn != src_next_tail; insn = NEXT_INSN (insn))
-	    if (INSN_P (insn))
+	    if (INSN_P (insn) && !BOUNDARY_DEBUG_INSN_P (insn))
 	      try_ready (insn);
 	}
     }

@@ -2438,6 +2451,9 @@ add_branch_dependences (rtx head, rtx tail)
      are not moved before reload because we can wind up with register
      allocation failures.  */

+  while (tail != head && DEBUG_INSN_P (tail))
+    tail = PREV_INSN (tail);
+
   insn = tail;
   last = 0;
   while (CALL_P (insn)

@@ -2472,7 +2488,9 @@ add_branch_dependences (rtx head, rtx tail)
       if (insn == head)
 	break;

-      insn = PREV_INSN (insn);
+      do
+	insn = PREV_INSN (insn);
+      while (insn != head && DEBUG_INSN_P (insn));
     }

   /* Make sure these insns are scheduled last in their block.  */

@@ -2482,7 +2500,8 @@ add_branch_dependences (rtx head, rtx tail)
     {
       insn = prev_nonnote_insn (insn);

-      if (TEST_BIT (insn_referenced, INSN_LUID (insn)))
+      if (TEST_BIT (insn_referenced, INSN_LUID (insn))
+	  || DEBUG_INSN_P (insn))
 	continue;

       if (! sched_insns_conditions_mutex_p (last, insn))

@@ -2719,6 +2738,9 @@ free_block_dependencies (int bb)

   get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);

+  if (no_real_insns_p (head, tail))
+    return;
+
   sched_free_deps (head, tail, true);
 }

@@ -2876,6 +2898,9 @@ compute_priorities (void)
       gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
       get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);

+      if (no_real_insns_p (head, tail))
+	continue;
+
       rgn_n_insns += set_priorities (head, tail);
     }
   current_sched_info->sched_max_insns_priority++;

@@ -556,6 +556,10 @@ print_pattern (char *buf, const_rtx x, int verbose)
       print_value (t1, XEXP (x, 0), verbose);
       sprintf (buf, "use %s", t1);
       break;
+    case VAR_LOCATION:
+      print_value (t1, PAT_VAR_LOCATION_LOC (x), verbose);
+      sprintf (buf, "loc %s", t1);
+      break;
     case COND_EXEC:
       if (GET_CODE (COND_EXEC_TEST (x)) == NE
 	  && XEXP (COND_EXEC_TEST (x), 1) == const0_rtx)

@@ -658,6 +662,34 @@ print_insn (char *buf, const_rtx x, int verbose)
 #endif
       sprintf (buf, " %4d %s", INSN_UID (x), t);
       break;

+    case DEBUG_INSN:
+      {
+	const char *name = "?";
+
+	if (DECL_P (INSN_VAR_LOCATION_DECL (insn)))
+	  {
+	    tree id = DECL_NAME (INSN_VAR_LOCATION_DECL (insn));
+	    if (id)
+	      name = IDENTIFIER_POINTER (id);
+	    else
+	      {
+		char idbuf[32];
+		sprintf (idbuf, "D.%i",
+			 DECL_UID (INSN_VAR_LOCATION_DECL (insn)));
+		name = idbuf;
+	      }
+	  }
+	if (VAR_LOC_UNKNOWN_P (INSN_VAR_LOCATION_LOC (insn)))
+	  sprintf (buf, " %4d: debug %s optimized away", INSN_UID (insn), name);
+	else
+	  {
+	    print_pattern (t, INSN_VAR_LOCATION_LOC (insn), verbose);
+	    sprintf (buf, " %4d: debug %s => %s", INSN_UID (insn), name, t);
+	  }
+      }
+      break;
+
     case JUMP_INSN:
       print_pattern (t, PATTERN (x), verbose);
 #ifdef INSN_SCHEDULING

@@ -157,6 +157,7 @@ static void sel_remove_loop_preheader (void);
 static bool insn_is_the_only_one_in_bb_p (insn_t);
 static void create_initial_data_sets (basic_block);

 static void free_av_set (basic_block);
+static void invalidate_av_set (basic_block);
 static void extend_insn_data (void);
 static void sel_init_new_insn (insn_t, int);

@@ -1044,10 +1045,10 @@ get_nop_from_pool (insn_t insn)

 /* Remove NOP from the instruction stream and return it to the pool.  */
 void
-return_nop_to_pool (insn_t nop)
+return_nop_to_pool (insn_t nop, bool full_tidying)
 {
   gcc_assert (INSN_IN_STREAM_P (nop));
-  sel_remove_insn (nop, false, true);
+  sel_remove_insn (nop, false, full_tidying);

   if (nop_pool.n == nop_pool.s)
     nop_pool.v = XRESIZEVEC (rtx, nop_pool.v,

@@ -2362,6 +2363,8 @@ setup_id_for_insn (idata_t id, insn_t insn, bool force_unique_p)
     type = SET;
   else if (type == JUMP_INSN && simplejump_p (insn))
     type = PC;
+  else if (type == DEBUG_INSN)
+    type = !force_unique_p ? USE : INSN;

   IDATA_TYPE (id) = type;
   IDATA_REG_SETS (id) = get_clear_regset_from_pool ();

@@ -3487,7 +3490,7 @@ maybe_tidy_empty_bb (basic_block bb)

   /* Keep empty bb only if this block immediately precedes EXIT and
      has incoming non-fallthrough edge.  Otherwise remove it.  */
-  if (!sel_bb_empty_p (bb)
+  if (!sel_bb_empty_p (bb)
       || (single_succ_p (bb)
 	  && single_succ (bb) == EXIT_BLOCK_PTR
 	  && (!single_pred_p (bb)

@@ -3559,6 +3562,7 @@ bool
 tidy_control_flow (basic_block xbb, bool full_tidying)
 {
   bool changed = true;
+  insn_t first, last;

   /* First check whether XBB is empty.  */
   changed = maybe_tidy_empty_bb (xbb);

@ -3575,6 +3579,20 @@ tidy_control_flow (basic_block xbb, bool full_tidying)
|
|||
tidy_fallthru_edge (EDGE_SUCC (xbb, 0));
|
||||
}
|
||||
|
||||
first = sel_bb_head (xbb);
|
||||
last = sel_bb_end (xbb);
|
||||
if (MAY_HAVE_DEBUG_INSNS)
|
||||
{
|
||||
if (first != last && DEBUG_INSN_P (first))
|
||||
do
|
||||
first = NEXT_INSN (first);
|
||||
while (first != last && (DEBUG_INSN_P (first) || NOTE_P (first)));
|
||||
|
||||
if (first != last && DEBUG_INSN_P (last))
|
||||
do
|
||||
last = PREV_INSN (last);
|
||||
while (first != last && (DEBUG_INSN_P (last) || NOTE_P (last)));
|
||||
}
|
||||
/* Check if there is an unnecessary jump in previous basic block leading
|
||||
to next basic block left after removing INSN from stream.
|
||||
If it is so, remove that jump and redirect edge to current
|
||||
|
@ -3582,9 +3600,9 @@ tidy_control_flow (basic_block xbb, bool full_tidying)
|
|||
when NOP will be deleted several instructions later with its
|
||||
basic block we will not get a jump to next instruction, which
|
||||
can be harmful. */
|
||||
if (sel_bb_head (xbb) == sel_bb_end (xbb)
|
||||
if (first == last
|
||||
&& !sel_bb_empty_p (xbb)
|
||||
&& INSN_NOP_P (sel_bb_end (xbb))
|
||||
&& INSN_NOP_P (last)
|
||||
/* Flow goes fallthru from current block to the next. */
|
||||
&& EDGE_COUNT (xbb->succs) == 1
|
||||
&& (EDGE_SUCC (xbb, 0)->flags & EDGE_FALLTHRU)
|
||||
|
@ -3624,6 +3642,21 @@ sel_remove_insn (insn_t insn, bool only_disconnect, bool full_tidying)
|
|||
|
||||
gcc_assert (INSN_IN_STREAM_P (insn));
|
||||
|
||||
if (DEBUG_INSN_P (insn) && BB_AV_SET_VALID_P (bb))
|
||||
{
|
||||
expr_t expr;
|
||||
av_set_iterator i;
|
||||
|
||||
/* When we remove a debug insn that is head of a BB, it remains
|
||||
in the AV_SET of the block, but it shouldn't. */
|
||||
FOR_EACH_EXPR_1 (expr, i, &BB_AV_SET (bb))
|
||||
if (EXPR_INSN_RTX (expr) == insn)
|
||||
{
|
||||
av_set_iter_remove (&i);
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (only_disconnect)
|
||||
{
|
||||
insn_t prev = PREV_INSN (insn);
|
||||
|
@ -3662,7 +3695,7 @@ sel_estimate_number_of_insns (basic_block bb)
|
|||
insn_t insn = NEXT_INSN (BB_HEAD (bb)), next_tail = NEXT_INSN (BB_END (bb));
|
||||
|
||||
for (; insn != next_tail; insn = NEXT_INSN (insn))
|
||||
if (INSN_P (insn))
|
||||
if (NONDEBUG_INSN_P (insn))
|
||||
res++;
|
||||
|
||||
return res;
|
||||
|
@ -5363,6 +5396,8 @@ create_insn_rtx_from_pattern (rtx pattern, rtx label)
|
|||
|
||||
if (label == NULL_RTX)
|
||||
insn_rtx = emit_insn (pattern);
|
||||
else if (DEBUG_INSN_P (label))
|
||||
insn_rtx = emit_debug_insn (pattern);
|
||||
else
|
||||
{
|
||||
insn_rtx = emit_jump_insn (pattern);
|
||||
|
@ -5398,6 +5433,10 @@ create_copy_of_insn_rtx (rtx insn_rtx)
|
|||
{
|
||||
rtx res;
|
||||
|
||||
if (DEBUG_INSN_P (insn_rtx))
|
||||
return create_insn_rtx_from_pattern (copy_rtx (PATTERN (insn_rtx)),
|
||||
insn_rtx);
|
||||
|
||||
gcc_assert (NONJUMP_INSN_P (insn_rtx));
|
||||
|
||||
res = create_insn_rtx_from_pattern (copy_rtx (PATTERN (insn_rtx)),
|
||||
|
|
|
gcc/sel-sched-ir.h

@@ -1,6 +1,6 @@
 /* Instruction scheduling pass.  This file contains definitions used
    internally in the scheduler.
-   Copyright (C) 2006, 2007, 2008 Free Software Foundation, Inc.
+   Copyright (C) 2006, 2007, 2008, 2009 Free Software Foundation, Inc.

 This file is part of GCC.

@@ -1019,6 +1019,7 @@ struct succs_info
 extern basic_block after_recovery;

 extern insn_t sel_bb_head (basic_block);
+extern insn_t sel_bb_end (basic_block);
 extern bool sel_bb_empty_p (basic_block);
 extern bool in_current_region_p (basic_block);

@@ -1079,6 +1080,27 @@ get_loop_exit_edges_unique_dests (const struct loop *loop)
   return edges;
 }

+static bool
+sel_bb_empty_or_nop_p (basic_block bb)
+{
+  insn_t first = sel_bb_head (bb), last;
+
+  if (first == NULL_RTX)
+    return true;
+
+  if (!INSN_NOP_P (first))
+    return false;
+
+  if (bb == EXIT_BLOCK_PTR)
+    return false;
+
+  last = sel_bb_end (bb);
+  if (first != last)
+    return false;
+
+  return true;
+}
+
 /* Collect all loop exits recursively, skipping empty BBs between them.
    E.g. if BB is a loop header which has several loop exits,
    traverse all of them and if any of them turns out to be another loop header

@@ -1091,7 +1113,7 @@ get_all_loop_exits (basic_block bb)

   /* If bb is empty, and we're skipping to loop exits, then
      consider bb as a possible gate to the inner loop now.  */
-  while (sel_bb_empty_p (bb)
+  while (sel_bb_empty_or_nop_p (bb)
	 && in_current_region_p (bb))
     {
       bb = single_succ (bb);

@@ -1350,7 +1372,24 @@ _eligible_successor_edge_p (edge e1, succ_iterator *ip)
   while (1)
     {
-      if (!sel_bb_empty_p (bb))
-	break;
+      {
+	edge ne;
+	basic_block nbb;
+
+	if (!sel_bb_empty_or_nop_p (bb))
+	  break;
+
+	ne = EDGE_SUCC (bb, 0);
+	nbb = ne->dest;
+
+	if (!in_current_region_p (nbb)
+	    && !(flags & SUCCS_OUT))
+	  break;
+
+	e2 = ne;
+	bb = nbb;
+	continue;
+      }

       if (!in_current_region_p (bb)
	  && !(flags & SUCCS_OUT))

@@ -1470,7 +1509,7 @@ extern void return_regset_to_pool (regset);
 extern void free_regset_pool (void);

 extern insn_t get_nop_from_pool (insn_t);
-extern void return_nop_to_pool (insn_t);
+extern void return_nop_to_pool (insn_t, bool);
 extern void free_nop_pool (void);

 /* Vinsns functions.  */
gcc/sel-sched.c
@@ -557,6 +557,7 @@ static int stat_substitutions_total;
 static bool rtx_ok_for_substitution_p (rtx, rtx);
 static int sel_rank_for_schedule (const void *, const void *);
 static av_set_t find_sequential_best_exprs (bnd_t, expr_t, bool);
+static basic_block find_block_for_bookkeeping (edge e1, edge e2, bool lax);

 static rtx get_dest_from_orig_ops (av_set_t);
 static basic_block generate_bookkeeping_insn (expr_t, edge, edge);

@@ -2059,6 +2060,56 @@ moveup_expr_inside_insn_group (expr_t expr, insn_t through_insn)
 /* True when a conflict on a target register was found during moveup_expr.  */
 static bool was_target_conflict = false;

+/* Return true when moving a debug INSN across THROUGH_INSN will
+   create a bookkeeping block.  We don't want to create such blocks,
+   for they would cause codegen differences between compilations with
+   and without debug info.  */
+
+static bool
+moving_insn_creates_bookkeeping_block_p (insn_t insn,
+					 insn_t through_insn)
+{
+  basic_block bbi, bbt;
+  edge e1, e2;
+  edge_iterator ei1, ei2;
+
+  if (!bookkeeping_can_be_created_if_moved_through_p (through_insn))
+    {
+      if (sched_verbose >= 9)
+	sel_print ("no bookkeeping required: ");
+      return FALSE;
+    }
+
+  bbi = BLOCK_FOR_INSN (insn);
+
+  if (EDGE_COUNT (bbi->preds) == 1)
+    {
+      if (sched_verbose >= 9)
+	sel_print ("only one pred edge: ");
+      return TRUE;
+    }
+
+  bbt = BLOCK_FOR_INSN (through_insn);
+
+  FOR_EACH_EDGE (e1, ei1, bbt->succs)
+    {
+      FOR_EACH_EDGE (e2, ei2, bbi->preds)
+	{
+	  if (find_block_for_bookkeeping (e1, e2, TRUE))
+	    {
+	      if (sched_verbose >= 9)
+		sel_print ("found existing block: ");
+	      return FALSE;
+	    }
+	}
+    }
+
+  if (sched_verbose >= 9)
+    sel_print ("would create bookkeeping block: ");
+
+  return TRUE;
+}
+
 /* Modifies EXPR so it can be moved through the THROUGH_INSN,
    performing necessary transformations.  Record the type of transformation
    made in PTRANS_TYPE, when it is not NULL.  When INSIDE_INSN_GROUP,

@@ -2110,7 +2161,8 @@ moveup_expr (expr_t expr, insn_t through_insn, bool inside_insn_group,
       /* And it should be mutually exclusive with through_insn, or
	 be an unconditional jump.  */
       if (! any_uncondjump_p (insn)
-	  && ! sched_insns_conditions_mutex_p (insn, through_insn))
+	  && ! sched_insns_conditions_mutex_p (insn, through_insn)
+	  && ! DEBUG_INSN_P (through_insn))
	return MOVEUP_EXPR_NULL;
     }
@@ -2131,6 +2183,12 @@ moveup_expr (expr_t expr, insn_t through_insn, bool inside_insn_group,
       else
	gcc_assert (!control_flow_insn_p (insn));

+  /* Don't move debug insns if this would require bookkeeping.  */
+  if (DEBUG_INSN_P (insn)
+      && BLOCK_FOR_INSN (through_insn) != BLOCK_FOR_INSN (insn)
+      && moving_insn_creates_bookkeeping_block_p (insn, through_insn))
+    return MOVEUP_EXPR_NULL;
+
   /* Deal with data dependencies.  */
   was_target_conflict = false;
   full_ds = has_dependence_p (expr, through_insn, &has_dep_p);

@@ -2440,7 +2498,12 @@ moveup_expr_cached (expr_t expr, insn_t insn, bool inside_insn_group)
       sel_print (" through %d: ", INSN_UID (insn));
     }

-  if (try_bitmap_cache (expr, insn, inside_insn_group, &res))
+  if (DEBUG_INSN_P (EXPR_INSN_RTX (expr))
+      && (sel_bb_head (BLOCK_FOR_INSN (EXPR_INSN_RTX (expr)))
+	  == EXPR_INSN_RTX (expr)))
+    /* Don't use cached information for debug insns that are heads of
+       basic blocks.  */;
+  else if (try_bitmap_cache (expr, insn, inside_insn_group, &res))
     /* When inside an insn group, we do not want to remove stores conflicting
        with previously issued loads.  */
     got_answer = ! inside_insn_group || res != MOVEUP_EXPR_NULL;

@@ -2852,6 +2915,9 @@ compute_av_set_inside_bb (insn_t first_insn, ilist_t p, int ws,
	  break;
	}

+      if (DEBUG_INSN_P (last_insn))
+	continue;
+
       if (end_ws > max_ws)
	{
	  /* We can reach max lookahead size at bb_header, so clean av_set

@@ -3261,6 +3327,12 @@ sel_rank_for_schedule (const void *x, const void *y)
   tmp_insn = EXPR_INSN_RTX (tmp);
   tmp2_insn = EXPR_INSN_RTX (tmp2);

+  /* Schedule debug insns as early as possible.  */
+  if (DEBUG_INSN_P (tmp_insn) && !DEBUG_INSN_P (tmp2_insn))
+    return -1;
+  else if (DEBUG_INSN_P (tmp2_insn))
+    return 1;
+
   /* Prefer SCHED_GROUP_P insns to any others.  */
   if (SCHED_GROUP_P (tmp_insn) != SCHED_GROUP_P (tmp2_insn))
     {

@@ -3332,9 +3404,6 @@ sel_rank_for_schedule (const void *x, const void *y)
       return dw;
     }

-  tmp_insn = EXPR_INSN_RTX (tmp);
-  tmp2_insn = EXPR_INSN_RTX (tmp2);
-
   /* Prefer an old insn to a bookkeeping insn.  */
   if (INSN_UID (tmp_insn) < first_emitted_uid
       && INSN_UID (tmp2_insn) >= first_emitted_uid)
@@ -4412,15 +4481,16 @@ block_valid_for_bookkeeping_p (basic_block bb)
 /* Attempt to find a block that can hold bookkeeping code for path(s) incoming
    into E2->dest, except from E1->src (there may be a sequence of empty basic
    blocks between E1->src and E2->dest).  Return found block, or NULL if new
-   one must be created.  */
+   one must be created.  If LAX holds, don't assume there is a simple path
+   from E1->src to E2->dest.  */
 static basic_block
-find_block_for_bookkeeping (edge e1, edge e2)
+find_block_for_bookkeeping (edge e1, edge e2, bool lax)
 {
   basic_block candidate_block = NULL;
   edge e;

   /* Loop over edges from E1 to E2, inclusive.  */
-  for (e = e1; ; e = EDGE_SUCC (e->dest, 0))
+  for (e = e1; !lax || e->dest != EXIT_BLOCK_PTR; e = EDGE_SUCC (e->dest, 0))
     {
       if (EDGE_COUNT (e->dest->preds) == 2)
	{

@@ -4438,10 +4508,18 @@ find_block_for_bookkeeping (edge e1, edge e2)
	  return NULL;

       if (e == e2)
-	return (block_valid_for_bookkeeping_p (candidate_block)
+	return ((!lax || candidate_block)
+		&& block_valid_for_bookkeeping_p (candidate_block)
		? candidate_block
		: NULL);
+
+      if (lax && EDGE_COUNT (e->dest->succs) != 1)
+	return NULL;
     }

+  if (lax)
+    return NULL;
+
   gcc_unreachable ();
 }

@@ -4485,6 +4563,101 @@ create_block_for_bookkeeping (edge e1, edge e2)
   gcc_assert (e1->dest == new_bb);
   gcc_assert (sel_bb_empty_p (bb));

+  /* To keep basic block numbers in sync between debug and non-debug
+     compilations, we have to rotate blocks here.  Consider that we
+     started from (a,b)->d, (c,d)->e, and d contained only debug
+     insns.  It would have been removed before if the debug insns
+     weren't there, so we'd have split e rather than d.  So what we do
+     now is to swap the block numbers of new_bb and
+     single_succ (new_bb) == e, so that the insns that were in e before
+     get the new block number.  */
+
+  if (MAY_HAVE_DEBUG_INSNS)
+    {
+      basic_block succ;
+      insn_t insn = sel_bb_head (new_bb);
+      insn_t last;
+
+      if (DEBUG_INSN_P (insn)
+	  && single_succ_p (new_bb)
+	  && (succ = single_succ (new_bb))
+	  && succ != EXIT_BLOCK_PTR
+	  && DEBUG_INSN_P ((last = sel_bb_end (new_bb))))
+	{
+	  while (insn != last && (DEBUG_INSN_P (insn) || NOTE_P (insn)))
+	    insn = NEXT_INSN (insn);
+
+	  if (insn == last)
+	    {
+	      sel_global_bb_info_def gbi;
+	      sel_region_bb_info_def rbi;
+	      int i;
+
+	      if (sched_verbose >= 2)
+		sel_print ("Swapping block ids %i and %i\n",
+			   new_bb->index, succ->index);
+
+	      i = new_bb->index;
+	      new_bb->index = succ->index;
+	      succ->index = i;
+
+	      SET_BASIC_BLOCK (new_bb->index, new_bb);
+	      SET_BASIC_BLOCK (succ->index, succ);
+
+	      memcpy (&gbi, SEL_GLOBAL_BB_INFO (new_bb), sizeof (gbi));
+	      memcpy (SEL_GLOBAL_BB_INFO (new_bb), SEL_GLOBAL_BB_INFO (succ),
+		      sizeof (gbi));
+	      memcpy (SEL_GLOBAL_BB_INFO (succ), &gbi, sizeof (gbi));
+
+	      memcpy (&rbi, SEL_REGION_BB_INFO (new_bb), sizeof (rbi));
+	      memcpy (SEL_REGION_BB_INFO (new_bb), SEL_REGION_BB_INFO (succ),
+		      sizeof (rbi));
+	      memcpy (SEL_REGION_BB_INFO (succ), &rbi, sizeof (rbi));
+
+	      i = BLOCK_TO_BB (new_bb->index);
+	      BLOCK_TO_BB (new_bb->index) = BLOCK_TO_BB (succ->index);
+	      BLOCK_TO_BB (succ->index) = i;
+
+	      i = CONTAINING_RGN (new_bb->index);
+	      CONTAINING_RGN (new_bb->index) = CONTAINING_RGN (succ->index);
+	      CONTAINING_RGN (succ->index) = i;
+
+	      for (i = 0; i < current_nr_blocks; i++)
+		if (BB_TO_BLOCK (i) == succ->index)
+		  BB_TO_BLOCK (i) = new_bb->index;
+		else if (BB_TO_BLOCK (i) == new_bb->index)
+		  BB_TO_BLOCK (i) = succ->index;
+
+	      FOR_BB_INSNS (new_bb, insn)
+		if (INSN_P (insn))
+		  EXPR_ORIG_BB_INDEX (INSN_EXPR (insn)) = new_bb->index;
+
+	      FOR_BB_INSNS (succ, insn)
+		if (INSN_P (insn))
+		  EXPR_ORIG_BB_INDEX (INSN_EXPR (insn)) = succ->index;
+
+	      if (bitmap_bit_p (code_motion_visited_blocks, new_bb->index))
+		{
+		  bitmap_set_bit (code_motion_visited_blocks, succ->index);
+		  bitmap_clear_bit (code_motion_visited_blocks, new_bb->index);
+		}
+
+	      gcc_assert (LABEL_P (BB_HEAD (new_bb))
+			  && LABEL_P (BB_HEAD (succ)));
+
+	      if (sched_verbose >= 4)
+		sel_print ("Swapping code labels %i and %i\n",
+			   CODE_LABEL_NUMBER (BB_HEAD (new_bb)),
+			   CODE_LABEL_NUMBER (BB_HEAD (succ)));
+
+	      i = CODE_LABEL_NUMBER (BB_HEAD (new_bb));
+	      CODE_LABEL_NUMBER (BB_HEAD (new_bb))
+		= CODE_LABEL_NUMBER (BB_HEAD (succ));
+	      CODE_LABEL_NUMBER (BB_HEAD (succ)) = i;
+	    }
+	}
+    }
+
   return bb;
 }
@@ -4496,12 +4669,42 @@ find_place_for_bookkeeping (edge e1, edge e2)
   insn_t place_to_insert;
   /* Find a basic block that can hold bookkeeping.  If it can be found, do not
      create new basic block, but insert bookkeeping there.  */
-  basic_block book_block = find_block_for_bookkeeping (e1, e2);
+  basic_block book_block = find_block_for_bookkeeping (e1, e2, FALSE);
+
+  if (book_block)
+    {
+      place_to_insert = BB_END (book_block);
+
+      /* Don't use a block containing only debug insns for
+	 bookkeeping, this causes scheduling differences between debug
+	 and non-debug compilations, for the block would have been
+	 removed already.  */
+      if (DEBUG_INSN_P (place_to_insert))
+	{
+	  rtx insn = sel_bb_head (book_block);
+
+	  while (insn != place_to_insert &&
+		 (DEBUG_INSN_P (insn) || NOTE_P (insn)))
+	    insn = NEXT_INSN (insn);
+
+	  if (insn == place_to_insert)
+	    book_block = NULL;
+	}
+    }

   if (!book_block)
-    book_block = create_block_for_bookkeeping (e1, e2);
-
-  place_to_insert = BB_END (book_block);
+    {
+      book_block = create_block_for_bookkeeping (e1, e2);
+      place_to_insert = BB_END (book_block);
+      if (sched_verbose >= 9)
+	sel_print ("New block is %i, split from bookkeeping block %i\n",
+		   EDGE_SUCC (book_block, 0)->dest->index, book_block->index);
+    }
+  else
+    {
+      if (sched_verbose >= 9)
+	sel_print ("Pre-existing bookkeeping block is %i\n", book_block->index);
+    }

   /* If basic block ends with a jump, insert bookkeeping code right before it.  */
   if (INSN_P (place_to_insert) && control_flow_insn_p (place_to_insert))

@@ -4587,6 +4790,8 @@ generate_bookkeeping_insn (expr_t c_expr, edge e1, edge e2)

   join_point = sel_bb_head (e2->dest);
   place_to_insert = find_place_for_bookkeeping (e1, e2);
+  if (!place_to_insert)
+    return NULL;
   new_seqno = find_seqno_for_bookkeeping (place_to_insert, join_point);
   need_to_exchange_data_sets
     = sel_bb_empty_p (BLOCK_FOR_INSN (place_to_insert));

@@ -4748,7 +4953,7 @@ move_cond_jump (rtx insn, bnd_t bnd)
 /* Remove nops generated during move_op for preventing removal of empty
    basic blocks.  */
 static void
-remove_temp_moveop_nops (void)
+remove_temp_moveop_nops (bool full_tidying)
 {
   int i;
   insn_t insn;

@@ -4756,7 +4961,7 @@ remove_temp_moveop_nops (bool full_tidying)
   for (i = 0; VEC_iterate (insn_t, vec_temp_moveop_nops, i, insn); i++)
     {
       gcc_assert (INSN_NOP_P (insn));
-      return_nop_to_pool (insn);
+      return_nop_to_pool (insn, full_tidying);
     }

   /* Empty the vector.  */

@@ -4949,8 +5154,20 @@ prepare_place_to_insert (bnd_t bnd)
     {
       /* Add it after last scheduled.  */
       place_to_insert = ILIST_INSN (BND_PTR (bnd));
+      if (DEBUG_INSN_P (place_to_insert))
+	{
+	  ilist_t l = BND_PTR (bnd);
+	  while ((l = ILIST_NEXT (l)) &&
+		 DEBUG_INSN_P (ILIST_INSN (l)))
+	    ;
+	  if (!l)
+	    place_to_insert = NULL;
+	}
     }
   else
+    place_to_insert = NULL;
+
+  if (!place_to_insert)
     {
       /* Add it before BND_TO.  The difference is in the
	 basic block, where INSN will be added.  */
@@ -5058,7 +5275,8 @@ advance_state_on_fence (fence_t fence, insn_t insn)

   if (sched_verbose >= 2)
     debug_state (FENCE_STATE (fence));
-  FENCE_STARTS_CYCLE_P (fence) = 0;
+  if (!DEBUG_INSN_P (insn))
+    FENCE_STARTS_CYCLE_P (fence) = 0;
   return asm_p;
 }

@@ -5117,10 +5335,11 @@ update_fence_and_insn (fence_t fence, insn_t insn, int need_stall)
     }
 }

-/* Update boundary BND with INSN, remove the old boundary from
-   BNDSP, add new boundaries to BNDS_TAIL_P and return it.  */
+/* Update boundary BND (and, if needed, FENCE) with INSN, remove the
+   old boundary from BNDSP, add new boundaries to BNDS_TAIL_P and
+   return it.  */
 static blist_t *
-update_boundaries (bnd_t bnd, insn_t insn, blist_t *bndsp,
+update_boundaries (fence_t fence, bnd_t bnd, insn_t insn, blist_t *bndsp,
		   blist_t *bnds_tailp)
 {
   succ_iterator si;

@@ -5133,6 +5352,21 @@ update_boundaries (bnd_t bnd, insn_t insn, blist_t *bndsp,
       ilist_t ptr = ilist_copy (BND_PTR (bnd));

       ilist_add (&ptr, insn);
+
+      if (DEBUG_INSN_P (insn) && sel_bb_end_p (insn)
+	  && is_ineligible_successor (succ, ptr))
+	{
+	  ilist_clear (&ptr);
+	  continue;
+	}
+
+      if (FENCE_INSN (fence) == insn && !sel_bb_end_p (insn))
+	{
+	  if (sched_verbose >= 9)
+	    sel_print ("Updating fence insn from %i to %i\n",
+		       INSN_UID (insn), INSN_UID (succ));
+	  FENCE_INSN (fence) = succ;
+	}
       blist_add (bnds_tailp, succ, ptr, BND_DC (bnd));
       bnds_tailp = &BLIST_NEXT (*bnds_tailp);
     }

@@ -5192,8 +5426,8 @@ schedule_expr_on_boundary (bnd_t bnd, expr_t expr_vliw, int seqno)
   /* Return the nops generated for preserving of data sets back
      into pool.  */
   if (INSN_NOP_P (place_to_insert))
-    return_nop_to_pool (place_to_insert);
-  remove_temp_moveop_nops ();
+    return_nop_to_pool (place_to_insert, !DEBUG_INSN_P (insn));
+  remove_temp_moveop_nops (!DEBUG_INSN_P (insn));

   av_set_clear (&expr_seq);

@@ -5251,7 +5485,9 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp)
   int was_stall = 0, scheduled_insns = 0, stall_iterations = 0;
   int max_insns = pipelining_p ? issue_rate : 2 * issue_rate;
   int max_stall = pipelining_p ? 1 : 3;
+  bool last_insn_was_debug = false;
+  bool was_debug_bb_end_p = false;

   compute_av_set_on_boundaries (fence, bnds, &av_vliw);
   remove_insns_that_need_bookkeeping (fence, &av_vliw);
   remove_insns_for_debug (bnds, &av_vliw);

@@ -5309,8 +5545,11 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp)
	}

       insn = schedule_expr_on_boundary (bnd, expr_vliw, seqno);
+      last_insn_was_debug = DEBUG_INSN_P (insn);
+      if (last_insn_was_debug)
+	was_debug_bb_end_p = (insn == BND_TO (bnd) && sel_bb_end_p (insn));
       update_fence_and_insn (fence, insn, need_stall);
-      bnds_tailp = update_boundaries (bnd, insn, bndsp, bnds_tailp);
+      bnds_tailp = update_boundaries (fence, bnd, insn, bndsp, bnds_tailp);

       /* Add insn to the list of scheduled on this cycle instructions.  */
       ilist_add (*scheduled_insns_tailpp, insn);

@@ -5319,13 +5558,14 @@ fill_insns (fence_t fence, int seqno, ilist_t **scheduled_insns_tailpp)
   while (*bndsp != *bnds_tailp1);

   av_set_clear (&av_vliw);
-  scheduled_insns++;
+  if (!last_insn_was_debug)
+    scheduled_insns++;

   /* We currently support information about candidate blocks only for
      one 'target_bb' block.  Hence we can't schedule after jump insn,
      as this will bring two boundaries and, hence, necessity to handle
      information for two or more blocks concurrently.  */
-  if (sel_bb_end_p (insn)
+  if ((last_insn_was_debug ? was_debug_bb_end_p : sel_bb_end_p (insn))
       || (was_stall
	   && (was_stall >= max_stall
	       || scheduled_insns >= max_insns)))
@@ -5544,7 +5784,7 @@ track_scheduled_insns_and_blocks (rtx insn)
	 instruction out of it.  */
       if (INSN_SCHED_TIMES (insn) > 0)
	bitmap_set_bit (blocks_to_reschedule, BLOCK_FOR_INSN (insn)->index);
-      else if (INSN_UID (insn) < first_emitted_uid)
+      else if (INSN_UID (insn) < first_emitted_uid && !DEBUG_INSN_P (insn))
	num_insns_scheduled++;
     }
   else

@@ -5636,32 +5876,63 @@ handle_emitting_transformations (rtx insn, expr_t expr,
   return insn_emitted;
 }

+/* If INSN is the only insn in the basic block (not counting JUMP,
+   which may be a jump to next insn, and DEBUG_INSNs), we want to
+   leave a NOP there till the return to fill_insns.  */
+
+static bool
+need_nop_to_preserve_insn_bb (rtx insn)
+{
+  insn_t bb_head, bb_end, bb_next, in_next;
+  basic_block bb = BLOCK_FOR_INSN (insn);
+
+  bb_head = sel_bb_head (bb);
+  bb_end = sel_bb_end (bb);
+
+  if (bb_head == bb_end)
+    return true;
+
+  while (bb_head != bb_end && DEBUG_INSN_P (bb_head))
+    bb_head = NEXT_INSN (bb_head);
+
+  if (bb_head == bb_end)
+    return true;
+
+  while (bb_head != bb_end && DEBUG_INSN_P (bb_end))
+    bb_end = PREV_INSN (bb_end);
+
+  if (bb_head == bb_end)
+    return true;
+
+  bb_next = NEXT_INSN (bb_head);
+  while (bb_next != bb_end && DEBUG_INSN_P (bb_next))
+    bb_next = NEXT_INSN (bb_next);
+
+  if (bb_next == bb_end && JUMP_P (bb_end))
+    return true;
+
+  in_next = NEXT_INSN (insn);
+  while (DEBUG_INSN_P (in_next))
+    in_next = NEXT_INSN (in_next);
+
+  if (IN_CURRENT_FENCE_P (in_next))
+    return true;
+
+  return false;
+}
+
 /* Remove INSN from stream.  When ONLY_DISCONNECT is true, its data
    is not removed but reused when INSN is re-emitted.  */
 static void
 remove_insn_from_stream (rtx insn, bool only_disconnect)
 {
-  insn_t nop, bb_head, bb_end;
-  bool need_nop_to_preserve_bb;
-  basic_block bb = BLOCK_FOR_INSN (insn);
-
-  /* If INSN is the only insn in the basic block (not counting JUMP,
-     which may be a jump to next insn), leave NOP there till the
-     return to fill_insns.  */
-  bb_head = sel_bb_head (bb);
-  bb_end = sel_bb_end (bb);
-  need_nop_to_preserve_bb = ((bb_head == bb_end)
-			     || (NEXT_INSN (bb_head) == bb_end
-				 && JUMP_P (bb_end))
-			     || IN_CURRENT_FENCE_P (NEXT_INSN (insn)));
-
   /* If there's only one insn in the BB, make sure that a nop is
      inserted into it, so the basic block won't disappear when we'll
      delete INSN below with sel_remove_insn.  It should also survive
      till the return to fill_insns.  */
-  if (need_nop_to_preserve_bb)
+  if (need_nop_to_preserve_insn_bb (insn))
     {
-      nop = get_nop_from_pool (insn);
+      insn_t nop = get_nop_from_pool (insn);
       gcc_assert (INSN_NOP_P (nop));
       VEC_safe_push (insn_t, heap, vec_temp_moveop_nops, nop);
     }

@@ -5925,6 +6196,8 @@ fur_orig_expr_not_found (insn_t insn, av_set_t orig_ops, void *static_params)

   if (CALL_P (insn))
     sparams->crosses_call = true;
+  else if (DEBUG_INSN_P (insn))
+    return true;

   /* If current insn we are looking at cannot be executed together
      with original insn, then we can skip it safely.
gcc/simplify-rtx.c

@@ -202,6 +202,106 @@ avoid_constant_pool_reference (rtx x)
   return x;
 }

+/* Simplify a MEM based on its attributes.  This is the default
+   delegitimize_address target hook, and it's recommended that every
+   overrider call it.  */
+
+rtx
+delegitimize_mem_from_attrs (rtx x)
+{
+  if (MEM_P (x)
+      && MEM_EXPR (x)
+      && (!MEM_OFFSET (x)
+	  || GET_CODE (MEM_OFFSET (x)) == CONST_INT))
+    {
+      tree decl = MEM_EXPR (x);
+      enum machine_mode mode = GET_MODE (x);
+      HOST_WIDE_INT offset = 0;
+
+      switch (TREE_CODE (decl))
+	{
+	default:
+	  decl = NULL;
+	  break;
+
+	case VAR_DECL:
+	  break;
+
+	case ARRAY_REF:
+	case ARRAY_RANGE_REF:
+	case COMPONENT_REF:
+	case BIT_FIELD_REF:
+	case REALPART_EXPR:
+	case IMAGPART_EXPR:
+	case VIEW_CONVERT_EXPR:
+	  {
+	    HOST_WIDE_INT bitsize, bitpos;
+	    tree toffset;
+	    int unsignedp = 0, volatilep = 0;
+
+	    decl = get_inner_reference (decl, &bitsize, &bitpos, &toffset,
+					&mode, &unsignedp, &volatilep, false);
+	    if (bitsize != GET_MODE_BITSIZE (mode)
+		|| (bitpos % BITS_PER_UNIT)
+		|| (toffset && !host_integerp (toffset, 0)))
+	      decl = NULL;
+	    else
+	      {
+		offset += bitpos / BITS_PER_UNIT;
+		if (toffset)
+		  offset += TREE_INT_CST_LOW (toffset);
+	      }
+	    break;
+	  }
+	}
+
+      if (decl
+	  && mode == GET_MODE (x)
+	  && TREE_CODE (decl) == VAR_DECL
+	  && (TREE_STATIC (decl)
+	      || DECL_THREAD_LOCAL_P (decl))
+	  && DECL_RTL_SET_P (decl)
+	  && MEM_P (DECL_RTL (decl)))
+	{
+	  rtx newx;
+
+	  if (MEM_OFFSET (x))
+	    offset += INTVAL (MEM_OFFSET (x));
+
+	  newx = DECL_RTL (decl);
+
+	  if (MEM_P (newx))
+	    {
+	      rtx n = XEXP (newx, 0), o = XEXP (x, 0);
+
+	      /* Avoid creating a new MEM needlessly if we already had
+		 the same address.  We do if there's no OFFSET and the
+		 old address X is identical to NEWX, or if X is of the
+		 form (plus NEWX OFFSET), or the NEWX is of the form
+		 (plus Y (const_int Z)) and X is that with the offset
+		 added: (plus Y (const_int Z+OFFSET)).  */
+	      if (!((offset == 0
+		     || (GET_CODE (o) == PLUS
+			 && GET_CODE (XEXP (o, 1)) == CONST_INT
+			 && (offset == INTVAL (XEXP (o, 1))
+			     || (GET_CODE (n) == PLUS
+				 && GET_CODE (XEXP (n, 1)) == CONST_INT
+				 && (INTVAL (XEXP (n, 1)) + offset
+				     == INTVAL (XEXP (o, 1)))
+				 && (n = XEXP (n, 0))))
+			 && (o = XEXP (o, 0))))
+		    && rtx_equal_p (o, n)))
+		x = adjust_address_nv (newx, mode, offset);
+	    }
+	  else if (GET_MODE (x) == GET_MODE (newx)
+		   && offset == 0)
+	    x = newx;
+	}
+    }
+
+  return x;
+}
+
 /* Make a unary operation by first seeing if it folds and otherwise making
    the specified operation.  */
gcc/target-def.h

@@ -488,7 +488,7 @@
 #define TARGET_CANNOT_COPY_INSN_P NULL
 #define TARGET_COMMUTATIVE_P hook_bool_const_rtx_commutative_p
 #define TARGET_LEGITIMIZE_ADDRESS default_legitimize_address
-#define TARGET_DELEGITIMIZE_ADDRESS hook_rtx_rtx_identity
+#define TARGET_DELEGITIMIZE_ADDRESS delegitimize_mem_from_attrs
 #define TARGET_LEGITIMATE_ADDRESS_P default_legitimate_address_p
 #define TARGET_USE_BLOCKS_FOR_CONSTANT_P hook_bool_mode_const_rtx_false
 #define TARGET_MIN_ANCHOR_OFFSET 0
gcc/testsuite/gcc.dg/guality/example.c (new file)
@@ -0,0 +1,138 @@
/* { dg-do run } */
/* { dg-options "-g" } */

#define GUALITY_DONT_FORCE_LIVE_AFTER -1

#ifndef STATIC_INLINE
#define STATIC_INLINE /*static*/
#endif

#include "guality.h"

#include <assert.h>

/* Test the debug info for the functions used in the VTA
   presentation at the GCC Summit 2008.  */

typedef struct list {
  struct list *n;
  int v;
} elt, *node;

STATIC_INLINE node
find_val (node c, int v, node e)
{
  while (c < e)
    {
      GUALCHK (c);
      GUALCHK (v);
      GUALCHK (e);
      if (c->v == v)
	return c;
      GUALCHK (c);
      GUALCHK (v);
      GUALCHK (e);
      c++;
    }
  return NULL;
}

STATIC_INLINE node
find_prev (node c, node w)
{
  while (c)
    {
      node o = c;
      c = c->n;
      GUALCHK (c);
      GUALCHK (o);
      GUALCHK (w);
      if (c == w)
	return o;
      GUALCHK (c);
      GUALCHK (o);
      GUALCHK (w);
    }
  return NULL;
}

STATIC_INLINE node
check_arr (node c, node e)
{
  if (c == e)
    return NULL;
  e--;
  while (c < e)
    {
      GUALCHK (c);
      GUALCHK (e);
      if (c->v > (c+1)->v)
	return c;
      GUALCHK (c);
      GUALCHK (e);
      c++;
    }
  return NULL;
}

STATIC_INLINE node
check_list (node c, node t)
{
  while (c != t)
    {
      node n = c->n;
      GUALCHK (c);
      GUALCHK (n);
      GUALCHK (t);
      if (c->v > n->v)
	return c;
      GUALCHK (c);
      GUALCHK (n);
      GUALCHK (t);
      c = n;
    }
  return NULL;
}

struct list testme[] = {
  { &testme[1], 2 },
  { &testme[2], 3 },
  { &testme[3], 5 },
  { &testme[4], 7 },
  { &testme[5], 11 },
  { NULL, 13 },
};

int
main (int argc, char *argv[])
{
  int n = sizeof (testme) / sizeof (*testme);
  node first, last, begin, end, ret;

  GUALCHKXPR (n);

  begin = first = &testme[0];
  last = &testme[n-1];
  end = &testme[n];

  GUALCHKXPR (first);
  GUALCHKXPR (last);
  GUALCHKXPR (begin);
  GUALCHKXPR (end);

  ret = find_val (begin, 13, end);
  GUALCHK (ret);
  assert (ret == last);

  ret = find_prev (first, last);
  GUALCHK (ret);
  assert (ret == &testme[n-2]);

  ret = check_arr (begin, end);
  GUALCHK (ret);
  assert (!ret);

  ret = check_list (first, last);
  GUALCHK (ret);
  assert (!ret);
}
gcc/testsuite/gcc.dg/guality/guality.c | 28 (new file)
@@ -0,0 +1,28 @@
/* { dg-do run } */
/* { dg-options "-g" } */

#include "guality.h"

/* Some silly sanity checking.  */

int
main (int argc, char *argv[])
{
  int i = argc+1;
  int j = argc-2;
  int k = 5;

  GUALCHKXPR (argc);
  GUALCHKXPR (i);
  GUALCHKXPR (j);
  GUALCHKXPR (k);
  GUALCHKXPR (&i);
  GUALCHKFLA (argc);
  GUALCHKFLA (i);
  GUALCHKFLA (j);
  GUALCHKXPR (i);
  GUALCHKXPR (j);
  GUALCHKXPRVAL ("k", 5, 1);
  GUALCHKXPRVAL ("0x40", 64, 0);
  /* GUALCHKXPRVAL ("0", 0, 0); *//* XFAIL */
}
gcc/testsuite/gcc.dg/guality/guality.exp | 7 (new file)
@@ -0,0 +1,7 @@
# This harness is for tests that should be run at all optimisation levels.

load_lib gcc-dg.exp

dg-init
gcc-dg-runtest [lsort [glob $srcdir/$subdir/*.c]] ""
dg-finish
gcc/testsuite/gcc.dg/guality/guality.h | 330 (new file)
@@ -0,0 +1,330 @@
/* Infrastructure to test the quality of debug information.
   Copyright (C) 2008, 2009 Free Software Foundation, Inc.
   Contributed by Alexandre Oliva <aoliva@redhat.com>.

This file is part of GCC.

GCC is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3, or (at your option)
any later version.

GCC is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3.  If not see
<http://www.gnu.org/licenses/>.  */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* This is a first cut at checking that debug information matches
   run-time.  The idea is to annotate programs with GUALCHK* macros
   that guide the tests.

   In the current implementation, all of the macros expand to function
   calls.  On the one hand, this interferes with optimizations; on the
   other hand, it establishes an optimization barrier and a clear
   inspection point, where previous operations (as in the abstract
   machine) should have been completed and have their effects visible,
   and future operations shouldn't have started yet.

   In the current implementation of guality_check(), we fork a child
   process that runs gdb, attaches to the parent process (the one that
   called guality_check), moves up one stack frame (to the caller of
   guality_check) and then examines the given expression.

   If it matches the expected value, we have a PASS.  If it differs,
   we have a FAILure.  If it is missing, we'll have a FAIL or an
   UNRESOLVED depending on whether the variable or expression might be
   unavailable at that point, as indicated by the third argument.

   We envision a future alternate implementation with two compilation
   and execution cycles, say one that runs the program and uses the
   macros to log expressions and expected values, another in which the
   macros expand to nothing and the logs are used to guide a debug
   session that tests the values.  How to identify the inspection
   points in the second case is yet to be determined.  It is
   recommended that GUALCHK* macros be by themselves in source lines,
   so that __FILE__ and __LINE__ will be usable to identify them.
*/

/* Attach a debugger to the current process and verify that the string
   EXPR, evaluated by the debugger, yields the long long number VAL.
   If the debugger cannot compute the expression, say because the
   variable is unavailable, this will count as an error, unless unkok
   is nonzero.  */

#define GUALCHKXPRVAL(expr, val, unkok) \
  guality_check ((expr), (val), (unkok))

/* Check that a debugger knows that EXPR evaluates to the run-time
   value of EXPR.  Unknown values are marked as acceptable,
   considering that EXPR may die right after this call.  This will
   affect the generated code in that EXPR will be evaluated and forced
   to remain live at least until right before the call to
   guality_check, although not necessarily after the call.  */

#define GUALCHKXPR(expr) \
  GUALCHKXPRVAL (#expr, (long long)(expr), 1)

/* Same as GUALCHKXPR, but issue an error if the variable is optimized
   away.  */

#define GUALCHKVAL(expr) \
  GUALCHKXPRVAL (#expr, (long long)(expr), 0)

/* Check that a debugger knows that EXPR evaluates to the run-time
   value of EXPR.  Unknown values are marked as errors, because the
   value of EXPR is forced to be available right after the call, for a
   range of at least one instruction.  This will affect the generated
   code, in that EXPR *will* be evaluated before and preserved until
   after the call to guality_check.  */

#define GUALCHKFLA(expr) do {					\
    __typeof(expr) volatile __preserve_after;			\
    __typeof(expr) __preserve_before = (expr);			\
    GUALCHKXPRVAL (#expr, (long long)(__preserve_before), 0);	\
    __preserve_after = __preserve_before;			\
    asm ("" : : "m" (__preserve_after));			\
  } while (0)

/* GUALCHK is the simplest way to assert that debug information for an
   expression matches its run-time value.  Whether to force the
   expression live after the call, so as to flag incompleteness
   errors, can be disabled by defining GUALITY_DONT_FORCE_LIVE_AFTER.
   Setting it to -1, an error is issued for optimized out variables,
   even though they are not forced live.  */

#if ! GUALITY_DONT_FORCE_LIVE_AFTER
#define GUALCHK(var) GUALCHKFLA(var)
#elif GUALITY_DONT_FORCE_LIVE_AFTER < 0
#define GUALCHK(var) GUALCHKVAL(var)
#else
#define GUALCHK(var) GUALCHKXPR(var)
#endif

/* The name of the GDB program, with arguments to make it quiet.  This
   is GUALITY_GDB_DEFAULT GUALITY_GDB_ARGS by default, but it can be
   overridden by setting the GUALITY_GDB environment variable, whereas
   GUALITY_GDB_DEFAULT can be overridden by setting the
   GUALITY_GDB_NAME environment variable.  */

static const char *guality_gdb_command;
#define GUALITY_GDB_DEFAULT "gdb"
#define GUALITY_GDB_ARGS " -nx -nw --quiet > /dev/null 2>&1"

/* Kinds of results communicated as exit status from child process
   that runs gdb to the parent process that's being monitored.  */

enum guality_counter { PASS, INCORRECT, INCOMPLETE };

/* Count of passes and errors.  */

static int guality_count[INCOMPLETE+1];

/* If --guality-skip is given in the command line, all the monitoring,
   forking and debugger-attaching action will be disabled.  This is
   useful to run the monitor program within a debugger.  */

static int guality_skip;

/* This is a file descriptor to which we'll issue gdb commands to
   probe and test.  */
FILE *guality_gdb_input;

/* This holds the line number where we're supposed to set a
   breakpoint.  */
int guality_breakpoint_line;

/* GDB should set this to true once it's connected.  */
int volatile guality_attached;

/* This function is the main guality program.  It may actually be
   defined as main, because we #define main to it afterwards.  Because
   of this wrapping, guality_main may not have an empty argument
   list.  */

extern int guality_main (int argc, char *argv[]);

static void __attribute__((noinline))
guality_check (const char *name, long long value, int unknown_ok);

/* Set things up, run guality_main, then print a summary and quit.  */

int
main (int argc, char *argv[])
{
  int i;
  char *argv0 = argv[0];

  guality_gdb_command = getenv ("GUALITY_GDB");
  if (!guality_gdb_command)
    {
      guality_gdb_command = getenv ("GUALITY_GDB_NAME");
      if (!guality_gdb_command)
	guality_gdb_command = GUALITY_GDB_DEFAULT GUALITY_GDB_ARGS;
      else
	{
	  int len = strlen (guality_gdb_command) + sizeof (GUALITY_GDB_ARGS);
	  char *buf = __builtin_alloca (len);
	  strcpy (buf, guality_gdb_command);
	  strcat (buf, GUALITY_GDB_ARGS);
	  guality_gdb_command = buf;
	}
    }

  for (i = 1; i < argc; i++)
    if (strcmp (argv[i], "--guality-skip") == 0)
      guality_skip = 1;
    else
      break;

  if (!guality_skip)
    {
      guality_gdb_input = popen (guality_gdb_command, "w");
      /* This call sets guality_breakpoint_line.  */
      guality_check (NULL, 0, 0);
      if (!guality_gdb_input
	  || fprintf (guality_gdb_input, "\
set height 0\n\
attach %i\n\
set guality_attached = 1\n\
b %i\n\
continue\n\
", (int)getpid (), guality_breakpoint_line) <= 0
	  || fflush (guality_gdb_input))
	{
	  perror ("gdb");
	  abort ();
	}
    }

  argv[--i] = argv0;

  guality_main (argc - i, argv + i);

  i = guality_count[INCORRECT];

  fprintf (stderr, "%s: %i PASS, %i FAIL, %i UNRESOLVED\n",
	   i ? "FAIL" : "PASS",
	   guality_count[PASS], guality_count[INCORRECT],
	   guality_count[INCOMPLETE]);

  return i;
}

#define main guality_main

/* Tell the GDB child process to evaluate NAME in the caller.  If it
   matches VALUE, we have a PASS; if it's unknown and UNKNOWN_OK, we
   have an UNRESOLVED.  Otherwise, it's a FAIL.  */

static void __attribute__((noinline))
guality_check (const char *name, long long value, int unknown_ok)
{
  int result;

  if (guality_skip)
    return;

  {
    volatile long long xvalue = -1;
    volatile int unavailable = 0;
    if (name)
      {
	/* The sequence below cannot distinguish an optimized away
	   variable from one mapped to a non-lvalue zero.  */
	if (fprintf (guality_gdb_input, "\
up\n\
set $value1 = 0\n\
set $value1 = (%s)\n\
set $value2 = -1\n\
set $value2 = (%s)\n\
set $value3 = $value1 - 1\n\
set $value4 = $value1 + 1\n\
set $value3 = (%s)++\n\
set $value4 = --(%s)\n\
down\n\
set xvalue = $value1\n\
set unavailable = $value1 != $value2 ? -1 : $value3 != $value4 ? 1 : 0\n\
continue\n\
", name, name, name, name) <= 0
	    || fflush (guality_gdb_input))
	  {
	    perror ("gdb");
	    abort ();
	  }
	else if (!guality_attached)
	  {
	    unsigned int timeout = 0;

	    /* Give GDB some more time to attach.  Wrapping around a
	       32-bit counter takes some seconds, it should be plenty
	       of time for GDB to get a chance to start up and attach,
	       but not long enough that, if GDB is unavailable or
	       broken, we'll take far too long to give up.  */
	    while (--timeout && !guality_attached)
	      ;
	    if (!timeout && !guality_attached)
	      {
		fprintf (stderr, "gdb: took too long to attach\n");
		abort ();
	      }
	  }
      }
    else
      {
	guality_breakpoint_line = __LINE__ + 5;
	return;
      }
    /* Do NOT add lines between the __LINE__ above and the line below,
       without also adjusting the added constant to match.  */
    if (!unavailable || (unavailable > 0 && xvalue))
      {
	if (xvalue == value)
	  result = PASS;
	else
	  result = INCORRECT;
      }
    else
      result = INCOMPLETE;
    asm ("" : : "X" (name), "X" (value), "X" (unknown_ok), "m" (xvalue));
    switch (result)
      {
      case PASS:
	fprintf (stderr, "PASS: %s is %lli\n", name, value);
	break;
      case INCORRECT:
	fprintf (stderr, "FAIL: %s is %lli, not %lli\n", name, xvalue, value);
	break;
      case INCOMPLETE:
	fprintf (stderr, "%s: %s is %s, expected %lli\n",
		 unknown_ok ? "UNRESOLVED" : "FAIL", name,
		 unavailable < 0 ? "not computable" : "optimized away", value);
	result = unknown_ok ? INCOMPLETE : INCORRECT;
	break;
      default:
	abort ();
      }
  }

  switch (result)
    {
    case PASS:
    case INCORRECT:
    case INCOMPLETE:
      ++guality_count[result];
      break;

    default:
      abort ();
    }
}
@@ -449,11 +449,15 @@ proc cleanup-dump { suffix } {
     # The name might include a list of options; extract the file name.
     set src [file tail [lindex $testcase 0]]
     remove-build-file "[file tail $src].$suffix"
+    # -fcompare-debug dumps
+    remove-build-file "[file tail $src].gk.$suffix"

     # Clean up dump files for additional source files.
     if [info exists additional_sources] {
	foreach srcfile $additional_sources {
	    remove-build-file "[file tail $srcfile].$suffix"
+	    # -fcompare-debug dumps
+	    remove-build-file "[file tail $srcfile].gk.$suffix"
	}
    }
}
@@ -468,7 +472,7 @@ proc cleanup-saved-temps { args } {
     set suffixes {}

     # add the to-be-kept suffixes
-    foreach suffix {".ii" ".i" ".s" ".o"} {
+    foreach suffix {".ii" ".i" ".s" ".o" ".gkd"} {
	if {[lsearch $args $suffix] < 0} {
	    lappend suffixes $suffix
	}
@@ -480,6 +484,8 @@ proc cleanup-saved-temps { args } {
     upvar 2 name testcase
     foreach suffix $suffixes {
	remove-build-file "[file rootname [file tail $testcase]]$suffix"
+	# -fcompare-debug dumps
+	remove-build-file "[file rootname [file tail $testcase]].gk$suffix"
    }

    # Clean up saved temp files for additional source files.
@@ -487,6 +493,8 @@ proc cleanup-saved-temps { args } {
	foreach srcfile $additional_sources {
	    foreach suffix $suffixes {
		remove-build-file "[file rootname [file tail $srcfile]]$suffix"
+		# -fcompare-debug dumps
+		remove-build-file "[file rootname [file tail $srcfile]].gk$suffix"
	    }
	}
    }
gcc/toplev.c | 29
@@ -319,11 +319,23 @@ int flag_dump_rtl_in_asm = 0;
    the support provided depends on the backend.  */
 rtx stack_limit_rtx;

-/* Nonzero if we should track variables.  When
-   flag_var_tracking == AUTODETECT_VALUE it will be set according
-   to optimize, debug_info_level and debug_hooks in process_options ().  */
+/* Positive if we should track variables, negative if we should run
+   the var-tracking pass only to discard debug annotations, zero if
+   we're not to run it.  When flag_var_tracking == AUTODETECT_VALUE it
+   will be set according to optimize, debug_info_level and debug_hooks
+   in process_options ().  */
 int flag_var_tracking = AUTODETECT_VALUE;

+/* Positive if we should track variables at assignments, negative if
+   we should run the var-tracking pass only to discard debug
+   annotations.  When flag_var_tracking_assignments ==
+   AUTODETECT_VALUE it will be set according to flag_var_tracking.  */
+int flag_var_tracking_assignments = AUTODETECT_VALUE;
+
+/* Nonzero if we should toggle flag_var_tracking_assignments after
+   processing options and computing its default.  */
+int flag_var_tracking_assignments_toggle = 0;
+
 /* Type of stack check.  */
 enum stack_check_type flag_stack_check = NO_STACK_CHECK;
@@ -1876,7 +1888,7 @@ process_options (void)
	debug_info_level = DINFO_LEVEL_NONE;
    }

-  if (flag_dump_final_insns)
+  if (flag_dump_final_insns && !flag_syntax_only && !no_backend)
    {
      FILE *final_output = fopen (flag_dump_final_insns, "w");
      if (!final_output)
@@ -1977,6 +1989,15 @@ process_options (void)
   if (flag_var_tracking == AUTODETECT_VALUE)
     flag_var_tracking = optimize >= 1;

+  if (flag_var_tracking_assignments == AUTODETECT_VALUE)
+    flag_var_tracking_assignments = 0;
+
+  if (flag_var_tracking_assignments_toggle)
+    flag_var_tracking_assignments = !flag_var_tracking_assignments;
+
+  if (flag_var_tracking_assignments && !flag_var_tracking)
+    flag_var_tracking = flag_var_tracking_assignments = -1;
+
   if (flag_tree_cselim == AUTODETECT_VALUE)
 #ifdef HAVE_conditional_move
     flag_tree_cselim = 1;
gcc/tree-cfg.c | 113
@@ -1395,6 +1395,49 @@ gimple_can_merge_blocks_p (basic_block a, basic_block b)
   return true;
 }

+/* Return true if the var whose chain of uses starts at PTR has no
+   nondebug uses.  */
+bool
+has_zero_uses_1 (const ssa_use_operand_t *head)
+{
+  const ssa_use_operand_t *ptr;
+
+  for (ptr = head->next; ptr != head; ptr = ptr->next)
+    if (!is_gimple_debug (USE_STMT (ptr)))
+      return false;
+
+  return true;
+}
+
+/* Return true if the var whose chain of uses starts at PTR has a
+   single nondebug use.  Set USE_P and STMT to that single nondebug
+   use, if so, or to NULL otherwise.  */
+bool
+single_imm_use_1 (const ssa_use_operand_t *head,
+		  use_operand_p *use_p, gimple *stmt)
+{
+  ssa_use_operand_t *ptr, *single_use = 0;
+
+  for (ptr = head->next; ptr != head; ptr = ptr->next)
+    if (!is_gimple_debug (USE_STMT (ptr)))
+      {
+	if (single_use)
+	  {
+	    single_use = NULL;
+	    break;
+	  }
+	single_use = ptr;
+      }
+
+  if (use_p)
+    *use_p = single_use;
+
+  if (stmt)
+    *stmt = single_use ? single_use->loc.stmt : NULL;
+
+  return !!single_use;
+}
+
 /* Replaces all uses of NAME by VAL.  */

 void
@@ -2263,7 +2306,11 @@ remove_bb (basic_block bb)
   /* Remove all the instructions in the block.  */
   if (bb_seq (bb) != NULL)
     {
-      for (i = gsi_start_bb (bb); !gsi_end_p (i);)
+      /* Walk backwards so as to get a chance to substitute all
+	 released DEFs into debug stmts.  See
+	 eliminate_unnecessary_stmts() in tree-ssa-dce.c for more
+	 details.  */
+      for (i = gsi_last_bb (bb); !gsi_end_p (i);)
	{
	  gimple stmt = gsi_stmt (i);
	  if (gimple_code (stmt) == GIMPLE_LABEL
@@ -2299,13 +2346,17 @@ remove_bb (basic_block bb)
	      gsi_remove (&i, true);
	    }

+	  if (gsi_end_p (i))
+	    i = gsi_last_bb (bb);
+	  else
+	    gsi_prev (&i);
+
	  /* Don't warn for removed gotos.  Gotos are often removed due to
	     jump threading, thus resulting in bogus warnings.  Not great,
	     since this way we lose warnings for gotos in the original
	     program that are indeed unreachable.  */
	  if (gimple_code (stmt) != GIMPLE_GOTO
-	      && gimple_has_location (stmt)
-	      && !loc)
+	      && gimple_has_location (stmt))
	    loc = gimple_location (stmt);
	}
    }
@@ -2807,7 +2858,14 @@ gimple
 first_stmt (basic_block bb)
 {
   gimple_stmt_iterator i = gsi_start_bb (bb);
-  return !gsi_end_p (i) ? gsi_stmt (i) : NULL;
+  gimple stmt = NULL;
+
+  while (!gsi_end_p (i) && is_gimple_debug ((stmt = gsi_stmt (i))))
+    {
+      gsi_next (&i);
+      stmt = NULL;
+    }
+  return stmt;
 }

 /* Return the first non-label statement in basic block BB.  */
@@ -2826,8 +2884,15 @@ first_non_label_stmt (basic_block bb)
 gimple
 last_stmt (basic_block bb)
 {
-  gimple_stmt_iterator b = gsi_last_bb (bb);
-  return !gsi_end_p (b) ? gsi_stmt (b) : NULL;
+  gimple_stmt_iterator i = gsi_last_bb (bb);
+  gimple stmt = NULL;
+
+  while (!gsi_end_p (i) && is_gimple_debug ((stmt = gsi_stmt (i))))
+    {
+      gsi_prev (&i);
+      stmt = NULL;
+    }
+  return stmt;
 }

 /* Return the last statement of an otherwise empty block.  Return NULL
@@ -2837,14 +2902,14 @@ last_stmt (bb)
 gimple
 last_and_only_stmt (basic_block bb)
 {
-  gimple_stmt_iterator i = gsi_last_bb (bb);
+  gimple_stmt_iterator i = gsi_last_nondebug_bb (bb);
   gimple last, prev;

   if (gsi_end_p (i))
     return NULL;

   last = gsi_stmt (i);
-  gsi_prev (&i);
+  gsi_prev_nondebug (&i);
   if (gsi_end_p (i))
     return last;
@@ -4109,6 +4174,22 @@ verify_gimple_phi (gimple stmt)
 }


+/* Verify a gimple debug statement STMT.
+   Returns true if anything is wrong.  */
+
+static bool
+verify_gimple_debug (gimple stmt ATTRIBUTE_UNUSED)
+{
+  /* There isn't much that could be wrong in a gimple debug stmt.  A
+     gimple debug bind stmt, for example, maps a tree, that's usually
+     a VAR_DECL or a PARM_DECL, but that could also be some scalarized
+     component or member of an aggregate type, to another tree, that
+     can be an arbitrary expression.  These stmts expand into debug
+     insns, and are converted to debug notes by var-tracking.c.  */
+  return false;
+}
+
+
 /* Verify the GIMPLE statement STMT.  Returns true if there is an
    error, otherwise false.  */
@@ -4163,6 +4244,9 @@ verify_types_in_gimple_stmt (gimple stmt)
     case GIMPLE_PREDICT:
       return false;

+    case GIMPLE_DEBUG:
+      return verify_gimple_debug (stmt);
+
     default:
       gcc_unreachable ();
     }
@@ -4269,6 +4353,9 @@ verify_stmt (gimple_stmt_iterator *gsi)
	}
    }

+  if (is_gimple_debug (stmt))
+    return false;
+
  memset (&wi, 0, sizeof (wi));
  addr = walk_gimple_op (gsi_stmt (*gsi), verify_expr, &wi);
  if (addr)
@@ -6618,7 +6705,7 @@ debug_loop_num (unsigned num, int verbosity)
 static bool
 gimple_block_ends_with_call_p (basic_block bb)
 {
-  gimple_stmt_iterator gsi = gsi_last_bb (bb);
+  gimple_stmt_iterator gsi = gsi_last_nondebug_bb (bb);
   return is_gimple_call (gsi_stmt (gsi));
 }

@@ -6924,8 +7011,12 @@ remove_edge_and_dominated_blocks (edge e)
     remove_edge (e);
   else
     {
-      for (i = 0; VEC_iterate (basic_block, bbs_to_remove, i, bb); i++)
-	delete_basic_block (bb);
+      /* Walk backwards so as to get a chance to substitute all
+	 released DEFs into debug stmts.  See
+	 eliminate_unnecessary_stmts() in tree-ssa-dce.c for more
+	 details.  */
+      for (i = VEC_length (basic_block, bbs_to_remove); i-- > 0; )
+	delete_basic_block (VEC_index (basic_block, bbs_to_remove, i));
     }

   /* Update the dominance information.  The immediate dominator may change only
@@ -252,6 +252,11 @@ tree_forwarder_block_p (basic_block bb, bool phi_wanted)
	  return false;
	break;

+	/* ??? For now, hope there's a corresponding debug
+	   assignment at the destination.  */
+      case GIMPLE_DEBUG:
+	break;
+
      default:
	return false;
      }
@@ -415,9 +420,10 @@ remove_forwarder_block (basic_block bb)
      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); )
	{
	  label = gsi_stmt (gsi);
-	  gcc_assert (gimple_code (label) == GIMPLE_LABEL);
+	  gcc_assert (gimple_code (label) == GIMPLE_LABEL
+		      || is_gimple_debug (label));
	  gsi_remove (&gsi, false);
-	  gsi_insert_before (&gsi_to, label, GSI_CONTINUE_LINKING);
+	  gsi_insert_before (&gsi_to, label, GSI_SAME_STMT);
	}
    }
|
Some files were not shown because too many files have changed in this diff Show more
Loading…
Add table
Reference in a new issue