Pedro Alves f6ac5f3d63 Convert struct target_ops to C++
I.e., use C++ virtual methods and inheritance instead of tables of
function pointers.

Unfortunately, there's no way to do a smooth transition.  ALL native
targets in the tree must be converted at the same time.  I've tested
all I could with cross compilers and with help from GCC compile farm,
but naturally I haven't been able to test many of the ports.  Still, I
made a best effort to port everything over, and while I expect some
build problems due to typos and such, which should be trivial to fix,
I don't expect any design problems.

* Implementation notes:

- The flattened current_target is gone.  References to current_target
  or current_target.beneath are replaced with references to
  target_stack (the top of the stack) directly.

- To keep "set debug target" working, this adds a new debug_stratum
  layer that sits on top of the stack, prints the debug, and delegates
  to the target beneath.

  In addition, this makes the shortname and longname properties of
  target_ops be virtual methods instead of data fields, and makes the
  debug target defer those to the target beneath.  This is so that
  debug code sprinkled around that does "if (targetdebug) ..." can
  transparently print the name of the target beneath.
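  A minimal sketch of that delegation, with illustrative names rather
  than the actual target.h declarations:

```cpp
#include <cassert>
#include <string>

// Sketch of the name delegation; names are simplified, not the
// exact GDB declarations.
struct target_ops
{
  target_ops *beneath = nullptr;

  // shortname/longname are now virtual methods instead of data fields.
  virtual const char *shortname () { return "dummy"; }
  virtual const char *longname () { return "dummy target"; }
  virtual ~target_ops () = default;
};

struct remote_target : public target_ops
{
  const char *shortname () override { return "remote"; }
  const char *longname () override { return "Remote serial target"; }
};

// The debug target defers both queries to the target beneath, so
// "set debug target" printouts name the real target.
struct debug_target : public target_ops
{
  const char *shortname () override { return beneath->shortname (); }
  const char *longname () override { return beneath->longname (); }
};
```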

  A patch later in the series actually splits out the
  shortname/longname methods to a separate structure, but I preferred
  to keep that change separate, as it is associated with slightly
  changing the design of how targets are registered and opened.

- Since you can't check whether a C++ virtual method is overridden,
  the old method of checking whether a target_ops implements a method
  by comparing the function pointer must be replaced with something
  else.

  Some cases are fixed by adding parallel "can_do_foo" target_ops
  methods.  E.g.:

    +  for (t = target_stack; t != NULL; t = t->beneath)
	 {
    -      if (t->to_create_inferior != NULL)
    +      if (t->can_create_inferior ())
	    break;
	 }

  Others are fixed by changing the void return type to a bool or int
  return type, and having the default implementation return false or
  -1 to indicate lack of support.
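  A compilable sketch of both idioms combined, the stack walk from the
  hunk above plus a default implementation returning false (class and
  function names are illustrative, not the real GDB ones):

```cpp
#include <cassert>

// Sketch of the "can_do_foo" query pattern; illustrative names.
struct target_ops
{
  target_ops *beneath = nullptr;

  // Default implementation signals lack of support, replacing the
  // old "is the function pointer non-NULL?" check.
  virtual bool can_create_inferior () { return false; }
  virtual ~target_ops () = default;
};

struct exec_target : public target_ops
{
  // Inherits the default "false": this target can't run programs.
};

struct native_target : public target_ops
{
  bool can_create_inferior () override { return true; }
};

// Walk the target stack for the first target able to create an
// inferior, as in the diff hunk above.
target_ops *
find_run_target (target_ops *stack)
{
  for (target_ops *t = stack; t != nullptr; t = t->beneath)
    if (t->can_create_inferior ())
      return t;
  return nullptr;
}
```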

- make-target-delegates was adjusted to generate C++ classes and
  methods.

  It needed tweaks to grok "virtual" in front of the target method
  name, and for the fact that methods are no longer function pointers.
  (In particular, the previous code parsing the return type was simple
  because it could just parse up until the '(' in '(*to_foo)'.)

  It now generates a couple of C++ classes that inherit from
  target_ops: dummy_target and debug_target.

  Since we need to generate the class declarations as well, i.e., we
  need to emit methods twice, we now generate the code in two passes.

- The core_target global is renamed to avoid conflict with the
  "core_target" class.

- ctf/tfile targets

  init_tracefile_ops is replaced by a base class that is inherited by
  both ctf and tfile.
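  Roughly, the shape of that refactoring (illustrative names and
  methods, not the actual tracefile.h interface):

```cpp
#include <cassert>

// Sketch: a shared base class replaces init_tracefile_ops.
// Names are modeled on tracefile.h but simplified.
struct target_ops
{
  virtual bool has_memory () { return false; }
  virtual ~target_ops () = default;
};

// Behavior common to both trace-file readers lives in the base...
struct tracefile_target : public target_ops
{
  bool has_memory () override { return true; }
};

// ...while ctf and tfile add only their format-specific methods.
struct ctf_target : public tracefile_target {};
struct tfile_target : public tracefile_target {};
```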

- bsd-uthread

  The bsd_uthread_ops_hack hack is gone.  It's not needed because
  nothing was extending a target created by bsd_uthread_target.

- remote/extended-remote targets

  This is a first pass, just enough to C++ify target_ops.

  A later pass will convert more free functions to methods, and make
  remote_state be truly per remote instance, allowing multiple
  simultaneous instances of remote targets.

- inf-child/"native" is converted to an actual base class
  (inf_child_target), that is inherited by all native targets.

- GNU/Linux

  The old, weird double-target linux_ops mechanism in linux-nat.c is
  gone, replaced by a few new virtual methods in linux-nat.h's
  target_ops, called low_XXX, that the concrete linux-nat
  implementations override.  This is somewhat like gdbserver's
  linux_target_ops, but simpler, since it requires only one
  target_ops-like hierarchy, which spares implementing the same method
  twice when we need to forward the method to a low implementation.
  In that case, the low target simply reimplements the target_ops
  method directly.
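  The low_XXX forwarding can be sketched like so (hypothetical method
  names; the real hooks live in linux-nat.h):

```cpp
#include <cassert>
#include <string>

// Sketch of the low_XXX scheme; method names are hypothetical,
// loosely modeled on linux-nat.h.
struct target_ops
{
  virtual std::string read_description () = 0;
  virtual ~target_ops () = default;
};

struct linux_nat_target : public target_ops
{
  // The target_ops method is implemented once, here...
  std::string read_description () override
  { return low_read_description (); }

  // ...and forwards to a low_ hook with a generic default.
  virtual std::string low_read_description ()
  { return "generic linux"; }
};

// A concrete port overrides only the low_ hook.
struct x86_linux_nat_target : public linux_nat_target
{
  std::string low_read_description () override
  { return "i386:x86-64"; }
};
```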

  There are a few remaining linux-nat.c hooks that would be better
  converted to low_ methods like above too.  E.g.:

   linux_nat_set_new_thread (t, x86_linux_new_thread);
   linux_nat_set_new_fork (t, x86_linux_new_fork);
   linux_nat_set_forget_process

  That'll be done in a follow up patch.

- We can no longer use functions like x86_use_watchpoints to install
  custom methods on an arbitrary base target.

  The patch replaces instances of such a pattern with template mixins.
  For example memory_breakpoint_target defined in target.h, or
  x86_nat_target in x86-nat.h.
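  A minimal sketch of such a template mixin (simplified; the real
  x86_nat_target in x86-nat.h overrides the full set of watchpoint
  methods):

```cpp
#include <cassert>

// Sketch of a template mixin; simplified relative to the real
// x86_nat_target in x86-nat.h.
struct target_ops
{
  virtual bool can_use_hw_breakpoint () { return false; }
  virtual ~target_ops () = default;
};

struct inf_ptrace_target : public target_ops
{
};

// The mixin layers x86 debug-register support over any base target,
// replacing the old x86_use_watchpoints installer function.
template<typename BaseTarget>
struct x86_nat_target : public BaseTarget
{
  bool can_use_hw_breakpoint () override { return true; }
};

// A final native target composes the layers at compile time.
struct x86_linux_nat_target : public x86_nat_target<inf_ptrace_target>
{
};
```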

- linux_trad_target, MIPS and Alpha GNU/Linux

  The code in the new linux-nat-trad.h/c files, which was recently
  split off of inf-ptrace.h/c, is converted to a C++ base class used
  by the MIPS and Alpha GNU/Linux ports.

- BSD targets

  The

    $architecture x NetBSD/OpenBSD/FreeBSD

  support matrix complicates things a bit.  There's common BSD target
  code, and there's common architecture-specific code shared between
  the different BSDs.  Currently, all that is stitched together to form
  a final target, via the i386bsd_target, x86bsd_target,
  fbsd_nat_add_target functions etc.

  This introduces new fbsd_nat_target, obsd_nat_target and
  nbsd_nat_target classes that serve as base/prototype target for the
  corresponding BSD variant.

  It also introduces generic i386/AMD64 BSD targets, to be used as
  template mixins to build a final target.  Similarly, a generic SPARC
  target is added, used by both the BSD and Linux ports.

- bsd_kvm_add_target, BSD libkvm target

  I considered making bsd_kvm_supply_pcb a virtual method, and then
  having each port inherit bsd_kvm_target and override that method,
  but that resulted in lots of unjustified churn, so I left the
  function-pointer mechanism alone.

gdb/ChangeLog:
2018-05-02  Pedro Alves  <palves@redhat.com>
	    John Baldwin  <jhb@freebsd.org>

	* target.h (enum strata) <debug_stratum>: New.
	(struct target_ops) <all delegation methods>: Replace by C++
	virtual methods, and drop "to_" prefix.  All references updated
	throughout.
	<to_shortname, to_longname, to_doc, to_data,
	to_have_steppable_watchpoint, to_have_continuable_watchpoint,
	to_has_thread_control, to_attach_no_wait>: Delete, replaced by
	virtual methods.  All references updated throughout.
	<can_attach, supports_terminal_ours, can_create_inferior,
	get_thread_control_capabilities, attach_no_wait>: New
	virtual methods.
	<insert_breakpoint, remove_breakpoint>: Now
	TARGET_DEFAULT_NORETURN methods.
	<info_proc>: Now returns bool.
	<to_magic>: Delete.
	(OPS_MAGIC): Delete.
	(current_target): Delete.  All references replaced by references
	to ...
	(target_stack): ... this.  New.
	(target_shortname, target_longname): Adjust.
	(target_can_run): Now a function declaration.
	(default_child_has_all_memory, default_child_has_memory)
	(default_child_has_stack, default_child_has_registers)
	(default_child_has_execution): Remove target_ops parameter.
	(complete_target_initialization): Delete.
	(memory_breakpoint_target): New template class.
	(test_target_ops): Refactor as a C++ class with virtual methods.
	* make-target-delegates (NAME_PART): Tighten.
	(POINTER_PART, CP_SYMBOL): New.
	(SIMPLE_RETURN_PART): Reimplement.
	(VEC_RETURN_PART): Expect less.
	(RETURN_PART, VIRTUAL_PART): New.
	(METHOD): Adjust to C++ virtual methods.
	(scan_target_h): Remove reference to C99.
	(dname): Output "target_ops::" prefix.
	(write_function_header): Adjust to output a C++ class method.
	(write_declaration): New.
	(write_delegator): Adjust to output a C++ class method.
	(tdname): Output "dummy_target::" prefix.
	(write_tdefault, write_debugmethod): Adjust to output a C++ class
	method.
	(tdefault_names, debug_names): Delete.
	(return_types, tdefaults, styles, argtypes_array): New.
	(top level): All methods are delegators.
	(print_class): New.
	(top level): Print dummy_target and debug_target classes.
	* target-delegates.c: Regenerate.
	* target-debug.h (target_debug_print_enum_info_proc_what)
	(target_debug_print_thread_control_capabilities)
	(target_debug_print_thread_info_p): New.
	* target.c (dummy_target): Delete.
	(the_dummy_target, the_debug_target): New.
	(target_stack): Now extern.
	(set_targetdebug): Push/unpush debug target.
	(default_child_has_all_memory, default_child_has_memory)
	(default_child_has_stack, default_child_has_registers)
	(default_child_has_execution): Remove target_ops parameter.
	(complete_target_initialization): Delete.
	(add_target_with_completer): No longer call
	complete_target_initialization.
	(target_supports_terminal_ours): Use regular delegation.
	(update_current_target): Delete.
	(push_target): No longer check magic number.  Don't call
	update_current_target.
	(unpush_target): Don't call update_current_target.
	(target_is_pushed): No longer check magic number.
	(target_require_runnable): Skip for all stratums over
	process_stratum.
	(target_ops::info_proc): New.
	(target_info_proc): Use find_target_at and
	find_default_run_target.
	(target_supports_disable_randomization): Use regular delegation.
	(target_get_osdata): Use find_target_at.
	(target_ops::open, target_ops::close, target_ops::can_attach)
	(target_ops::attach, target_ops::can_create_inferior)
	(target_ops::create_inferior, target_ops::can_run)
	(target_can_run): New.
	(default_fileio_target): Use regular delegation.
	(target_ops::fileio_open, target_ops::fileio_pwrite)
	(target_ops::fileio_pread, target_ops::fileio_fstat)
	(target_ops::fileio_close, target_ops::fileio_unlink)
	(target_ops::fileio_readlink): New.
	(target_fileio_open_1, target_fileio_unlink)
	(target_fileio_readlink): Always call the target method.  Handle
	FILEIO_ENOSYS.
	(return_zero, return_zero_has_execution): Delete.
	(init_dummy_target): Delete.
	(dummy_target::dummy_target, dummy_target::shortname)
	(dummy_target::longname, dummy_target::doc)
	(debug_target::debug_target, debug_target::shortname)
	(debug_target::longname, debug_target::doc): New.
	(target_supports_delete_record): Use regular delegation.
	(setup_target_debug): Delete.
	(maintenance_print_target_stack): Skip debug_stratum.
	(initialize_targets): Instantiate the_dummy_target and
	the_debug_target.
	* auxv.c (target_auxv_parse): Remove 'ops' parameter.  Adjust to
	use target_stack.
	(target_auxv_search, fprint_target_auxv): Adjust.
	(info_auxv_command): Adjust to use target_stack.
	* auxv.h (target_auxv_parse): Remove 'ops' parameter.
	* exceptions.c (print_flush): Handle a NULL target_stack.
	* regcache.c (target_ops_no_register): Refactor as class with
	virtual methods.

	* exec.c (exec_target): New class.
	(exec_ops): Now an exec_target.
	(exec_open, exec_close_1, exec_get_section_table)
	(exec_xfer_partial, exec_files_info, exec_has_memory)
	(exec_make_note_section): Refactor as exec_target methods.
	(exec_file_clear, ignore, exec_remove_breakpoint, init_exec_ops):
	Delete.
	(exec_target::find_memory_regions): New.
	(_initialize_exec): Don't call init_exec_ops.
	* gdbcore.h (exec_file_clear): Delete.

	* corefile.c (core_target): Delete.
	(core_file_command): Adjust.
	* corelow.c (core_target): New class.
	(the_core_target): New.
	(core_close): Remove target_ops parameter.
	(core_close_cleanup): Adjust.
	(core_target::close): New.
	(core_open, core_detach, get_core_registers, core_files_info)
	(core_xfer_partial, core_thread_alive, core_read_description)
	(core_pid_to_str, core_thread_name, core_has_memory)
	(core_has_stack, core_has_registers, core_info_proc): Rework as
	core_target methods.
	(ignore, core_remove_breakpoint, init_core_ops): Delete.
	(_initialize_corelow): Initialize the_core_target.
	* gdbcore.h (core_target): Delete.
	(the_core_target): New.

	* ctf.c (ctf_target): New class.
	(ctf_ops): Now a ctf_target.
	(ctf_open, ctf_close, ctf_files_info, ctf_fetch_registers)
	(ctf_xfer_partial, ctf_get_trace_state_variable_value)
	(ctf_trace_find, ctf_traceframe_info): Refactor as ctf_target
	methods.
	(init_ctf_ops): Delete.
	(_initialize_ctf): Don't call it.
	* tracefile-tfile.c (tfile_target): New class.
	(tfile_ops): Now a tfile_target.
	(tfile_open, tfile_close, tfile_files_info)
	(tfile_get_tracepoint_status, tfile_trace_find)
	(tfile_fetch_registers, tfile_xfer_partial)
	(tfile_get_trace_state_variable_value, tfile_traceframe_info):
	Refactor as tfile_target methods.
	(tfile_xfer_partial_features): Remove target_ops parameter.
	(init_tfile_ops): Delete.
	(_initialize_tracefile_tfile): Don't call it.
	* tracefile.c (tracefile_has_all_memory, tracefile_has_memory)
	(tracefile_has_stack, tracefile_has_registers)
	(tracefile_thread_alive, tracefile_get_trace_status): Refactor as
	tracefile_target methods.
	(init_tracefile_ops): Delete.
	(tracefile_target::tracefile_target): New.
	* tracefile.h: Include "target.h".
	(tracefile_target): New class.
	(init_tracefile_ops): Delete.

	* spu-multiarch.c (spu_multiarch_target): New class.
	(spu_ops): Now a spu_multiarch_target.
	(spu_thread_architecture, spu_region_ok_for_hw_watchpoint)
	(spu_fetch_registers, spu_store_registers, spu_xfer_partial)
	(spu_search_memory, spu_mourn_inferior): Refactor as
	spu_multiarch_target methods.
	(init_spu_ops): Delete.
	(_initialize_spu_multiarch): Remove references to init_spu_ops,
	complete_target_initialization.

	* ravenscar-thread.c (ravenscar_thread_target): New class.
	(ravenscar_ops): Now a ravenscar_thread_target.
	(ravenscar_resume, ravenscar_wait, ravenscar_update_thread_list)
	(ravenscar_thread_alive, ravenscar_pid_to_str)
	(ravenscar_fetch_registers, ravenscar_store_registers)
	(ravenscar_prepare_to_store, ravenscar_stopped_by_sw_breakpoint)
	(ravenscar_stopped_by_hw_breakpoint)
	(ravenscar_stopped_by_watchpoint, ravenscar_stopped_data_address)
	(ravenscar_mourn_inferior, ravenscar_core_of_thread)
	(ravenscar_get_ada_task_ptid): Refactor as ravenscar_thread_target
	methods.
	(init_ravenscar_thread_ops): Delete.
	(_initialize_ravenscar): Remove references to
	init_ravenscar_thread_ops and complete_target_initialization.

	* bsd-uthread.c (bsd_uthread_ops_hack): Delete.
	(bsd_uthread_target): New class.
	(bsd_uthread_ops): Now a bsd_uthread_target.
	(bsd_uthread_activate): Adjust to refer to bsd_uthread_ops.
	(bsd_uthread_close, bsd_uthread_mourn_inferior)
	(bsd_uthread_fetch_registers, bsd_uthread_store_registers)
	(bsd_uthread_wait, bsd_uthread_resume, bsd_uthread_thread_alive)
	(bsd_uthread_update_thread_list, bsd_uthread_extra_thread_info)
	(bsd_uthread_pid_to_str): Refactor as bsd_uthread_target methods.
	(bsd_uthread_target): Delete function.
	(_initialize_bsd_uthread): Remove reference to
	complete_target_initialization.

	* bfd-target.c (target_bfd_data): Delete.  Fields folded into ...
	(target_bfd): ... this new class.
	(target_bfd_xfer_partial, target_bfd_get_section_table)
	(target_bfd_close): Refactor as target_bfd methods.
	(target_bfd::~target_bfd): New.
	(target_bfd_reopen): Adjust.
	(target_bfd::close): New.

	* record-btrace.c (record_btrace_target): New class.
	(record_btrace_ops): Now a record_btrace_target.
	(record_btrace_open, record_btrace_stop_recording)
	(record_btrace_disconnect, record_btrace_close)
	(record_btrace_async, record_btrace_info)
	(record_btrace_insn_history, record_btrace_insn_history_range)
	(record_btrace_insn_history_from, record_btrace_call_history)
	(record_btrace_call_history_range)
	(record_btrace_call_history_from, record_btrace_record_method)
	(record_btrace_is_replaying, record_btrace_will_replay)
	(record_btrace_xfer_partial, record_btrace_insert_breakpoint)
	(record_btrace_remove_breakpoint, record_btrace_fetch_registers)
	(record_btrace_store_registers, record_btrace_prepare_to_store)
	(record_btrace_to_get_unwinder)
	(record_btrace_to_get_tailcall_unwinder, record_btrace_resume)
	(record_btrace_commit_resume, record_btrace_wait)
	(record_btrace_stop, record_btrace_can_execute_reverse)
	(record_btrace_stopped_by_sw_breakpoint)
	(record_btrace_supports_stopped_by_sw_breakpoint)
	(record_btrace_stopped_by_hw_breakpoint)
	(record_btrace_supports_stopped_by_hw_breakpoint)
	(record_btrace_update_thread_list, record_btrace_thread_alive)
	(record_btrace_goto_begin, record_btrace_goto_end)
	(record_btrace_goto, record_btrace_stop_replaying_all)
	(record_btrace_execution_direction)
	(record_btrace_prepare_to_generate_core)
	(record_btrace_done_generating_core): Refactor as
	record_btrace_target methods.
	(init_record_btrace_ops): Delete.
	(_initialize_record_btrace): Remove reference to
	init_record_btrace_ops.
	* record-full.c (RECORD_FULL_IS_REPLAY): Adjust to always refer to
	the execution_direction global.
	(record_full_base_target, record_full_target)
	(record_full_core_target): New classes.
	(record_full_ops): Now a record_full_target.
	(record_full_core_ops): Now a record_full_core_target.
	(record_full_target::detach, record_full_target::disconnect)
	(record_full_core_target::disconnect)
	(record_full_target::mourn_inferior, record_full_target::kill):
	New.
	(record_full_open, record_full_close, record_full_async): Refactor
	as methods of the record_full_base_target class.
	(record_full_resume, record_full_commit_resume): Refactor
	as methods of the record_full_target class.
	(record_full_wait, record_full_stopped_by_watchpoint)
	(record_full_stopped_data_address)
	(record_full_stopped_by_sw_breakpoint)
	(record_full_supports_stopped_by_sw_breakpoint)
	(record_full_stopped_by_hw_breakpoint)
	(record_full_supports_stopped_by_hw_breakpoint): Refactor as
	methods of the record_full_base_target class.
	(record_full_store_registers, record_full_xfer_partial)
	(record_full_insert_breakpoint, record_full_remove_breakpoint):
	Refactor as methods of the record_full_target class.
	(record_full_can_execute_reverse, record_full_get_bookmark)
	(record_full_goto_bookmark, record_full_execution_direction)
	(record_full_record_method, record_full_info, record_full_delete)
	(record_full_is_replaying, record_full_will_replay)
	(record_full_goto_begin, record_full_goto_end, record_full_goto)
	(record_full_stop_replaying): Refactor as methods of the
	record_full_base_target class.
	(record_full_core_resume, record_full_core_kill)
	(record_full_core_fetch_registers)
	(record_full_core_prepare_to_store)
	(record_full_core_store_registers, record_full_core_xfer_partial)
	(record_full_core_insert_breakpoint)
	(record_full_core_remove_breakpoint)
	(record_full_core_has_execution): Refactor
	as methods of the record_full_core_target class.
	(record_full_base_target::supports_delete_record): New.
	(init_record_full_ops): Delete.
	(init_record_full_core_ops): Delete.
	(record_full_save): Refactor as method of the
	record_full_base_target class.
	(_initialize_record_full): Remove references to
	init_record_full_ops and init_record_full_core_ops.

	* remote.c (remote_target, extended_remote_target): New classes.
	(remote_ops): Now a remote_target.
	(extended_remote_ops): Now an extended_remote_target.
	(remote_insert_fork_catchpoint, remote_remove_fork_catchpoint)
	(remote_insert_vfork_catchpoint, remote_remove_vfork_catchpoint)
	(remote_insert_exec_catchpoint, remote_remove_exec_catchpoint)
	(remote_pass_signals, remote_set_syscall_catchpoint)
	(remote_program_signals, remote_thread_always_alive): Remove
	target_ops parameter.
	(remote_thread_alive, remote_thread_name)
	(remote_update_thread_list, remote_threads_extra_info)
	(remote_static_tracepoint_marker_at)
	(remote_static_tracepoint_markers_by_strid)
	(remote_get_ada_task_ptid, remote_close, remote_start_remote)
	(remote_open): Refactor as methods of remote_target.
	(extended_remote_open, extended_remote_detach)
	(extended_remote_attach, extended_remote_post_attach)
	(extended_remote_supports_disable_randomization)
	(extended_remote_create_inferior): Refactor as methods of
	extended_remote_target.
	(remote_set_permissions, remote_open_1, remote_detach)
	(remote_follow_fork, remote_follow_exec, remote_disconnect)
	(remote_resume, remote_commit_resume, remote_stop)
	(remote_interrupt, remote_pass_ctrlc, remote_terminal_inferior)
	(remote_terminal_ours, remote_wait, remote_fetch_registers)
	(remote_prepare_to_store, remote_store_registers)
	(remote_flash_erase, remote_flash_done, remote_files_info)
	(remote_kill, remote_mourn, remote_insert_breakpoint)
	(remote_remove_breakpoint, remote_insert_watchpoint)
	(remote_watchpoint_addr_within_range)
	(remote_remove_watchpoint, remote_region_ok_for_hw_watchpoint)
	(remote_check_watch_resources, remote_stopped_by_sw_breakpoint)
	(remote_supports_stopped_by_sw_breakpoint)
	(remote_stopped_by_hw_breakpoint)
	(remote_supports_stopped_by_hw_breakpoint)
	(remote_stopped_by_watchpoint, remote_stopped_data_address)
	(remote_insert_hw_breakpoint, remote_remove_hw_breakpoint)
	(remote_verify_memory): Refactor as methods of remote_target.
	(remote_write_qxfer, remote_read_qxfer): Remove target_ops
	parameter.
	(remote_xfer_partial, remote_get_memory_xfer_limit)
	(remote_search_memory, remote_rcmd, remote_memory_map)
	(remote_pid_to_str, remote_get_thread_local_address)
	(remote_get_tib_address, remote_read_description): Refactor as
	methods of remote_target.
	(remote_target::fileio_open, remote_target::fileio_pwrite)
	(remote_target::fileio_pread, remote_target::fileio_close): New.
	(remote_hostio_readlink, remote_hostio_fstat)
	(remote_filesystem_is_local, remote_can_execute_reverse)
	(remote_supports_non_stop, remote_supports_disable_randomization)
	(remote_supports_multi_process, remote_supports_cond_breakpoints)
	(remote_supports_enable_disable_tracepoint)
	(remote_supports_string_tracing)
	(remote_can_run_breakpoint_commands, remote_trace_init)
	(remote_download_tracepoint, remote_can_download_tracepoint)
	(remote_download_trace_state_variable, remote_enable_tracepoint)
	(remote_disable_tracepoint, remote_trace_set_readonly_regions)
	(remote_trace_start, remote_get_trace_status)
	(remote_get_tracepoint_status, remote_trace_stop)
	(remote_trace_find, remote_get_trace_state_variable_value)
	(remote_save_trace_data, remote_get_raw_trace_data)
	(remote_set_disconnected_tracing, remote_core_of_thread)
	(remote_set_circular_trace_buffer, remote_traceframe_info)
	(remote_get_min_fast_tracepoint_insn_len)
	(remote_set_trace_buffer_size, remote_set_trace_notes)
	(remote_use_agent, remote_can_use_agent, remote_enable_btrace)
	(remote_disable_btrace, remote_teardown_btrace)
	(remote_read_btrace, remote_btrace_conf)
	(remote_augmented_libraries_svr4_read, remote_load)
	(remote_pid_to_exec_file, remote_can_do_single_step)
	(remote_execution_direction, remote_thread_handle_to_thread_info):
	Refactor as methods of remote_target.
	(init_remote_ops, init_extended_remote_ops): Delete.
	(remote_can_async_p, remote_is_async_p, remote_async)
	(remote_thread_events, remote_upload_tracepoints)
	(remote_upload_trace_state_variables): Refactor as methods of
	remote_target.
	(_initialize_remote): Remove references to init_remote_ops and
	init_extended_remote_ops.

	* remote-sim.c (gdbsim_target): New class.
	(gdbsim_fetch_register, gdbsim_store_register, gdbsim_kill)
	(gdbsim_load, gdbsim_create_inferior, gdbsim_open, gdbsim_close)
	(gdbsim_detach, gdbsim_resume, gdbsim_interrupt)
	(gdbsim_wait, gdbsim_prepare_to_store, gdbsim_xfer_partial)
	(gdbsim_files_info, gdbsim_mourn_inferior, gdbsim_thread_alive)
	(gdbsim_pid_to_str, gdbsim_has_all_memory, gdbsim_has_memory):
	Refactor as methods of gdbsim_target.
	(gdbsim_ops): Now a gdbsim_target.
	(init_gdbsim_ops): Delete.
	(gdbsim_cntrl_c): Adjust.
	(_initialize_remote_sim): Remove reference to init_gdbsim_ops.

	* amd64-linux-nat.c (amd64_linux_nat_target): New class.
	(the_amd64_linux_nat_target): New.
	(amd64_linux_fetch_inferior_registers)
	(amd64_linux_store_inferior_registers): Refactor as methods of
	amd64_linux_nat_target.
	(_initialize_amd64_linux_nat): Adjust.  Set linux_target.
	* i386-linux-nat.c: Don't include "linux-nat.h".
	(i386_linux_nat_target): New class.
	(the_i386_linux_nat_target): New.
	(i386_linux_fetch_inferior_registers)
	(i386_linux_store_inferior_registers, i386_linux_resume): Refactor
	as methods of i386_linux_nat_target.
	(_initialize_i386_linux_nat): Adjust.  Set linux_target.
	* inf-child.c (inf_child_ops): Delete.
	(inf_child_fetch_inferior_registers)
	(inf_child_store_inferior_registers): Delete.
	(inf_child_post_attach, inf_child_prepare_to_store): Refactor as
	methods of inf_child_target.
	(inf_child_target::supports_terminal_ours)
	(inf_child_target::terminal_init)
	(inf_child_target::terminal_inferior)
	(inf_child_target::terminal_ours_for_output)
	(inf_child_target::terminal_ours, inf_child_target::interrupt)
	(inf_child_target::pass_ctrlc, inf_child_target::terminal_info):
	New.
	(inf_child_open, inf_child_disconnect, inf_child_close)
	(inf_child_mourn_inferior, inf_child_maybe_unpush_target)
	(inf_child_post_startup_inferior, inf_child_can_run)
	(inf_child_pid_to_exec_file): Refactor as methods of
	inf_child_target.
	(inf_child_follow_fork): Delete.
	(inf_child_target::can_create_inferior)
	(inf_child_target::can_attach): New.
	(inf_child_target::has_all_memory, inf_child_target::has_memory)
	(inf_child_target::has_stack, inf_child_target::has_registers)
	(inf_child_target::has_execution): New.
	(inf_child_fileio_open, inf_child_fileio_pwrite)
	(inf_child_fileio_pread, inf_child_fileio_fstat)
	(inf_child_fileio_close, inf_child_fileio_unlink)
	(inf_child_fileio_readlink, inf_child_use_agent)
	(inf_child_can_use_agent): Refactor as methods of
	inf_child_target.
	(return_zero, inf_child_target): Delete.
	(inf_child_target::inf_child_target): New.
	* inf-child.h: Include "target.h".
	(inf_child_target): Delete function prototype.
	(inf_child_target): New class.
	(inf_child_open_target, inf_child_mourn_inferior)
	(inf_child_maybe_unpush_target): Delete.
	* inf-ptrace.c (inf_ptrace_target::~inf_ptrace_target): New.
	(inf_ptrace_follow_fork, inf_ptrace_insert_fork_catchpoint)
	(inf_ptrace_remove_fork_catchpoint, inf_ptrace_create_inferior)
	(inf_ptrace_post_startup_inferior, inf_ptrace_mourn_inferior)
	(inf_ptrace_attach, inf_ptrace_post_attach, inf_ptrace_detach)
	(inf_ptrace_detach_success, inf_ptrace_kill, inf_ptrace_resume)
	(inf_ptrace_wait, inf_ptrace_xfer_partial)
	(inf_ptrace_thread_alive, inf_ptrace_files_info)
	(inf_ptrace_pid_to_str, inf_ptrace_auxv_parse): Refactor as
	methods of inf_ptrace_target.
	(inf_ptrace_target): Delete function.
	* inf-ptrace.h: Include "inf-child.h".
	(inf_ptrace_target): Delete function declaration.
	(inf_ptrace_target): New class.
	(inf_ptrace_trad_target, inf_ptrace_detach_success): Delete.
	* linux-nat.c (linux_target): New.
	(linux_ops, linux_ops_saved, super_xfer_partial): Delete.
	(linux_nat_target::~linux_nat_target): New.
	(linux_child_post_attach, linux_child_post_startup_inferior)
	(linux_child_follow_fork, linux_child_insert_fork_catchpoint)
	(linux_child_remove_fork_catchpoint)
	(linux_child_insert_vfork_catchpoint)
	(linux_child_remove_vfork_catchpoint)
	(linux_child_insert_exec_catchpoint)
	(linux_child_remove_exec_catchpoint)
	(linux_child_set_syscall_catchpoint, linux_nat_pass_signals)
	(linux_nat_create_inferior, linux_nat_attach, linux_nat_detach)
	(linux_nat_resume, linux_nat_stopped_by_watchpoint)
	(linux_nat_stopped_data_address)
	(linux_nat_stopped_by_sw_breakpoint)
	(linux_nat_supports_stopped_by_sw_breakpoint)
	(linux_nat_stopped_by_hw_breakpoint)
	(linux_nat_supports_stopped_by_hw_breakpoint, linux_nat_wait)
	(linux_nat_kill, linux_nat_mourn_inferior)
	(linux_nat_xfer_partial, linux_nat_thread_alive)
	(linux_nat_update_thread_list, linux_nat_pid_to_str)
	(linux_nat_thread_name, linux_child_pid_to_exec_file)
	(linux_child_static_tracepoint_markers_by_strid)
	(linux_nat_is_async_p, linux_nat_can_async_p)
	(linux_nat_supports_non_stop, linux_nat_always_non_stop_p)
	(linux_nat_supports_multi_process)
	(linux_nat_supports_disable_randomization, linux_nat_async)
	(linux_nat_stop, linux_nat_close, linux_nat_thread_address_space)
	(linux_nat_core_of_thread, linux_nat_filesystem_is_local)
	(linux_nat_fileio_open, linux_nat_fileio_readlink)
	(linux_nat_fileio_unlink, linux_nat_thread_events): Refactor as
	methods of linux_nat_target.
	(linux_nat_wait_1, linux_xfer_siginfo, linux_proc_xfer_partial)
	(linux_proc_xfer_spu, linux_nat_xfer_osdata): Remove target_ops
	parameter.
	(check_stopped_by_watchpoint): Adjust.
	(linux_xfer_partial): Delete.
	(linux_target_install_ops, linux_target, linux_nat_add_target):
	Delete.
	(linux_nat_target::linux_nat_target): New.
	* linux-nat.h: Include "inf-ptrace.h".
	(linux_nat_target): New.
	(linux_target, linux_target_install_ops, linux_nat_add_target):
	Delete function declarations.
	(linux_target): Declare global.
	* linux-thread-db.c (thread_db_target): New.
	(thread_db_target::thread_db_target): New.
	(thread_db_ops): Delete.
	(the_thread_db_target): New.
	(thread_db_detach, thread_db_wait, thread_db_mourn_inferior)
	(thread_db_update_thread_list, thread_db_pid_to_str)
	(thread_db_extra_thread_info)
	(thread_db_thread_handle_to_thread_info)
	(thread_db_get_thread_local_address, thread_db_get_ada_task_ptid)
	(thread_db_resume): Refactor as methods of thread_db_target.
	(init_thread_db_ops): Delete.
	(_initialize_thread_db): Remove reference to init_thread_db_ops.
	* x86-linux-nat.c: Don't include "linux-nat.h".
	(super_post_startup_inferior): Delete.
	(x86_linux_nat_target::~x86_linux_nat_target): New.
	(x86_linux_child_post_startup_inferior)
	(x86_linux_read_description, x86_linux_enable_btrace)
	(x86_linux_disable_btrace, x86_linux_teardown_btrace)
	(x86_linux_read_btrace, x86_linux_btrace_conf): Refactor as
	methods of x86_linux_nat_target.
	(x86_linux_create_target): Delete.  Bits folded ...
	(x86_linux_add_target): ... here.  Now takes a linux_nat_target
	pointer.
	* x86-linux-nat.h: Include "linux-nat.h" and "x86-nat.h".
	(x86_linux_nat_target): New class.
	(x86_linux_create_target): Delete.
	(x86_linux_add_target): Now takes a linux_nat_target pointer.
	* x86-nat.c (x86_insert_watchpoint, x86_remove_watchpoint)
	(x86_region_ok_for_watchpoint, x86_stopped_data_address)
	(x86_stopped_by_watchpoint, x86_insert_hw_breakpoint)
	(x86_remove_hw_breakpoint, x86_can_use_hw_breakpoint)
	(x86_stopped_by_hw_breakpoint): Remove target_ops parameter and
	make extern.
	(x86_use_watchpoints): Delete.
	* x86-nat.h: Include "breakpoint.h" and "target.h".
	(x86_use_watchpoints): Delete.
	(x86_can_use_hw_breakpoint, x86_region_ok_for_hw_watchpoint)
	(x86_stopped_by_watchpoint, x86_stopped_data_address)
	(x86_insert_watchpoint, x86_remove_watchpoint)
	(x86_insert_hw_breakpoint, x86_remove_hw_breakpoint)
	(x86_stopped_by_hw_breakpoint): New declarations.
	(x86_nat_target): New template class.

	* ppc-linux-nat.c (ppc_linux_nat_target): New class.
	(the_ppc_linux_nat_target): New.
	(ppc_linux_fetch_inferior_registers)
	(ppc_linux_can_use_hw_breakpoint)
	(ppc_linux_region_ok_for_hw_watchpoint)
	(ppc_linux_ranged_break_num_registers)
	(ppc_linux_insert_hw_breakpoint, ppc_linux_remove_hw_breakpoint)
	(ppc_linux_insert_mask_watchpoint)
	(ppc_linux_remove_mask_watchpoint)
	(ppc_linux_can_accel_watchpoint_condition)
	(ppc_linux_insert_watchpoint, ppc_linux_remove_watchpoint)
	(ppc_linux_stopped_data_address, ppc_linux_stopped_by_watchpoint)
	(ppc_linux_watchpoint_addr_within_range)
	(ppc_linux_masked_watch_num_registers)
	(ppc_linux_store_inferior_registers, ppc_linux_auxv_parse)
	(ppc_linux_read_description): Refactor as methods of
	ppc_linux_nat_target.
	(_initialize_ppc_linux_nat): Adjust.  Set linux_target.

	* procfs.c (procfs_xfer_partial): Delete forward declaration.
	(procfs_target): New class.
	(the_procfs_target): New.
	(procfs_target): Delete function.
	(procfs_auxv_parse, procfs_attach, procfs_detach)
	(procfs_fetch_registers, procfs_store_registers, procfs_wait)
	(procfs_xfer_partial, procfs_resume, procfs_pass_signals)
	(procfs_files_info, procfs_kill_inferior, procfs_mourn_inferior)
	(procfs_create_inferior, procfs_update_thread_list)
	(procfs_thread_alive, procfs_pid_to_str)
	(procfs_can_use_hw_breakpoint, procfs_stopped_by_watchpoint)
	(procfs_stopped_data_address, procfs_insert_watchpoint)
	(procfs_remove_watchpoint, procfs_region_ok_for_hw_watchpoint)
	(proc_find_memory_regions, procfs_info_proc)
	(procfs_make_note_section): Refactor as methods of procfs_target.
	(_initialize_procfs): Adjust.
	* sol-thread.c (sol_thread_target): New class.
	(sol_thread_ops): Now a sol_thread_target.
	(sol_thread_detach, sol_thread_resume, sol_thread_wait)
	(sol_thread_fetch_registers, sol_thread_store_registers)
	(sol_thread_xfer_partial, sol_thread_mourn_inferior)
	(sol_thread_alive, solaris_pid_to_str, sol_update_thread_list)
	(sol_get_ada_task_ptid): Refactor as methods of sol_thread_target.
	(init_sol_thread_ops): Delete.
	(_initialize_sol_thread): Adjust.  Remove references to
	init_sol_thread_ops and complete_target_initialization.

	* windows-nat.c (windows_nat_target): New class.
	(windows_fetch_inferior_registers)
	(windows_store_inferior_registers, windows_resume, windows_wait)
	(windows_attach, windows_detach, windows_pid_to_exec_file)
	(windows_files_info, windows_create_inferior)
	(windows_mourn_inferior, windows_interrupt, windows_kill_inferior)
	(windows_close, windows_pid_to_str, windows_xfer_partial)
	(windows_get_tib_address, windows_get_ada_task_ptid)
	(windows_thread_name, windows_thread_alive): Refactor as
	windows_nat_target methods.
	(do_initial_windows_stuff): Adjust.
	(windows_target): Delete function.
	(_initialize_windows_nat): Adjust.

	* darwin-nat.c (darwin_resume, darwin_wait_to, darwin_interrupt)
	(darwin_mourn_inferior, darwin_kill_inferior)
	(darwin_create_inferior, darwin_attach, darwin_detach)
	(darwin_pid_to_str, darwin_thread_alive, darwin_xfer_partial)
	(darwin_pid_to_exec_file, darwin_get_ada_task_ptid)
	(darwin_supports_multi_process): Refactor as darwin_nat_target
	methods.
	(darwin_resume_to, darwin_files_info): Delete.
	(_initialize_darwin_inferior): Rename to ...
	(_initialize_darwin_nat): ... this.  Adjust to C++ification.
	* darwin-nat.h: Include "inf-child.h".
	(darwin_nat_target): New class.
	(darwin_complete_target): Delete.
	* i386-darwin-nat.c (i386_darwin_nat_target): New class.
	(darwin_target): New.
	(i386_darwin_fetch_inferior_registers)
	(i386_darwin_store_inferior_registers): Refactor as methods of
	darwin_nat_target.
	(darwin_complete_target): Delete, with ...
	(_initialize_i386_darwin_nat): ... bits factored out here.

	* alpha-linux-nat.c (alpha_linux_nat_target): New class.
	(the_alpha_linux_nat_target): New.
	(alpha_linux_register_u_offset): Refactor as
	alpha_linux_nat_target method.
	(_initialize_alpha_linux_nat): Adjust.
	* linux-nat-trad.c (inf_ptrace_register_u_offset): Delete.
	(inf_ptrace_fetch_register, inf_ptrace_fetch_registers)
	(inf_ptrace_store_register, inf_ptrace_store_registers): Refactor
	as methods of linux_nat_trad_target.
	(linux_trad_target): Delete.
	* linux-nat-trad.h (linux_trad_target): Delete function.
	(linux_nat_trad_target): New class.
	* mips-linux-nat.c (mips_linux_nat_target): New class.
	(super_fetch_registers, super_store_registers, super_close):
	Delete.
	(the_mips_linux_nat_target): New.
	(mips64_linux_regsets_fetch_registers)
	(mips64_linux_regsets_store_registers)
	(mips64_linux_fetch_registers, mips64_linux_store_registers)
	(mips_linux_register_u_offset, mips_linux_read_description)
	(mips_linux_can_use_hw_breakpoint)
	(mips_linux_stopped_by_watchpoint)
	(mips_linux_stopped_data_address)
	(mips_linux_region_ok_for_hw_watchpoint)
	(mips_linux_insert_watchpoint, mips_linux_remove_watchpoint)
	(mips_linux_close): Refactor as methods of mips_linux_nat_target.
	(_initialize_mips_linux_nat): Adjust to C++ification.

	* aix-thread.c (aix_thread_target): New class.
	(aix_thread_ops): Now an aix_thread_target.
	(aix_thread_detach, aix_thread_resume, aix_thread_wait)
	(aix_thread_fetch_registers, aix_thread_store_registers)
	(aix_thread_xfer_partial, aix_thread_mourn_inferior)
	(aix_thread_thread_alive, aix_thread_pid_to_str)
	(aix_thread_extra_thread_info, aix_thread_get_ada_task_ptid):
	Refactor as methods of aix_thread_target.
	(init_aix_thread_ops): Delete.
	(_initialize_aix_thread): Remove references to init_aix_thread_ops
	and complete_target_initialization.
	* rs6000-nat.c (rs6000_xfer_shared_libraries): Delete.
	(rs6000_nat_target): New class.
	(the_rs6000_nat_target): New.
	(rs6000_fetch_inferior_registers, rs6000_store_inferior_registers)
	(rs6000_xfer_partial, rs6000_wait, rs6000_create_inferior)
	(rs6000_xfer_shared_libraries): Refactor as rs6000_nat_target methods.
	(super_create_inferior): Delete.
	(_initialize_rs6000_nat): Adjust to C++ification.

	* arm-linux-nat.c (arm_linux_nat_target): New class.
	(the_arm_linux_nat_target): New.
	(arm_linux_fetch_inferior_registers)
	(arm_linux_store_inferior_registers, arm_linux_read_description)
	(arm_linux_can_use_hw_breakpoint, arm_linux_insert_hw_breakpoint)
	(arm_linux_remove_hw_breakpoint)
	(arm_linux_region_ok_for_hw_watchpoint)
	(arm_linux_insert_watchpoint, arm_linux_remove_watchpoint)
	(arm_linux_stopped_data_address, arm_linux_stopped_by_watchpoint)
	(arm_linux_watchpoint_addr_within_range): Refactor as methods of
	arm_linux_nat_target.
	(_initialize_arm_linux_nat): Adjust to C++ification.

	* aarch64-linux-nat.c (aarch64_linux_nat_target): New class.
	(the_aarch64_linux_nat_target): New.
	(aarch64_linux_fetch_inferior_registers)
	(aarch64_linux_store_inferior_registers)
	(aarch64_linux_child_post_startup_inferior)
	(aarch64_linux_read_description)
	(aarch64_linux_can_use_hw_breakpoint)
	(aarch64_linux_insert_hw_breakpoint)
	(aarch64_linux_remove_hw_breakpoint)
	(aarch64_linux_insert_watchpoint, aarch64_linux_remove_watchpoint)
	(aarch64_linux_region_ok_for_hw_watchpoint)
	(aarch64_linux_stopped_data_address)
	(aarch64_linux_stopped_by_watchpoint)
	(aarch64_linux_watchpoint_addr_within_range)
	(aarch64_linux_can_do_single_step): Refactor as methods of
	aarch64_linux_nat_target.
	(super_post_startup_inferior): Delete.
	(_initialize_aarch64_linux_nat): Adjust to C++ification.

	* hppa-linux-nat.c (hppa_linux_nat_target): New class.
	(the_hppa_linux_nat_target): New.
	(hppa_linux_fetch_inferior_registers)
	(hppa_linux_store_inferior_registers): Refactor as methods of
	hppa_linux_nat_target.
	(_initialize_hppa_linux_nat): Adjust to C++ification.

	* ia64-linux-nat.c (ia64_linux_nat_target): New class.
	(the_ia64_linux_nat_target): New.
	(ia64_linux_insert_watchpoint, ia64_linux_remove_watchpoint)
	(ia64_linux_stopped_data_address)
	(ia64_linux_stopped_by_watchpoint, ia64_linux_fetch_registers)
	(ia64_linux_store_registers, ia64_linux_xfer_partial): Refactor as
	ia64_linux_nat_target methods.
	(super_xfer_partial): Delete.
	(_initialize_ia64_linux_nat): Adjust to C++ification.

	* m32r-linux-nat.c (m32r_linux_nat_target): New class.
	(the_m32r_linux_nat_target): New.
	(m32r_linux_fetch_inferior_registers)
	(m32r_linux_store_inferior_registers): Refactor as
	m32r_linux_nat_target methods.
	(_initialize_m32r_linux_nat): Adjust to C++ification.

	* m68k-linux-nat.c (m68k_linux_nat_target): New class.
	(the_m68k_linux_nat_target): New.
	(m68k_linux_fetch_inferior_registers)
	(m68k_linux_store_inferior_registers): Refactor as
	m68k_linux_nat_target methods.
	(_initialize_m68k_linux_nat): Adjust to C++ification.

	* s390-linux-nat.c (s390_linux_nat_target): New class.
	(the_s390_linux_nat_target): New.
	(s390_linux_fetch_inferior_registers)
	(s390_linux_store_inferior_registers, s390_stopped_by_watchpoint)
	(s390_insert_watchpoint, s390_remove_watchpoint)
	(s390_can_use_hw_breakpoint, s390_insert_hw_breakpoint)
	(s390_remove_hw_breakpoint, s390_region_ok_for_hw_watchpoint)
	(s390_auxv_parse, s390_read_description): Refactor as methods of
	s390_linux_nat_target.
	(_initialize_s390_nat): Adjust to C++ification.

	* sparc-linux-nat.c (sparc_linux_nat_target): New class.
	(the_sparc_linux_nat_target): New.
	(_initialize_sparc_linux_nat): Adjust to C++ification.
	* sparc-nat.c (sparc_fetch_inferior_registers)
	(sparc_store_inferior_registers): Remove target_ops parameter.
	* sparc-nat.h (sparc_fetch_inferior_registers)
	(sparc_store_inferior_registers): Remove target_ops parameter.
	* sparc64-linux-nat.c (sparc64_linux_nat_target): New class.
	(the_sparc64_linux_nat_target): New.
	(_initialize_sparc64_linux_nat): Adjust to C++ification.

	* spu-linux-nat.c (spu_linux_nat_target): New class.
	(the_spu_linux_nat_target): New.
	(spu_child_post_startup_inferior, spu_child_post_attach)
	(spu_child_wait, spu_fetch_inferior_registers)
	(spu_store_inferior_registers, spu_xfer_partial)
	(spu_can_use_hw_breakpoint): Refactor as spu_linux_nat_target
	methods.
	(_initialize_spu_nat): Adjust to C++ification.

	* tilegx-linux-nat.c (tilegx_linux_nat_target): New class.
	(the_tilegx_linux_nat_target): New.
	(fetch_inferior_registers, store_inferior_registers):
	Refactor as methods.
	(_initialize_tile_linux_nat): Adjust to C++ification.

	* xtensa-linux-nat.c (xtensa_linux_nat_target): New class.
	(the_xtensa_linux_nat_target): New.
	(xtensa_linux_fetch_inferior_registers)
	(xtensa_linux_store_inferior_registers): Refactor as
	xtensa_linux_nat_target methods.
	(_initialize_xtensa_linux_nat): Adjust to C++ification.

	* fbsd-nat.c (USE_SIGTRAP_SIGINFO): Delete.
	(fbsd_pid_to_exec_file, fbsd_find_memory_regions)
	(fbsd_info_proc, fbsd_xfer_partial)
	(fbsd_thread_alive, fbsd_pid_to_str, fbsd_thread_name)
	(fbsd_update_thread_list, fbsd_resume, fbsd_wait)
	(fbsd_stopped_by_sw_breakpoint)
	(fbsd_supports_stopped_by_sw_breakpoint, fbsd_follow_fork)
	(fbsd_insert_fork_catchpoint, fbsd_remove_fork_catchpoint)
	(fbsd_insert_vfork_catchpoint, fbsd_remove_vfork_catchpoint)
	(fbsd_post_startup_inferior, fbsd_post_attach)
	(fbsd_insert_exec_catchpoint, fbsd_remove_exec_catchpoint)
	(fbsd_set_syscall_catchpoint)
	(super_xfer_partial, super_resume, super_wait)
	(fbsd_supports_stopped_by_hw_breakpoint): Delete.
	(fbsd_handle_debug_trap): Remove target_ops parameter.
	(fbsd_nat_add_target): Delete.
	* fbsd-nat.h: Include "inf-ptrace.h".
	(fbsd_nat_add_target): Delete.
	(USE_SIGTRAP_SIGINFO): Define.
	(fbsd_nat_target): New class.

	* amd64-bsd-nat.c (amd64bsd_fetch_inferior_registers)
	(amd64bsd_store_inferior_registers): Remove target_ops parameter.
	(amd64bsd_target): Delete.
	* amd64-bsd-nat.h: New file.
	* amd64-fbsd-nat.c: Include "amd64-bsd-nat.h" instead of
	"x86-bsd-nat.h".
	(amd64_fbsd_nat_target): New class.
	(the_amd64_fbsd_nat_target): New.
	(amd64fbsd_read_description): Refactor as method of
	amd64_fbsd_nat_target.
	(amd64_fbsd_nat_target::supports_stopped_by_hw_breakpoint): New.
	(_initialize_amd64fbsd_nat): Adjust to C++ification.
	* amd64-nat.h (amd64bsd_target): Delete function declaration.
	* i386-bsd-nat.c (i386bsd_fetch_inferior_registers)
	(i386bsd_store_inferior_registers): Remove target_ops parameter.
	(i386bsd_target): Delete.
	* i386-bsd-nat.h (i386bsd_target): Delete function declaration.
	(i386bsd_fetch_inferior_registers)
	(i386bsd_store_inferior_registers): Declare.
	(i386_bsd_nat_target): New class.
	* i386-fbsd-nat.c (i386_fbsd_nat_target): New class.
	(the_i386_fbsd_nat_target): New.
	(i386fbsd_resume, i386fbsd_read_description): Refactor as
	i386_fbsd_nat_target methods.
	(i386_fbsd_nat_target::supports_stopped_by_hw_breakpoint): New.
	(_initialize_i386fbsd_nat): Adjust to C++ification.
	* x86-bsd-nat.c (super_mourn_inferior): Delete.
	(x86bsd_mourn_inferior, x86bsd_target): Delete.
	(_initialize_x86_bsd_nat): Adjust to C++ification.
	* x86-bsd-nat.h: Include "x86-nat.h".
	(x86bsd_target): Delete declaration.
	(x86bsd_nat_target): New class.

	* aarch64-fbsd-nat.c (aarch64_fbsd_nat_target): New class.
	(the_aarch64_fbsd_nat_target): New.
	(aarch64_fbsd_fetch_inferior_registers)
	(aarch64_fbsd_store_inferior_registers): Refactor as methods of
	aarch64_fbsd_nat_target.
	(_initialize_aarch64_fbsd_nat): Adjust to C++ification.
	* alpha-bsd-nat.c (alpha_bsd_nat_target): New class.
	(the_alpha_bsd_nat_target): New.
	(alphabsd_fetch_inferior_registers)
	(alphabsd_store_inferior_registers): Refactor as
	alpha_bsd_nat_target methods.
	(_initialize_alphabsd_nat): Adjust to C++ification.
	* amd64-nbsd-nat.c: Include "amd64-bsd-nat.h".
	(the_amd64_nbsd_nat_target): New.
	(_initialize_amd64nbsd_nat): Adjust to C++ification.
	* amd64-obsd-nat.c: Include "amd64-bsd-nat.h".
	(the_amd64_obsd_nat_target): New.
	(_initialize_amd64obsd_nat): Adjust to C++ification.
	* arm-fbsd-nat.c (arm_fbsd_nat_target): New class.
	(the_arm_fbsd_nat_target): New.
	(arm_fbsd_fetch_inferior_registers)
	(arm_fbsd_store_inferior_registers)
	(arm_fbsd_read_description): Refactor as methods of
	arm_fbsd_nat_target.
	(_initialize_arm_fbsd_nat): Adjust to C++ification.
	* arm-nbsd-nat.c (arm_netbsd_nat_target): New class.
	(the_arm_netbsd_nat_target): New.
	(armnbsd_fetch_registers, armnbsd_store_registers): Refactor as
	arm_netbsd_nat_target methods.
	(_initialize_arm_netbsd_nat): Adjust to C++ification.
	* hppa-nbsd-nat.c (hppa_nbsd_nat_target): New class.
	(the_hppa_nbsd_nat_target): New.
	(hppanbsd_fetch_registers, hppanbsd_store_registers): Refactor as
	hppa_nbsd_nat_target methods.
	(_initialize_hppanbsd_nat): Adjust to C++ification.
	* hppa-obsd-nat.c (hppa_obsd_nat_target): New class.
	(the_hppa_obsd_nat_target): New.
	(hppaobsd_fetch_registers, hppaobsd_store_registers): Refactor as
	methods of hppa_obsd_nat_target.
	(_initialize_hppaobsd_nat): Adjust to C++ification.  Use
	add_target.
	* i386-nbsd-nat.c (the_i386_nbsd_nat_target): New.
	(_initialize_i386nbsd_nat): Adjust to C++ification.  Use
	add_target.
	* i386-obsd-nat.c (the_i386_obsd_nat_target): New.
	(_initialize_i386obsd_nat): Use add_target.
	* m68k-bsd-nat.c (m68k_bsd_nat_target): New class.
	(the_m68k_bsd_nat_target): New.
	(m68kbsd_fetch_inferior_registers)
	(m68kbsd_store_inferior_registers): Refactor as methods of
	m68k_bsd_nat_target.
	(_initialize_m68kbsd_nat): Adjust to C++ification.
	* mips-fbsd-nat.c (mips_fbsd_nat_target): New class.
	(the_mips_fbsd_nat_target): New.
	(mips_fbsd_fetch_inferior_registers)
	(mips_fbsd_store_inferior_registers): Refactor as methods of
	mips_fbsd_nat_target.
	(_initialize_mips_fbsd_nat): Adjust to C++ification.  Use
	add_target.
	* mips-nbsd-nat.c (mips_nbsd_nat_target): New class.
	(the_mips_nbsd_nat_target): New.
	(mipsnbsd_fetch_inferior_registers)
	(mipsnbsd_store_inferior_registers): Refactor as methods of
	mips_nbsd_nat_target.
	(_initialize_mipsnbsd_nat): Adjust to C++ification.
	* mips64-obsd-nat.c (mips64_obsd_nat_target): New class.
	(the_mips64_obsd_nat_target): New.
	(mips64obsd_fetch_inferior_registers)
	(mips64obsd_store_inferior_registers): Refactor as methods of
	mips64_obsd_nat_target.
	(_initialize_mips64obsd_nat): Adjust to C++ification.  Use
	add_target.
	* nbsd-nat.c (nbsd_pid_to_exec_file): Refactor as method of
	nbsd_nat_target.
	* nbsd-nat.h: Include "inf-ptrace.h".
	(nbsd_nat_target): New class.
	* obsd-nat.c (obsd_pid_to_str, obsd_update_thread_list)
	(obsd_wait): Refactor as methods of obsd_nat_target.
	(obsd_add_target): Delete.
	* obsd-nat.h: Include "inf-ptrace.h".
	(obsd_nat_target): New class.
	* ppc-fbsd-nat.c (ppc_fbsd_nat_target): New class.
	(the_ppc_fbsd_nat_target): New.
	(ppcfbsd_fetch_inferior_registers)
	(ppcfbsd_store_inferior_registers): Refactor as methods of
	ppc_fbsd_nat_target.
	(_initialize_ppcfbsd_nat): Adjust to C++ification.  Use
	add_target.
	* ppc-nbsd-nat.c (ppc_nbsd_nat_target): New class.
	(the_ppc_nbsd_nat_target): New.
	(ppcnbsd_fetch_inferior_registers)
	(ppcnbsd_store_inferior_registers): Refactor as methods of
	ppc_nbsd_nat_target.
	(_initialize_ppcnbsd_nat): Adjust to C++ification.
	* ppc-obsd-nat.c (ppc_obsd_nat_target): New class.
	(the_ppc_obsd_nat_target): New.
	(ppcobsd_fetch_registers, ppcobsd_store_registers): Refactor as
	methods of ppc_obsd_nat_target.
	(_initialize_ppcobsd_nat): Adjust to C++ification.  Use
	add_target.
	* sh-nbsd-nat.c (sh_nbsd_nat_target): New class.
	(the_sh_nbsd_nat_target): New.
	(shnbsd_fetch_inferior_registers)
	(shnbsd_store_inferior_registers): Refactor as methods of
	sh_nbsd_nat_target.
	(_initialize_shnbsd_nat): Adjust to C++ification.
	* sparc-nat.c (sparc_xfer_wcookie): Make extern.
	(inf_ptrace_xfer_partial): Delete.
	(sparc_xfer_partial, sparc_target): Delete.
	* sparc-nat.h (sparc_fetch_inferior_registers)
	(sparc_store_inferior_registers, sparc_xfer_wcookie): Declare.
	(sparc_target): Delete function declaration.
	(sparc_target): New template class.
	* sparc-nbsd-nat.c (the_sparc_nbsd_nat_target): New.
	(_initialize_sparcnbsd_nat): Adjust to C++ification.
	* sparc64-fbsd-nat.c (the_sparc64_fbsd_nat_target): New.
	(_initialize_sparc64fbsd_nat): Adjust to C++ification.  Use
	add_target.
	* sparc64-nbsd-nat.c (the_sparc64_nbsd_nat_target): New.
	(_initialize_sparc64nbsd_nat): Adjust to C++ification.
	* sparc64-obsd-nat.c (the_sparc64_obsd_nat_target): New.
	(_initialize_sparc64obsd_nat): Adjust to C++ification.  Use
	add_target.
	* vax-bsd-nat.c (vax_bsd_nat_target): New class.
	(the_vax_bsd_nat_target): New.
	(vaxbsd_fetch_inferior_registers)
	(vaxbsd_store_inferior_registers): Refactor as vax_bsd_nat_target
	methods.
	(_initialize_vaxbsd_nat): Adjust to C++ification.

	* bsd-kvm.c (bsd_kvm_target): New class.
	(bsd_kvm_ops): Now a bsd_kvm_target.
	(bsd_kvm_open, bsd_kvm_close, bsd_kvm_xfer_partial)
	(bsd_kvm_files_info, bsd_kvm_fetch_registers)
	(bsd_kvm_thread_alive, bsd_kvm_pid_to_str): Refactor as methods of
	bsd_kvm_target.
	(bsd_kvm_return_one): Delete.
	(bsd_kvm_add_target): Adjust to C++ification.

	* nto-procfs.c (nto_procfs_target, nto_procfs_target_native)
	(nto_procfs_target_procfs): New classes.
	(procfs_open_1, procfs_thread_alive, procfs_update_thread_list)
	(procfs_files_info, procfs_pid_to_exec_file, procfs_attach)
	(procfs_post_attach, procfs_wait, procfs_fetch_registers)
	(procfs_xfer_partial, procfs_detach, procfs_insert_breakpoint)
	(procfs_remove_breakpoint, procfs_insert_hw_breakpoint)
	(procfs_remove_hw_breakpoint, procfs_resume)
	(procfs_mourn_inferior, procfs_create_inferior, procfs_interrupt)
	(procfs_kill_inferior, procfs_store_registers)
	(procfs_pass_signals, procfs_pid_to_str, procfs_can_run): Refactor
	as methods of nto_procfs_target.
	(nto_procfs_ops): Now an nto_procfs_target_procfs.
	(procfs_open, procfs_native_open): Delete.
	(nto_native_ops): Now an nto_procfs_target_native.
	(init_procfs_targets): Adjust to C++ification.
	(procfs_can_use_hw_breakpoint, procfs_remove_hw_watchpoint)
	(procfs_insert_hw_watchpoint, procfs_stopped_by_watchpoint):
	Refactor as methods of nto_procfs_target.

	* go32-nat.c (go32_nat_target): New class.
	(the_go32_nat_target): New.
	(go32_attach, go32_resume, go32_wait, go32_fetch_registers)
	(go32_store_registers, go32_xfer_partial, go32_files_info)
	(go32_kill_inferior, go32_create_inferior, go32_mourn_inferior)
	(go32_terminal_init, go32_terminal_info, go32_terminal_inferior)
	(go32_terminal_ours, go32_pass_ctrlc, go32_thread_alive)
	(go32_pid_to_str): Refactor as methods of go32_nat_target.
	(go32_target): Delete.
	(_initialize_go32_nat): Adjust to C++ification.

	* gnu-nat.c (gnu_wait, gnu_resume, gnu_kill_inferior)
	(gnu_mourn_inferior, gnu_create_inferior, gnu_attach, gnu_detach)
	(gnu_stop, gnu_thread_alive, gnu_xfer_partial)
	(gnu_find_memory_regions, gnu_pid_to_str): Refactor as methods of
	gnu_nat_target.
	(gnu_target): Delete.
	* gnu-nat.h (gnu_target): Delete.
	(gnu_nat_target): New class.
	* i386-gnu-nat.c (gnu_base_target): New.
	(i386_gnu_nat_target): New class.
	(the_i386_gnu_nat_target): New.
	(_initialize_i386gnu_nat): Adjust to C++ification.

gdb/testsuite/ChangeLog:
2018-05-02  Pedro Alves  <palves@redhat.com>

	* gdb.base/breakpoint-in-ro-region.exp: Adjust to to_resume and
	to_log_command renames.
	* gdb.base/sss-bp-on-user-bp-2.exp: Likewise.
2018-05-03 00:48:36 +01:00


/* Branch trace support for GDB, the GNU debugger.

   Copyright (C) 2013-2018 Free Software Foundation, Inc.

   Contributed by Intel Corp. <markus.t.metzger@intel.com>

   This file is part of GDB.

   This program is free software; you can redistribute it and/or modify
   it under the terms of the GNU General Public License as published by
   the Free Software Foundation; either version 3 of the License, or
   (at your option) any later version.

   This program is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
   GNU General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

#include "defs.h"
#include "record.h"
#include "record-btrace.h"
#include "gdbthread.h"
#include "target.h"
#include "gdbcmd.h"
#include "disasm.h"
#include "observable.h"
#include "cli/cli-utils.h"
#include "source.h"
#include "ui-out.h"
#include "symtab.h"
#include "filenames.h"
#include "regcache.h"
#include "frame-unwind.h"
#include "hashtab.h"
#include "infrun.h"
#include "event-loop.h"
#include "inf-loop.h"
#include "vec.h"
#include <algorithm>

/* The target_ops of record-btrace.  */

class record_btrace_target final : public target_ops
{
public:
  record_btrace_target ()
  { to_stratum = record_stratum; }

  const char *shortname () override
  { return "record-btrace"; }

  const char *longname () override
  { return _("Branch tracing target"); }

  const char *doc () override
  { return _("Collect control-flow trace and provide the execution history."); }

  void open (const char *, int) override;
  void close () override;
  void async (int) override;

  void detach (inferior *inf, int from_tty) override
  { record_detach (this, inf, from_tty); }

  void disconnect (const char *, int) override;

  void mourn_inferior () override
  { record_mourn_inferior (this); }

  void kill () override
  { record_kill (this); }

  enum record_method record_method (ptid_t ptid) override;

  void stop_recording () override;
  void info_record () override;

  void insn_history (int size, gdb_disassembly_flags flags) override;
  void insn_history_from (ULONGEST from, int size,
			  gdb_disassembly_flags flags) override;
  void insn_history_range (ULONGEST begin, ULONGEST end,
			   gdb_disassembly_flags flags) override;
  void call_history (int size, record_print_flags flags) override;
  void call_history_from (ULONGEST begin, int size,
			  record_print_flags flags) override;
  void call_history_range (ULONGEST begin, ULONGEST end,
			   record_print_flags flags) override;

  int record_is_replaying (ptid_t ptid) override;
  int record_will_replay (ptid_t ptid, int dir) override;
  void record_stop_replaying () override;

  enum target_xfer_status xfer_partial (enum target_object object,
					const char *annex,
					gdb_byte *readbuf,
					const gdb_byte *writebuf,
					ULONGEST offset, ULONGEST len,
					ULONGEST *xfered_len) override;

  int insert_breakpoint (struct gdbarch *,
			 struct bp_target_info *) override;
  int remove_breakpoint (struct gdbarch *, struct bp_target_info *,
			 enum remove_bp_reason) override;

  void fetch_registers (struct regcache *, int) override;
  void store_registers (struct regcache *, int) override;
  void prepare_to_store (struct regcache *) override;

  const struct frame_unwind *get_unwinder () override;
  const struct frame_unwind *get_tailcall_unwinder () override;

  void commit_resume () override;
  void resume (ptid_t, int, enum gdb_signal) override;
  ptid_t wait (ptid_t, struct target_waitstatus *, int) override;

  void stop (ptid_t) override;
  void update_thread_list () override;
  int thread_alive (ptid_t ptid) override;

  void goto_record_begin () override;
  void goto_record_end () override;
  void goto_record (ULONGEST insn) override;

  int can_execute_reverse () override;

  int stopped_by_sw_breakpoint () override;
  int supports_stopped_by_sw_breakpoint () override;

  int stopped_by_hw_breakpoint () override;
  int supports_stopped_by_hw_breakpoint () override;

  enum exec_direction_kind execution_direction () override;
  void prepare_to_generate_core () override;
  void done_generating_core () override;
};

static record_btrace_target record_btrace_ops;

/* Token associated with a new-thread observer enabling branch tracing
   for the new thread.  */
static const gdb::observers::token record_btrace_thread_observer_token;

/* Memory access types used in set/show record btrace replay-memory-access.  */
static const char replay_memory_access_read_only[] = "read-only";
static const char replay_memory_access_read_write[] = "read-write";
static const char *const replay_memory_access_types[] =
{
  replay_memory_access_read_only,
  replay_memory_access_read_write,
  NULL
};

/* The currently allowed replay memory access type.  */
static const char *replay_memory_access = replay_memory_access_read_only;

/* The cpu state kinds.  */
enum record_btrace_cpu_state_kind
{
  CS_AUTO,
  CS_NONE,
  CS_CPU
};

/* The current cpu state.  */
static enum record_btrace_cpu_state_kind record_btrace_cpu_state = CS_AUTO;

/* The current cpu for trace decode.  */
static struct btrace_cpu record_btrace_cpu;

/* Command lists for "set/show record btrace".  */
static struct cmd_list_element *set_record_btrace_cmdlist;
static struct cmd_list_element *show_record_btrace_cmdlist;

/* The execution direction of the last resume we got.  See record-full.c.  */
static enum exec_direction_kind record_btrace_resume_exec_dir = EXEC_FORWARD;

/* The async event handler for reverse/replay execution.  */
static struct async_event_handler *record_btrace_async_inferior_event_handler;

/* A flag indicating that we are currently generating a core file.  */
static int record_btrace_generating_corefile;

/* The current branch trace configuration.  */
static struct btrace_config record_btrace_conf;

/* Command list for "record btrace".  */
static struct cmd_list_element *record_btrace_cmdlist;

/* Command lists for "set/show record btrace bts".  */
static struct cmd_list_element *set_record_btrace_bts_cmdlist;
static struct cmd_list_element *show_record_btrace_bts_cmdlist;

/* Command lists for "set/show record btrace pt".  */
static struct cmd_list_element *set_record_btrace_pt_cmdlist;
static struct cmd_list_element *show_record_btrace_pt_cmdlist;

/* Command list for "set record btrace cpu".  */
static struct cmd_list_element *set_record_btrace_cpu_cmdlist;

/* Print a record-btrace debug message.  Use do ... while (0) to avoid
   ambiguities when used in if statements.  */

#define DEBUG(msg, args...)						\
  do									\
    {									\
      if (record_debug != 0)						\
	fprintf_unfiltered (gdb_stdlog,					\
			    "[record-btrace] " msg "\n", ##args);	\
    }									\
  while (0)

/* Return the cpu configured by the user.  Returns NULL if the cpu was
   configured as auto.  */

const struct btrace_cpu *
record_btrace_get_cpu (void)
{
  switch (record_btrace_cpu_state)
    {
    case CS_AUTO:
      return nullptr;

    case CS_NONE:
      record_btrace_cpu.vendor = CV_UNKNOWN;
      /* Fall through.  */
    case CS_CPU:
      return &record_btrace_cpu;
    }

  error (_("Internal error: bad record btrace cpu state."));
}

/* Update the branch trace for the current thread and return a pointer to its
   thread_info.

   Throws an error if there is no thread or no trace.  This function never
   returns NULL.  */

static struct thread_info *
require_btrace_thread (void)
{
  struct thread_info *tp;

  DEBUG ("require");

  tp = find_thread_ptid (inferior_ptid);
  if (tp == NULL)
    error (_("No thread."));

  validate_registers_access ();

  btrace_fetch (tp, record_btrace_get_cpu ());

  if (btrace_is_empty (tp))
    error (_("No trace."));

  return tp;
}

/* Update the branch trace for the current thread and return a pointer to its
   branch trace information struct.

   Throws an error if there is no thread or no trace.  This function never
   returns NULL.  */

static struct btrace_thread_info *
require_btrace (void)
{
  struct thread_info *tp;

  tp = require_btrace_thread ();

  return &tp->btrace;
}

/* Enable branch tracing for one thread.  Warn on errors.  */

static void
record_btrace_enable_warn (struct thread_info *tp)
{
  TRY
    {
      btrace_enable (tp, &record_btrace_conf);
    }
  CATCH (error, RETURN_MASK_ERROR)
    {
      warning ("%s", error.message);
    }
  END_CATCH
}

/* Enable automatic tracing of new threads.  */

static void
record_btrace_auto_enable (void)
{
  DEBUG ("attach thread observer");

  gdb::observers::new_thread.attach (record_btrace_enable_warn,
				     record_btrace_thread_observer_token);
}

/* Disable automatic tracing of new threads.  */

static void
record_btrace_auto_disable (void)
{
  DEBUG ("detach thread observer");

  gdb::observers::new_thread.detach (record_btrace_thread_observer_token);
}

/* The record-btrace async event handler function.  */

static void
record_btrace_handle_async_inferior_event (gdb_client_data data)
{
  inferior_event_handler (INF_REG_EVENT, NULL);
}

/* See record-btrace.h.  */

void
record_btrace_push_target (void)
{
  const char *format;

  record_btrace_auto_enable ();

  push_target (&record_btrace_ops);

  record_btrace_async_inferior_event_handler
    = create_async_event_handler (record_btrace_handle_async_inferior_event,
				  NULL);
  record_btrace_generating_corefile = 0;

  format = btrace_format_short_string (record_btrace_conf.format);
  gdb::observers::record_changed.notify (current_inferior (), 1,
					 "btrace", format);
}

/* Disable btrace on a set of threads on scope exit.  */

struct scoped_btrace_disable
{
  scoped_btrace_disable () = default;

  DISABLE_COPY_AND_ASSIGN (scoped_btrace_disable);

  ~scoped_btrace_disable ()
  {
    for (thread_info *tp : m_threads)
      btrace_disable (tp);
  }

  void add_thread (thread_info *thread)
  {
    m_threads.push_front (thread);
  }

  void discard ()
  {
    m_threads.clear ();
  }

private:
  std::forward_list<thread_info *> m_threads;
};

/* The open method of target record-btrace.  */

void
record_btrace_target::open (const char *args, int from_tty)
{
  /* If we fail to enable btrace for one thread, disable it for the threads
     for which it was successfully enabled.  */
  scoped_btrace_disable btrace_disable;
  struct thread_info *tp;

  DEBUG ("open");

  record_preopen ();

  if (!target_has_execution)
    error (_("The program is not being run."));

  ALL_NON_EXITED_THREADS (tp)
    if (args == NULL || *args == 0
	|| number_is_in_list (args, tp->global_num))
      {
	btrace_enable (tp, &record_btrace_conf);

	btrace_disable.add_thread (tp);
      }

  record_btrace_push_target ();

  btrace_disable.discard ();
}

/* The stop_recording method of target record-btrace.  */

void
record_btrace_target::stop_recording ()
{
  struct thread_info *tp;

  DEBUG ("stop recording");

  record_btrace_auto_disable ();

  ALL_NON_EXITED_THREADS (tp)
    if (tp->btrace.target != NULL)
      btrace_disable (tp);
}

/* The disconnect method of target record-btrace.  */

void
record_btrace_target::disconnect (const char *args,
				  int from_tty)
{
  struct target_ops *beneath = this->beneath;

  /* Do not stop recording, just clean up GDB side.  */
  unpush_target (this);

  /* Forward disconnect.  */
  beneath->disconnect (args, from_tty);
}

/* The close method of target record-btrace.  */

void
record_btrace_target::close ()
{
  struct thread_info *tp;

  if (record_btrace_async_inferior_event_handler != NULL)
    delete_async_event_handler (&record_btrace_async_inferior_event_handler);

  /* Make sure automatic recording gets disabled even if we did not stop
     recording before closing the record-btrace target.  */
  record_btrace_auto_disable ();

  /* We should have already stopped recording.
     Tear down btrace in case we have not.  */
  ALL_NON_EXITED_THREADS (tp)
    btrace_teardown (tp);
}

/* The async method of target record-btrace.  */

void
record_btrace_target::async (int enable)
{
  if (enable)
    mark_async_event_handler (record_btrace_async_inferior_event_handler);
  else
    clear_async_event_handler (record_btrace_async_inferior_event_handler);

  this->beneath->async (enable);
}
/* Adjusts the size and returns a human readable size suffix. */
static const char *
record_btrace_adjust_size (unsigned int *size)
{
unsigned int sz;
sz = *size;
if ((sz & ((1u << 30) - 1)) == 0)
{
*size = sz >> 30;
return "GB";
}
else if ((sz & ((1u << 20) - 1)) == 0)
{
*size = sz >> 20;
return "MB";
}
else if ((sz & ((1u << 10) - 1)) == 0)
{
*size = sz >> 10;
return "kB";
}
else
return "";
}
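The helper above only divides when the size is an exact multiple of the unit, so mixed sizes such as 1000 print undivided with an empty suffix. A stand-alone sketch of the same selection logic (hypothetical `adjust_size` name, plain C):

```c
#include <assert.h>
#include <string.h>

/* Divide *SIZE by the largest power-of-two unit that divides it exactly
   and return the matching suffix; leave *SIZE unchanged otherwise.  */
static const char *
adjust_size (unsigned int *size)
{
  unsigned int sz = *size;

  if ((sz & ((1u << 30) - 1)) == 0)
    {
      *size = sz >> 30;
      return "GB";
    }
  else if ((sz & ((1u << 20) - 1)) == 0)
    {
      *size = sz >> 20;
      return "MB";
    }
  else if ((sz & ((1u << 10) - 1)) == 0)
    {
      *size = sz >> 10;
      return "kB";
    }

  return "";
}
```
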
/* Print a BTS configuration. */
static void
record_btrace_print_bts_conf (const struct btrace_config_bts *conf)
{
const char *suffix;
unsigned int size;
size = conf->size;
if (size > 0)
{
suffix = record_btrace_adjust_size (&size);
printf_unfiltered (_("Buffer size: %u%s.\n"), size, suffix);
}
}
/* Print an Intel Processor Trace configuration. */
static void
record_btrace_print_pt_conf (const struct btrace_config_pt *conf)
{
const char *suffix;
unsigned int size;
size = conf->size;
if (size > 0)
{
suffix = record_btrace_adjust_size (&size);
printf_unfiltered (_("Buffer size: %u%s.\n"), size, suffix);
}
}
/* Print a branch tracing configuration. */
static void
record_btrace_print_conf (const struct btrace_config *conf)
{
printf_unfiltered (_("Recording format: %s.\n"),
btrace_format_string (conf->format));
switch (conf->format)
{
case BTRACE_FORMAT_NONE:
return;
case BTRACE_FORMAT_BTS:
record_btrace_print_bts_conf (&conf->bts);
return;
case BTRACE_FORMAT_PT:
record_btrace_print_pt_conf (&conf->pt);
return;
}
internal_error (__FILE__, __LINE__, _("Unknown branch trace format."));
}
/* The info_record method of target record-btrace. */
void
record_btrace_target::info_record ()
{
struct btrace_thread_info *btinfo;
const struct btrace_config *conf;
struct thread_info *tp;
unsigned int insns, calls, gaps;
DEBUG ("info");
tp = find_thread_ptid (inferior_ptid);
if (tp == NULL)
error (_("No thread."));
validate_registers_access ();
btinfo = &tp->btrace;
conf = ::btrace_conf (btinfo);
if (conf != NULL)
record_btrace_print_conf (conf);
btrace_fetch (tp, record_btrace_get_cpu ());
insns = 0;
calls = 0;
gaps = 0;
if (!btrace_is_empty (tp))
{
struct btrace_call_iterator call;
struct btrace_insn_iterator insn;
btrace_call_end (&call, btinfo);
btrace_call_prev (&call, 1);
calls = btrace_call_number (&call);
btrace_insn_end (&insn, btinfo);
insns = btrace_insn_number (&insn);
/* If the last instruction is not a gap, it is the current instruction,
which is not actually part of the record. */
if (btrace_insn_get (&insn) != NULL)
insns -= 1;
gaps = btinfo->ngaps;
}
printf_unfiltered (_("Recorded %u instructions in %u functions (%u gaps) "
"for thread %s (%s).\n"), insns, calls, gaps,
print_thread_id (tp), target_pid_to_str (tp->ptid));
if (btrace_is_replaying (tp))
printf_unfiltered (_("Replay in progress. At instruction %u.\n"),
btrace_insn_number (btinfo->replay));
}
/* Print a decode error. */
static void
btrace_ui_out_decode_error (struct ui_out *uiout, int errcode,
enum btrace_format format)
{
const char *errstr = btrace_decode_error (format, errcode);
uiout->text (_("["));
/* ERRCODE > 0 indicates notifications on BTRACE_FORMAT_PT. */
if (!(format == BTRACE_FORMAT_PT && errcode > 0))
{
uiout->text (_("decode error ("));
uiout->field_int ("errcode", errcode);
uiout->text (_("): "));
}
uiout->text (errstr);
uiout->text (_("]\n"));
}
/* Print an unsigned int. */
static void
ui_out_field_uint (struct ui_out *uiout, const char *fld, unsigned int val)
{
uiout->field_fmt (fld, "%u", val);
}
/* A range of source lines. */
struct btrace_line_range
{
/* The symtab this line is from. */
struct symtab *symtab;
/* The first line (inclusive). */
int begin;
/* The last line (exclusive). */
int end;
};
/* Construct a line range. */
static struct btrace_line_range
btrace_mk_line_range (struct symtab *symtab, int begin, int end)
{
struct btrace_line_range range;
range.symtab = symtab;
range.begin = begin;
range.end = end;
return range;
}
/* Add a line to a line range. */
static struct btrace_line_range
btrace_line_range_add (struct btrace_line_range range, int line)
{
if (range.end <= range.begin)
{
/* This is the first entry. */
range.begin = line;
range.end = line + 1;
}
else if (line < range.begin)
range.begin = line;
else if (range.end < line)
range.end = line;
return range;
}
/* Return non-zero if RANGE is empty, zero otherwise. */
static int
btrace_line_range_is_empty (struct btrace_line_range range)
{
return range.end <= range.begin;
}
/* Return non-zero if LHS contains RHS, zero otherwise. */
static int
btrace_line_range_contains_range (struct btrace_line_range lhs,
struct btrace_line_range rhs)
{
return ((lhs.symtab == rhs.symtab)
&& (lhs.begin <= rhs.begin)
&& (rhs.end <= lhs.end));
}
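The range helpers above treat END as exclusive, so both emptiness and containment reduce to plain comparisons. A minimal model with the symtab field dropped (hypothetical names):

```c
#include <assert.h>

/* A half-open range of source lines, [begin, end).  */
struct line_range
{
  int begin;
  int end;
};

/* Non-zero if RANGE covers no lines at all.  */
static int
line_range_is_empty (struct line_range range)
{
  return range.end <= range.begin;
}

/* Non-zero if LHS covers every line that RHS covers.  */
static int
line_range_contains (struct line_range lhs, struct line_range rhs)
{
  return lhs.begin <= rhs.begin && rhs.end <= lhs.end;
}
```
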
/* Find the line range associated with PC. */
static struct btrace_line_range
btrace_find_line_range (CORE_ADDR pc)
{
struct btrace_line_range range;
struct linetable_entry *lines;
struct linetable *ltable;
struct symtab *symtab;
int nlines, i;
symtab = find_pc_line_symtab (pc);
if (symtab == NULL)
return btrace_mk_line_range (NULL, 0, 0);
ltable = SYMTAB_LINETABLE (symtab);
if (ltable == NULL)
return btrace_mk_line_range (symtab, 0, 0);
nlines = ltable->nitems;
lines = ltable->item;
if (nlines <= 0)
return btrace_mk_line_range (symtab, 0, 0);
range = btrace_mk_line_range (symtab, 0, 0);
for (i = 0; i < nlines - 1; i++)
{
if ((lines[i].pc == pc) && (lines[i].line != 0))
range = btrace_line_range_add (range, lines[i].line);
}
return range;
}
/* Print source lines in LINES to UIOUT.
SRC_AND_ASM_TUPLE and ASM_LIST hold the emitters for the last source line
and the instructions corresponding to that source line. When printing a
new source line, we reset the open emitters and create new ones for the
new source line. If the source line range in LINES is not empty, this
function will leave the emitters for the last printed source line open
so instructions can be added to them. */
static void
btrace_print_lines (struct btrace_line_range lines, struct ui_out *uiout,
gdb::optional<ui_out_emit_tuple> *src_and_asm_tuple,
gdb::optional<ui_out_emit_list> *asm_list,
gdb_disassembly_flags flags)
{
print_source_lines_flags psl_flags;
if (flags & DISASSEMBLY_FILENAME)
psl_flags |= PRINT_SOURCE_LINES_FILENAME;
for (int line = lines.begin; line < lines.end; ++line)
{
asm_list->reset ();
src_and_asm_tuple->emplace (uiout, "src_and_asm_line");
print_source_lines (lines.symtab, line, line + 1, psl_flags);
asm_list->emplace (uiout, "line_asm_insn");
}
}
/* Disassemble a section of the recorded instruction trace. */
static void
btrace_insn_history (struct ui_out *uiout,
const struct btrace_thread_info *btinfo,
const struct btrace_insn_iterator *begin,
const struct btrace_insn_iterator *end,
gdb_disassembly_flags flags)
{
DEBUG ("itrace (0x%x): [%u; %u)", (unsigned) flags,
btrace_insn_number (begin), btrace_insn_number (end));
flags |= DISASSEMBLY_SPECULATIVE;
struct gdbarch *gdbarch = target_gdbarch ();
btrace_line_range last_lines = btrace_mk_line_range (NULL, 0, 0);
ui_out_emit_list list_emitter (uiout, "asm_insns");
gdb::optional<ui_out_emit_tuple> src_and_asm_tuple;
gdb::optional<ui_out_emit_list> asm_list;
gdb_pretty_print_disassembler disasm (gdbarch);
for (btrace_insn_iterator it = *begin; btrace_insn_cmp (&it, end) != 0;
btrace_insn_next (&it, 1))
{
const struct btrace_insn *insn;
insn = btrace_insn_get (&it);
/* A NULL instruction indicates a gap in the trace. */
if (insn == NULL)
{
const struct btrace_config *conf;
conf = btrace_conf (btinfo);
/* We have trace so we must have a configuration. */
gdb_assert (conf != NULL);
uiout->field_fmt ("insn-number", "%u",
btrace_insn_number (&it));
uiout->text ("\t");
btrace_ui_out_decode_error (uiout, btrace_insn_get_error (&it),
conf->format);
}
else
{
struct disasm_insn dinsn;
if ((flags & DISASSEMBLY_SOURCE) != 0)
{
struct btrace_line_range lines;
lines = btrace_find_line_range (insn->pc);
if (!btrace_line_range_is_empty (lines)
&& !btrace_line_range_contains_range (last_lines, lines))
{
btrace_print_lines (lines, uiout, &src_and_asm_tuple, &asm_list,
flags);
last_lines = lines;
}
else if (!src_and_asm_tuple.has_value ())
{
gdb_assert (!asm_list.has_value ());
src_and_asm_tuple.emplace (uiout, "src_and_asm_line");
/* No source information. */
asm_list.emplace (uiout, "line_asm_insn");
}
gdb_assert (src_and_asm_tuple.has_value ());
gdb_assert (asm_list.has_value ());
}
memset (&dinsn, 0, sizeof (dinsn));
dinsn.number = btrace_insn_number (&it);
dinsn.addr = insn->pc;
if ((insn->flags & BTRACE_INSN_FLAG_SPECULATIVE) != 0)
dinsn.is_speculative = 1;
disasm.pretty_print_insn (uiout, &dinsn, flags);
}
}
}
/* The insn_history method of target record-btrace. */
void
record_btrace_target::insn_history (int size, gdb_disassembly_flags flags)
{
struct btrace_thread_info *btinfo;
struct btrace_insn_history *history;
struct btrace_insn_iterator begin, end;
struct ui_out *uiout;
unsigned int context, covered;
uiout = current_uiout;
ui_out_emit_tuple tuple_emitter (uiout, "insn history");
context = abs (size);
if (context == 0)
error (_("Bad record instruction-history-size."));
btinfo = require_btrace ();
history = btinfo->insn_history;
if (history == NULL)
{
struct btrace_insn_iterator *replay;
DEBUG ("insn-history (0x%x): %d", (unsigned) flags, size);
/* If we're replaying, we start at the replay position. Otherwise, we
start at the tail of the trace. */
replay = btinfo->replay;
if (replay != NULL)
begin = *replay;
else
btrace_insn_end (&begin, btinfo);
/* We start from here and expand in the requested direction. Then we
expand in the other direction, as well, to fill up any remaining
context. */
end = begin;
if (size < 0)
{
/* We want the current position covered, as well. */
covered = btrace_insn_next (&end, 1);
covered += btrace_insn_prev (&begin, context - covered);
covered += btrace_insn_next (&end, context - covered);
}
else
{
covered = btrace_insn_next (&end, context);
covered += btrace_insn_prev (&begin, context - covered);
}
}
else
{
begin = history->begin;
end = history->end;
DEBUG ("insn-history (0x%x): %d, prev: [%u; %u)", (unsigned) flags, size,
btrace_insn_number (&begin), btrace_insn_number (&end));
if (size < 0)
{
end = begin;
covered = btrace_insn_prev (&begin, context);
}
else
{
begin = end;
covered = btrace_insn_next (&end, context);
}
}
if (covered > 0)
btrace_insn_history (uiout, btinfo, &begin, &end, flags);
else
{
if (size < 0)
printf_unfiltered (_("At the start of the branch trace record.\n"));
else
printf_unfiltered (_("At the end of the branch trace record.\n"));
}
btrace_set_insn_history (btinfo, &begin, &end);
}
/* The insn_history_range method of target record-btrace. */
void
record_btrace_target::insn_history_range (ULONGEST from, ULONGEST to,
gdb_disassembly_flags flags)
{
struct btrace_thread_info *btinfo;
struct btrace_insn_iterator begin, end;
struct ui_out *uiout;
unsigned int low, high;
int found;
uiout = current_uiout;
ui_out_emit_tuple tuple_emitter (uiout, "insn history");
low = from;
high = to;
DEBUG ("insn-history (0x%x): [%u; %u)", (unsigned) flags, low, high);
/* Check for wrap-arounds. */
if (low != from || high != to)
error (_("Bad range."));
if (high < low)
error (_("Bad range."));
btinfo = require_btrace ();
found = btrace_find_insn_by_number (&begin, btinfo, low);
if (found == 0)
error (_("Range out of bounds."));
found = btrace_find_insn_by_number (&end, btinfo, high);
if (found == 0)
{
/* Silently truncate the range. */
btrace_insn_end (&end, btinfo);
}
else
{
/* We want both begin and end to be inclusive. */
btrace_insn_next (&end, 1);
}
btrace_insn_history (uiout, btinfo, &begin, &end, flags);
btrace_set_insn_history (btinfo, &begin, &end);
}
/* The insn_history_from method of target record-btrace. */
void
record_btrace_target::insn_history_from (ULONGEST from, int size,
gdb_disassembly_flags flags)
{
ULONGEST begin, end, context;
context = abs (size);
if (context == 0)
error (_("Bad record instruction-history-size."));
if (size < 0)
{
end = from;
if (from < context)
begin = 0;
else
begin = from - context + 1;
}
else
{
begin = from;
end = from + context - 1;
/* Check for wrap-around. */
if (end < begin)
end = ULONGEST_MAX;
}
insn_history_range (begin, end, flags);
}
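insn_history_from turns a signed size into an inclusive [BEGIN, END] window around FROM, clamping at zero when looking backwards and at ULONGEST_MAX on forward overflow. A self-contained sketch of that window arithmetic (hypothetical names; unsigned long long stands in for ULONGEST):

```c
#include <assert.h>

/* Compute the inclusive [*BEGIN, *END] window of CONTEXT entries around
   FROM; BACKWARD selects the direction.  Clamps at 0 below and at the
   maximum value on wrap-around, mirroring the method above.  */
static void
history_window (unsigned long long from, int backward,
		unsigned long long context,
		unsigned long long *begin, unsigned long long *end)
{
  if (backward)
    {
      *end = from;
      *begin = from < context ? 0 : from - context + 1;
    }
  else
    {
      *begin = from;
      *end = from + context - 1;
      /* Check for wrap-around.  */
      if (*end < *begin)
	*end = ~0ULL;
    }
}
```
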
/* Print the instruction number range for a function call history line. */
static void
btrace_call_history_insn_range (struct ui_out *uiout,
const struct btrace_function *bfun)
{
unsigned int begin, end, size;
size = bfun->insn.size ();
gdb_assert (size > 0);
begin = bfun->insn_offset;
end = begin + size - 1;
ui_out_field_uint (uiout, "insn begin", begin);
uiout->text (",");
ui_out_field_uint (uiout, "insn end", end);
}
/* Compute the lowest and highest source line for the instructions in BFUN
and return them in PBEGIN and PEND.
Ignore instructions that can't be mapped to BFUN, e.g. instructions that
result from inlining or macro expansion. */
static void
btrace_compute_src_line_range (const struct btrace_function *bfun,
int *pbegin, int *pend)
{
struct symtab *symtab;
struct symbol *sym;
int begin, end;
begin = INT_MAX;
end = INT_MIN;
sym = bfun->sym;
if (sym == NULL)
goto out;
symtab = symbol_symtab (sym);
for (const btrace_insn &insn : bfun->insn)
{
struct symtab_and_line sal;
sal = find_pc_line (insn.pc, 0);
if (sal.symtab != symtab || sal.line == 0)
continue;
begin = std::min (begin, sal.line);
end = std::max (end, sal.line);
}
out:
*pbegin = begin;
*pend = end;
}
/* Print the source line information for a function call history line. */
static void
btrace_call_history_src_line (struct ui_out *uiout,
const struct btrace_function *bfun)
{
struct symbol *sym;
int begin, end;
sym = bfun->sym;
if (sym == NULL)
return;
uiout->field_string ("file",
symtab_to_filename_for_display (symbol_symtab (sym)));
btrace_compute_src_line_range (bfun, &begin, &end);
if (end < begin)
return;
uiout->text (":");
uiout->field_int ("min line", begin);
if (end == begin)
return;
uiout->text (",");
uiout->field_int ("max line", end);
}
/* Get the name of a branch trace function. */
static const char *
btrace_get_bfun_name (const struct btrace_function *bfun)
{
struct minimal_symbol *msym;
struct symbol *sym;
if (bfun == NULL)
return "??";
msym = bfun->msym;
sym = bfun->sym;
if (sym != NULL)
return SYMBOL_PRINT_NAME (sym);
else if (msym != NULL)
return MSYMBOL_PRINT_NAME (msym);
else
return "??";
}
/* Disassemble a section of the recorded function trace. */
static void
btrace_call_history (struct ui_out *uiout,
const struct btrace_thread_info *btinfo,
const struct btrace_call_iterator *begin,
const struct btrace_call_iterator *end,
int int_flags)
{
struct btrace_call_iterator it;
record_print_flags flags = (enum record_print_flag) int_flags;
DEBUG ("ftrace (0x%x): [%u; %u)", int_flags, btrace_call_number (begin),
btrace_call_number (end));
for (it = *begin; btrace_call_cmp (&it, end) < 0; btrace_call_next (&it, 1))
{
const struct btrace_function *bfun;
struct minimal_symbol *msym;
struct symbol *sym;
bfun = btrace_call_get (&it);
sym = bfun->sym;
msym = bfun->msym;
/* Print the function index. */
ui_out_field_uint (uiout, "index", bfun->number);
uiout->text ("\t");
/* Indicate gaps in the trace. */
if (bfun->errcode != 0)
{
const struct btrace_config *conf;
conf = btrace_conf (btinfo);
/* We have trace so we must have a configuration. */
gdb_assert (conf != NULL);
btrace_ui_out_decode_error (uiout, bfun->errcode, conf->format);
continue;
}
if ((flags & RECORD_PRINT_INDENT_CALLS) != 0)
{
int level = bfun->level + btinfo->level, i;
for (i = 0; i < level; ++i)
uiout->text (" ");
}
if (sym != NULL)
uiout->field_string ("function", SYMBOL_PRINT_NAME (sym));
else if (msym != NULL)
uiout->field_string ("function", MSYMBOL_PRINT_NAME (msym));
else if (!uiout->is_mi_like_p ())
uiout->field_string ("function", "??");
if ((flags & RECORD_PRINT_INSN_RANGE) != 0)
{
uiout->text (_("\tinst "));
btrace_call_history_insn_range (uiout, bfun);
}
if ((flags & RECORD_PRINT_SRC_LINE) != 0)
{
uiout->text (_("\tat "));
btrace_call_history_src_line (uiout, bfun);
}
uiout->text ("\n");
}
}
/* The call_history method of target record-btrace. */
void
record_btrace_target::call_history (int size, record_print_flags flags)
{
struct btrace_thread_info *btinfo;
struct btrace_call_history *history;
struct btrace_call_iterator begin, end;
struct ui_out *uiout;
unsigned int context, covered;
uiout = current_uiout;
ui_out_emit_tuple tuple_emitter (uiout, "insn history");
context = abs (size);
if (context == 0)
error (_("Bad record function-call-history-size."));
btinfo = require_btrace ();
history = btinfo->call_history;
if (history == NULL)
{
struct btrace_insn_iterator *replay;
DEBUG ("call-history (0x%x): %d", (int) flags, size);
/* If we're replaying, we start at the replay position. Otherwise, we
start at the tail of the trace. */
replay = btinfo->replay;
if (replay != NULL)
{
begin.btinfo = btinfo;
begin.index = replay->call_index;
}
else
btrace_call_end (&begin, btinfo);
/* We start from here and expand in the requested direction. Then we
expand in the other direction, as well, to fill up any remaining
context. */
end = begin;
if (size < 0)
{
/* We want the current position covered, as well. */
covered = btrace_call_next (&end, 1);
covered += btrace_call_prev (&begin, context - covered);
covered += btrace_call_next (&end, context - covered);
}
else
{
covered = btrace_call_next (&end, context);
covered += btrace_call_prev (&begin, context - covered);
}
}
else
{
begin = history->begin;
end = history->end;
DEBUG ("call-history (0x%x): %d, prev: [%u; %u)", (int) flags, size,
btrace_call_number (&begin), btrace_call_number (&end));
if (size < 0)
{
end = begin;
covered = btrace_call_prev (&begin, context);
}
else
{
begin = end;
covered = btrace_call_next (&end, context);
}
}
if (covered > 0)
btrace_call_history (uiout, btinfo, &begin, &end, flags);
else
{
if (size < 0)
printf_unfiltered (_("At the start of the branch trace record.\n"));
else
printf_unfiltered (_("At the end of the branch trace record.\n"));
}
btrace_set_call_history (btinfo, &begin, &end);
}
/* The call_history_range method of target record-btrace. */
void
record_btrace_target::call_history_range (ULONGEST from, ULONGEST to,
record_print_flags flags)
{
struct btrace_thread_info *btinfo;
struct btrace_call_iterator begin, end;
struct ui_out *uiout;
unsigned int low, high;
int found;
uiout = current_uiout;
ui_out_emit_tuple tuple_emitter (uiout, "func history");
low = from;
high = to;
DEBUG ("call-history (0x%x): [%u; %u)", (int) flags, low, high);
/* Check for wrap-arounds. */
if (low != from || high != to)
error (_("Bad range."));
if (high < low)
error (_("Bad range."));
btinfo = require_btrace ();
found = btrace_find_call_by_number (&begin, btinfo, low);
if (found == 0)
error (_("Range out of bounds."));
found = btrace_find_call_by_number (&end, btinfo, high);
if (found == 0)
{
/* Silently truncate the range. */
btrace_call_end (&end, btinfo);
}
else
{
/* We want both begin and end to be inclusive. */
btrace_call_next (&end, 1);
}
btrace_call_history (uiout, btinfo, &begin, &end, flags);
btrace_set_call_history (btinfo, &begin, &end);
}
/* The call_history_from method of target record-btrace. */
void
record_btrace_target::call_history_from (ULONGEST from, int size,
record_print_flags flags)
{
ULONGEST begin, end, context;
context = abs (size);
if (context == 0)
error (_("Bad record function-call-history-size."));
if (size < 0)
{
end = from;
if (from < context)
begin = 0;
else
begin = from - context + 1;
}
else
{
begin = from;
end = from + context - 1;
/* Check for wrap-around. */
if (end < begin)
end = ULONGEST_MAX;
}
call_history_range (begin, end, flags);
}
/* The record_method method of target record-btrace. */
enum record_method
record_btrace_target::record_method (ptid_t ptid)
{
struct thread_info * const tp = find_thread_ptid (ptid);
if (tp == NULL)
error (_("No thread."));
if (tp->btrace.target == NULL)
return RECORD_METHOD_NONE;
return RECORD_METHOD_BTRACE;
}
/* The record_is_replaying method of target record-btrace. */
int
record_btrace_target::record_is_replaying (ptid_t ptid)
{
struct thread_info *tp;
ALL_NON_EXITED_THREADS (tp)
if (ptid_match (tp->ptid, ptid) && btrace_is_replaying (tp))
return 1;
return 0;
}
/* The record_will_replay method of target record-btrace. */
int
record_btrace_target::record_will_replay (ptid_t ptid, int dir)
{
return dir == EXEC_REVERSE || record_is_replaying (ptid);
}
/* The xfer_partial method of target record-btrace. */
enum target_xfer_status
record_btrace_target::xfer_partial (enum target_object object,
const char *annex, gdb_byte *readbuf,
const gdb_byte *writebuf, ULONGEST offset,
ULONGEST len, ULONGEST *xfered_len)
{
/* Filter out requests that don't make sense during replay. */
if (replay_memory_access == replay_memory_access_read_only
&& !record_btrace_generating_corefile
&& record_is_replaying (inferior_ptid))
{
switch (object)
{
case TARGET_OBJECT_MEMORY:
{
struct target_section *section;
/* We do not allow writing memory in general. */
if (writebuf != NULL)
{
*xfered_len = len;
return TARGET_XFER_UNAVAILABLE;
}
/* We allow reading readonly memory. */
section = target_section_by_addr (this, offset);
if (section != NULL)
{
/* Check if the section we found is readonly. */
if ((bfd_get_section_flags (section->the_bfd_section->owner,
section->the_bfd_section)
& SEC_READONLY) != 0)
{
/* Truncate the request to fit into this section. */
len = std::min (len, section->endaddr - offset);
break;
}
}
*xfered_len = len;
return TARGET_XFER_UNAVAILABLE;
}
}
}
/* Forward the request. */
return this->beneath->xfer_partial (object, annex, readbuf, writebuf,
offset, len, xfered_len);
}
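During replay with replay-memory-access set to read-only, the method above rejects every write and allows reads only when they hit a read-only section; anything else is reported as unavailable. A boiled-down decision sketch (hypothetical names; the int flags stand in for the real target queries):

```c
#include <assert.h>

enum xfer_decision
{
  XFER_FORWARD,		/* Pass the request to the target beneath.  */
  XFER_UNAVAILABLE	/* Reject the request during replay.  */
};

/* Model the read-only replay filter above.  */
static enum xfer_decision
replay_memory_filter (int replaying, int is_write, int in_readonly_section)
{
  if (!replaying)
    return XFER_FORWARD;	/* No filtering outside of replay.  */

  if (is_write)
    return XFER_UNAVAILABLE;	/* Never write while replaying.  */

  if (in_readonly_section)
    return XFER_FORWARD;	/* Read-only memory has not changed.  */

  return XFER_UNAVAILABLE;	/* Writable memory may differ from the
				   recorded state.  */
}
```
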
/* The insert_breakpoint method of target record-btrace. */
int
record_btrace_target::insert_breakpoint (struct gdbarch *gdbarch,
struct bp_target_info *bp_tgt)
{
const char *old;
int ret;
/* Inserting breakpoints requires accessing memory. Allow it for the
duration of this function. */
old = replay_memory_access;
replay_memory_access = replay_memory_access_read_write;
ret = 0;
TRY
{
ret = this->beneath->insert_breakpoint (gdbarch, bp_tgt);
}
CATCH (except, RETURN_MASK_ALL)
{
replay_memory_access = old;
throw_exception (except);
}
END_CATCH
replay_memory_access = old;
return ret;
}
/* The remove_breakpoint method of target record-btrace. */
int
record_btrace_target::remove_breakpoint (struct gdbarch *gdbarch,
struct bp_target_info *bp_tgt,
enum remove_bp_reason reason)
{
const char *old;
int ret;
/* Removing breakpoints requires accessing memory. Allow it for the
duration of this function. */
old = replay_memory_access;
replay_memory_access = replay_memory_access_read_write;
ret = 0;
TRY
{
ret = this->beneath->remove_breakpoint (gdbarch, bp_tgt, reason);
}
CATCH (except, RETURN_MASK_ALL)
{
replay_memory_access = old;
throw_exception (except);
}
END_CATCH
replay_memory_access = old;
return ret;
}
/* The fetch_registers method of target record-btrace. */
void
record_btrace_target::fetch_registers (struct regcache *regcache, int regno)
{
struct btrace_insn_iterator *replay;
struct thread_info *tp;
tp = find_thread_ptid (regcache_get_ptid (regcache));
gdb_assert (tp != NULL);
replay = tp->btrace.replay;
if (replay != NULL && !record_btrace_generating_corefile)
{
const struct btrace_insn *insn;
struct gdbarch *gdbarch;
int pcreg;
gdbarch = regcache->arch ();
pcreg = gdbarch_pc_regnum (gdbarch);
if (pcreg < 0)
return;
/* We can only provide the PC register. */
if (regno >= 0 && regno != pcreg)
return;
insn = btrace_insn_get (replay);
gdb_assert (insn != NULL);
regcache_raw_supply (regcache, regno, &insn->pc);
}
else
this->beneath->fetch_registers (regcache, regno);
}
/* The store_registers method of target record-btrace. */
void
record_btrace_target::store_registers (struct regcache *regcache, int regno)
{
if (!record_btrace_generating_corefile
&& record_is_replaying (regcache_get_ptid (regcache)))
error (_("Cannot write registers while replaying."));
gdb_assert (may_write_registers != 0);
this->beneath->store_registers (regcache, regno);
}
/* The prepare_to_store method of target record-btrace. */
void
record_btrace_target::prepare_to_store (struct regcache *regcache)
{
if (!record_btrace_generating_corefile
&& record_is_replaying (regcache_get_ptid (regcache)))
return;
this->beneath->prepare_to_store (regcache);
}
/* The branch trace frame cache. */
struct btrace_frame_cache
{
/* The thread. */
struct thread_info *tp;
/* The frame info. */
struct frame_info *frame;
/* The branch trace function segment. */
const struct btrace_function *bfun;
};
/* A struct btrace_frame_cache hash table indexed by NEXT. */
static htab_t bfcache;
/* hash_f for htab_create_alloc of bfcache. */
static hashval_t
bfcache_hash (const void *arg)
{
const struct btrace_frame_cache *cache
= (const struct btrace_frame_cache *) arg;
return htab_hash_pointer (cache->frame);
}
/* eq_f for htab_create_alloc of bfcache. */
static int
bfcache_eq (const void *arg1, const void *arg2)
{
const struct btrace_frame_cache *cache1
= (const struct btrace_frame_cache *) arg1;
const struct btrace_frame_cache *cache2
= (const struct btrace_frame_cache *) arg2;
return cache1->frame == cache2->frame;
}
/* Create a new btrace frame cache. */
static struct btrace_frame_cache *
bfcache_new (struct frame_info *frame)
{
struct btrace_frame_cache *cache;
void **slot;
cache = FRAME_OBSTACK_ZALLOC (struct btrace_frame_cache);
cache->frame = frame;
slot = htab_find_slot (bfcache, cache, INSERT);
gdb_assert (*slot == NULL);
*slot = cache;
return cache;
}
/* Extract the branch trace function from a branch trace frame. */
static const struct btrace_function *
btrace_get_frame_function (struct frame_info *frame)
{
const struct btrace_frame_cache *cache;
struct btrace_frame_cache pattern;
void **slot;
pattern.frame = frame;
slot = htab_find_slot (bfcache, &pattern, NO_INSERT);
if (slot == NULL)
return NULL;
cache = (const struct btrace_frame_cache *) *slot;
return cache->bfun;
}
/* Implement stop_reason method for record_btrace_frame_unwind. */
static enum unwind_stop_reason
record_btrace_frame_unwind_stop_reason (struct frame_info *this_frame,
void **this_cache)
{
const struct btrace_frame_cache *cache;
const struct btrace_function *bfun;
cache = (const struct btrace_frame_cache *) *this_cache;
bfun = cache->bfun;
gdb_assert (bfun != NULL);
if (bfun->up == 0)
return UNWIND_UNAVAILABLE;
return UNWIND_NO_REASON;
}
/* Implement this_id method for record_btrace_frame_unwind. */
static void
record_btrace_frame_this_id (struct frame_info *this_frame, void **this_cache,
struct frame_id *this_id)
{
const struct btrace_frame_cache *cache;
const struct btrace_function *bfun;
struct btrace_call_iterator it;
CORE_ADDR code, special;
cache = (const struct btrace_frame_cache *) *this_cache;
bfun = cache->bfun;
gdb_assert (bfun != NULL);
while (btrace_find_call_by_number (&it, &cache->tp->btrace, bfun->prev) != 0)
bfun = btrace_call_get (&it);
code = get_frame_func (this_frame);
special = bfun->number;
*this_id = frame_id_build_unavailable_stack_special (code, special);
DEBUG ("[frame] %s id: (!stack, pc=%s, special=%s)",
btrace_get_bfun_name (cache->bfun),
core_addr_to_string_nz (this_id->code_addr),
core_addr_to_string_nz (this_id->special_addr));
}
/* Implement prev_register method for record_btrace_frame_unwind. */
static struct value *
record_btrace_frame_prev_register (struct frame_info *this_frame,
void **this_cache,
int regnum)
{
const struct btrace_frame_cache *cache;
const struct btrace_function *bfun, *caller;
struct btrace_call_iterator it;
struct gdbarch *gdbarch;
CORE_ADDR pc;
int pcreg;
gdbarch = get_frame_arch (this_frame);
pcreg = gdbarch_pc_regnum (gdbarch);
if (pcreg < 0 || regnum != pcreg)
throw_error (NOT_AVAILABLE_ERROR,
_("Registers are not available in btrace record history"));
cache = (const struct btrace_frame_cache *) *this_cache;
bfun = cache->bfun;
gdb_assert (bfun != NULL);
if (btrace_find_call_by_number (&it, &cache->tp->btrace, bfun->up) == 0)
throw_error (NOT_AVAILABLE_ERROR,
_("No caller in btrace record history"));
caller = btrace_call_get (&it);
if ((bfun->flags & BFUN_UP_LINKS_TO_RET) != 0)
pc = caller->insn.front ().pc;
else
{
pc = caller->insn.back ().pc;
pc += gdb_insn_length (gdbarch, pc);
}
DEBUG ("[frame] unwound PC in %s on level %d: %s",
btrace_get_bfun_name (bfun), bfun->level,
core_addr_to_string_nz (pc));
return frame_unwind_got_address (this_frame, regnum, pc);
}
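The unwound PC above has two cases: when the up link denotes the segment we returned to, execution resumes at that segment's first instruction; when it denotes the calling segment, it resumes just past the caller's final (call) instruction. A minimal arithmetic sketch of those two cases (hypothetical names):

```c
#include <assert.h>

/* Return the PC at which the caller segment resumes.  LINKS_TO_RET
   selects between the two cases; FIRST_PC and LAST_PC are the segment's
   first and last instruction addresses, LAST_LEN the length of the last
   (call) instruction.  */
static unsigned long long
unwound_pc (int links_to_ret, unsigned long long first_pc,
	    unsigned long long last_pc, int last_len)
{
  if (links_to_ret)
    return first_pc;

  return last_pc + last_len;
}
```
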
/* Implement sniffer method for record_btrace_frame_unwind. */
static int
record_btrace_frame_sniffer (const struct frame_unwind *self,
struct frame_info *this_frame,
void **this_cache)
{
const struct btrace_function *bfun;
struct btrace_frame_cache *cache;
struct thread_info *tp;
struct frame_info *next;
/* THIS_FRAME does not contain a reference to its thread. */
tp = find_thread_ptid (inferior_ptid);
gdb_assert (tp != NULL);
bfun = NULL;
next = get_next_frame (this_frame);
if (next == NULL)
{
const struct btrace_insn_iterator *replay;
replay = tp->btrace.replay;
if (replay != NULL)
bfun = &replay->btinfo->functions[replay->call_index];
}
else
{
const struct btrace_function *callee;
struct btrace_call_iterator it;
callee = btrace_get_frame_function (next);
if (callee == NULL || (callee->flags & BFUN_UP_LINKS_TO_TAILCALL) != 0)
return 0;
if (btrace_find_call_by_number (&it, &tp->btrace, callee->up) == 0)
return 0;
bfun = btrace_call_get (&it);
}
if (bfun == NULL)
return 0;
DEBUG ("[frame] sniffed frame for %s on level %d",
btrace_get_bfun_name (bfun), bfun->level);
/* This is our frame. Initialize the frame cache. */
cache = bfcache_new (this_frame);
cache->tp = tp;
cache->bfun = bfun;
*this_cache = cache;
return 1;
}
/* Implement sniffer method for record_btrace_tailcall_frame_unwind. */
static int
record_btrace_tailcall_frame_sniffer (const struct frame_unwind *self,
struct frame_info *this_frame,
void **this_cache)
{
const struct btrace_function *bfun, *callee;
struct btrace_frame_cache *cache;
struct btrace_call_iterator it;
struct frame_info *next;
struct thread_info *tinfo;
next = get_next_frame (this_frame);
if (next == NULL)
return 0;
callee = btrace_get_frame_function (next);
if (callee == NULL)
return 0;
if ((callee->flags & BFUN_UP_LINKS_TO_TAILCALL) == 0)
return 0;
tinfo = find_thread_ptid (inferior_ptid);
if (btrace_find_call_by_number (&it, &tinfo->btrace, callee->up) == 0)
return 0;
bfun = btrace_call_get (&it);
DEBUG ("[frame] sniffed tailcall frame for %s on level %d",
btrace_get_bfun_name (bfun), bfun->level);
/* This is our frame. Initialize the frame cache. */
cache = bfcache_new (this_frame);
cache->tp = tinfo;
cache->bfun = bfun;
*this_cache = cache;
return 1;
}
static void
record_btrace_frame_dealloc_cache (struct frame_info *self, void *this_cache)
{
struct btrace_frame_cache *cache;
void **slot;
cache = (struct btrace_frame_cache *) this_cache;
slot = htab_find_slot (bfcache, cache, NO_INSERT);
gdb_assert (slot != NULL);
htab_remove_elt (bfcache, cache);
}
/* Btrace recording does not store previous memory contents, nor the
contents of the stack frames. Any unwinding would return erroneous results,
as the stack contents no longer match the changed PC value restored from
the history. Therefore this unwinder reports any possibly unwound registers
as <unavailable>. */
const struct frame_unwind record_btrace_frame_unwind =
{
NORMAL_FRAME,
record_btrace_frame_unwind_stop_reason,
record_btrace_frame_this_id,
record_btrace_frame_prev_register,
NULL,
record_btrace_frame_sniffer,
record_btrace_frame_dealloc_cache
};
const struct frame_unwind record_btrace_tailcall_frame_unwind =
{
TAILCALL_FRAME,
record_btrace_frame_unwind_stop_reason,
record_btrace_frame_this_id,
record_btrace_frame_prev_register,
NULL,
record_btrace_tailcall_frame_sniffer,
record_btrace_frame_dealloc_cache
};
/* Implement the get_unwinder method. */
const struct frame_unwind *
record_btrace_target::get_unwinder ()
{
return &record_btrace_frame_unwind;
}
/* Implement the get_tailcall_unwinder method. */
const struct frame_unwind *
record_btrace_target::get_tailcall_unwinder ()
{
return &record_btrace_tailcall_frame_unwind;
}
/* Return a human-readable string for FLAG. */
static const char *
btrace_thread_flag_to_str (enum btrace_thread_flag flag)
{
switch (flag)
{
case BTHR_STEP:
return "step";
case BTHR_RSTEP:
return "reverse-step";
case BTHR_CONT:
return "cont";
case BTHR_RCONT:
return "reverse-cont";
case BTHR_STOP:
return "stop";
}
return "<invalid>";
}
/* Indicate that TP should be resumed according to FLAG. */
static void
record_btrace_resume_thread (struct thread_info *tp,
enum btrace_thread_flag flag)
{
struct btrace_thread_info *btinfo;
DEBUG ("resuming thread %s (%s): %x (%s)", print_thread_id (tp),
target_pid_to_str (tp->ptid), flag, btrace_thread_flag_to_str (flag));
btinfo = &tp->btrace;
/* Fetch the latest branch trace. */
btrace_fetch (tp, record_btrace_get_cpu ());
/* A resume request overwrites a preceding resume or stop request. */
btinfo->flags &= ~(BTHR_MOVE | BTHR_STOP);
btinfo->flags |= flag;
}
/* Get the current frame for TP. */
static struct frame_info *
get_thread_current_frame (struct thread_info *tp)
{
struct frame_info *frame;
ptid_t old_inferior_ptid;
int executing;
/* Set INFERIOR_PTID, which is implicitly used by get_current_frame. */
old_inferior_ptid = inferior_ptid;
inferior_ptid = tp->ptid;
/* Clear the executing flag to allow changes to the current frame.
We are not actually running, yet. We just started a reverse execution
command or a record goto command.
For the latter, EXECUTING is false and this has no effect.
For the former, EXECUTING is true and we're in wait, about to
move the thread. Since we need to recompute the stack, we temporarily
set EXECUTING to false. */
executing = is_executing (inferior_ptid);
set_executing (inferior_ptid, 0);
frame = NULL;
TRY
{
frame = get_current_frame ();
}
CATCH (except, RETURN_MASK_ALL)
{
/* Restore the previous execution state. */
set_executing (inferior_ptid, executing);
/* Restore the previous inferior_ptid. */
inferior_ptid = old_inferior_ptid;
throw_exception (except);
}
END_CATCH
/* Restore the previous execution state. */
set_executing (inferior_ptid, executing);
/* Restore the previous inferior_ptid. */
inferior_ptid = old_inferior_ptid;
return frame;
}
/* Start replaying a thread. */
static struct btrace_insn_iterator *
record_btrace_start_replaying (struct thread_info *tp)
{
struct btrace_insn_iterator *replay;
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
replay = NULL;
/* We can't start replaying without trace. */
if (btinfo->functions.empty ())
return NULL;
/* GDB stores the current frame_id when stepping in order to detect steps
into subroutines.
Since frames are computed differently when we're replaying, we need to
recompute those stored frames and fix them up so we can still detect
subroutines after we started replaying. */
TRY
{
struct frame_info *frame;
struct frame_id frame_id;
int upd_step_frame_id, upd_step_stack_frame_id;
/* The current frame without replaying - computed via normal unwind. */
frame = get_thread_current_frame (tp);
frame_id = get_frame_id (frame);
/* Check if we need to update any stepping-related frame id's. */
upd_step_frame_id = frame_id_eq (frame_id,
tp->control.step_frame_id);
upd_step_stack_frame_id = frame_id_eq (frame_id,
tp->control.step_stack_frame_id);
/* We start replaying at the end of the branch trace. This corresponds
to the current instruction. */
replay = XNEW (struct btrace_insn_iterator);
btrace_insn_end (replay, btinfo);
/* Skip gaps at the end of the trace. */
while (btrace_insn_get (replay) == NULL)
{
unsigned int steps;
steps = btrace_insn_prev (replay, 1);
if (steps == 0)
error (_("No trace."));
}
/* We're not replaying, yet. */
gdb_assert (btinfo->replay == NULL);
btinfo->replay = replay;
/* Make sure we're not using any stale registers. */
registers_changed_ptid (tp->ptid);
/* The current frame with replaying - computed via btrace unwind. */
frame = get_thread_current_frame (tp);
frame_id = get_frame_id (frame);
/* Replace stepping related frames where necessary. */
if (upd_step_frame_id)
tp->control.step_frame_id = frame_id;
if (upd_step_stack_frame_id)
tp->control.step_stack_frame_id = frame_id;
}
CATCH (except, RETURN_MASK_ALL)
{
xfree (btinfo->replay);
btinfo->replay = NULL;
registers_changed_ptid (tp->ptid);
throw_exception (except);
}
END_CATCH
return replay;
}
/* Stop replaying a thread. */
static void
record_btrace_stop_replaying (struct thread_info *tp)
{
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
xfree (btinfo->replay);
btinfo->replay = NULL;
/* Make sure we're not leaving any stale registers. */
registers_changed_ptid (tp->ptid);
}
/* Stop replaying TP if it is at the end of its execution history. */
static void
record_btrace_stop_replaying_at_end (struct thread_info *tp)
{
struct btrace_insn_iterator *replay, end;
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
replay = btinfo->replay;
if (replay == NULL)
return;
btrace_insn_end (&end, btinfo);
if (btrace_insn_cmp (replay, &end) == 0)
record_btrace_stop_replaying (tp);
}
/* The resume method of target record-btrace. */
void
record_btrace_target::resume (ptid_t ptid, int step, enum gdb_signal signal)
{
struct thread_info *tp;
enum btrace_thread_flag flag, cflag;
DEBUG ("resume %s: %s%s", target_pid_to_str (ptid),
::execution_direction == EXEC_REVERSE ? "reverse-" : "",
step ? "step" : "cont");
/* Store the execution direction of the last resume.
If there is more than one resume call, we have to rely on infrun
to not change the execution direction in-between. */
record_btrace_resume_exec_dir = ::execution_direction;
/* As long as we're not replaying, just forward the request.
For non-stop targets this means that no thread is replaying. In order to
make progress, we may need to explicitly move replaying threads to the end
of their execution history. */
if ((::execution_direction != EXEC_REVERSE)
&& !record_is_replaying (minus_one_ptid))
{
this->beneath->resume (ptid, step, signal);
return;
}
/* Compute the btrace thread flag for the requested move. */
if (::execution_direction == EXEC_REVERSE)
{
flag = step == 0 ? BTHR_RCONT : BTHR_RSTEP;
cflag = BTHR_RCONT;
}
else
{
flag = step == 0 ? BTHR_CONT : BTHR_STEP;
cflag = BTHR_CONT;
}
/* We just indicate the resume intent here. The actual stepping happens in
record_btrace_wait below.
For all-stop targets, we only step INFERIOR_PTID and continue others. */
if (!target_is_non_stop_p ())
{
gdb_assert (ptid_match (inferior_ptid, ptid));
ALL_NON_EXITED_THREADS (tp)
if (ptid_match (tp->ptid, ptid))
{
if (ptid_match (tp->ptid, inferior_ptid))
record_btrace_resume_thread (tp, flag);
else
record_btrace_resume_thread (tp, cflag);
}
}
else
{
ALL_NON_EXITED_THREADS (tp)
if (ptid_match (tp->ptid, ptid))
record_btrace_resume_thread (tp, flag);
}
/* Async support. */
if (target_can_async_p ())
{
target_async (1);
mark_async_event_handler (record_btrace_async_inferior_event_handler);
}
}
/* The commit_resume method of target record-btrace. */
void
record_btrace_target::commit_resume ()
{
if ((::execution_direction != EXEC_REVERSE)
&& !record_is_replaying (minus_one_ptid))
beneath->commit_resume ();
}
/* Cancel resuming TP. */
static void
record_btrace_cancel_resume (struct thread_info *tp)
{
enum btrace_thread_flag flags;
flags = tp->btrace.flags & (BTHR_MOVE | BTHR_STOP);
if (flags == 0)
return;
DEBUG ("cancel resume thread %s (%s): %x (%s)",
print_thread_id (tp),
target_pid_to_str (tp->ptid), flags,
btrace_thread_flag_to_str (flags));
tp->btrace.flags &= ~(BTHR_MOVE | BTHR_STOP);
record_btrace_stop_replaying_at_end (tp);
}
/* Return a target_waitstatus indicating that we ran out of history. */
static struct target_waitstatus
btrace_step_no_history (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_NO_HISTORY;
return status;
}
/* Return a target_waitstatus indicating that a step finished. */
static struct target_waitstatus
btrace_step_stopped (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_STOPPED;
status.value.sig = GDB_SIGNAL_TRAP;
return status;
}
/* Return a target_waitstatus indicating that a thread was stopped as
requested. */
static struct target_waitstatus
btrace_step_stopped_on_request (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_STOPPED;
status.value.sig = GDB_SIGNAL_0;
return status;
}
/* Return a target_waitstatus indicating a spurious stop. */
static struct target_waitstatus
btrace_step_spurious (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_SPURIOUS;
return status;
}
/* Return a target_waitstatus indicating that the thread was not resumed. */
static struct target_waitstatus
btrace_step_no_resumed (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_NO_RESUMED;
return status;
}
/* Return a target_waitstatus indicating that we should wait again. */
static struct target_waitstatus
btrace_step_again (void)
{
struct target_waitstatus status;
status.kind = TARGET_WAITKIND_IGNORE;
return status;
}
/* Clear the record histories. */
static void
record_btrace_clear_histories (struct btrace_thread_info *btinfo)
{
xfree (btinfo->insn_history);
xfree (btinfo->call_history);
btinfo->insn_history = NULL;
btinfo->call_history = NULL;
}
/* Check whether TP's current replay position is at a breakpoint. */
static int
record_btrace_replay_at_breakpoint (struct thread_info *tp)
{
struct btrace_insn_iterator *replay;
struct btrace_thread_info *btinfo;
const struct btrace_insn *insn;
struct inferior *inf;
btinfo = &tp->btrace;
replay = btinfo->replay;
if (replay == NULL)
return 0;
insn = btrace_insn_get (replay);
if (insn == NULL)
return 0;
inf = find_inferior_ptid (tp->ptid);
if (inf == NULL)
return 0;
return record_check_stopped_by_breakpoint (inf->aspace, insn->pc,
&btinfo->stop_reason);
}
/* Step one instruction in forward direction. */
static struct target_waitstatus
record_btrace_single_step_forward (struct thread_info *tp)
{
struct btrace_insn_iterator *replay, end, start;
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
replay = btinfo->replay;
/* We're done if we're not replaying. */
if (replay == NULL)
return btrace_step_no_history ();
/* Check if we're stepping a breakpoint. */
if (record_btrace_replay_at_breakpoint (tp))
return btrace_step_stopped ();
/* Skip gaps during replay. If we end up at a gap (at the end of the trace),
jump back to the instruction at which we started. */
start = *replay;
do
{
unsigned int steps;
/* We will bail out here if we continue stepping after reaching the end
of the execution history. */
steps = btrace_insn_next (replay, 1);
if (steps == 0)
{
*replay = start;
return btrace_step_no_history ();
}
}
while (btrace_insn_get (replay) == NULL);
/* Determine the end of the instruction trace. */
btrace_insn_end (&end, btinfo);
/* The execution trace contains (and ends with) the current instruction.
This instruction has not been executed, yet, so the trace really ends
one instruction earlier. */
if (btrace_insn_cmp (replay, &end) == 0)
return btrace_step_no_history ();
return btrace_step_spurious ();
}
/* Step one instruction in backward direction. */
static struct target_waitstatus
record_btrace_single_step_backward (struct thread_info *tp)
{
struct btrace_insn_iterator *replay, start;
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
replay = btinfo->replay;
/* Start replaying if we're not already doing so. */
if (replay == NULL)
replay = record_btrace_start_replaying (tp);
/* If we can't step any further, we reached the end of the history.
Skip gaps during replay. If we end up at a gap (at the beginning of
the trace), jump back to the instruction at which we started. */
start = *replay;
do
{
unsigned int steps;
steps = btrace_insn_prev (replay, 1);
if (steps == 0)
{
*replay = start;
return btrace_step_no_history ();
}
}
while (btrace_insn_get (replay) == NULL);
/* Check if we're stepping a breakpoint.
For reverse-stepping, this check is after the step. There is logic in
infrun.c that handles reverse-stepping separately. See, for example,
proceed and adjust_pc_after_break.
This code assumes that for reverse-stepping, PC points to the last
de-executed instruction, whereas for forward-stepping PC points to the
next to-be-executed instruction. */
if (record_btrace_replay_at_breakpoint (tp))
return btrace_step_stopped ();
return btrace_step_spurious ();
}
/* Step a single thread. */
static struct target_waitstatus
record_btrace_step_thread (struct thread_info *tp)
{
struct btrace_thread_info *btinfo;
struct target_waitstatus status;
enum btrace_thread_flag flags;
btinfo = &tp->btrace;
flags = btinfo->flags & (BTHR_MOVE | BTHR_STOP);
btinfo->flags &= ~(BTHR_MOVE | BTHR_STOP);
DEBUG ("stepping thread %s (%s): %x (%s)", print_thread_id (tp),
target_pid_to_str (tp->ptid), flags,
btrace_thread_flag_to_str (flags));
/* We can't step without an execution history. */
if ((flags & BTHR_MOVE) != 0 && btrace_is_empty (tp))
return btrace_step_no_history ();
switch (flags)
{
default:
internal_error (__FILE__, __LINE__, _("invalid stepping type."));
case BTHR_STOP:
return btrace_step_stopped_on_request ();
case BTHR_STEP:
status = record_btrace_single_step_forward (tp);
if (status.kind != TARGET_WAITKIND_SPURIOUS)
break;
return btrace_step_stopped ();
case BTHR_RSTEP:
status = record_btrace_single_step_backward (tp);
if (status.kind != TARGET_WAITKIND_SPURIOUS)
break;
return btrace_step_stopped ();
case BTHR_CONT:
status = record_btrace_single_step_forward (tp);
if (status.kind != TARGET_WAITKIND_SPURIOUS)
break;
btinfo->flags |= flags;
return btrace_step_again ();
case BTHR_RCONT:
status = record_btrace_single_step_backward (tp);
if (status.kind != TARGET_WAITKIND_SPURIOUS)
break;
btinfo->flags |= flags;
return btrace_step_again ();
}
/* We keep threads moving at the end of their execution history. The wait
method will stop the thread for which the event is reported. */
if (status.kind == TARGET_WAITKIND_NO_HISTORY)
btinfo->flags |= flags;
return status;
}
/* A vector of threads. */
typedef struct thread_info * tp_t;
DEF_VEC_P (tp_t);
/* Announce further events if necessary. */
static void
record_btrace_maybe_mark_async_event
(const std::vector<thread_info *> &moving,
const std::vector<thread_info *> &no_history)
{
bool more_moving = !moving.empty ();
bool more_no_history = !no_history.empty ();
if (!more_moving && !more_no_history)
return;
if (more_moving)
DEBUG ("movers pending");
if (more_no_history)
DEBUG ("no-history pending");
mark_async_event_handler (record_btrace_async_inferior_event_handler);
}
/* The wait method of target record-btrace. */
ptid_t
record_btrace_target::wait (ptid_t ptid, struct target_waitstatus *status,
int options)
{
std::vector<thread_info *> moving;
std::vector<thread_info *> no_history;
DEBUG ("wait %s (0x%x)", target_pid_to_str (ptid), options);
/* As long as we're not replaying, just forward the request. */
if ((::execution_direction != EXEC_REVERSE)
&& !record_is_replaying (minus_one_ptid))
{
return this->beneath->wait (ptid, status, options);
}
/* Keep a work list of moving threads. */
{
thread_info *tp;
ALL_NON_EXITED_THREADS (tp)
{
if (ptid_match (tp->ptid, ptid)
&& ((tp->btrace.flags & (BTHR_MOVE | BTHR_STOP)) != 0))
moving.push_back (tp);
}
}
if (moving.empty ())
{
*status = btrace_step_no_resumed ();
DEBUG ("wait ended by %s: %s", target_pid_to_str (null_ptid),
target_waitstatus_to_string (status).c_str ());
return null_ptid;
}
/* Step moving threads one by one, one step each, until either one thread
reports an event or we run out of threads to step.
When stepping more than one thread, chances are that some threads reach
the end of their execution history earlier than others. If we reported
this immediately, all-stop on top of non-stop would stop all threads and
resume the same threads next time. And we would report the same thread
having reached the end of its execution history again.
In the worst case, this would starve the other threads. But even if other
threads would be allowed to make progress, this would result in far too
many intermediate stops.
We therefore delay the reporting of "no execution history" until we have
nothing else to report. By this time, all threads should have moved to
either the beginning or the end of their execution history. There will
be a single user-visible stop. */
struct thread_info *eventing = NULL;
while ((eventing == NULL) && !moving.empty ())
{
for (unsigned int ix = 0; eventing == NULL && ix < moving.size ();)
{
thread_info *tp = moving[ix];
*status = record_btrace_step_thread (tp);
switch (status->kind)
{
case TARGET_WAITKIND_IGNORE:
ix++;
break;
case TARGET_WAITKIND_NO_HISTORY:
no_history.push_back (ordered_remove (moving, ix));
break;
default:
eventing = unordered_remove (moving, ix);
break;
}
}
}
if (eventing == NULL)
{
/* We started with at least one moving thread. This thread must have
either stopped or reached the end of its execution history.
In the former case, EVENTING must not be NULL.
In the latter case, NO_HISTORY must not be empty. */
gdb_assert (!no_history.empty ());
/* We kept threads moving at the end of their execution history. Stop
EVENTING now that we are going to report its stop. */
eventing = unordered_remove (no_history, 0);
eventing->btrace.flags &= ~BTHR_MOVE;
*status = btrace_step_no_history ();
}
gdb_assert (eventing != NULL);
/* We kept threads replaying at the end of their execution history. Stop
replaying EVENTING now that we are going to report its stop. */
record_btrace_stop_replaying_at_end (eventing);
/* Stop all other threads. */
if (!target_is_non_stop_p ())
{
thread_info *tp;
ALL_NON_EXITED_THREADS (tp)
record_btrace_cancel_resume (tp);
}
/* In async mode, we need to announce further events. */
if (target_is_async_p ())
record_btrace_maybe_mark_async_event (moving, no_history);
/* Start record histories anew from the current position. */
record_btrace_clear_histories (&eventing->btrace);
/* We moved the replay position but did not update registers. */
registers_changed_ptid (eventing->ptid);
DEBUG ("wait ended by thread %s (%s): %s",
print_thread_id (eventing),
target_pid_to_str (eventing->ptid),
target_waitstatus_to_string (status).c_str ());
return eventing->ptid;
}
/* The stop method of target record-btrace. */
void
record_btrace_target::stop (ptid_t ptid)
{
DEBUG ("stop %s", target_pid_to_str (ptid));
/* As long as we're not replaying, just forward the request. */
if ((::execution_direction != EXEC_REVERSE)
&& !record_is_replaying (minus_one_ptid))
{
this->beneath->stop (ptid);
}
else
{
struct thread_info *tp;
ALL_NON_EXITED_THREADS (tp)
if (ptid_match (tp->ptid, ptid))
{
tp->btrace.flags &= ~BTHR_MOVE;
tp->btrace.flags |= BTHR_STOP;
}
}
}
/* The can_execute_reverse method of target record-btrace. */
int
record_btrace_target::can_execute_reverse ()
{
return 1;
}
/* The stopped_by_sw_breakpoint method of target record-btrace. */
int
record_btrace_target::stopped_by_sw_breakpoint ()
{
if (record_is_replaying (minus_one_ptid))
{
struct thread_info *tp = inferior_thread ();
return tp->btrace.stop_reason == TARGET_STOPPED_BY_SW_BREAKPOINT;
}
return this->beneath->stopped_by_sw_breakpoint ();
}
/* The supports_stopped_by_sw_breakpoint method of target
record-btrace. */
int
record_btrace_target::supports_stopped_by_sw_breakpoint ()
{
if (record_is_replaying (minus_one_ptid))
return 1;
return this->beneath->supports_stopped_by_sw_breakpoint ();
}
/* The stopped_by_hw_breakpoint method of target record-btrace. */
int
record_btrace_target::stopped_by_hw_breakpoint ()
{
if (record_is_replaying (minus_one_ptid))
{
struct thread_info *tp = inferior_thread ();
return tp->btrace.stop_reason == TARGET_STOPPED_BY_HW_BREAKPOINT;
}
return this->beneath->stopped_by_hw_breakpoint ();
}
/* The supports_stopped_by_hw_breakpoint method of target
record-btrace. */
int
record_btrace_target::supports_stopped_by_hw_breakpoint ()
{
if (record_is_replaying (minus_one_ptid))
return 1;
return this->beneath->supports_stopped_by_hw_breakpoint ();
}
/* The update_thread_list method of target record-btrace. */
void
record_btrace_target::update_thread_list ()
{
/* We don't add or remove threads during replay. */
if (record_is_replaying (minus_one_ptid))
return;
/* Forward the request. */
this->beneath->update_thread_list ();
}
/* The thread_alive method of target record-btrace. */
int
record_btrace_target::thread_alive (ptid_t ptid)
{
/* We don't add or remove threads during replay. */
if (record_is_replaying (minus_one_ptid))
return find_thread_ptid (ptid) != NULL;
/* Forward the request. */
return this->beneath->thread_alive (ptid);
}
/* Set the replay branch trace instruction iterator. If IT is NULL, replay
is stopped. */
static void
record_btrace_set_replay (struct thread_info *tp,
const struct btrace_insn_iterator *it)
{
struct btrace_thread_info *btinfo;
btinfo = &tp->btrace;
if (it == NULL)
record_btrace_stop_replaying (tp);
else
{
if (btinfo->replay == NULL)
record_btrace_start_replaying (tp);
else if (btrace_insn_cmp (btinfo->replay, it) == 0)
return;
*btinfo->replay = *it;
registers_changed_ptid (tp->ptid);
}
/* Start anew from the new replay position. */
record_btrace_clear_histories (btinfo);
stop_pc = regcache_read_pc (get_current_regcache ());
print_stack_frame (get_selected_frame (NULL), 1, SRC_AND_LOC, 1);
}
/* The goto_record_begin method of target record-btrace. */
void
record_btrace_target::goto_record_begin ()
{
struct thread_info *tp;
struct btrace_insn_iterator begin;
tp = require_btrace_thread ();
btrace_insn_begin (&begin, &tp->btrace);
/* Skip gaps at the beginning of the trace. */
while (btrace_insn_get (&begin) == NULL)
{
unsigned int steps;
steps = btrace_insn_next (&begin, 1);
if (steps == 0)
error (_("No trace."));
}
record_btrace_set_replay (tp, &begin);
}
/* The goto_record_end method of target record-btrace. */
void
record_btrace_target::goto_record_end ()
{
struct thread_info *tp;
tp = require_btrace_thread ();
record_btrace_set_replay (tp, NULL);
}
/* The goto_record method of target record-btrace. */
void
record_btrace_target::goto_record (ULONGEST insn)
{
struct thread_info *tp;
struct btrace_insn_iterator it;
unsigned int number;
int found;
number = insn;
/* Check for wrap-arounds. */
if (number != insn)
error (_("Instruction number out of range."));
tp = require_btrace_thread ();
found = btrace_find_insn_by_number (&it, &tp->btrace, number);
/* Check if the instruction could not be found or is a gap. */
if (found == 0 || btrace_insn_get (&it) == NULL)
error (_("No such instruction."));
record_btrace_set_replay (tp, &it);
}
/* The record_stop_replaying method of target record-btrace. */
void
record_btrace_target::record_stop_replaying ()
{
struct thread_info *tp;
ALL_NON_EXITED_THREADS (tp)
record_btrace_stop_replaying (tp);
}
/* The execution_direction target method. */
enum exec_direction_kind
record_btrace_target::execution_direction ()
{
return record_btrace_resume_exec_dir;
}
/* The prepare_to_generate_core target method. */
void
record_btrace_target::prepare_to_generate_core ()
{
record_btrace_generating_corefile = 1;
}
/* The done_generating_core target method. */
void
record_btrace_target::done_generating_core ()
{
record_btrace_generating_corefile = 0;
}
/* Start recording in BTS format. */
static void
cmd_record_btrace_bts_start (const char *args, int from_tty)
{
if (args != NULL && *args != 0)
error (_("Invalid argument."));
record_btrace_conf.format = BTRACE_FORMAT_BTS;
TRY
{
execute_command ("target record-btrace", from_tty);
}
CATCH (exception, RETURN_MASK_ALL)
{
record_btrace_conf.format = BTRACE_FORMAT_NONE;
throw_exception (exception);
}
END_CATCH
}
/* Start recording in Intel Processor Trace format. */
static void
cmd_record_btrace_pt_start (const char *args, int from_tty)
{
if (args != NULL && *args != 0)
error (_("Invalid argument."));
record_btrace_conf.format = BTRACE_FORMAT_PT;
TRY
{
execute_command ("target record-btrace", from_tty);
}
CATCH (exception, RETURN_MASK_ALL)
{
record_btrace_conf.format = BTRACE_FORMAT_NONE;
throw_exception (exception);
}
END_CATCH
}
/* Alias for "target record". */
static void
cmd_record_btrace_start (const char *args, int from_tty)
{
if (args != NULL && *args != 0)
error (_("Invalid argument."));
record_btrace_conf.format = BTRACE_FORMAT_PT;
TRY
{
execute_command ("target record-btrace", from_tty);
}
CATCH (exception, RETURN_MASK_ALL)
{
record_btrace_conf.format = BTRACE_FORMAT_BTS;
TRY
{
execute_command ("target record-btrace", from_tty);
}
CATCH (exception, RETURN_MASK_ALL)
{
record_btrace_conf.format = BTRACE_FORMAT_NONE;
throw_exception (exception);
}
END_CATCH
}
END_CATCH
}
/* The "set record btrace" command. */
static void
cmd_set_record_btrace (const char *args, int from_tty)
{
printf_unfiltered (_("\"set record btrace\" must be followed "
"by an appropriate subcommand.\n"));
help_list (set_record_btrace_cmdlist, "set record btrace ",
all_commands, gdb_stdout);
}
/* The "show record btrace" command. */
static void
cmd_show_record_btrace (const char *args, int from_tty)
{
cmd_show_list (show_record_btrace_cmdlist, from_tty, "");
}
/* The "show record btrace replay-memory-access" command. */
static void
cmd_show_replay_memory_access (struct ui_file *file, int from_tty,
struct cmd_list_element *c, const char *value)
{
fprintf_filtered (gdb_stdout, _("Replay memory access is %s.\n"),
replay_memory_access);
}
/* The "set record btrace cpu none" command. */
static void
cmd_set_record_btrace_cpu_none (const char *args, int from_tty)
{
if (args != nullptr && *args != 0)
error (_("Trailing junk: '%s'."), args);
record_btrace_cpu_state = CS_NONE;
}
/* The "set record btrace cpu auto" command. */
static void
cmd_set_record_btrace_cpu_auto (const char *args, int from_tty)
{
if (args != nullptr && *args != 0)
error (_("Trailing junk: '%s'."), args);
record_btrace_cpu_state = CS_AUTO;
}
/* The "set record btrace cpu" command. */
static void
cmd_set_record_btrace_cpu (const char *args, int from_tty)
{
if (args == nullptr)
args = "";
/* We use a hard-coded vendor string for now. */
unsigned int family, model, stepping;
int l1, l2, matches = sscanf (args, "intel: %u/%u%n/%u%n", &family,
&model, &l1, &stepping, &l2);
if (matches == 3)
{
if (strlen (args) != l2)
error (_("Trailing junk: '%s'."), args + l2);
}
else if (matches == 2)
{
if (strlen (args) != l1)
error (_("Trailing junk: '%s'."), args + l1);
stepping = 0;
}
else
error (_("Bad format. See \"help set record btrace cpu\"."));
if (USHRT_MAX < family)
error (_("Cpu family too big."));
if (UCHAR_MAX < model)
error (_("Cpu model too big."));
if (UCHAR_MAX < stepping)
error (_("Cpu stepping too big."));
record_btrace_cpu.vendor = CV_INTEL;
record_btrace_cpu.family = family;
record_btrace_cpu.model = model;
record_btrace_cpu.stepping = stepping;
record_btrace_cpu_state = CS_CPU;
}
/* The "show record btrace cpu" command. */
static void
cmd_show_record_btrace_cpu (const char *args, int from_tty)
{
const char *cpu;
if (args != nullptr && *args != 0)
error (_("Trailing junk: '%s'."), args);
switch (record_btrace_cpu_state)
{
case CS_AUTO:
printf_unfiltered (_("btrace cpu is 'auto'.\n"));
return;
case CS_NONE:
printf_unfiltered (_("btrace cpu is 'none'.\n"));
return;
case CS_CPU:
switch (record_btrace_cpu.vendor)
{
case CV_INTEL:
if (record_btrace_cpu.stepping == 0)
printf_unfiltered (_("btrace cpu is 'intel: %u/%u'.\n"),
record_btrace_cpu.family,
record_btrace_cpu.model);
else
printf_unfiltered (_("btrace cpu is 'intel: %u/%u/%u'.\n"),
record_btrace_cpu.family,
record_btrace_cpu.model,
record_btrace_cpu.stepping);
return;
}
}
error (_("Internal error: bad cpu state."));
}
/* The "set record btrace bts" command. */
static void
cmd_set_record_btrace_bts (const char *args, int from_tty)
{
printf_unfiltered (_("\"set record btrace bts\" must be followed "
"by an appropriate subcommand.\n"));
help_list (set_record_btrace_bts_cmdlist, "set record btrace bts ",
all_commands, gdb_stdout);
}
/* The "show record btrace bts" command. */
static void
cmd_show_record_btrace_bts (const char *args, int from_tty)
{
cmd_show_list (show_record_btrace_bts_cmdlist, from_tty, "");
}
/* The "set record btrace pt" command. */
static void
cmd_set_record_btrace_pt (const char *args, int from_tty)
{
printf_unfiltered (_("\"set record btrace pt\" must be followed "
"by an appropriate subcommand.\n"));
help_list (set_record_btrace_pt_cmdlist, "set record btrace pt ",
all_commands, gdb_stdout);
}
/* The "show record btrace pt" command. */
static void
cmd_show_record_btrace_pt (const char *args, int from_tty)
{
cmd_show_list (show_record_btrace_pt_cmdlist, from_tty, "");
}
/* The "record bts buffer-size" show value function. */
static void
show_record_bts_buffer_size_value (struct ui_file *file, int from_tty,
struct cmd_list_element *c,
const char *value)
{
fprintf_filtered (file, _("The record/replay bts buffer size is %s.\n"),
value);
}
/* The "record pt buffer-size" show value function. */
static void
show_record_pt_buffer_size_value (struct ui_file *file, int from_tty,
struct cmd_list_element *c,
const char *value)
{
fprintf_filtered (file, _("The record/replay pt buffer size is %s.\n"),
value);
}
/* Initialize btrace commands. */
void
_initialize_record_btrace (void)
{
add_prefix_cmd ("btrace", class_obscure, cmd_record_btrace_start,
_("Start branch trace recording."), &record_btrace_cmdlist,
"record btrace ", 0, &record_cmdlist);
add_alias_cmd ("b", "btrace", class_obscure, 1, &record_cmdlist);
add_cmd ("bts", class_obscure, cmd_record_btrace_bts_start,
_("\
Start branch trace recording in Branch Trace Store (BTS) format.\n\n\
The processor stores a from/to record for each branch into a cyclic buffer.\n\
This format may not be available on all processors."),
&record_btrace_cmdlist);
add_alias_cmd ("bts", "btrace bts", class_obscure, 1, &record_cmdlist);
add_cmd ("pt", class_obscure, cmd_record_btrace_pt_start,
_("\
Start branch trace recording in Intel Processor Trace format.\n\n\
This format may not be available on all processors."),
&record_btrace_cmdlist);
add_alias_cmd ("pt", "btrace pt", class_obscure, 1, &record_cmdlist);
add_prefix_cmd ("btrace", class_support, cmd_set_record_btrace,
_("Set record options"), &set_record_btrace_cmdlist,
"set record btrace ", 0, &set_record_cmdlist);
add_prefix_cmd ("btrace", class_support, cmd_show_record_btrace,
_("Show record options"), &show_record_btrace_cmdlist,
"show record btrace ", 0, &show_record_cmdlist);
add_setshow_enum_cmd ("replay-memory-access", no_class,
replay_memory_access_types, &replay_memory_access, _("\
Set what memory accesses are allowed during replay."), _("\
Show what memory accesses are allowed during replay."),
_("Default is READ-ONLY.\n\n\
The btrace record target does not trace data.\n\
The memory therefore corresponds to the live target and not \
to the current replay position.\n\n\
When READ-ONLY, allow accesses to read-only memory during replay.\n\
When READ-WRITE, allow accesses to read-only and read-write memory during \
replay."),
NULL, cmd_show_replay_memory_access,
&set_record_btrace_cmdlist,
&show_record_btrace_cmdlist);
add_prefix_cmd ("cpu", class_support, cmd_set_record_btrace_cpu,
_("\
Set the cpu to be used for trace decode.\n\n\
The format is \"<vendor>:<identifier>\" or \"none\" or \"auto\" (default).\n\
For vendor \"intel\" the format is \"<family>/<model>[/<stepping>]\".\n\n\
When decoding branch trace, enable errata workarounds for the specified cpu.\n\
The default is \"auto\", which uses the cpu on which the trace was recorded.\n\
When GDB does not support that cpu, this option can be used to enable\n\
workarounds for a similar cpu that GDB supports.\n\n\
When set to \"none\", errata workarounds are disabled."),
&set_record_btrace_cpu_cmdlist,
"set record btrace cpu ", 1,
&set_record_btrace_cmdlist);
add_cmd ("auto", class_support, cmd_set_record_btrace_cpu_auto, _("\
Automatically determine the cpu to be used for trace decode."),
&set_record_btrace_cpu_cmdlist);
add_cmd ("none", class_support, cmd_set_record_btrace_cpu_none, _("\
Do not enable errata workarounds for trace decode."),
&set_record_btrace_cpu_cmdlist);
add_cmd ("cpu", class_support, cmd_show_record_btrace_cpu, _("\
Show the cpu to be used for trace decode."),
&show_record_btrace_cmdlist);
add_prefix_cmd ("bts", class_support, cmd_set_record_btrace_bts,
_("Set record btrace bts options"),
&set_record_btrace_bts_cmdlist,
"set record btrace bts ", 0, &set_record_btrace_cmdlist);
add_prefix_cmd ("bts", class_support, cmd_show_record_btrace_bts,
_("Show record btrace bts options"),
&show_record_btrace_bts_cmdlist,
"show record btrace bts ", 0, &show_record_btrace_cmdlist);
add_setshow_uinteger_cmd ("buffer-size", no_class,
&record_btrace_conf.bts.size,
_("Set the record/replay bts buffer size."),
_("Show the record/replay bts buffer size."), _("\
When starting recording request a trace buffer of this size. \
The actual buffer size may differ from the requested size. \
Use \"info record\" to see the actual buffer size.\n\n\
Bigger buffers allow longer recording but also take more time to process \
the recorded execution trace.\n\n\
The trace buffer size may not be changed while recording."), NULL,
show_record_bts_buffer_size_value,
&set_record_btrace_bts_cmdlist,
&show_record_btrace_bts_cmdlist);
add_prefix_cmd ("pt", class_support, cmd_set_record_btrace_pt,
_("Set record btrace pt options"),
&set_record_btrace_pt_cmdlist,
"set record btrace pt ", 0, &set_record_btrace_cmdlist);
add_prefix_cmd ("pt", class_support, cmd_show_record_btrace_pt,
_("Show record btrace pt options"),
&show_record_btrace_pt_cmdlist,
"show record btrace pt ", 0, &show_record_btrace_cmdlist);
add_setshow_uinteger_cmd ("buffer-size", no_class,
&record_btrace_conf.pt.size,
_("Set the record/replay pt buffer size."),
_("Show the record/replay pt buffer size."), _("\
Bigger buffers allow longer recording but also take more time to process \
the recorded execution.\n\
The actual buffer size may differ from the requested size. Use \"info record\" \
to see the actual buffer size."), NULL, show_record_pt_buffer_size_value,
&set_record_btrace_pt_cmdlist,
&show_record_btrace_pt_cmdlist);
add_target (&record_btrace_ops);
bfcache = htab_create_alloc (50, bfcache_hash, bfcache_eq, NULL,
xcalloc, xfree);
record_btrace_conf.bts.size = 64 * 1024;
record_btrace_conf.pt.size = 16 * 1024;
}