I would like to be able to use non-trivial types in gdbarch_tdep types.
This is not possible at the moment (in theory), because of the one
definition rule.
To allow it, rename all gdbarch_tdep types to <arch>_gdbarch_tdep, and
make them inherit from a gdbarch_tdep base class. The inheritance is
necessary to be able to pass pointers to all these <arch>_gdbarch_tdep
objects to gdbarch_alloc, which takes a pointer to gdbarch_tdep.
These objects are never deleted through a base class pointer, so I
didn't include a virtual destructor. In the future, if gdbarch objects
become deletable, I could imagine that the gdbarch_tdep objects could become
owned by the gdbarch objects, and then it would become useful to have a
virtual destructor (so that the gdbarch object can delete the owned
gdbarch_tdep object). But that's not necessary right now.
It turns out that RISC-V already has a gdbarch_tdep that is
non-default-constructible, so that provides a good motivation for this
change.
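To make the shape of the change concrete, here is a minimal standalone
sketch of the scheme; the xlen member and the helper function names are
illustrative stand-ins, not the actual GDB declarations:
--
/* Common base class, so that a pointer to any <arch>_gdbarch_tdep can
   be passed where a gdbarch_tdep * is expected (e.g. gdbarch_alloc).
   No virtual destructor: these objects are never deleted through a
   base class pointer.  */
struct gdbarch_tdep
{
};

/* An architecture-specific tdep that is not default-constructible,
   similar to the RISC-V case.  The xlen member is made up.  */
struct riscv_gdbarch_tdep : gdbarch_tdep
{
  explicit riscv_gdbarch_tdep (int xlen_) : xlen (xlen_) {}
  int xlen;
};

/* Stand-in for gdbarch_alloc: it only sees the base class.  */
void gdbarch_alloc_sketch (gdbarch_tdep *tdep)
{
  /* ... stash TDEP in the new gdbarch ... */
  (void) tdep;
}

/* Code that knows the architecture casts back down, which is where
   the casts mentioned below come from.  */
int get_xlen_sketch (gdbarch_tdep *tdep)
{
  return static_cast<riscv_gdbarch_tdep *> (tdep)->xlen;
}
--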
Most changes are fairly straightforward, mostly needing to add some
casts all over the place. There is however the xtensa architecture,
which does its own little weird thing to define its gdbarch_tdep. I did
my best to adapt it, but I can't test those changes.
Change-Id: Ic001903f91ddd106bd6ca09a79dabe8df2d69f3b
The bug fixed by this [1] patch was caused by an out-of-bounds access to
a value's content. The code gets the value's content (just a pointer)
and then indexes it with a nonsensical index.
This made me think of changing functions that return value contents to
return array_views instead of a plain pointer. This has the advantage
that when GDB is built with _GLIBCXX_DEBUG, accesses to the array_view
are checked, making bugs more apparent / easier to find.
This patch changes the return types of these functions, and updates
callers to call .data() on the result, meaning it's not changing
anything in practice. Additional work will be needed (which can be done
little by little) to make callers propagate the use of array_view and
reap the benefits.
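As a rough standalone illustration of the pattern (using a hand-rolled
view type and a hypothetical function name rather than GDB's actual
gdb::array_view and value API):
--
#include <cassert>
#include <cstddef>

typedef unsigned char gdb_byte;

/* Hand-rolled stand-in for gdb::array_view<const gdb_byte>: it carries
   the length along with the pointer, so element accesses can be
   bounds-checked (gdb::array_view does this when _GLIBCXX_DEBUG is in
   effect).  */
struct byte_view
{
  const gdb_byte *m_data;
  std::size_t m_size;

  const gdb_byte &operator[] (std::size_t i) const
  {
    assert (i < m_size);
    return m_data[i];
  }

  const gdb_byte *data () const { return m_data; }
  std::size_t size () const { return m_size; }
};

/* Before: a bare const gdb_byte * return, with no way to check
   indexing.  After: return a view.  Existing callers keep working by
   calling .data () on the result, which is what this patch does
   mechanically.  */
byte_view value_contents_sketch (const gdb_byte *buf, std::size_t len)
{
  return { buf, len };
}
--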
[1] https://sourceware.org/pipermail/gdb-patches/2021-September/182306.html
Change-Id: I5151f888f169e1c36abe2cbc57620110673816f3
The r_ldsomap field is specific to Solaris (part of librtld_db), and
should never be accessed for Linux. glibc is planning to add a field
to support multiple namespaces. But there will be no r_ldsomap when
r_version is bumped to 2. Add linux_[ilp32|lp64]_fetch_link_map_offsets
to set r_ldsomap_offset to -1 and use them for Linux targets.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28236
Remove the logical tag/top byte from the address whenever we have to work with
allocation tags.
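A minimal sketch of the address adjustment, assuming the usual AArch64
layout where the logical tag lives in the top byte (bits 63:56); the
helper name is made up:
--
#include <cstdint>

/* Strip the logical tag / top byte (bits 63:56) from an AArch64
   address before using it to look up or compare allocation tags.
   Hypothetical helper name, for illustration only.  */
static uint64_t
strip_top_byte (uint64_t addr)
{
  return addr & ~(UINT64_C (0xff) << 56);
}
--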
gdb/ChangeLog:
2021-06-28 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c (aarch64_linux_memtag_matches_p): Remove the top
byte.
(aarch64_linux_set_memtags): Likewise.
(aarch64_linux_get_memtag): Likewise.
(aarch64_linux_report_signal_info): Likewise.
The FFR register has a size of VL bits, not 32 bits.
This causes issues when writing core files with the gcore command and when
reading them. The FFR register sometimes shows up with garbage data.
gdb/ChangeLog:
2021-06-28 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c
(aarch64_linux_iterate_over_regset_sections): Fix FFR register size.
This register should be 64 bits in size, but the current code only saves
32 bits. This is due to an early assumption that tag_ctl would be 32 bits
in size.
gdb/ChangeLog:
2021-06-28 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c
(aarch64_linux_iterate_over_regset_sections): Update tag_ctl register
size.
* aarch64-linux-tdep.h (AARCH64_LINUX_SIZEOF_MTE_REGSET): Set to
8 and update comments.
This patch handles the tagged_addr_ctrl register that is exported when
generating a core file.
gdb/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c
(aarch64_linux_iterate_over_regset_sections): Handle MTE register set.
* aarch64-linux-tdep.h (AARCH64_LINUX_SIZEOF_MTE_REGSET): Define.
Whenever a memory tag violation occurs, we get a SIGSEGV. Additional
information can be obtained through the siginfo data structure.
For AArch64 the Linux kernel may expose the fault address and tag
information, if we have a synchronous event. Otherwise there is
no fault address available.
The synchronous event looks like this:
--
(gdb) continue
Continuing.
Program received signal SIGSEGV, Segmentation fault
Memory tag violation while accessing address 0x0500fffff7ff8000
Allocation tag 0x1.
Logical tag 0x5
--
The asynchronous event looks like this:
--
(gdb) continue
Continuing.
Program received signal SIGSEGV, Segmentation fault
Memory tag violation
Fault address unavailable.
--
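A standalone sketch of how the two flavours can be told apart from the
siginfo si_code; the SEGV_MTESERR/SEGV_MTEAERR values below follow my
reading of the Linux uapi headers, so treat them as assumptions and
prefer the kernel definitions:
--
#include <cinttypes>
#include <cstdint>
#include <cstdio>

/* si_code values for MTE faults, as I recall them from the Linux uapi
   siginfo headers; the real patch defines them in
   arch/aarch64-linux.h.  */
#define SEGV_MTEAERR 8	/* Asynchronous fault: no fault address.  */
#define SEGV_MTESERR 9	/* Synchronous fault: fault address is valid.  */

/* Sketch of the decision the report_signal_info hook makes.  */
static void
report_mte_signal (int si_code, uint64_t fault_addr)
{
  if (si_code == SEGV_MTESERR)
    printf ("Memory tag violation while accessing address 0x%016" PRIx64 "\n",
	    fault_addr);
  else if (si_code == SEGV_MTEAERR)
    printf ("Memory tag violation\nFault address unavailable.\n");
}
--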
gdb/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c
(aarch64_linux_report_signal_info): New function.
(aarch64_linux_init_abi): Register
aarch64_linux_report_signal_info as the report_signal_info hook.
* arch/aarch64-linux.h (SEGV_MTEAERR): Define.
(SEGV_MTESERR): Define.
Add some unit testing to exercise setting/getting logical tags in the
AArch64 implementation.
gdb/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c: Include gdbsupport/selftest.h.
(aarch64_linux_ltag_tests): New function.
(_initialize_aarch64_linux_tdep): Register aarch64_linux_ltag_tests.
This patch adds a target description and feature "mte" for aarch64.
It includes one new register, tag_ctl, that can be used to configure the
tag generation rules and sync/async modes. It is 64-bit in size.
The patch also adjusts the code that creates the target descriptions at
runtime based on CPU feature checks.
gdb/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-nat.c
(aarch64_linux_nat_target::read_description): Take MTE flag into
account.
Slight refactor to hwcap flag checking.
* aarch64-linux-tdep.c
(aarch64_linux_core_read_description): Likewise.
* aarch64-tdep.c (tdesc_aarch64_list): Add one more dimension for
MTE.
(aarch64_read_description): Add mte_p parameter and update to use it.
Update the documentation.
(aarch64_gdbarch_init): Update call to aarch64_read_description.
* aarch64-tdep.h (aarch64_read_description): Add mte_p parameter.
* arch/aarch64.c: Include ../features/aarch64-mte.c.
(aarch64_create_target_description): Add mte_p parameter and update
the code to use it.
* arch/aarch64.h (aarch64_create_target_description): Add mte_p
parameter.
* features/Makefile (FEATURE_XMLFILES): Add aarch64-mte.xml.
* features/aarch64-mte.c: New file, generated.
* features/aarch64-mte.xml: New file.
gdbserver/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* linux-aarch64-ipa.cc (get_ipa_tdesc): Update call to
aarch64_linux_read_description.
(initialize_low_tracepoint): Likewise.
* linux-aarch64-low.cc (aarch64_target::low_arch_setup): Take MTE flag
into account.
* linux-aarch64-tdesc.cc (tdesc_aarch64_list): Add one more dimension
for MTE.
(aarch64_linux_read_description): Add mte_p parameter and update to
use it.
* linux-aarch64-tdesc.h (aarch64_linux_read_description): Add mte_p
parameter.
This patch is a preparation for the next patches implementing MTE. It just adds
a HWCAP2 constant for MTE, creates a new generic arch/aarch64-mte-linux.h file
and includes that file in the source files that will use it.
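Roughly, the new header boils down to something like the following; the
bit position is quoted from memory, the authoritative value is
HWCAP2_MTE in the kernel's asm/hwcap.h uapi header:
--
/* Sketch of the new arch/aarch64-mte-linux.h content (simplified).  */
#ifndef ARCH_AARCH64_MTE_LINUX_H
#define ARCH_AARCH64_MTE_LINUX_H

#ifndef HWCAP2_MTE
#define HWCAP2_MTE (1UL << 18)
#endif

#endif /* ARCH_AARCH64_MTE_LINUX_H */
--
A consumer then simply tests (hwcap2 & HWCAP2_MTE) when deciding which
target description to request.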
gdb/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* Makefile.in (HFILES_NO_SRCDIR): Add arch/aarch64-mte-linux.h.
* aarch64-linux-nat.c: Include arch/aarch64-mte-linux.h.
* aarch64-linux-tdep.c: Likewise.
* arch/aarch64-mte-linux.h: New file.
gdbserver/ChangeLog:
2021-03-24 Luis Machado <luis.machado@linaro.org>
* linux-aarch64-low.cc: Include arch/aarch64-mte-linux.h.
This patch updates the functions setting value bytes in trad-frame to use
a gdb::array_view instead of passing a buffer and a size.
gdb/ChangeLog:
2021-01-19 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c (aarch64_linux_restore_vreg): Pass in an
array_view.
* trad-frame.c (trad_frame_set_value_bytes): Use gdb::array_view
instead of buffer and size.
(trad_frame_set_reg_value_bytes): Likewise.
* trad-frame.h (trad_frame_set_reg_value_bytes): Likewise.
(trad_frame_set_value_bytes): Likewise.
This commits the result of running gdb/copyright.py as per our Start
of New Year procedure...
gdb/ChangeLog
Update copyright year range in copyright header of all GDB files.
The FPSIMD dump in signal frames and the ptrace FPSIMD dump in the SVE
context structure follow the target endianness, whereas the SVE dumps
are endianness-independent (LE).
Therefore, when the system is in BE mode, we need to reverse the bytes
for the FPSIMD data.
Given the V registers are larger than 64 bits, I've added a way for value
bytes to be set, as opposed to passing a fixed 64-bit quantity. This fits
nicely with the unwinding *_got_bytes function and makes the trad-frame
more flexible and capable of saving larger registers.
The memory for the bytes is allocated via the frame obstack, so it gets freed
after we're done inspecting the frame.
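A minimal standalone sketch of the 128-bit swap this needs; the real
helper is aarch64_maybe_swab128, this just shows the idea:
--
#include <algorithm>
#include <cstdint>

/* Reverse a 16-byte (128-bit) V register image in place when the
   target is big-endian, so the FPSIMD dump matches the LE layout used
   by the SVE dumps.  Standalone sketch of the kind of swap
   aarch64_maybe_swab128 performs.  */
static void
maybe_swab128 (uint8_t *v_reg, bool target_big_endian)
{
  if (target_big_endian)
    std::reverse (v_reg, v_reg + 16);
}
--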
gdb/ChangeLog:
2020-12-10 Luis Machado <luis.machado@linaro.org>
* aarch64-linux-tdep.c (aarch64_linux_restore_vreg): New function.
(aarch64_linux_sigframe_init): Call aarch64_linux_restore_vreg.
* aarch64-tdep.h (V_REGISTER_SIZE): Move to ...
* arch/aarch64.h: ... here.
* nat/aarch64-sve-linux-ptrace.c: Include endian.h.
(aarch64_maybe_swab128): New function.
(aarch64_sve_regs_copy_to_reg_buf)
(aarch64_sve_regs_copy_from_reg_buf): Adjust FPSIMD entries.
* trad-frame.c (trad_frame_reset_saved_regs): Initialize
the data field.
(TF_REG_VALUE_BYTES): New enum value.
(trad_frame_value_bytes_p): New function.
(trad_frame_set_value_bytes): New function.
(trad_frame_set_reg_value_bytes): New function.
(trad_frame_get_prev_register): Handle register values saved as bytes.
* trad-frame.h (trad_frame_set_reg_value_bytes): New prototype.
(struct trad_frame_saved_reg) <data>: New field.
(trad_frame_set_value_bytes): New prototype.
(trad_frame_value_bytes_p): New prototype.
Today, GDB only allows a single displaced stepping operation to happen
per inferior at a time. There is a single displaced stepping buffer per
inferior, whose address is fixed (obtained with
gdbarch_displaced_step_location), managed by infrun.c.
In the case of the AMD ROCm target [1] (in the context of which this
work has been done), it is typical to have thousands of threads (or
waves, in SMT terminology) executing the same code, hitting the same
breakpoint (possibly conditional) and needing to displaced step it at
the same time. The limitation of only one displaced step executing at
any given time becomes a real bottleneck.
To fix this bottleneck, we want to make it possible for threads of the
same inferior to execute multiple displaced steps in parallel. This
patch builds the foundation for that.
In essence, this patch moves the task of preparing a displaced step and
cleaning up after to gdbarch functions. This allows using different
schemes for allocating and managing displaced stepping buffers for
different platforms. The gdbarch decides how to assign a buffer to a
thread that needs to execute a displaced step.
On the ROCm target, we are able to allocate one displaced stepping
buffer per thread, so a thread will never have to wait to execute a
displaced step.
On Linux, the entry point of the executable is used as the displaced
stepping buffer, since we assume that this code won't get used after
startup. From what I saw (I checked with binaries generated against
glibc and musl), on AMD64 we have enough space there to fit two
displaced stepping buffers. A subsequent patch makes AMD64/Linux use
two buffers.
In addition to having multiple displaced stepping buffers, there is also
the idea of sharing displaced stepping buffers between threads. Two
threads doing displaced steps for the same PC could use the same buffer
at the same time. Two threads stepping over the same instruction (same
opcode) at two different PCs may also be able to share a displaced
stepping buffer. This is an idea for future patches, but the
architecture built by this patch is made to allow this.
Now, the implementation details. The main part of this patch is moving
the responsibility of preparing and finishing a displaced step to the
gdbarch. Before this patch, preparing a displaced step is driven by the
displaced_step_prepare_throw function. It does some calls to the
gdbarch to do some low-level operations, but the high-level logic is
there. The steps are roughly:
- Ask the gdbarch for the displaced step buffer location
- Save the existing bytes in the displaced step buffer
- Ask the gdbarch to copy the instruction into the displaced step buffer
- Set the pc of the thread to the beginning of the displaced step buffer
Similarly, the "fixup" phase, executed after the instruction was
successfully single-stepped, is driven by the infrun code (function
displaced_step_finish). The steps are roughly:
- Restore the original bytes in the displaced stepping buffer
- Ask the gdbarch to fixup the instruction result (adjust the target's
registers or memory to do as if the instruction had been executed in
its original location)
The displaced_step_inferior_state::step_thread field indicates which
thread (if any) is currently using the displaced stepping buffer, so it
is used by displaced_step_prepare_throw to check if the displaced
stepping buffer is free to use or not.
This patch defers the whole task of preparing and cleaning up after a
displaced step to the gdbarch. Two new main gdbarch methods are added,
with the following semantics:
- gdbarch_displaced_step_prepare: Prepare for the given thread to
execute a displaced step of the instruction located at its current PC.
Upon return, everything should be ready for GDB to resume the thread
(with either a single step or continue, as indicated by
gdbarch_displaced_step_hw_singlestep) to make it displaced step the
instruction.
- gdbarch_displaced_step_finish: Called when the thread stopped after
having started a displaced step. Verify if the instruction was
executed, if so apply any fixup required to compensate for the fact
that the instruction was executed at a different place than its
original pc. Release any resources that were allocated for this
displaced step. Upon return, everything should be ready for GDB to
resume the thread in its "normal" code path.
The displaced_step_prepare_throw function now pretty much just offloads
to gdbarch_displaced_step_prepare and the displaced_step_finish function
offloads to gdbarch_displaced_step_finish.
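To make the new division of labour concrete, here is a heavily
simplified standalone model of a buffer-managing helper in the spirit
of the displaced_step_buffer class; names, members and status values
are illustrative, not GDB's actual API:
--
#include <vector>

struct thread_info;		/* Opaque for this sketch.  */

enum class prepare_status { OK, UNAVAILABLE };

/* Simplified model of per-gdbarch buffer management: prepare () hands
   a free buffer to a thread, or reports that none is available so that
   infrun re-queues the thread; finish () releases the buffer.  */
class displaced_step_buffers_sketch
{
public:
  explicit displaced_step_buffers_sketch (int n_buffers)
    : m_owner (n_buffers, nullptr)
  {}

  prepare_status prepare (thread_info *thread)
  {
    for (thread_info *&owner : m_owner)
      if (owner == nullptr)
        {
          owner = thread;
          /* The real code also saves the original bytes, copies the
             instruction in and redirects the thread's PC here.  */
          return prepare_status::OK;
        }
    return prepare_status::UNAVAILABLE;
  }

  void finish (thread_info *thread)
  {
    for (thread_info *&owner : m_owner)
      if (owner == thread)
        {
          /* The real code restores the original bytes and applies the
             architecture's fixup here.  */
          owner = nullptr;
          return;
        }
  }

private:
  std::vector<thread_info *> m_owner;	/* nullptr means the buffer is free.  */
};
--
Per the steps listed earlier, the real code additionally tracks the
saved original bytes and the copy_insn closure for each buffer.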
The gdbarch_displaced_step_location method is now unnecessary, so it is
removed. Indeed, the core of GDB no longer knows how many displaced step
buffers there are nor where they are.
To keep the existing behavior for existing architectures, the logic that
was previously implemented in infrun.c for preparing and finishing a
displaced step is moved to displaced-stepping.c, to the
displaced_step_buffer class. Architectures are modified to implement
the new gdbarch methods using this class. The behavior is not expected
to change.
The other important change (which arises from the above) is that the
core of GDB no longer prevents concurrent displaced steps. Before this
patch, start_step_over walks the global step over chain and tries to
initiate a step over (whether it is in-line or displaced). It follows
these rules:
- if an in-line step is in progress (in any inferior), don't start any
other step over
- if a displaced step is in progress for an inferior, don't start
another displaced step for that inferior
After starting a displaced step for a given inferior, it won't start
another displaced step for that inferior.
In the new code, start_step_over simply tries to initiate step overs for
all the threads in the list. But because threads may be added back to
the global list while it is being iterated to initiate step overs,
start_step_over now starts by stealing the global queue into a local
queue and iterating on that local queue. In the typical case, each
thread will either:
- have initiated a displaced step and be resumed
- have been added back to the global step over queue by
displaced_step_prepare_throw, because the gdbarch will have returned
that there aren't enough resources (i.e. buffers) to initiate a
displaced step for that thread
Lastly, if start_step_over initiates an in-line step, it stops
iterating, and moves back whatever remaining threads it had in its local
step over queue to the global step over queue.
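The steal-then-iterate flow described above, as a standalone sketch
with hypothetical types and a stubbed-out per-thread starter:
--
#include <list>

struct thread_info;		/* Opaque for this sketch.  */

enum class start_result { STARTED_DISPLACED, NEEDS_BUFFER, STARTED_INLINE };

/* Stub standing in for the per-thread logic.  */
static start_result
try_start_step_over (thread_info *)
{
  return start_result::STARTED_DISPLACED;
}

static std::list<thread_info *> global_step_over_queue;

/* Sketch of the new start_step_over flow: steal the global queue so
   that threads re-enqueued while we iterate don't make us loop
   forever, and stop early if an in-line step is started.  */
static void
start_step_over_sketch ()
{
  std::list<thread_info *> local_queue;
  local_queue.splice (local_queue.end (), global_step_over_queue);

  while (!local_queue.empty ())
    {
      thread_info *thread = local_queue.front ();
      local_queue.pop_front ();

      start_result res = try_start_step_over (thread);
      if (res == start_result::NEEDS_BUFFER)
        /* No buffer available: the thread goes back to the global
           queue to be retried later.  */
        global_step_over_queue.push_back (thread);
      else if (res == start_result::STARTED_INLINE)
        {
          /* An in-line step blocks everything else: give back whatever
             is left and stop.  */
          global_step_over_queue.splice (global_step_over_queue.begin (),
                                         local_queue);
          return;
        }
    }
}
--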
Two other gdbarch methods are added, to handle some slightly annoying
corner cases. They feel awkwardly specific to these cases, but I don't
see any way around them:
- gdbarch_displaced_step_copy_insn_closure_by_addr: in
arm_pc_is_thumb, arm-tdep.c wants to get the closure for a given
buffer address.
- gdbarch_displaced_step_restore_all_in_ptid: when a process forks
(at least on Linux), the address space is copied. If some displaced
step buffers were in use at the time of the fork, we need to restore
the original bytes in the child's address space.
These two adjustments are also made in infrun.c:
- prepare_for_detach: there may be multiple threads doing displaced
steps when we detach, so wait until all of them are done
- handle_inferior_event: when we handle a fork event for a given
thread, it's possible that other threads are doing a displaced step at
the same time. Make sure to restore the displaced step buffer
contents in the child for them.
[1] https://github.com/ROCm-Developer-Tools/ROCgdb
gdb/ChangeLog:
* displaced-stepping.h (struct
displaced_step_copy_insn_closure): Adjust comments.
(struct displaced_step_inferior_state) <step_thread,
step_gdbarch, step_closure, step_original, step_copy,
step_saved_copy>: Remove fields.
(struct displaced_step_thread_state): New.
(struct displaced_step_buffer): New.
* displaced-stepping.c (displaced_step_buffer::prepare): New.
(write_memory_ptid): Move from infrun.c.
(displaced_step_instruction_executed_successfully): New,
factored out of displaced_step_finish.
(displaced_step_buffer::finish): New.
(displaced_step_buffer::copy_insn_closure_by_addr): New.
(displaced_step_buffer::restore_in_ptid): New.
* gdbarch.sh (displaced_step_location): Remove.
(displaced_step_prepare, displaced_step_finish,
displaced_step_copy_insn_closure_by_addr,
displaced_step_restore_all_in_ptid): New.
* gdbarch.c: Re-generate.
* gdbarch.h: Re-generate.
* gdbthread.h (class thread_info) <displaced_step_state>: New
field.
(thread_step_over_chain_remove): New declaration.
(thread_step_over_chain_next): New declaration.
(thread_step_over_chain_length): New declaration.
* thread.c (thread_step_over_chain_remove): Make non-static.
(thread_step_over_chain_next): New.
(global_thread_step_over_chain_next): Use
thread_step_over_chain_next.
(thread_step_over_chain_length): New.
(global_thread_step_over_chain_enqueue): Add debug print.
(global_thread_step_over_chain_remove): Add debug print.
* infrun.h (get_displaced_step_copy_insn_closure_by_addr):
Remove.
* infrun.c (get_displaced_stepping_state): New.
(displaced_step_in_progress_any_inferior): Remove.
(displaced_step_in_progress_thread): Adjust.
(displaced_step_in_progress): Adjust.
(displaced_step_in_progress_any_thread): New.
(get_displaced_step_copy_insn_closure_by_addr): Remove.
(gdbarch_supports_displaced_stepping): Use
gdbarch_displaced_step_prepare_p.
(displaced_step_reset): Change parameter from inferior to
thread.
(displaced_step_prepare_throw): Implement using
gdbarch_displaced_step_prepare.
(write_memory_ptid): Move to displaced-step.c.
(displaced_step_restore): Remove.
(displaced_step_finish): Implement using
gdbarch_displaced_step_finish.
(start_step_over): Allow starting more than one displaced step.
(prepare_for_detach): Handle possibly multiple threads doing
displaced steps.
(handle_inferior_event): Handle possibility that fork event
happens while another thread displaced steps.
* linux-tdep.h (linux_displaced_step_prepare): New.
(linux_displaced_step_finish): New.
(linux_displaced_step_copy_insn_closure_by_addr): New.
(linux_displaced_step_restore_all_in_ptid): New.
(linux_init_abi): Add supports_displaced_step parameter.
* linux-tdep.c (struct linux_info) <disp_step_buf>: New field.
(linux_displaced_step_prepare): New.
(linux_displaced_step_finish): New.
(linux_displaced_step_copy_insn_closure_by_addr): New.
(linux_displaced_step_restore_all_in_ptid): New.
(linux_init_abi): Add supports_displaced_step parameter,
register displaced step methods if true.
(_initialize_linux_tdep): Register inferior_execd observer.
* amd64-linux-tdep.c (amd64_linux_init_abi_common): Add
supports_displaced_step parameter, adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
(amd64_linux_init_abi): Adjust call to
amd64_linux_init_abi_common.
(amd64_x32_linux_init_abi): Likewise.
* aarch64-linux-tdep.c (aarch64_linux_init_abi): Adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
* arm-linux-tdep.c (arm_linux_init_abi): Likewise.
* i386-linux-tdep.c (i386_linux_init_abi): Likewise.
* alpha-linux-tdep.c (alpha_linux_init_abi): Adjust call to
linux_init_abi.
* arc-linux-tdep.c (arc_linux_init_osabi): Likewise.
* bfin-linux-tdep.c (bfin_linux_init_abi): Likewise.
* cris-linux-tdep.c (cris_linux_init_abi): Likewise.
* csky-linux-tdep.c (csky_linux_init_abi): Likewise.
* frv-linux-tdep.c (frv_linux_init_abi): Likewise.
* hppa-linux-tdep.c (hppa_linux_init_abi): Likewise.
* ia64-linux-tdep.c (ia64_linux_init_abi): Likewise.
* m32r-linux-tdep.c (m32r_linux_init_abi): Likewise.
* m68k-linux-tdep.c (m68k_linux_init_abi): Likewise.
* microblaze-linux-tdep.c (microblaze_linux_init_abi): Likewise.
* mips-linux-tdep.c (mips_linux_init_abi): Likewise.
* mn10300-linux-tdep.c (am33_linux_init_osabi): Likewise.
* nios2-linux-tdep.c (nios2_linux_init_abi): Likewise.
* or1k-linux-tdep.c (or1k_linux_init_abi): Likewise.
* riscv-linux-tdep.c (riscv_linux_init_abi): Likewise.
* s390-linux-tdep.c (s390_linux_init_abi_any): Likewise.
* sh-linux-tdep.c (sh_linux_init_abi): Likewise.
* sparc-linux-tdep.c (sparc32_linux_init_abi): Likewise.
* sparc64-linux-tdep.c (sparc64_linux_init_abi): Likewise.
* tic6x-linux-tdep.c (tic6x_uclinux_init_abi): Likewise.
* tilegx-linux-tdep.c (tilegx_linux_init_abi): Likewise.
* xtensa-linux-tdep.c (xtensa_linux_init_abi): Likewise.
* ppc-linux-tdep.c (ppc_linux_init_abi): Adjust call to
linux_init_abi. Remove call to
set_gdbarch_displaced_step_location.
* arm-tdep.c (arm_pc_is_thumb): Call
gdbarch_displaced_step_copy_insn_closure_by_addr instead of
get_displaced_step_copy_insn_closure_by_addr.
* rs6000-aix-tdep.c (rs6000_aix_init_osabi): Adjust calls to
clear gdbarch methods.
* rs6000-tdep.c (struct ppc_inferior_data): New structure.
(get_ppc_per_inferior): New function.
(ppc_displaced_step_prepare): New function.
(ppc_displaced_step_finish): New function.
(ppc_displaced_step_restore_all_in_ptid): New function.
(rs6000_gdbarch_init): Register new gdbarch methods.
* s390-tdep.c (s390_gdbarch_init): Don't call
set_gdbarch_displaced_step_location, set new gdbarch methods.
gdb/testsuite/ChangeLog:
* gdb.arch/amd64-disp-step-avx.exp: Adjust pattern.
* gdb.threads/forking-threads-plus-breakpoint.exp: Likewise.
* gdb.threads/non-stop-fair-events.exp: Likewise.
Change-Id: I387cd235a442d0620ec43608fd3dc0097fcbf8c8
regset_from_core_section doesn't exist anymore; it has been replaced
by the iterate_over_regset_sections gdbarch method. Update comments
accordingly to not confuse readers.
gdb/ChangeLog:
2020-01-24 Christian Biesinger <cbiesinger@google.com>
* aarch64-fbsd-tdep.c (aarch64_fbsd_iterate_over_regset_sections):
Update comment.
* aarch64-linux-tdep.c (aarch64_linux_iterate_over_regset_sections):
Likewise.
* arm-fbsd-tdep.c (arm_fbsd_iterate_over_regset_sections): Likewise.
* gdbcore.h (deprecated_add_core_fns): Update comment to point to
the correct replacement (iterate_over_regset_sections).
* riscv-fbsd-tdep.c (riscv_fbsd_iterate_over_regset_sections):
Update comment.
Change-Id: I5eea4d18e15edae5d6dfd5d0d6241e5b2ae40daa
This patch was inspired by a recent review that recommended using
std::string in a new implementation of the gcc_target_options gdbarch
function. It changes this function to return std::string rather than
an ordinary xmalloc'd string.
I believe this caught a latent memory leak in compile.c:get_args.
Tested on x86-64 Fedora 29.
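The shape of the change, sketched standalone with a hypothetical
architecture-specific body; the point is the ownership change from an
xmalloc'd char * to std::string:
--
#include <string>

/* Before: an xmalloc'd string the caller had to xfree (and, in at
   least compile.c:get_args, apparently didn't).  After: return
   std::string and let it manage its own storage.  The flags below are
   made up, only the signature matters.  */
static std::string
example_gcc_target_options_sketch (bool is_64bit)
{
  return is_64bit ? "-m64 -mcmodel=large" : "-m32";
}
--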
gdb/ChangeLog
2019-10-15 Tom Tromey <tromey@adacore.com>
* gdbarch.h, gdbarch.c: Rebuild.
* gdbarch.sh (gcc_target_options): Change return type to
std::string.
* compile/compile.c (get_args): Update.
* nios2-tdep.c (nios2_gcc_target_options): Return std::string.
* arm-linux-tdep.c (arm_linux_gcc_target_options): Return
std::string.
* aarch64-linux-tdep.c (aarch64_linux_gcc_target_options): Return
std::string.
* arch-utils.c (default_gcc_target_options): Return std::string.
* arch-utils.h (default_gcc_target_options): Return std::string.
* s390-tdep.c (s390_gcc_target_options): Return std::string.
Change-Id: I51f61703426a323089e646da8f22320a2cafbc1f
The arm and aarch64 targets both define DISPLACED_MODIFIED_INSNS, each
with a different value. Add ARM_ and AARCH64_ to the start of the names
to prevent confusion.
No functionality changes.
gdb/ChangeLog:
* aarch64-linux-tdep.c (aarch64_linux_init_abi): Use
AARCH64_DISPLACED_MODIFIED_INSNS.
* aarch64-tdep.c (struct aarch64_displaced_step_data)
(aarch64_displaced_step_copy_insn): Likewise.
* aarch64-tdep.h (DISPLACED_MODIFIED_INSNS): Rename from ...
(AARCH64_DISPLACED_MODIFIED_INSNS): ... to this.
* arm-linux-tdep.c (arm_linux_cleanup_svc): Use
ARM_DISPLACED_MODIFIED_INSNS.
* arm-tdep.c (arm_gdbarch_init): Likewise.
* arm-tdep.h (DISPLACED_MODIFIED_INSNS): Rename from ...
(ARM_DISPLACED_MODIFIED_INSNS): ... to this.
(struct arm_displaced_step_closure): Use
ARM_DISPLACED_MODIFIED_INSNS.
Add aarch64_get_hwcap functions for reading the HWCAP. From this,
extract the PACA value and use it to enable pauth.
gdb/ChangeLog:
* aarch64-linux-nat.c
(aarch64_linux_nat_target::read_description): Read PACA hwcap.
* aarch64-linux-tdep.c
(aarch64_linux_core_read_description): Likewise.
(aarch64_linux_get_hwcap): New function.
* aarch64-linux-tdep.h (AARCH64_HWCAP_PACA): New define.
(aarch64_linux_get_hwcap): New declaration.
gdb/gdbserver/ChangeLog:
* linux-aarch64-low.c (AARCH64_HWCAP_PACA): New define.
(aarch64_get_hwcap): New function.
(aarch64_arch_setup): Read APIA hwcap.
Pointer Authentication is a new feature in AArch64 v8.3-a. When enabled in
the compiler, function return addresses will be mangled by the kernel.
Add the register description XML and wire it up to aarch64_linux_read_description.
This description includes the two pauth user registers.
Nothing yet uses the feature - that is added in later patches.
gdb/ChangeLog:
* aarch64-linux-nat.c
(aarch64_linux_nat_target::read_description): Add pauth param.
* aarch64-linux-tdep.c
(aarch64_linux_core_read_description): Likewise.
* aarch64-tdep.c (struct target_desc): Add in pauth.
(aarch64_read_description): Add pauth param.
(aarch64_gdbarch_init): Likewise.
* aarch64-tdep.h (aarch64_read_description): Likewise.
* arch/aarch64.c (aarch64_create_target_description): Likewise.
* arch/aarch64.h (aarch64_create_target_description): Likewise.
* features/Makefile: Add new files.
* features/aarch64-pauth.c: New file.
* features/aarch64-pauth.xml: New file.
gdb/doc/ChangeLog:
* gdb.texinfo: Describe pauth feature.
gdb/gdbserver/ChangeLog:
* linux-aarch64-ipa.c (get_ipa_tdesc): Add pauth param.
(initialize_low_tracepoint): Likewise.
* linux-aarch64-low.c (aarch64_arch_setup): Likewise.
* linux-aarch64-tdesc-selftest.c (aarch64_tdesc_test): Likewise.
* linux-aarch64-tdesc.c (struct target_desc): Likewise.
(aarch64_linux_read_description): Likewise.
* linux-aarch64-tdesc.h (aarch64_linux_read_description): Likewise.
Checking the syscall number when stopped on entry/exit relies on checking
the value in register X8.
However, on exit from an execve syscall, the registers will all be cleared.
Given this is only checked on syscall entry/exit, a cleared register
state either means an execve exit or a syscall 0 (io_setup) entry with
invalid parameters and an invalid FP and LR, which in reality should
never happen.
Use this to detect execve exit.
Move the function to allow use of the aarch64_sys_execve enum value, and
use the newer regcache functions.
Fixes gdb.base/catch-syscall.exp on AArch64.
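Roughly, the heuristic amounts to the following standalone sketch; the
registers checked are illustrative, and 221 is my recollection of the
generic AArch64 execve syscall number:
--
#include <cstdint>

/* AArch64 Linux syscall number for execve (generic syscall table),
   corresponding to the aarch64_sys_execve enum value.  */
constexpr uint64_t aarch64_sys_execve_nr = 221;

/* If we stop at a syscall boundary and the registers read back as all
   zeroes, treat it as the exit of a successful execve rather than an
   entry to syscall 0 with nonsense arguments.  */
static uint64_t
guess_syscall_number (uint64_t x8, uint64_t fp, uint64_t lr)
{
  if (x8 == 0 && fp == 0 && lr == 0)
    return aarch64_sys_execve_nr;
  return x8;
}
--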
gdb/ChangeLog:
* aarch64-linux-tdep.c (aarch64_linux_get_syscall_number): Check
for execve.
This commit applies all changes made after running the gdb/copyright.py
script.
Note that one file was flagged by the script, due to an invalid
copyright header
(gdb/unittests/basic_string_view/element_access/char/empty.cc).
As the file was copied from GCC's libstdc++-v3 testsuite, this commit
leaves this file untouched for the time being; a patch to fix the header
was sent to gcc-patches first.
gdb/ChangeLog:
Update copyright year range in all GDB files.
When reading the reserved section in the sigcontext, ensure the
address is updated on an unknown section. Also add checks to prevent
reading past the end of the array.
Fixes gdb.base/savedregs.exp
* aarch64-linux-tdep.c (AARCH64_SIGCONTEXT_RESERVED_SIZE): New
define.
(aarch64_linux_sigframe_init): Extra boundary checks.
I tried a build on macOS today and it failed due to a mismatch between
the printf format and the type in aarch64-linux-tdep.c. This patch
fixes the problem by using pulongest and %s rather than %ld.
gdb/ChangeLog
2018-10-02 Tom Tromey <tom@tromey.com>
* aarch64-linux-tdep.c (aarch64_linux_sigframe_init): Use pulongest.
Both the VFP and SVE registers may be contained within the reserved space of
the sigcontext and can be found by searching for MAGIC values. Detect these
and add the registers (including pseudos) to the trad frame cache.
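The walk over the reserved area is essentially a scan over
(magic, size) records; here is a standalone sketch with the magic
values left symbolic, since the authoritative constants live in the
kernel's asm/sigcontext.h:
--
#include <cstddef>
#include <cstdint>
#include <cstring>

/* Each block in the sigcontext reserved area starts with a 4-byte
   magic and a 4-byte size; a zero magic terminates the list.  The
   concrete FPSIMD/SVE/EXTRA magic values come from the kernel's
   asm/sigcontext.h and are not repeated here.  */
struct ctx_header
{
  uint32_t magic;
  uint32_t size;
};

/* Walk the reserved area [buf, buf + len) and return the offset of the
   block whose magic matches WANTED, or -1 if it is not found.  Sketch
   only; the real read_aarch64_ctx reads target memory instead of a
   local buffer.  */
static long
find_ctx_block (const uint8_t *buf, std::size_t len, uint32_t wanted)
{
  std::size_t offset = 0;
  while (offset + sizeof (ctx_header) <= len)
    {
      ctx_header hdr;
      std::memcpy (&hdr, buf + offset, sizeof hdr);
      if (hdr.magic == 0)
        break;				/* End of list.  */
      if (hdr.magic == wanted)
        return (long) offset;
      if (hdr.size < sizeof (ctx_header))
        break;				/* Malformed; avoid looping.  */
      offset += hdr.size;		/* Skip to the next block.  */
    }
  return -1;
}
--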
gdb/
* aarch64-linux-tdep.c (AARCH64_SIGCONTEXT_RESERVED_OFFSET): Add
define.
(AARCH64_EXTRA_MAGIC): Likewise.
(AARCH64_FPSIMD_MAGIC): Likewise.
(AARCH64_SVE_MAGIC): Likewise.
(AARCH64_EXTRA_DATAP_OFFSET): Likewise.
(AARCH64_FPSIMD_FPSR_OFFSET): Likewise.
(AARCH64_FPSIMD_FPCR_OFFSET): Likewise.
(AARCH64_FPSIMD_V0_OFFSET): Likewise.
(AARCH64_FPSIMD_VREG_SIZE): Likewise.
(AARCH64_SVE_CONTEXT_VL_OFFSET): Likewise.
(AARCH64_SVE_CONTEXT_REGS_OFFSET): Likewise.
(AARCH64_SVE_CONTEXT_P_REGS_OFFSET): Likewise.
(AARCH64_SVE_CONTEXT_FFR_OFFSET): Likewise.
(AARCH64_SVE_CONTEXT_SIZE): Likewise.
(read_aarch64_ctx): Add function.
(aarch64_linux_sigframe_init): Detect FP registers.
When building with --enable-targets=all on macOS, I get this error:
CXX aarch64-linux-tdep.o
/Users/simark/src/binutils-gdb/gdb/aarch64-linux-tdep.c:328:7: error: no matching function for call to 'store_integer'
store_integer ((gdb_byte *)&vg_target, sizeof (uint64_t), byte_order,
^~~~~~~~~~~~~
/Users/simark/src/binutils-gdb/gdb/defs.h:556:13: note: candidate template ignored: requirement 'Or<is_same<unsigned long long, long>, is_same<unsigned long long, unsigned long> >::value' was not satisfied [with T = unsigned long long]
extern void store_integer (gdb_byte *addr, int len, enum bfd_endian byte_order,
^
I believe it's because uint64_t is defined as "unsigned long long" on macOS,
even though "unsigned long" is also 64 bits. Other 64-bit platforms define
uint64_t as "unsigned long".
This makes the type of the argument to store_integer (unsigned long long) not
match the requirement that it must be the same as ULONGEST, which is unsigned
long.
Fix it by changing the type of the vl variable to be ULONGEST, which is what
extract_unsigned_integer returns anyway.
gdb/ChangeLog:
* aarch64-linux-tdep.c (aarch64_linux_supply_sve_regset): Change type
of vl to ULONGEST.
This avoids -Wnarrowing warnings in
aarch64_linux_iterate_over_regset_sections, by adding some casts to
int.
gdb/ChangeLog
2018-08-27 Tom Tromey <tom@tromey.com>
* aarch64-linux-tdep.c
(aarch64_linux_iterate_over_regset_sections) <sve_regmap>: Add
casts to int.
While testing a patch on the buildbot, I got this error:
../../binutils-gdb/gdb/aarch64-linux-tdep.c: In function uint64_t aarch64_linux_core_read_vq(gdbarch*, bfd*):
../../binutils-gdb/gdb/aarch64-linux-tdep.c:285:29: error: format %ld expects argument of type long int, but argument 2 has type uint64_t {aka long long unsigned int} [-Werror=format=]
This patch avoids the problem by using pulongest rather than %ld.
This seems safe to me because, if aarch64-linux-tdep.c is included in
the build, then ULONGEST must be a 64-bit type.
gdb/ChangeLog
2018-08-15 Tom Tromey <tom@tromey.com>
* aarch64-linux-tdep.c (aarch64_linux_core_read_vq): Use pulongest.
sve_regmap cannot be a global static as its size is dependent on the current
vector length.
gdb/
* aarch64-linux-tdep.c (aarch64_linux_supply_sve_regset): New function.
(aarch64_linux_collect_sve_regset): Likewise.
(aarch64_linux_iterate_over_regset_sections): Check for SVE.
* regcache.h (regcache_map_entry_size): New function.
The SVE section in a core file contains a header followed by the registers.
Add defines to easily access the header fields within a buffer.
gdb/
* aarch64-linux-tdep.c (SVE_HEADER_SIZE_LENGTH): Add define.
(SVE_HEADER_MAX_SIZE_LENGTH): Likewise.
(SVE_HEADER_VL_LENGTH): Likewise.
(SVE_HEADER_MAX_VL_LENGTH): Likewise.
(SVE_HEADER_FLAGS_LENGTH): Likewise.
(SVE_HEADER_RESERVED_LENGTH): Likewise.
(SVE_HEADER_SIZE_OFFSET): Likewise.
(SVE_HEADER_MAX_SIZE_OFFSET): Likewise.
(SVE_HEADER_VL_OFFSET): Likewise.
(SVE_HEADER_MAX_VL_OFFSET): Likewise.
(SVE_HEADER_FLAGS_OFFSET): Likewise.
(SVE_HEADER_RESERVED_OFFSET): Likewise.
(SVE_HEADER_SIZE): Likewise.
(aarch64_linux_core_read_vq): Add function.
(aarch64_linux_core_read_description): Check for SVE section.
In the existing code, when using the regset section iteration functions, the
size parameter is used in different ways.
With collect, size is used to create the buffer in which to write the regset.
(see linux-tdep.c::linux_collect_regset_section_cb).
With supply, size is used to confirm the existing regset is the correct size.
If REGSET_VARIABLE_SIZE is set then the regset can be bigger than size.
Effectively, size is the minimum possible size of the regset.
(see corelow.c::get_core_register_section).
There are currently no targets with both REGSET_VARIABLE_SIZE and a collect
function.
In SVE, a core file can contain one of two formats after the header, each of
a different size. However, when writing a core file, we always want
to write out the full, bigger size.
To allow support of collects for REGSET_VARIABLE_SIZE we need two sizes.
This is done by adding supply_size and collect_size.
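A sketch of what the iteration callback gains; the parameter layout is
modelled on the description above, so check gdbarch.sh/gdbarch.h for
the exact generated signature:
--
struct regset;

/* Sketch of the regset-section iteration callback after this change:
   SUPPLY_SIZE is the minimum size accepted when reading a core file
   (REGSET_VARIABLE_SIZE regsets may be larger), while COLLECT_SIZE is
   the size to allocate when writing one.  Modelled on the description
   above rather than copied from gdbarch.h.  */
typedef void (iterate_over_regset_sections_cb_sketch)
  (const char *sect_name, int supply_size, int collect_size,
   const struct regset *regset, const char *human_name, void *cb_data);
--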
gdb/
* aarch64-fbsd-tdep.c
(aarch64_fbsd_iterate_over_regset_sections): Add supply_size and
collect_size.
* aarch64-linux-tdep.c
(aarch64_linux_iterate_over_regset_sections): Likewise.
* alpha-linux-tdep.c
(alpha_linux_iterate_over_regset_sections): Likewise.
* alpha-nbsd-tdep.c
(alphanbsd_iterate_over_regset_sections): Likewise.
* amd64-fbsd-tdep.c
(amd64fbsd_iterate_over_regset_sections): Likewise.
* amd64-linux-tdep.c
(amd64_linux_iterate_over_regset_sections): Likewise.
* arm-bsd-tdep.c
(armbsd_iterate_over_regset_sections): Likewise.
* arm-fbsd-tdep.c
(arm_fbsd_iterate_over_regset_sections): Likewise.
* arm-linux-tdep.c
(arm_linux_iterate_over_regset_sections): Likewise.
* corelow.c (get_core_registers_cb): Likewise.
(core_target::fetch_registers): Likewise.
* fbsd-tdep.c (fbsd_collect_regset_section_cb): Likewise.
* frv-linux-tdep.c (frv_linux_iterate_over_regset_sections): Likewise.
* gdbarch.h (void): Regenerate.
* gdbarch.sh: Add supply_size and collect_size.
* hppa-linux-tdep.c (hppa_linux_iterate_over_regset_sections): Likewise.
* hppa-nbsd-tdep.c (hppanbsd_iterate_over_regset_sections): Likewise.
* hppa-obsd-tdep.c (hppaobsd_iterate_over_regset_sections): Likewise.
* i386-fbsd-tdep.c (i386fbsd_iterate_over_regset_sections): Likewise.
* i386-linux-tdep.c (i386_linux_iterate_over_regset_sections): Likewise.
* i386-tdep.c (i386_iterate_over_regset_sections): Likewise.
* ia64-linux-tdep.c (ia64_linux_iterate_over_regset_sections): Likewise.
* linux-tdep.c (linux_collect_regset_section_cb): Likewise.
* m32r-linux-tdep.c (m32r_linux_iterate_over_regset_sections): Likewise.
* m68k-bsd-tdep.c (m68kbsd_iterate_over_regset_sections): Likewise.
* m68k-linux-tdep.c (m68k_linux_iterate_over_regset_sections): Likewise.
* mips-fbsd-tdep.c (mips_fbsd_iterate_over_regset_sections): Likewise.
* mips-linux-tdep.c (mips_linux_iterate_over_regset_sections): Likewise.
* mips-nbsd-tdep.c (mipsnbsd_iterate_over_regset_sections): Likewise.
* mips64-obsd-tdep.c (mips64obsd_iterate_over_regset_sections): Likewise.
* mn10300-linux-tdep.c (am33_iterate_over_regset_sections): Likewise.
* nios2-linux-tdep.c (nios2_iterate_over_regset_sections): Likewise.
* ppc-fbsd-tdep.c (ppcfbsd_iterate_over_regset_sections): Likewise.
* ppc-linux-tdep.c (ppc_linux_iterate_over_regset_sections): Likewise.
* ppc-nbsd-tdep.c (ppcnbsd_iterate_over_regset_sections): Likewise.
* ppc-obsd-tdep.c (ppcobsd_iterate_over_regset_sections): Likewise.
* riscv-linux-tdep.c (riscv_linux_iterate_over_regset_sections): Likewise.
* rs6000-aix-tdep.c (rs6000_aix_iterate_over_regset_sections): Likewise.
* s390-linux-tdep.c (s390_iterate_over_regset_sections): Likewise.
* score-tdep.c (score7_linux_iterate_over_regset_sections): Likewise.
* sh-tdep.c (sh_iterate_over_regset_sections): Likewise.
* sparc-tdep.c (sparc_iterate_over_regset_sections): Likewise.
* tilegx-linux-tdep.c (tilegx_iterate_over_regset_sections): Likewise.
* vax-tdep.c (vax_iterate_over_regset_sections): Likewise.
* xtensa-tdep.c (xtensa_iterate_over_regset_sections): Likewise.
This is more preparation work for multi-target support.
In a multi-target scenario, we need to address the case of different
processes/threads running on different targets that happen to have the
same PID/PTID. E.g., we can have both process 123 in target 1, and
process 123 in target 2, while they're in reality different processes
running on different machines. Or maybe we've loaded multiple
instances of the same core file. Etc.
To address this, in my WIP multi-target branch, threads and processes
are uniquely identified by the (process_stratum target_ops *, ptid_t)
and (process_stratum target_ops *, pid) tuples respectively. I.e.,
each process_stratum instance has its own thread/process number space.
As you can imagine, that requires passing around target_ops * pointers
in a number of functions where we're currently passing only a ptid_t
or an int. E.g., when we look up a thread_info object by ptid_t in
find_thread_ptid, the ptid_t alone isn't sufficient.
In many cases though, we already have the thread_info or inferior
pointer handy, but we "lose" it somewhere along the call stack, only
to look it up again by ptid_t/pid. Since thread_info or inferior
objects know their parent target, if we pass around thread_info or
inferior pointers when possible, we avoid having to add extra
target_ops parameters to many functions, and also, we eliminate a
number of by ptid_t/int lookups.
So that's what this patch does. In a bit more detail:
- Changes a number of functions and methods to take a thread_info or
inferior pointer instead of a ptid_t or int parameter.
- Changes a number of structure fields from ptid_t/int to inferior or
thread_info pointers.
- Uses the inferior_thread() function whenever possible instead of
inferior_ptid.
- Uses thread_info pointers directly when possible instead of the
is_running/is_stopped etc. routines that require a lookup.
- A number of functions are eliminated along the way, such as:
int valid_gdb_inferior_id (int num);
int pid_to_gdb_inferior_id (int pid);
int gdb_inferior_id_to_pid (int num);
int in_inferior_list (int pid);
- A few structures and places hold a thread_info pointer across
inferior execution, so now they take a strong reference to the
(refcounted) thread_info object to avoid the thread_info pointer
getting stale. This is done in enable_thread_stack_temporaries and
in the infcall.c code.
- Related, there's a spot in infcall.c where using a RAII object to
handle the refcount would be handy, so a gdb::ref_ptr specialization
for thread_info is added (thread_info_ref, in gdbthread.h), along
with a gdb_ref_ptr policy that works for all refcounted_object types
(in common/refcounted-object.h).
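A condensed standalone model of the refcounting pieces mentioned in the
last point; gdb::ref_ptr and refcounted_object are real GDB types, but
this sketch only mirrors their shape:
--
/* A refcounted object, a policy that increfs/decrefs it, and that
   policy is what a gdb::ref_ptr-style smart pointer is parameterized
   on.  This only mirrors the shape of common/refcounted-object.h.  */
class refcounted_object_sketch
{
public:
  void incref () { m_refcount++; }
  void decref () { m_refcount--; }
  int refcount () const { return m_refcount; }

private:
  int m_refcount = 0;
};

/* Policy usable with a ref_ptr-style smart pointer for any refcounted
   object type, which is the point of refcounted_object_ref_policy.  */
struct refcounted_object_ref_policy_sketch
{
  static void incref (refcounted_object_sketch *obj) { obj->incref (); }
  static void decref (refcounted_object_sketch *obj) { obj->decref (); }
};
--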
gdb/ChangeLog:
2018-06-21 Pedro Alves <palves@redhat.com>
* ada-lang.h (ada_get_task_number): Take a thread_info pointer
instead of a ptid_t. All callers adjusted.
* ada-tasks.c (ada_get_task_number): Likewise. All callers
adjusted.
(print_ada_task_info, display_current_task_id, task_command_1):
Adjust.
* breakpoint.c (watchpoint_in_thread_scope): Adjust to use
inferior_thread.
(breakpoint_kind): Adjust.
(remove_breakpoints_pid): Rename to ...
(remove_breakpoints_inf): ... this. Adjust to take an inferior
pointer. All callers adjusted.
(bpstat_clear_actions): Use inferior_thread.
(get_bpstat_thread): New.
(bpstat_do_actions): Use it.
(bpstat_check_breakpoint_conditions, bpstat_stop_status): Adjust
to take a thread_info pointer. All callers adjusted.
(set_longjmp_breakpoint_for_call_dummy, set_momentary_breakpoint)
(breakpoint_re_set_thread): Use inferior_thread.
* breakpoint.h (struct inferior): Forward declare.
(bpstat_stop_status): Update.
(remove_breakpoints_pid): Delete.
(remove_breakpoints_inf): New.
* bsd-uthread.c (bsd_uthread_target::wait)
(bsd_uthread_target::update_thread_list): Use find_thread_ptid.
* btrace.c (btrace_add_pc, btrace_enable, btrace_fetch)
(maint_btrace_packet_history_cmd)
(maint_btrace_clear_packet_history_cmd): Adjust.
(maint_btrace_clear_cmd, maint_info_btrace_cmd): Adjust to use
inferior_thread.
* cli/cli-interp.c: Include "inferior.h".
* common/refcounted-object.h (struct
refcounted_object_ref_policy): New.
* compile/compile-object-load.c: Include gdbthread.h.
(store_regs): Use inferior_thread.
* corelow.c (core_target::close): Use current_inferior.
(core_target_open): Adjust to use first_thread_of_inferior and use
the current inferior.
* ctf.c (ctf_target::close): Adjust to use current_inferior.
* dummy-frame.c (dummy_frame_id) <ptid>: Delete, replaced by ...
<thread>: ... this new field. All references adjusted.
(dummy_frame_pop, dummy_frame_discard, register_dummy_frame_dtor):
Take a thread_info pointer instead of a ptid_t.
* dummy-frame.h (dummy_frame_push, dummy_frame_pop)
(dummy_frame_discard, register_dummy_frame_dtor): Take a
thread_info pointer instead of a ptid_t.
* elfread.c: Include "inferior.h".
(elf_gnu_ifunc_resolver_stop, elf_gnu_ifunc_resolver_return_stop):
Use inferior_thread.
* eval.c (evaluate_subexp): Likewise.
* frame.c (frame_pop, has_stack_frames, find_frame_sal): Use
inferior_thread.
* gdb_proc_service.h (struct thread_info): Forward declare.
(struct ps_prochandle) <ptid>: Delete, replaced by ...
<thread>: ... this new field. All references adjusted.
* gdbarch.h, gdbarch.c: Regenerate.
* gdbarch.sh (get_syscall_number): Replace 'ptid' parameter with a
'thread' parameter. All implementations and callers adjusted.
* gdbthread.h (thread_info) <set_running>: New method.
(delete_thread, delete_thread_silent): Take a thread_info pointer
instead of a ptid.
(global_thread_id_to_ptid, ptid_to_global_thread_id): Delete.
(first_thread_of_process): Delete, replaced by ...
(first_thread_of_inferior): ... this new function. All callers
adjusted.
(any_live_thread_of_process): Delete, replaced by ...
(any_live_thread_of_inferior): ... this new function. All callers
adjusted.
(switch_to_thread, switch_to_no_thread): Declare.
(is_executing): Delete.
(enable_thread_stack_temporaries): Update comment.
<enable_thread_stack_temporaries>: Take a thread_info pointer
instead of a ptid_t. Incref the thread.
<~enable_thread_stack_temporaries>: Decref the thread.
<m_ptid>: Delete
<m_thr>: New.
(thread_stack_temporaries_enabled_p, push_thread_stack_temporary)
(get_last_thread_stack_temporary)
(value_in_thread_stack_temporaries, can_access_registers_thread):
Take a thread_info pointer instead of a ptid_t. All callers
adjusted.
* infcall.c (get_call_return_value): Use inferior_thread.
(run_inferior_call): Work with thread pointers instead of ptid_t.
(call_function_by_hand_dummy): Work with thread pointers instead
of ptid_t. Use thread_info_ref.
* infcmd.c (proceed_thread_callback): Access thread's state
directly.
(ensure_valid_thread, ensure_not_running): Use inferior_thread,
access thread's state directly.
(continue_command): Use inferior_thread.
(info_program_command): Use find_thread_ptid and access thread
state directly.
(proceed_after_attach_callback): Use thread state directly.
(notice_new_inferior): Take a thread_info pointer instead of a
ptid_t. All callers adjusted.
(exit_inferior): Take an inferior pointer instead of a pid. All
callers adjusted.
(exit_inferior_silent): New.
(detach_inferior): Delete.
(valid_gdb_inferior_id, pid_to_gdb_inferior_id)
(gdb_inferior_id_to_pid, in_inferior_list): Delete.
(detach_inferior_command, kill_inferior_command): Use
find_inferior_id instead of valid_gdb_inferior_id and
gdb_inferior_id_to_pid.
(inferior_command): Use inferior and thread pointers.
* inferior.h (struct thread_info): Forward declare.
(notice_new_inferior): Take a thread_info pointer instead of a
ptid_t. All callers adjusted.
(detach_inferior): Delete declaration.
(exit_inferior, exit_inferior_silent): Take an inferior pointer
instead of a pid. All callers adjusted.
(gdb_inferior_id_to_pid, pid_to_gdb_inferior_id, in_inferior_list)
(valid_gdb_inferior_id): Delete.
* infrun.c (follow_fork_inferior, proceed_after_vfork_done)
(handle_vfork_child_exec_or_exit, follow_exec): Adjust.
(struct displaced_step_inferior_state) <pid>: Delete, replaced by
...
<inf>: ... this new field.
<step_ptid>: Delete, replaced by ...
<step_thread>: ... this new field.
(get_displaced_stepping_state): Take an inferior pointer instead
of a pid. All callers adjusted.
(displaced_step_in_progress_any_inferior): Adjust.
(displaced_step_in_progress_thread): Take a thread pointer instead
of a ptid_t. All callers adjusted.
(displaced_step_in_progress, add_displaced_stepping_state): Take
an inferior pointer instead of a pid. All callers adjusted.
(get_displaced_step_closure_by_addr): Adjust.
(remove_displaced_stepping_state): Take an inferior pointer
instead of a pid. All callers adjusted.
(displaced_step_prepare_throw, displaced_step_prepare)
(displaced_step_fixup): Take a thread pointer instead of a ptid_t.
All callers adjusted.
(start_step_over): Adjust.
(infrun_thread_ptid_changed): Remove bit updating ptids in the
displaced step queue.
(do_target_resume): Adjust.
(fetch_inferior_event): Use inferior_thread.
(context_switch, get_inferior_stop_soon): Take an
execution_control_state pointer instead of a ptid_t. All callers
adjusted.
(switch_to_thread_cleanup): Delete.
(stop_all_threads): Use scoped_restore_current_thread.
* inline-frame.c: Include "gdbthread.h".
(inline_state) <inline_state>: Take a thread pointer instead of a
ptid_t. All callers adjusted.
<ptid>: Delete, replaced by ...
<thread>: ... this new field.
(find_inline_frame_state): Take a thread pointer instead of a
ptid_t. All callers adjusted.
(skip_inline_frames, step_into_inline_frame)
(inline_skipped_frames, inline_skipped_symbol): Take a thread
pointer instead of a ptid_t. All callers adjusted.
* inline-frame.h (skip_inline_frames, step_into_inline_frame)
(inline_skipped_frames, inline_skipped_symbol): Likewise.
* linux-fork.c (delete_checkpoint_command): Adjust to use thread
pointers directly.
* linux-nat.c (get_detach_signal): Likewise.
* linux-thread-db.c (thread_from_lwp): New 'stopped' parameter.
(thread_db_notice_clone): Adjust.
(thread_db_find_new_threads_silently)
(thread_db_find_new_threads_2, thread_db_find_new_threads_1): Take
a thread pointer instead of a ptid_t. All callers adjusted.
* mi/mi-cmd-var.c: Include "inferior.h".
(mi_cmd_var_update_iter): Update to use thread pointers.
* mi/mi-interp.c (mi_new_thread): Update to use the thread's
inferior directly.
(mi_output_running_pid, mi_inferior_count): Delete, bits factored
out to ...
(mi_output_running): ... this new function.
(mi_on_resume_1): Adjust to use it.
(mi_user_selected_context_changed): Adjust to use inferior_thread.
* mi/mi-main.c (proceed_thread): Adjust to use thread pointers
directly.
(interrupt_thread_callback): Adjust to use thread and inferior
pointers.
* proc-service.c: Include "gdbthread.h".
(ps_pglobal_lookup): Adjust to use the thread's inferior directly.
* progspace-and-thread.c: Include "inferior.h".
* progspace.c: Include "inferior.h".
* python/py-exitedevent.c (create_exited_event_object): Adjust to
hold a reference to an inferior_object.
* python/py-finishbreakpoint.c (bpfinishpy_init): Adjust to use
inferior_thread.
* python/py-inferior.c (struct inferior_object): Give the type a
tag name instead of a typedef.
(python_on_normal_stop): No need to check if the current thread is
listed.
(inferior_to_inferior_object): Change return type to
inferior_object. All callers adjusted.
(find_thread_object): Delete, bits factored out to ...
(thread_to_thread_object): ... this new function.
* python/py-infthread.c (create_thread_object): Use
inferior_to_inferior_object.
(thpy_is_stopped): Use thread pointer directly.
(gdbpy_selected_thread): Use inferior_thread.
* python/py-record-btrace.c (btpy_list_object) <ptid>: Delete
field, replaced with ...
<thread>: ... this new field. All users adjusted.
(btpy_insn_or_gap_new): Drop const.
(btpy_list_new): Take a thread pointer instead of a ptid_t. All
callers adjusted.
* python/py-record.c: Include "gdbthread.h".
(recpy_insn_new, recpy_func_new): Take a thread pointer instead of
a ptid_t. All callers adjusted.
(gdbpy_current_recording): Use inferior_thread.
* python/py-record.h (recpy_record_object) <ptid>: Delete
field, replaced with ...
<thread>: ... this new field. All users adjusted.
(recpy_element_object) <ptid>: Delete
field, replaced with ...
<thread>: ... this new field. All users adjusted.
(recpy_insn_new, recpy_func_new): Take a thread pointer instead of
a ptid_t. All callers adjusted.
* python/py-threadevent.c: Include "gdbthread.h".
(get_event_thread): Use thread_to_thread_object.
* python/python-internal.h (struct inferior_object): Forward
declare.
(find_thread_object, find_inferior_object): Delete declarations.
(thread_to_thread_object, inferior_to_inferior_object): New
declarations.
* record-btrace.c: Include "inferior.h".
(require_btrace_thread): Use inferior_thread.
(record_btrace_frame_sniffer)
(record_btrace_tailcall_frame_sniffer): Use inferior_thread.
(get_thread_current_frame): Use scoped_restore_current_thread and
switch_to_thread.
(get_thread_current_frame): Use thread pointer directly.
(record_btrace_replay_at_breakpoint): Use thread's inferior
pointer directly.
* record-full.c: Include "inferior.h".
* regcache.c: Include "gdbthread.h".
(get_thread_arch_regcache): Use the inferior's address space
directly.
(get_thread_regcache, registers_changed_thread): New.
* regcache.h (get_thread_regcache(thread_info *thread)): New
overload.
(registers_changed_thread): New.
* remote.c (remote_target) <remote_detach_1>: Swap order of parameters.
(remote_add_thread): <remote_add_thread>: Return the new thread.
(get_remote_thread_info(ptid_t)): New overload.
(remote_target::remote_notice_new_inferior): Use thread pointers
directly.
(remote_target::process_initial_stop_replies): Use
thread_info::set_running.
(remote_target::remote_detach_1, remote_target::detach)
(extended_remote_target::detach): Adjust.
* stack.c (frame_show_address): Use inferior_thread.
* target-debug.h (target_debug_print_thread_info_pp): New.
* target-delegates.c: Regenerate.
* target.c (default_thread_address_space): Delete.
(memory_xfer_partial_1): Use current_inferior.
(target_detach): Use current_inferior.
(target_thread_address_space): Delete.
(generic_mourn_inferior): Use current_inferior.
* target.h (struct target_ops) <thread_address_space>: Delete.
(target_thread_address_space): Delete.
* thread.c (init_thread_list): Use ALL_THREADS_SAFE. Use thread
pointers directly.
(delete_thread_1, delete_thread, delete_thread_silent): Take a
thread pointer instead of a ptid_t. Adjust all callers.
(ptid_to_global_thread_id, global_thread_id_to_ptid): Delete.
(first_thread_of_process): Delete, replaced by ...
(first_thread_of_inferior): ... this new function. All callers
adjusted.
(any_thread_of_process): Rename to ...
(any_thread_of_inferior): ... this, and take an inferior pointer.
(any_live_thread_of_process): Rename to ...
(any_live_thread_of_inferior): ... this, and take an inferior
pointer.
(thread_stack_temporaries_enabled_p, push_thread_stack_temporary)
(value_in_thread_stack_temporaries)
(get_last_thread_stack_temporary): Take a thread pointer instead
of a ptid_t. Adjust all callers.
(thread_info::set_running): New.
(validate_registers_access): Use inferior_thread.
(can_access_registers_ptid): Rename to ...
(can_access_registers_thread): ... this, and take a thread
pointer.
(print_thread_info_1): Adjust to compare thread pointers instead
of ptids.
(switch_to_no_thread, switch_to_thread): Make extern.
(scoped_restore_current_thread::~scoped_restore_current_thread):
Use m_thread pointer directly.
(scoped_restore_current_thread::scoped_restore_current_thread):
Use inferior_thread.
(thread_command): Use thread pointer directly.
(thread_num_make_value_helper): Use inferior_thread.
* top.c (execute_command): Use inferior_thread.
* tui/tui-interp.c: Include "inferior.h".
* varobj.c (varobj_create): Use inferior_thread.
(value_of_root_1): Use find_thread_global_id instead of
global_thread_id_to_ptid.
This patch fixes tagged pointer support for AArch64 GDB. A Linux kernel
debugging failure was reported after tagged pointer support was committed.
After a discussion around the best path forward to manage tagged pointers
on the GDB side, we are going to disable tagged pointer support for
aarch64-none-elf-gdb, because for non-Linux applications we can't be
sure whether tagged pointers will be used by the MMU or not.
Also, for aarch64-linux-gdb we are going to sign-extend user-space
addresses after clearing the tag bits. This will help debug both kernel
and user-space addresses, based on the information from the Linux kernel
documentation given below:
According to AArch64 memory map:
https://www.kernel.org/doc/Documentation/arm64/memory.txt
"User addresses have bits 63:48 set to 0 while the kernel addresses have
the same bits set to 1."
According to AArch64 tagged pointers document:
https://www.kernel.org/doc/Documentation/arm64/tagged-pointers.txt
The kernel configures the translation tables so that translations made
via TTBR0 (i.e. userspace mappings) have the top byte (bits 63:56) of
the virtual address ignored by the translation hardware. This frees up
this byte for application use.
Running the GDB testsuite after applying this patch introduces no regressions,
and the tagged pointer test cases still pass.
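Concretely, the address adjustment amounts to keeping only the
significant bits and sign-extending from the highest one; a standalone
sketch, assuming 56 significant address bits as on AArch64 Linux with
top-byte-ignore:
--
#include <cstdint>

/* Keep only the SIGNIFICANT_BITS low bits of ADDR and sign-extend from
   the highest remaining bit.  With 56 significant bits this clears the
   tag byte, leaves user-space addresses (bits 63:48 zero) untouched
   and restores the set top bits of kernel addresses.  Sketch of what
   the updated address_significant does.  */
static uint64_t
sign_extend_address (uint64_t addr, int significant_bits)
{
  const uint64_t sign_bit = UINT64_C (1) << (significant_bits - 1);
  addr &= (sign_bit << 1) - 1;		/* Drop the non-significant bits.  */
  return (addr ^ sign_bit) - sign_bit;	/* Sign-extend.  */
}
--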
gdb/ChangeLog:
2018-05-10 Omair Javaid <omair.javaid@linaro.org>
PR gdb/23127
* aarch64-linux-tdep.c (aarch64_linux_init_abi): Add call to
set_gdbarch_significant_addr_bit.
* aarch64-tdep.c (aarch64_gdbarch_init): Remove call to
set_gdbarch_significant_addr_bit.
* utils.c (address_significant): Update to sign extend addr.