Remove the breakpoint_pointer_iterator layer. Adjust all users of
all_breakpoints and all_tracepoints to use references instead of
pointers.
Change-Id: I376826f812117cee1e6b199c384a10376973af5d
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Remove the bp_location_pointer_iterator layer. Adjust all users of
breakpoint::locations to use references instead of pointers.
Change-Id: Iceed34f5e0f5790a9cf44736aa658be6d1ba1afa
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
This patch augments the DAP launch request with some optional new
parameters that let the client control the command-line arguments and
the environment of the inferior.
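As a rough sketch, a launch request using the new parameters might look
along these lines (the parameter names 'args' and 'env' shown here are
illustrative; check the documentation for the exact spelling):
  {"seq": 2, "type": "request", "command": "launch",
   "arguments": {"program": "/tmp/hello",
                 "args": ["first-arg", "second-arg"],
                 "env": {"FOO": "bar"}}}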
Reviewed-By: Andrew Burgess <aburgess@redhat.com>
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
This adds two new attributes and three new methods to gdb.Inferior.
The attributes let Python code see the command-line arguments and the
name of "main". Argument setting is also supported.
The methods let Python code manipulate the inferior's environment
variables.
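As a rough sketch of how this might be used (the attribute and method
names below are illustrative; see the documentation for the exact API):
  inf = gdb.selected_inferior()
  print(inf.arguments)        # command-line arguments, can also be assigned to
  print(inf.main_name)        # name of "main", read-only
  inf.set_env("FOO", "bar")   # set an environment variable for the inferior
  inf.unset_env("FOO")        # remove a single variable
  inf.clear_env()             # start from an empty environment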
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
This adds a 'global_context' parameter to gdb.parse_and_eval.

This lets users request a parse that is done at "global scope".
I considered letting callers pass in a block instead, with None
meaning "global" -- but then there didn't seem to be a clean way to
express the default for this parameter.
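A minimal example (the expression name here is just a placeholder):
  val = gdb.parse_and_eval("some_global", global_context=True)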
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
This adds a new Python function, gdb.execute_mi, that can be used to
invoke an MI command but get the output as a Python object, rather
than a string. This is done by implementing a new ui_out subclass
that builds a Python object.
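As a small illustration, the result is a Python object whose structure
mirrors the MI output (the fields shown are what -break-insert normally
reports):
  result = gdb.execute_mi("-break-insert", "main")
  print(result["bkpt"]["number"])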
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=11688
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
This changes mi_parse::args to be a private member, retrieved via
accessor. It also changes this member to be a std::string. This
makes it simpler for a subsequent patch to implement different
behavior for argument parsing.
If an MI command written in Python includes a number in its output,
currently that is simply emitted as a string. However, it's
convenient for a later patch if these are emitted using field_signed.
This does not make a difference to ordinary MI clients.
I recently added a 'dap' component to bugzilla, and I filed a few bugs
there. This patch removes the corresponding FIXME comments.
A few such comments still exist. In at least one case, I have a fix
I'll be submitting eventually; in others I think I need to do a bit of
investigation to properly file a bug report.
This commit extends the Python Disassembler API to allow for styling
of the instructions.
Before this commit the Python Disassembler API allowed the user to do
two things:
- They could intercept instruction disassembly requests and return a
string of their choosing; this string then became the disassembled
instruction, or
- They could call builtin_disassemble, which would call back into
libopcodes to perform the disassembly. As libopcodes printed the
instruction, GDB would collect these print requests and build a
string. This string was then returned from the builtin_disassemble
call, and the user could modify or extend this string as needed.
Neither of these approaches allowed for, or preserved, disassembler
styling, which is now available within libopcodes for many of the more
popular architectures GDB supports.
This commit aims to fill this gap. After this commit a user will be
able to do the following things:
- Implement a custom instruction disassembler entirely in Python
without calling back into libopcodes; the custom disassembler will
be able to return styling information such that GDB will display
the instruction fully styled. All of GDB's existing style
settings will affect how instructions coming from the Python
disassembler are displayed in the expected manner.
- Call builtin_disassemble and receive a result that represents how
libopcodes would like the instruction styled. The user can then
adjust or extend the disassembled instruction before returning the
result to GDB. Again, the instruction will be styled as expected.
To achieve this I will add two new classes to GDB,
DisassemblerTextPart and DisassemblerAddressPart.
Within builtin_disassemble, instead of capturing the print calls from
libopcodes and building a single string, we will now create either a
text part or address part and store these parts in a vector.
The DisassemblerTextPart will capture a small piece of text along with
the associated style that should be used to display the text. This
corresponds to the disassembler calling
disassemble_info::fprintf_styled_func, or, for disassemblers that don't
support styling, disassemble_info::fprintf_func.
The DisassemblerAddressPart is used when libopcodes requests that an
address be printed, and takes care of printing the address and
associated symbol; this corresponds to the disassembler calling
disassemble_info::print_address_func.
These parts are then placed within the DisassemblerResult when
builtin_disassemble returns.
Alternatively, the user can directly create parts by calling two new
methods on the DisassembleInfo class: DisassembleInfo.text_part and
DisassembleInfo.address_part.
Having created these parts, the user can then pass them when
initializing a new DisassemblerResult object.
Finally, when we return from Python to gdbpy_print_insn, one way or
another, the result being returned will have a list of parts. Back in
GDB's C++ code we walk the list of parts and call back into GDB's core
to display the disassembled instruction with the correct styling.
The new API lives in parallel with the old API. Any existing code
that creates a DisassemblerResult using a single string immediately
creates a single DisassemblerTextPart containing the entire
instruction and gives this part the default text style. This is also
what happens if the user calls builtin_disassemble for an architecture
that doesn't (yet) support libopcodes styling.
This matches up with what happens when the Python API is not involved:
an architecture without disassembler styling support uses the old
libopcodes printing API (the API that doesn't pass style info), and
GDB just prints everything using the default text style.
The reason that parts are created by calling methods on
DisassembleInfo, rather than calling the class constructor directly,
is the DisassemblerAddressPart class. Ideally this part would only hold the
address which the part represents, but in order to support backwards
compatibility we need to be able to convert the
DisassemblerAddressPart into a string. To do that we need to call
GDB's internal print_address function, and to do that we need a
gdbarch.
What this means is that the DisassemblerAddressPart needs to take a
gdb.Architecture object at creation time. The only valid place a user
can pull this from is the DisassembleInfo object, so having the
DisassembleInfo act as a factory ensures that the correct gdbarch is
passed over each time. I implemented both solutions (the one
presented here, and an alternative where parts could be constructed
directly), and this felt like the cleanest solution.
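To give a feel for the intended use, here is a rough sketch of a Python
disassembler that calls builtin_disassemble and looks at the styled
parts (the 'parts' attribute, the style constant, and the
DisassemblerResult constructor arguments below are illustrative; see
the documentation for the exact API):
  from gdb.disassembler import (Disassembler, DisassemblerResult,
                                builtin_disassemble, register_disassembler)

  class PartInspector(Disassembler):
      def __init__(self):
          super().__init__("PartInspector")

      def __call__(self, info):
          # Let libopcodes produce a styled result, then inspect its parts.
          result = builtin_disassemble(info)
          for part in result.parts:
              print(type(part).__name__, str(part))
          # Parts can also be created via the DisassembleInfo factory
          # methods and used to build a new result, e.g.:
          #   parts = [info.text_part(gdb.disassembler.STYLE_MNEMONIC, "nop")]
          #   return DisassemblerResult(length=result.length, parts=parts)
          return result

  register_disassembler(PartInspector(), None)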
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit is a refactor ahead of the next change which will make
disassembler styling available through the Python API.
Unfortunately, in order to make the styling support available, I think
the easiest solution is to make a very small change to the existing
API.
The current API relies on returning a DisassemblerResult object to
represent each disassembled instruction. Currently GDB allows the
DisassemblerResult class to be sub-classed, which could mean that a
user tries to override the various attributes that exist on the
DisassemblerResult object.
This commit removes this ability, effectively making the
DisassemblerResult class final.
Though this is a change to the existing API, I'm hoping this isn't
going to cause too many issues:
- The Python disassembler API was only added in the previous release
of GDB, so I don't expect it to be widely used yet, and
- It's not clear to me why a user would need to sub-class the
DisassemblerResult type, I allowed it in the original patch
because at the time I couldn't see any reason to NOT allow it.
Having prevented sub-classing I can now rework the tail end of the
gdbpy_print_insn function; instead of pulling the results out of the
DisassemblerResult object by calling back into Python, I now cast the
Python object back to its C++ type (disasm_result_object), and access
the fields directly from there. In later commits I will be reworking
the disasm_result_object type in order to hold information about the
styled disassembler output.
The tests that dealt with sub-classing DisassemblerResult have been
removed, and a new test that confirms that DisassemblerResult can't be
sub-classed has been added.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
I noticed many spots checking whether a dynamic property's kind is
PROP_CONST. Some spots, I think, are doing a slightly incorrect check
-- checking for != PROP_UNDEFINED where == PROP_CONST is actually
required, the key thing being that const_val may only be called for
PROP_CONST properties.
This patch adds dynamic_prop::is_constant and then updates these checks to
use it.
Regression tested on x86-64 Fedora 36.
I noticed that gdb's DAP code did not provide a way to see register
values. DAP defines a "register" scope, which this patch implements.
This patch also adds the missing (and optional) "presentationHint" to
scopes.
Add the DisassemblerResult.__str__ method. This gives the same result
as the DisassemblerResult.string attribute, but can be useful
sometimes depending on how the user is trying to print the object.
There's a test for the new functionality.
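For example, inside a Python disassembler's __call__ (where 'info' is
the DisassembleInfo argument), these two lines now print the same text:
  result = gdb.disassembler.builtin_disassemble(info)
  print(result.string)
  print(str(result))   # same output as the line above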
Currently, when we add a new Python sub-system to GDB,
e.g. py-inferior.c, we end up having to create a new function like
gdbpy_initialize_inferior, which then has to be called from the
function do_start_initialization in python.c.
In some cases (py-micmd.c and py-tui.c), we have two functions
gdbpy_initialize_*, and gdbpy_finalize_*, with the second being called
from finalize_python which is also in python.c.
This commit proposes a mechanism to manage these initialization and
finalization calls; this means that adding a new Python sub-system will
no longer require changes to python.c or python-internal.h. Instead,
the initialization and finalization functions will be registered
directly from the sub-system file, e.g. py-inferior.c or py-micmd.c.
The initialization and finalization functions are managed through a
new class gdbpy_initialize_file in python-internal.h. This class
contains a single global vector of all the initialization and
finalization functions.
In each Python sub-system we create a new gdbpy_initialize_file
object; the object constructor takes care of registering the two
callback functions.
Now from python.c we can call static functions on the
gdbpy_initialize_file class which take care of walking the callback
list and invoking each callback in turn.
To slightly simplify the Python sub-system files I added a new macro
GDBPY_INITIALIZE_FILE, which hides the need to create an object. We
can now just do this:
GDBPY_INITIALIZE_FILE (gdbpy_initialize_registers);
One possible problem with this change is that there is now no
guaranteed ordering of how the various sub-systems are initialized (or
finalized). To try and avoid dependencies creeping in, I have added a
use of the environment variable GDB_REVERSE_INIT_FUNCTIONS; this is
the same environment variable used in the generated init.c file.
Just like with init.c, when this environment variable is set we
reverse the list of Python initialization (and finalization)
functions. As there is already a test that starts GDB with the
environment variable set, this should offer some level of
protection against dependencies creeping in - though for full
protection I guess we'd need to run all gdb.python/*.exp tests with
the variable set.
I have tested this patch with the environment variable set, and saw no
regressions, so I think we are fine right now.
One other change of note was for gdbpy_initialize_gdb_readline; this
function previously returned void. In order to make this function
have the correct signature I've updated its return type to int, and we
now return 0 to indicate success.
All of the other initialize (and finalize) functions have been made
static within their respective sub-system files.
There should be no user visible changes after this commit.
SUMMARY
The '--simple-values' argument to '-stack-list-arguments' and similar
GDB/MI commands does not take reference types into account, so that
references to arbitrarily large structures are considered "simple" and
printed. This means that the '--simple-values' argument cannot be used
by IDEs when tracing the stack due to the time taken to print large
structures passed by reference.
DETAILS
Various GDB/MI commands ('-stack-list-arguments', '-stack-list-locals',
'-stack-list-variables' and so on) take a PRINT-VALUES argument which
may be '--no-values' (0), '--all-values' (1) or '--simple-values' (2).
In the '--simple-values' case, the command is supposed to print the
name, type, and value of variables with simple types, and print only the
name and type of variables with compound types.
The '--simple-values' argument ought to be suitable for IDEs that need
to update their user interface with the program's call stack every time
the program stops. However, it does not take C++ reference types into
account, and this makes the argument unsuitable for this purpose.
For example, consider the following C++ program:
struct s {
  int v[10];
};
int
sum(const struct s &s)
{
  int total = 0;
  for (int i = 0; i < 10; ++i) total += s.v[i];
  return total;
}
int
main(void)
{
  struct s s = { { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 } };
  return sum(s);
}
If we start GDB in MI mode and continue to 'sum', the behaviour of
'-stack-list-arguments' is as follows:
(gdb)
-stack-list-arguments --simple-values
^done,stack-args=[frame={level="0",args=[{name="s",type="const s &",value="@0x7fffffffe310: {v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}}"}]},frame={level="1",args=[]}]
Note that the value of the argument 's' was printed, even though 's' is
a reference to a structure, which is not a simple value.
See https://github.com/microsoft/MIEngine/pull/673 for a case where this
behaviour caused Microsoft to avoid the use of '--simple-values' in
their MIEngine debug adapter, because it caused Visual Studio Code to
take too long to refresh the call stack in the user interface.
SOLUTIONS
There are two ways we could fix this problem, depending on whether we
consider the current behaviour to be a bug.
1. If the current behaviour is a bug, then we can update the behaviour
of '--simple-values' so that it takes reference types into account:
that is, a value is simple if it is neither an array, struct, or
union, nor a reference to an array, struct or union.
In this case we must add a feature to the '-list-features' command so
that IDEs can detect that it is safe to use the '--simple-values'
argument when refreshing the call stack.
2. If the current behaviour is not a bug, then we can add a new option
for the PRINT-VALUES argument, for example, '--scalar-values' (3),
that would be suitable for use by IDEs.
In this case we must add a feature to the '-list-features' command
so that IDEs can detect that the '--scalar-values' argument is
available for use when refreshing the call stack.
PATCH
This patch implements solution (1) as I think the current behaviour of
not printing structures, but printing references to structures, is
contrary to reasonable expectation.
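With that change, the '-stack-list-arguments --simple-values' output in
the example above should report only the name and type of 's', along
the lines of:
^done,stack-args=[frame={level="0",args=[{name="s",type="const s &"}]},frame={level="1",args=[]}]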
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=29554
I'd like to move some things so they become methods on struct ui. But
first, I think that struct ui and the related things are big enough to
deserve their own file, instead of being scattered through top.{c,h} and
event-top.c.
Change-Id: I15594269ace61fd76ef80a7b58f51ff3ab6979bc
This changes field_is_static to be a method on struct field, and
updates all the callers. Most of this patch was written by script.
Regression tested on x86-64 Fedora 36.
This changes apply_ext_lang_type_printers to use unique_xmalloc_ptr,
removing some manual memory management. Regression tested on x86-64
Fedora 36.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
This commit allows Frame.read_var to accept named arguments, and also
improves (I think) some of the error messages emitted when values of
the wrong type are passed to this function.
The read_var method takes two arguments: the first is a variable, which
is either a gdb.Symbol or a string, while the second, optional, argument
is always a gdb.Block.
I'm now using 'O!' as the format specifier for the second argument,
which allows the argument type to be checked early on. Currently, if
the second argument is of the wrong type then we get this error:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: Second argument must be block.
Error while executing Python code.
(gdb)
After this commit, we now get an error like this:
(gdb) python print(gdb.selected_frame().read_var("a1", "xxx"))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 2 must be gdb.Block, not str
Error while executing Python code.
(gdb)
Changes are:
1. The exception type is now TypeError, not RuntimeError. This is
unfortunate, as user code _could_ be relying on this, but I think the
improvement is worth the risk; user code relying on the exact exception
type is likely to be pretty rare,
2. New error message gives argument position and expected argument
type, as well as the type that was passed.
If the first argument, the variable, has the wrong type then the
previous exception was already a TypeError; however, I've updated the
text of the exception to more closely match the "standard" error
message we see above. If the first argument has the wrong type then
before this commit we saw this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Argument must be a symbol or string.
Error while executing Python code.
(gdb)
And after we see this:
(gdb) python print(gdb.selected_frame().read_var(123))
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: argument 1 must be gdb.Symbol or str, not int
Error while executing Python code.
(gdb)
For existing code that doesn't use named arguments and doesn't rely on
exceptions, there will be no changes after this commit.
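As an illustration, a call using named arguments might look like this
(the keyword names shown, 'variable' and 'block', are illustrative;
check the documentation for the exact names):
  frame = gdb.selected_frame()
  val = frame.read_var(variable="a1", block=frame.block())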
Reviewed-By: Tom Tromey <tom@tromey.com>
Following on from the previous commit, this updates
Frame.read_register to accept named arguments. As with the previous
commit there's no huge benefit for the users in accepting named
arguments here -- this function only takes a single argument after
all.
But I do think it is worth keeping the Frame.read_register method in sync
with the PendingFrame.read_register method; this allows for the
possibility that the user has some code that can operate on either a
Frame or a PendingFrame.
Minor update to allow for named arguments, and an extra test to check
the new functionality.
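For example (assuming the same 'register' keyword name as
PendingFrame.read_register uses):
  sp_val = gdb.selected_frame().read_register(register="sp")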
Reviewed-By: Tom Tromey <tom@tromey.com>
Update the two gdb.PendingFrame methods gdb.PendingFrame.read_register
and gdb.PendingFrame.create_unwind_info to accept keyword arguments.
There's no huge benefit to making this change; both of these methods
only take a single argument, so it is (maybe) less likely that a user
will take advantage of the keyword arguments in these cases, but I
think it's nice to be consistent, and I don't see any particular
drawbacks to making this change.
For PendingFrame.read_register I've changed the argument name from
'reg' to 'register' in the documentation and used 'register' as the
argument name in GDB. My preference for APIs is to use full words
where possible, and given we didn't support named arguments before,
this change should not break any existing code.
There should be no user visible changes (for existing code) after this
commit.
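As a rough sketch, inside an unwinder's __call__ the two methods can
now be called like this (the 'frame_id' keyword name is an assumption,
and MyFrameId stands in for a user-defined frame-id class):
  pc = pending_frame.read_register(register="pc")
  sp = pending_frame.read_register(register="sp")
  unwind_info = pending_frame.create_unwind_info(frame_id=MyFrameId(sp, pc))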
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
Update gdb.UnwindInfo.add_saved_register to accept named keyword
arguments.
As part of this update we now use gdb_PyArg_ParseTupleAndKeywords
instead of PyArg_UnpackTuple to parse the function arguments.
By switching to gdb_PyArg_ParseTupleAndKeywords, we can now use 'O!'
as the argument format for the function's value argument. This means
that we can check the argument type (is gdb.Value) as part of the
argument processing rather than manually performing the check later in
the function. One result of this is that we now get a better error
message (at least, I think so). Previously we would get something
like:
ValueError: Bad register value
Now we get:
TypeError: argument 2 must be gdb.Value, not XXXX
It's unfortunate that the exception type changed, but I think the new
exception type actually makes more sense.
My preference for argument names is to use full words where that's not
too excessive. As such, I've updated the name of the argument from
'reg' to 'register' in the documentation, which is the argument name
I've made GDB look for here.
For existing unwinder code that doesn't throw any exceptions nothing
should change with this commit. It is possible that a user has some
code that throws and catches the ValueError, and this code will break
after this commit, but I think this is going to be sufficiently rare
that we can take the risk here.
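For example, inside an unwinder where 'unwind_info' was returned by
create_unwind_info (the 'value' keyword name and the register name are
illustrative; 'register' is the name described above):
  val = pending_frame.read_register("rip")
  unwind_info.add_saved_register(register="rip", value=val)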
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
Make find_thread_ptid (the overload that takes an inferior) a method of
struct inferior.
Change-Id: Ie5b9fa623ff35aa7ddb45e2805254fc8e83c9cd4
Reviewed-By: Tom Tromey <tom@tromey.com>
This adds the DAP readMemory and writeMemory requests. A small change
to the evaluation code is needed in order to test this -- this is one
of the few ways for a client to actually acquire a memory reference.
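A readMemory request follows the standard DAP shape, roughly like this
(the memoryReference value is just illustrative):
  {"seq": 7, "type": "request", "command": "readMemory",
   "arguments": {"memoryReference": "0x404028", "count": 4}}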
When writing an unwinder it is necessary to create a new class to act
as a frame-id. This new class is almost certainly just going to set a
'sp' and 'pc' attribute within the instance.
This commit adds a little helper class gdb.unwinder.FrameId that does
this job. Users can make use of this to avoid having to write out
standard boilerplate code any time they write an unwinder.
Of course, if the user wants their FrameId class to be more
complicated in some way, then they can still write their own class,
just like they could before.
I've simplified the example code in the documentation to now use the
new helper class, and I've also made use of this helper within the
testsuite.
Any existing user code will continue to work just as it did before
after this change.
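As a rough sketch of the intended use inside an unwinder (assuming the
FrameId constructor simply takes the sp and pc values):
  from gdb.unwinder import Unwinder, FrameId

  class MyUnwinder(Unwinder):
      def __init__(self):
          super().__init__("MyUnwinder")

      def __call__(self, pending_frame):
          sp = pending_frame.read_register("sp")
          pc = pending_frame.read_register("pc")
          unwind_info = pending_frame.create_unwind_info(FrameId(sp, pc))
          # ... add saved registers, then return unwind_info (or return
          # None to decline the frame) ...
          return None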
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
Currently when creating a gdb.UnwindInfo object a user must call
gdb.PendingFrame.create_unwind_info and pass a frame-id object.
The frame-id object should have at least a 'sp' attribute, and
probably a 'pc' attribute too (it can also, in some cases have a
'special' attribute).
Currently all of these frame-id attributes need to be gdb.Value
objects, but the only reason for that requirement is that we have some
code in py-unwind.c that only handles gdb.Value objects.
If instead we switch to using get_addr_from_python in py-utils.c then
we will support both gdb.Value objects and also raw numbers, which
might make things simpler in some cases.
So, I started rewriting pyuw_object_attribute_to_pointer (in
py-unwind.c) to use get_addr_from_python. However, while looking at
the code I noticed a problem.
The pyuw_object_attribute_to_pointer function returns a boolean flag;
if everything goes OK we return true, but we return false in two
cases: (1) when the attribute is not present, which might be
acceptable, or might be an error, and (2) when we get an error trying
to extract the attribute value, in which case a Python error will have
been set.
Now in pending_framepy_create_unwind_info we have this code:
  if (!pyuw_object_attribute_to_pointer (pyo_frame_id, "sp", &sp))
    {
      PyErr_SetString (PyExc_ValueError,
                       _("frame_id should have 'sp' attribute."));
      return NULL;
    }
Notice how we always set an error. This will override any error that
is already set.
So, if you create a frame-id object that has an 'sp' attribute, but
the attribute is not a gdb.Value, then currently we fail to extract
the attribute value (it's not a gdb.Value) and set this error in
pyuw_object_attribute_to_pointer:
  rc = pyuw_value_obj_to_pointer (pyo_value.get (), addr);
  if (!rc)
    PyErr_Format (
        PyExc_ValueError,
        _("The value of the '%s' attribute is not a pointer."),
        attr_name);
Then we return to pending_framepy_create_unwind_info and immediately
override this error with the error about 'sp' being missing.
This all feels very confused.
Here's my proposed solution: pyuw_object_attribute_to_pointer will now
return a tri-state enum, with states OK, MISSING, or ERROR. The
meanings of these states are:
OK - Attribute exists and was extracted fine,
MISSING - Attribute doesn't exist, no Python error was set.
ERROR - Attribute does exist, but there was an error while
extracting it, a Python error was set.
We need to update pending_framepy_create_unwind_info, the only user of
pyuw_object_attribute_to_pointer, but now I think things are much
clearer. Errors from lower levels are not blindly overridden with the
generic meaningless error message, but we still get the "missing 'sp'
attribute" error when appropriate.
This change also includes the switch to get_addr_from_python which was
what started this whole journey.
For well behaving user code there should be no visible changes after
this commit.
For user code that hits an error, hopefully the new errors should be
more helpful in figuring out what's gone wrong.
Additionally, users can now use integers for the 'sp' and 'pc'
attributes in their frame-id objects if that is useful.
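For example, inside an unwinder's __call__ a frame-id built from plain
integers now works (a minimal sketch; the addresses are made up):
  class MyFrameId:
      def __init__(self, sp, pc):
          self.sp = sp   # plain Python integers are now accepted
          self.pc = pc

  unwind_info = pending_frame.create_unwind_info(MyFrameId(0x7ffffffe000, 0x401234))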
Reviewed-By: Tom Tromey <tom@tromey.com>
It is not currently possible to directly create gdb.UnwindInfo
instances, they need to be created by calling
gdb.PendingFrame.create_unwind_info so that the newly created
UnwindInfo can be linked to the pending frame.
As such there's no tp_init method defined for UnwindInfo.
A consequence of all this is that it doesn't really make sense to
allow sub-classing of gdb.UnwindInfo. Any sub-class can't call the
parent's __init__ method to correctly link up the PendingFrame
object (there is no parent __init__ method). And any instance that
sub-classes UnwindInfo but doesn't call the parent __init__ is going
to be invalid for use in GDB.
This commit removes the Py_TPFLAGS_BASETYPE flag from the UnwindInfo
class, which prevents the class being sub-classed. Then I've added a
test to check that this is indeed prevented.
Any functional user code will not have any issues with this change.
Reviewed-By: Tom Tromey <tom@tromey.com>
Having a useful __repr__ method can make debugging Python code that
little bit easier. This commit adds __repr__ for gdb.PendingFrame and
gdb.UnwindInfo classes, along with some tests.
Reviewed-By: Tom Tromey <tom@tromey.com>
The gdb.Frame class has far more methods than gdb.PendingFrame. Given
that a PendingFrame hasn't yet been claimed by an unwinder, there is a
limit to which methods we can add to it, but many of the methods that
the Frame class has, the PendingFrame class could also support.
In this commit I've added those methods to PendingFrame that I believe
are safe.
In terms of implementation: if I was starting from scratch then I
would implement many of these (or most of these) as attributes rather
than methods. However, given both Frame and PendingFrame are just
different representations of a frame, I think there is value in keeping
the interface for the two classes the same. For this reason
everything here is a method -- that's what the Frame class does.
The new methods I've added are:
- gdb.PendingFrame.is_valid: Return True if the pending frame
object is valid.
- gdb.PendingFrame.name: Return the name for the frame's function,
or None.
- gdb.PendingFrame.pc: Return the $pc register value for this
frame.
- gdb.PendingFrame.language: Return a string containing the
language for this frame, or None.
- gdb.PendingFrame.find_sal: Return a gdb.Symtab_and_line object
for the current location within the pending frame, or None.
- gdb.PendingFrame.block: Return a gdb.Block for the current
pending frame, or None.
- gdb.PendingFrame.function: Return a gdb.Symbol for the current
pending frame, or None.
In every case I've just copied the implementation over from gdb.Frame
and cleaned the code slightly e.g. NULL to nullptr. Additionally each
function required a small update to reflect the PendingFrame type, but
that's pretty minor.
There are tests for all the new methods.
For more extensive testing, I added the following code to the file
gdb/python/lib/command/unwinders.py:
from gdb.unwinder import Unwinder

class TestUnwinder(Unwinder):
    def __init__(self):
        super().__init__("XXX_TestUnwinder_XXX")

    def __call__(self, pending_frame):
        lang = pending_frame.language()
        try:
            block = pending_frame.block()
            assert isinstance(block, gdb.Block)
        except RuntimeError as rte:
            assert str(rte) == "Cannot locate block for frame."
        function = pending_frame.function()
        arch = pending_frame.architecture()
        assert arch is None or isinstance(arch, gdb.Architecture)
        name = pending_frame.name()
        assert name is None or isinstance(name, str)
        valid = pending_frame.is_valid()
        pc = pending_frame.pc()
        sal = pending_frame.find_sal()
        assert sal is None or isinstance(sal, gdb.Symtab_and_line)
        return None

gdb.unwinder.register_unwinder(None, TestUnwinder())
This registers a global unwinder that calls each of the new
PendingFrame methods and checks the result is of an acceptable type.
The unwinder never claims any frames though, so shouldn't change how
GDB actually behaves.
I then ran the testsuite. There was only a single regression, a test
that uses 'disable unwinder' and expects a single unwinder to be
disabled -- the extra unwinder is now disabled too, which changes the
test output. So I'm reasonably confident that the new methods are not
going to crash GDB.
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit copies the pattern that is present in many other py-*.c
files: having a single macro to check that the Python object is still
valid.
This cleans up the code a little throughout the py-unwind.c file.
Some of the exception messages will change slightly with this commit,
though the type of the exceptions is still ValueError in all cases.
I started writing some tests for this change and immediately ran into
a problem: GDB would crash. It turns out that the PendingFrame
objects are not being marked as invalid!
In pyuw_sniffer where the pending frames are created, we make use of a
scoped_restore to invalidate the pending frame objects. However, this
only restores the pending_frame_object::frame_info field to its
previous value -- and it turns out we never actually give this field
an initial value; it's left undefined.
So, when the scoped_restore (called invalidate_frame) performs its
cleanup, it actually restores the frame_info field to an undefined
value. If this undefined value is not nullptr then any future
accesses to the PendingFrame object result in undefined behaviour and
most likely, a crash.
As part of this commit I now initialize the frame_info field, which
ensures all the new tests now pass.
Reviewed-By: Tom Tromey <tom@tromey.com>
Spotted a redundant nullptr check in python/py-frame.c in the function
frapy_block. This was introduced in commit 57126e4a45 when we
expanded an earlier check to return early if the pointer in question
is nullptr.
There should be no user visible changes after this commit.
Reviewed-By: Tom Tromey <tom@tromey.com>
This commit makes a few related changes to the gdb.unwinder.Unwinder
class attributes:
1. The 'name' attribute is now a read-only attribute. This prevents
user code from changing the name after registering the unwinder. It
seems very unlikely that any user is actually trying to do this in
the wild, so I'm not very worried that this will upset anyone,
2. We now validate that the name is a string in the
Unwinder.__init__ method, and throw an error if this is not the
case. Hopefully nobody was doing this in the wild. This should
make it easier to ensure the 'info unwinder' command shows sane
output (how to display a non-string name for an unwinder?),
3. The 'enabled' attribute is now implemented with a getter and
setter. In the setter we ensure that the new value is a boolean,
but the really important change is that we call
'gdb.invalidate_cached_frames()'. This means that the backtrace
will be updated if a user manually disables an unwinder (rather than
calling the 'disable unwinder' command). It is not unreasonable to
think that a user might register multiple unwinders (relating to
some project) and have one command that disables/enables all the
related unwinders. This command might operate by poking the enabled
attribute of each unwinder object directly; after this commit, this
would now work correctly.
There are tests for all the changes, and lots of documentation updates
that both cover the new changes, but also further improve (I think)
the general documentation for GDB's Unwinder API.
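For example, a project-level command could now toggle all of its
unwinders like this (a sketch, where 'my_unwinders' is a hypothetical
list of the project's registered unwinder objects):
  def set_unwinders_enabled(state):
      for unwinder in my_unwinders:
          # The setter checks that 'state' is a bool and calls
          # gdb.invalidate_cached_frames() so the backtrace updates.
          unwinder.enabled = state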
Reviewed-By: Eli Zaretskii <eliz@gnu.org>
Reviewed-By: Tom Tromey <tom@tromey.com>
The evaluate command supports a "context" parameter which tells the
adapter the context in which an evaluation occurs. One of the
supported values is "repl", which we took to mean evaluation of a gdb
command. That is what this patch implements.
Note that some gdb commands probably will not work correctly with the
rest of the protocol. For example if the user types "continue",
confusion may result.
This patch requires the earlier patch to fix up scopes in DAP.
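For example, a client can run a CLI command through the evaluate
request roughly like this:
  {"seq": 9, "type": "request", "command": "evaluate",
   "arguments": {"expression": "info breakpoints", "context": "repl"}}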
Internal AdaCore DAP testing on Windows has had occasional failures
that show:
assert threading.current_thread() is _dap_thread
I think this is a race in DAP startup: the _dap_thread global is only
set on return from start_thread, but it seems possible that the thread
itself could already run and encounter a @in_dap_thread decorator.
This patch fixes the problem by setting the global before running any
of the code in the new thread. This also lets us remove a FIXME.
This input sequence is accepted by DAP:
...
{"seq": 4, "type": "request", "command": "configurationDone"}Content-Length: 84
...
This input sequence has the same effect:
...
{"seq": 4, "type": "request", "command": "configurationDone"}ignorethis
Content-Length: 84
...
but the 'ignorethis' part is silently ignored.
Log the ignored bit, such that we have:
...
READ: <<<{"seq": 4, "type": "request", "command": "configurationDone"}>>>
WROTE: <<<{"request_seq": 4, "type": "response", "command": "configurationDone"
, "success": true}>>>
+++ run
IGNORED: <<<b'ignorethis'>>>
...
The DAP code already claimed to implement "scopes" and "evaluate", but
this wasn't done completely correctly. This patch implements these
and also implements the "variables" request.
After this patch, variables and scopes correctly report their
sub-structure. This also interfaces with the gdb pretty-printer API,
so the output of pretty-printers is available.
Tom de Vries pointed out that one DAP test failed on Python 3.6
because gdb.Frame is not hashable.
This patch fixes the problem by using a list to hold the frames. This
is less efficient but there normally won't be that many frames.
Tested-by: Tom de Vries <tdevries@suse.de>
Linetables no longer change after they are created. This patch
applies const to them.
Note there is one hack to cast away const in mdebugread.c. This code
allocates a linetable using 'malloc', then later copies it to the
obstack. While this could be cleaned up, I chose not to do so because
I have no way of testing it.
Approved-By: Simon Marchi <simon.marchi@efficios.com>
This changes linetables to not add the text offset to the addresses
they contain. I did this in a few steps, necessarily combined
together in one patch: I renamed the 'pc' member to 'm_pc', added the
appropriate accessors, and then recompiled. Then I fixed all the
errors. Where possible I generally chose to use the raw_pc accessor,
as it is less expensive.
Note that this patch discounts the possibility that the text section
offset might cause wraparound in the addresses in the line table.
However, this was already discounted -- in particular,
objfile_relocate1 did not re-sort the table in this scenario. (There
was a bug open about this, but as far as I can tell this has never
happened; it's not even clear what inspired that bug.)
Approved-By: Simon Marchi <simon.marchi@efficios.com>
As of 2023-02-13 "gdb/python: deallocate tui window factories at Python
shut down" (9ae4519da9), a TUI-less build fails with:
$src/gdb/python/py-tui.c: In function ‘void gdbpy_finalize_tui()’:
$src/gdb/python/py-tui.c:621:3: error: ‘gdbpy_tui_window_maker’ has not been declared
621 | gdbpy_tui_window_maker::invalidate_all ();
| ^~~~~~~~~~~~~~~~~~~~~~
Since gdbpy_tui_window_maker is only defined under #ifdef TUI, add an
#ifdef guard in gdbpy_finalize_tui as well.
The DAP stackTrace implementation did not fully account for frames
without debuginfo. Attempting this would yield a result like:
{"request_seq": 5, "type": "response", "command": "stackTrace", "success": false, "message": "'NoneType' object has no attribute 'filename'", "seq": 11}
This patch fixes the problem by adding another check for None.
Small cleanup to use std::string::size instead of calling strlen on
the result of std::string::c_str.
Should be no user visible changes after this commit.