This commit brings all the changes made by running gdb/copyright.py
as per GDB's Start of New Year Procedure.
For the avoidance of doubt, all changes in this commit were
performed by the script.
The documentation suggests that we implement gdb.Value.__init__;
however, this is not currently true: we really implement
gdb.Value.__new__. This will cause confusion if a user tries to
sub-class gdb.Value. They might write:
class MyVal (gdb.Value):
    def __init__ (self, val):
        gdb.Value.__init__(self, val)

obj = MyVal(123)
print ("Got: %s" % obj)
But, when they source this code they'll see:
(gdb) source ~/tmp/value-test.py
Traceback (most recent call last):
  File "/home/andrew/tmp/value-test.py", line 7, in <module>
    obj = MyVal(123)
  File "/home/andrew/tmp/value-test.py", line 5, in __init__
    gdb.Value.__init__(self, val)
TypeError: object.__init__() takes exactly one argument (the instance to initialize)
(gdb)
The reason for this is that, as we don't implement __init__ for
gdb.Value, Python ends up calling object.__init__ instead, which
doesn't expect any arguments.
The Python docs suggest that the reason why we might take this
approach is because we want gdb.Value to be immutable:
https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_new
But I don't see any reason why we should require gdb.Value to be
immutable when other types defined in GDB are not. This current
immutability can be seen in this code:
obj = gdb.Value(1234)
print("Got: %s" % obj)
obj.__init__ (5678)
print("Got: %s" % obj)
Which currently runs without error, but prints:
Got: 1234
Got: 1234
In this commit I propose that we switch to using __init__ to
initialize gdb.Value objects.
This does introduce some additional complexity: during the __init__
call a gdb.Value might already be associated with a gdb value object,
in which case we need to cleanly break that association before
installing the new gdb value object. However, the cost of doing this
is not great, and the benefit (being able to easily sub-class
gdb.Value) seems worth it.
After this commit the first example above works without error, while
the second example now prints:
Got: 1234
Got: 5678
In order to make it easier to override the gdb.Value.__init__ method,
I have tweaked the definition of gdb.Value.__init__. The second,
optional argument to __init__ is a gdb.Type; if this argument is not
present then GDB figures out a suitable type.
However, if we want to override the __init__ method in a sub-class,
and still support the default argument, it is easier to write:
class MyVal (gdb.Value):
    def __init__ (self, val, type=None):
        gdb.Value.__init__(self, val, type)
Currently, passing None for the Type will result in an error:
TypeError: type argument must be a gdb.Type.
After this commit I now allow the type argument to be None, in which
case GDB figures out a suitable type just as if the type had not been
passed at all.
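For example, after this commit both of the following behave the same
(a small illustrative sketch):

    v1 = gdb.Value(1234)        # GDB picks a suitable type
    v2 = gdb.Value(1234, None)  # explicit None, same result
    print(v1.type == v2.type)   # expected to print: True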
Unless a user is trying to reinitialize a value, or create sub-classes
of gdb.Value, there should be no user visible changes after this
commit.
A test in gdb.python/py-send-packet.exp added in this commit:
  commit 24b2de7b77
  Date: Tue Aug 31 14:04:36 2021 +0100

    gdb/python: add gdb.RemoteTargetConnection.send_packet
included a large amount of binary data in the command sent to GDB. As
this test didn't have a real test name, the binary data was included
in the gdb.sum file. The contents of the binary data could change
between different runs of GDB, which makes comparing results
harder.
This commit gives the test a real test name.
When running the gdb.python/py-arch.exp tests on a GDB built
against Python 2 I ran into some errors. The problem is that this
test script exercises the gdb.Architecture.integer_type method, and
this method uses 'p' as an argument format specifier in a call to
gdb_PyArg_ParseTupleAndKeywords.
Unfortunately this specifier was only added in Python 3.3, so it will
cause an error for earlier versions of Python.
This commit switches to use the 'O' specifier to collect a PyObject,
and then uses PyObject_IsTrue to convert the object to a boolean.
An earlier version of this patch incorrectly switched from using 'p'
to use 'i', however, it was pointed out during review that this would
cause some changes in behaviour, for example both of these will work
with 'p', but not with 'i':
gdb.selected_inferior().architecture().integer_type(32, None)
gdb.selected_inferior().architecture().integer_type(32, "foo")
The new approach of using 'O' works fine with these cases. I've added
some new tests to cover both of the above.
There should be no user visible changes after this commit.
The gdb.python/py-inferior-leak.exp test makes use of the tracemalloc
module. When running the Python tests with a GDB built against Python
2 I ran into a test failure due to the tracemalloc module not being
available.
This commit adds a new helper function to lib/gdb-python.exp that
checks if a named module is available. Using this we can then skip
the py-inferior-leak.exp test when the tracemalloc module is not
available.
This commit adds a new sub-class of gdb.TargetConnection,
gdb.RemoteTargetConnection. This sub-class is created for all
'remote' and 'extended-remote' targets.
This new sub-class has one additional method over its base class,
'send_packet'. This new method is equivalent to the 'maint packet'
CLI command; it allows a custom packet to be sent to a remote
target.
The outgoing packet can either be a bytes object, or a Unicode string,
so long as the Unicode string contains only ASCII characters.
The result of calling RemoteTargetConnection.send_packet is a bytes
object containing the reply that came from the remote.
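As a rough usage sketch (assuming the current inferior is talking to
a remote target; 'qSupported' is just an example packet):

    conn = gdb.selected_inferior().connection
    if isinstance(conn, gdb.RemoteTargetConnection):
        # A str packet must be pure ASCII; the reply is bytes.
        reply = conn.send_packet("qSupported")
        print(reply)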
This commit adds a new object type gdb.TargetConnection. This new
type represents a connection within GDB (a connection as displayed by
'info connections').
There are three ways to find a gdb.TargetConnection. First, there's
a new 'gdb.connections()' function, which returns a list of all
currently active connections.
Second, you can read the new 'connection' property on the
gdb.Inferior object type; this contains the connection for that
inferior (or None if the inferior has no connection, for example, if
it has exited).
Finally, there's a new gdb.events.connection_removed event registry;
this emits a new gdb.ConnectionEvent whenever a connection is removed
from GDB (this can happen when all inferiors using a connection exit,
though this is not always the case, depending on the connection
type).
The gdb.ConnectionEvent has a 'connection' property, which is the
gdb.TargetConnection being removed from GDB.
The gdb.TargetConnection has an 'is_valid()' method. A connection
object becomes invalid when the underlying connection is removed from
GDB (as discussed above, this might be when all inferiors using a
connection exit, or it might be when the user explicitly replaces a
connection in GDB by issuing another 'target' command).
The gdb.TargetConnection has the following read-only properties:
  'num': The number for this connection.

  'type': e.g. 'native', 'remote', 'sim', etc.

  'description': The longer description as seen in the 'info
      connections' command output.

  'details': A string or None. Extra details for the connection, for
      example, a remote connection's details might be
      'hostname:port'.
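Putting this together, a handler plus a walk over the open
connections might look like this (an illustrative sketch only):

    for conn in gdb.connections():
        print(conn.num, conn.type, conn.description, conn.details)

    def connection_removed_handler(event):
        # event is a gdb.ConnectionEvent; event.connection is the
        # gdb.TargetConnection being removed from GDB.
        print("removed connection %d" % event.connection.num)

    gdb.events.connection_removed.connect(connection_removed_handler)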
This adds a new Python function, gdb.Architecture.integer_type, which
can be used to look up an integer type of a given size and
signed-ness. This is useful for avoiding a dependency on debug info
when a particular integer type is needed.
v2 moves this to be a method on gdb.Architecture and addresses other
review comments.
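For example (a sketch; the second argument defaults to a signed
type, as I understand the API):

    arch = gdb.selected_inferior().architecture()
    uint32 = arch.integer_type(32, False)  # 32-bit unsigned type
    int32 = arch.integer_type(32)          # 32-bit signed type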
Add a new function to the Python API, gdb.architecture_names(). This
function returns a list containing all of the supported architecture
names within the current build of GDB.
The values returned in this list are all of the possible values that
can be returned from gdb.Architecture.name().
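For example (sketch):

    names = gdb.architecture_names()
    # The current architecture's name should be in the list.
    assert gdb.selected_inferior().architecture().name() in names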
When running with target board native-gdbserver, we run into a number of FAILs
due to use of the start command (and similar), which is not supported when
use_gdb_stub == 1.
Fix this by:
- requiring use_gdb_stub == 0 for the entire test-case, or
- guarding some tests in the test-case with use_gdb_stub == 0.
Tested on x86_64-linux.
When running test-case gdb.python/python.exp, we have:
...
PASS: gdb.python/python.exp: starti via gdb.execute, not from tty
PASS: gdb.python/python.exp: starti via interactive input
...
The two tests are instances of the same test, with different values
for the starti command's from_tty argument, so it's strange that the
test names are so
different.
This is due to using a gdb_test nested in a gdb_test_multiple, with the inner
one using a different test name than the outer one. [ That could still make
sense if both produced passes, but that's not the case here. ]
Fix this by using $gdb_test_name, such that we have:
...
PASS: gdb.python/python.exp: starti via gdb.execute, not from tty
PASS: gdb.python/python.exp: starti via gdb.execute, from tty
...
Also make this more readable by using variables.
Tested on x86_64-linux.
When a user creates a gdb.Inferior object for the first time a new
Python object is created. This object is then cached within GDB's
inferior object using the registry mechanism (see
inferior_to_inferior_object in py-inferior.c, specifically the calls
to inferior_data and set_inferior_data).
The Python Reference to the gdb.Inferior object held within the real
inferior object ensures that the reference count on the Python
gdb.Inferior object never reaches zero while the GDB inferior object
continues to exist.
At the same time, the gdb.Inferior object maintains a C++ pointer back
to GDB's real inferior object. We therefore end up with a system that
looks like this:
                Python Reference
                       |
                       |
 .----------.          |          .--------------.
 |          |-------------------->|              |
 | inferior |          |          | gdb.Inferior |
 |          |<--------------------|              |
 '----------'          |          '--------------'
                       |
                       |
                  C++ Pointer
When GDB's inferior object is deleted (say the inferior exits) then
py_free_inferior is called (thanks to the registry system); this
function looks up the Python gdb.Inferior object, sets the C++
pointer to nullptr, and finally reduces the reference count on the
Python gdb.Inferior object.
If at this point the user still holds a reference to the Python
gdb.Inferior object then nothing happens. However, the gdb.Inferior
object is now in the non-valid state (see infpy_is_valid in
py-inferior.c), but otherwise, everything is fine.
However, if there are no further references to the Python gdb.Inferior
object, or, once the user has given up all their references to the
gdb.Inferior object, then infpy_dealloc is called.
This function currently checks to see if the inferior pointer within
the gdb.Inferior object is nullptr or not. If the pointer is nullptr
then infpy_dealloc immediately returns.
Only when the inferior pointer in the gdb.Inferior is not nullptr do
we (a) set the gdb.Inferior reference inside GDB's inferior to
nullptr, and (b) call the underlying Python tp_free function.
There are a number of things wrong here:
1. The Python gdb.Inferior reference within GDB's inferior object
holds a reference count, thus, setting this reference to nullptr
without first decrementing the reference count would leak a
reference, however...
2. As GDB's inferior holds a reference then infpy_dealloc will never
be called until GDB's inferior object is deleted. Deleting a GDB
inferior object calls py_free_inferior, and so gives up the
reference. At this point there is no longer a need to call
set_inferior_data to set the field back to NULL, that field must
have been cleared in order to get the reference count to zero, which
means...
3. If we know that py_free_inferior must be called before
infpy_dealloc, then we know that the inferior pointer in
gdb.Inferior will always be nullptr when infpy_dealloc is called,
this means that the call to the underlying tp_free function will
always be skipped. Skipping this call will cause Python to leak the
memory associated with the gdb.Inferior object, which is what we
currently always do.
Given all of the above, I assert that the C++ pointer within
gdb.Inferior will always be nullptr when infpy_dealloc is called.
That's what this patch does.
I wrote a test for this issue making use of Python's tracemalloc
module, which allows us to spot this memory leak.
Add a new event, gdb.events.gdb_exiting, which is called once GDB
decides it is going to exit.
This event is not triggered in the case that GDB performs a hard
abort, for example, when handling an internal error and the user
decides to quit the debug session, or if GDB hits an unexpected,
fatal, signal.
This event is triggered if the user just types 'quit' at the command
prompt, or if GDB is run with '-batch' and has processed all of the
required commands.
The new event type is gdb.GdbExitingEvent, and it has a single
attribute exit_code, which is the value that GDB is about to exit
with.
The event is triggered before GDB starts dismantling any of its own
internal state, so my expectation is that most Python calls should
work just fine at this point.
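A minimal handler for the new event might look like this (sketch):

    def gdb_exiting_handler(event):
        # event is a gdb.GdbExitingEvent.
        print("GDB exiting with code %d" % event.exit_code)

    gdb.events.gdb_exiting.connect(gdb_exiting_handler)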
When considering this functionality I wondered about using the
'atexit' Python module. However, this is triggered when the Python
environment is shut down, which is done from a final cleanup. At
this point we don't know for sure what other GDB state has already
been cleaned up.
The test gdb.python/py-events.exp sets up a handler for the gdb.exited
event. Unfortunately the handler is slightly broken: it assumes that
the exit_code attribute will always be present. This is not always
the case.
In a later commit I am going to add more tests to the py-events.exp test
script, and in so doing I expose the bug in our handling of gdb.exited
events.
Just to be clear, GDB itself is fine; it is the test that is not
written correctly according to the Python Events API.
So, in this commit I fix the Python code in the test, and extend the
test case to exercise more paths through the Python code.
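A robust handler checks for the attribute before using it, along
these lines (a sketch, not the exact test code):

    def exit_handler(event):
        if hasattr(event, "exit_code"):
            print("exit code: %d" % event.exit_code)
        else:
            print("exit code not available")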
Additionally, I noticed that the gdb.exited event is used as an
example in the documentation for how to write an event handler.
Unfortunately the same bug that we had in our test was also present in
the example code in the manual.
So I've fixed that too.
After this commit there is no functional change to GDB.
As follow-up to this discussion:
https://sourceware.org/pipermail/gdb-patches/2020-August/171385.html
... make runto_main not pass no-message to runto. This means that if we
fail to run to main, for some reason, we'll emit a FAIL. This is the
behavior we want the majority of (if not all) the time.
Without this, we rely on tests logging a failure if runto_main fails.
They do so in a very inconsistent manner, sometimes using
"fail", "unsupported" or "untested". The messages also vary wildly.
This patch removes all these messages as well.
Also, remove a few "fail" calls where we call runto (and not
runto_main). By default (without an explicit no-message argument),
runto prints a
failure already. In two places, gdb.multi/multi-re-run.exp and
gdb.python/py-pp-registration.exp, remove "message" passed to runto.
This removes a few PASSes that we don't care about (but FAILs will still
be printed if we fail to run to where we want to). This aligns their
behavior with the rest of the testsuite.
Change-Id: Ib763c98c5f4fb6898886b635210d7c34bd4b9023
With a gdb build using python 2.7, I run into:
...
(gdb) python \
gdb.events.breakpoint_modified.connect(lambda bp: print(bp.enabled))^M
File "<string>", line 1^M
gdb.events.breakpoint_modified.connect(lambda bp: print(bp.enabled))^M
^^M
SyntaxError: invalid syntax^M
Error while executing Python code.^M
(gdb) FAIL: gdb.python/py-breakpoint.exp: test_bkpt_auto_disable: \
trap breakpoint_modified event
...
This is caused by the following:
- a lambda function body needs to be an expression
- in python 2, print is a statement, while in python 3 it's a function
- a function call is an expression, and a statement is not.
Fix this by defining a function print_bp_enabled:
...
def print_bp_enabled (bp):
    print (bp.enabled)
end
...
and using that instead.
Tested on x86_64-linux.
Minimize gdb restarts, applying the following rules:
- don't use prepare_for_testing unless necessary
- don't use clean_restart unless necessary
Also, if possible, replace build_executable + clean_restart
with prepare_for_testing for brevity.
Touches 68 test-cases.
Tested on x86_64-linux.
With test-case gdb.python/py-events.exp on ubuntu 18.04.5 we run into:
...
(gdb) info threads^M
Id Target Id Frame ^M
* 1 Thread 0x7ffff7fc3740 (LWP 31467) "py-events" do_nothing () at \
src/gdb/testsuite/gdb.python/py-events-shlib.c:19^M
(gdb) FAIL: gdb.python/py-events.exp: get current thread
...
The info threads command uses "Thread" instead of "process" because
libpthread is already loaded:
...
new objfile name: /lib/x86_64-linux-gnu/libdl.so.2^M
[Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".^M
event type: new_objfile^M
new objfile name: /lib/x86_64-linux-gnu/libpthread.so.0^M
...
and consequently thread_db_target::pid_to_str is used.
Fix this by parsing the "Thread" expression.
Tested on x86_64-linux.
The documentation of mi_gdb_test states that the command, pattern and message
arguments are mandatory:
...
# mi_gdb_test COMMAND PATTERN MESSAGE [IPATTERN] -- send a command to gdb;
# test the result.
...
However, this is not checked, and when mi_gdb_test is called with fewer than 3
arguments, it passes or fails silently.
Fix this by using the following semantics:
- if there are 1 or 2 arguments, use the command as the message.
- if there is 1 argument, use ".*" as the pattern.
- if there are no arguments, or too many, error out.
Fix a PATH issue in gdb.mi/mi-logging.exp, introduced by using the command as
message. Fix a few other trivial-looking FAILs.
There are 11 less trivial-looking FAILs left in gdb.mi in test-cases:
- mi-nsmoribund.exp
- mi-breakpoint-changed.exp
- mi-break.exp.
Tested on x86_64-linux.
The guile API has (history-append! <value>) to add values into GDB's
history list. There is currently no equivalent in the Python API.
This commit adds gdb.add_history(<value>) to the Python API. This
function takes <value>, a gdb.Value (or anything that can be passed
to the constructor of gdb.Value), and adds the value it represents to
GDB's history list. The index of the newly added value is returned.
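For example (sketch):

    idx = gdb.add_history(gdb.Value(42))
    # The value is now in the value history; from the CLI it can be
    # referred to as $<idx>.
    print(gdb.history(idx))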
As the breakpoint_modified observer is currently notified upon
breakpoint stop before the auto-disabling (when the enable count is
reached) has been handled, the observer is never notified of the
disabling.
The problem affects:
- The MI interpreter enabled= value when reporting =breakpoint-modified
- A Python event handler for breakpoint_modified using the "enabled"
member of its parameter
- insight: breakpoint GUI window is not properly updated upon auto-disable
This patch moves the observer notification after the auto-disabling
code and implements corresponding tests for the MI and Python cases.
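For reference, a Python handler of the affected kind looks roughly
like this (sketch):

    def breakpoint_modified_handler(bp):
        # With this patch, bp.enabled reflects the auto-disabling.
        print(bp.enabled)

    gdb.events.breakpoint_modified.connect(breakpoint_modified_handler)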
Fixes https://sourceware.org/bugzilla/show_bug.cgi?id=23336
Change-Id: I0c50df4789334071e5390cb46b3ca0d4a7f83c61
The test steps into func2 and then does an up to get back to the previous
frame. The test checks that the line number you are at after the up command
is greater than the line where the function was called from. The
assembly/codegen for the powerpc target includes a NOP after the
branch-link.
func2 (); /* Break at func2 call site. */
10000694: 59 00 00 48 bl 100006ec
10000698: 00 00 00 60 nop
return 0; /* Break to end. */
1000069c: 00 00 20 39 li r9,0
The PC at the instruction following the branch-link is 0x10000698,
which gdb.find_pc_line() maps to the same line number as the bl
instruction. GDB did move past the branch-link location, thus making
forward progress.
The following proposed fix adds an additional PC check to see if forward
progress was made. The line test is changed from greater than to greater
than or equal.
This patch addresses a design problem with the symbol_needs_eval_context
class. It exposes the problem by introducing two new testsuite test
cases.
To explain the issue, I first need to explain the dwarf_expr_context
class that the symbol_needs_eval_context class derives from.
The intention behind the dwarf_expr_context class is to commonize the
DWARF expression evaluation mechanism for different evaluation
contexts. Currently in gdb, the evaluation context can contain some or
all of the following information: architecture, object file, frame and
compilation unit.
Depending on the information needed to evaluate a given expression,
there are currently three distinct DWARF expression evaluators:
- Frame: designed to evaluate an expression in the context of call
frame information (dwarf_expr_executor class). This evaluator doesn't
need compilation unit information.
- Location description: designed to evaluate an expression in the
context of source level information (dwarf_evaluate_loc_desc
class). This evaluator expects all information needed for the
evaluation of the given expression to be present.
- Symbol needs: designed to answer a question about which parts of
the context information are required to evaluate the DWARF expression
behind a given symbol (symbol_needs_eval_context class). This
evaluator doesn't need frame information.
The functional difference between the symbol needs evaluator and the
others is that this evaluator is not meant to interact with the actual
target. Instead, it is supposed to check which parts of the context
information are needed for the given DWARF expression to be evaluated by
the location description evaluator.
The idea is to take advantage of the existing dwarf_expr_context
evaluation mechanism and to fake all required interactions with the
actual target, by returning back dummy values. The evaluation result is
returned as one of three possible values, based on operations found in a
given expression:
- SYMBOL_NEEDS_NONE,
- SYMBOL_NEEDS_REGISTERS and
- SYMBOL_NEEDS_FRAME.
The problem here is that faking results of target interactions can yield
an incorrect evaluation result.
For example, if we have a conditional DWARF expression, where the
condition depends on a value read from an actual target, and the true
branch of the condition requires frame information to be evaluated,
while the false branch doesn't, fake target reads could conclude that
frame information is not needed, where in fact it is. This wrong
information would then cause the expression to actually be evaluated
(by the location description evaluator) with missing frame
information. This would then crash the debugger.
The gdb.dwarf2/symbol_needs_eval_fail.exp test introduces this
scenario, with the following DWARF expression:
DW_OP_addr $some_variable
DW_OP_deref
# conditional jump to DW_OP_bregx
DW_OP_bra 4
DW_OP_lit0
# jump to DW_OP_stack_value
DW_OP_skip 3
DW_OP_bregx $dwarf_regnum 0
DW_OP_stack_value
This expression describes a case where some variable dictates the
location of another variable. Depending on the value of some_variable, the
variable whose location is described by this expression is either read
from a register or it is defined as a constant value 0. In both cases,
the value will be returned as an implicit location description on the
DWARF stack.
Currently, when the symbol needs evaluator fakes a memory read from the
address behind the some_variable variable, the constant value 0 is used
as the value of the variable A, and the check returns the
SYMBOL_NEEDS_NONE result.
This is clearly a wrong result and it causes the debugger to crash.
The scenario might sound strange to some people, but it comes from a
SIMD/SIMT architecture where $some_variable is an execution mask. In
any case, it is a valid DWARF expression, and GDB shouldn't crash while
evaluating it. Also, a similar example could be made based on a
condition of the frame base value, where if that value is concluded to
be 0, the variable location could be defaulted to a TLS based memory
address.
The gdb.dwarf2/symbol_needs_eval_timeout.exp test introduces a second
scenario. This scenario is a bit more abstract due to the DWARF
assembler lacking CFI support, but it exposes a different
manifestation of the same problem. Like in the previous scenario, the
DWARF expression used in the test is valid:
DW_OP_lit1
DW_OP_addr $some_variable
DW_OP_deref
# jump to DW_OP_fbreg
DW_OP_skip 4
DW_OP_drop
DW_OP_fbreg 0
DW_OP_dup
DW_OP_lit0
DW_OP_eq
# conditional jump to DW_OP_drop
DW_OP_bra -9
DW_OP_stack_value
Similarly to the previous scenario, the location of a variable A is an
implicit location description with a constant value that depends on a
value held by a global variable. The difference from the previous case
is that the DWARF expression contains a loop instead of just one branch. The
end condition of that loop depends on the expectation that a frame base
value is never zero. Currently, the act of faking the target reads will
cause the symbol needs evaluator to get stuck in an infinite loop.
Somebody could argue that we could change the fake reads to return
something else, but that would only hide the real problem.
The general impression seems to be that the desired design is to have
one class that deals with parsing of the DWARF expression, while there
are virtual methods that deal with specifics of some operations.
Using an evaluator mechanism here doesn't seem to be correct, because
the act of evaluation relies on accessing the data from the actual
target with the possibility of skipping the evaluation of some parts of
the expression.
To better explain the proposed solution for the issue, I first need to
explain a couple more details behind the current design:
There are multiple places in gdb that handle DWARF expression parsing
for different purposes. Some are in charge of converting the expression
to some other internal representation (decode_location_expression,
disassemble_dwarf_expression and dwarf2_compile_expr_to_ax), some are
analysing the expression for specific information
(compute_stack_depth_worker) and some are in charge of evaluating the
expression in a given context (dwarf_expr_context::execute_stack_op
and decode_locdesc).
The problem is that all those functions have a similar (large) switch
statement that handles each DWARF expression operation. The result of
this is a code duplication and harder maintenance.
As a step in the right direction to solve this problem (at least for
the purpose of DWARF expression evaluation) the expression parsing was
commonized inside an evaluator base class (dwarf_expr_context). This
makes sense for all derived classes, except for the symbol needs
evaluator (symbol_needs_eval_context) class.
As described previously, the problem with this evaluator is that if the
evaluator is not allowed to access the actual target, it is not really
evaluating.
Instead, the desired function of a symbol needs evaluator seems to fall
more into expression analysis category. This means that a more natural
fit for this evaluator is to be a symbol needs analysis, similar to the
existing compute_stack_depth_worker analysis.
Another problem is that using a heavyweight mechanism of an evaluator
to do an expression analysis seems to be an unneeded overhead. It also
requires a more complicated design of the parent class to support fake
target reads.
The reality is that the whole symbol_needs_eval_context class can be
replaced with a lightweight recursive analysis function, which will
give a more correct result without compromising the design of the
dwarf_expr_context class. The analysis treats the expression byte
stream as a DWARF operation graph, where each graph node can be
visited only once and each operation can decide if the frame context
is needed for their evaluation.
The downside of this approach is the addition of one more similar
switch statement, but at least this way the new symbol needs analysis
will be a lightweight mechanism and will provide a correct result for
any given DWARF expression.
A more desired long term design would be to have one class that deals
with parsing of the DWARF expression, while there would be virtual
methods that deal with the specifics of some DWARF operations. That
class would then be used as a base for all DWARF expression parsing
mentioned
at the beginning.
This, however, requires far bigger changes that are out of the scope
of this patch series.
The new analysis requires the DWARF location description for the
argc argument of the main function to change in the assembly file
gdb.python/amd64-py-framefilter-invalidarg.S. Originally, expression
ended with a 0 value byte, which was never reached by the symbol needs
evaluator, because it was detecting a stack underflow when evaluating
the operation before. The new approach does not simulate a DWARF
stack anymore, so the 0 value byte needs to be removed because it
makes the DWARF expression invalid.
gdb/ChangeLog:
* dwarf2/loc.c (class symbol_needs_eval_context): Remove.
(dwarf2_get_symbol_read_needs): New function.
(dwarf2_loc_desc_get_symbol_read_needs): Remove.
(locexpr_get_symbol_read_needs): Use
dwarf2_get_symbol_read_needs.
gdb/testsuite/ChangeLog:
* gdb.python/amd64-py-framefilter-invalidarg.S: Update argc
DWARF location expression.
* lib/dwarf.exp (_location): Handle DW_OP_fbreg.
* gdb.dwarf2/symbol_needs_eval.c: New file.
* gdb.dwarf2/symbol_needs_eval_fail.exp: New file.
* gdb.dwarf2/symbol_needs_eval_timeout.exp: New file.
PR varobj/28131 points out a crash in the varobj deletion code. It
took a while to reproduce this, but essentially what happens is that a
top-level varobj deletes its root object, then deletes the "dynamic"
object. However, deletion of the dynamic object may cause
~py_varobj_iter to run, which in turn uses gdbpy_enter_varobj:
gdbpy_enter_varobj::gdbpy_enter_varobj (const struct varobj *var)
  : gdbpy_enter (var->root->exp->gdbarch, var->root->exp->language_defn)
{
}
However, because var->root has already been destroyed, this is
invalid.
I've added a new test case. This doesn't reliably crash, but the
problem can easily be seen under valgrind (and, I presume, with ASAN,
though I did not try this).
Tested on x86-64 Fedora 32. I also propose putting this on the GDB 11
branch, with a suitable ChangeLog entry of course.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28131
Split the file into multiple independent test procs, where each proc
starts with a fresh GDB. I find it easier to understand what a test is
doing when each part of the test is isolated and self-contained. It
makes it easier to comment out some parts of the test while working /
debugging a specific part. It also makes it easier to add new things
(which a subsequent patch will do) without fear of impacting another part
of the test.
Change-Id: I8b4d52ac82b1492d79b679e13914ed177d8a836d
Not all systems have hardware breakpoint support. Add a check
to see if the system supports hardware breakpoints.
gdb/testsuite/ChangeLog
* gdb.python/py-breakpoint.exp (test_hardware_breakpoints): Add
check for hardware breakpoint support.
When building gdb with --disable-tui, we run into:
...
(gdb) frame apply all -- -^M
Undefined command: "-". Try "help".^M
(gdb) ERROR: Undefined command "frame apply all -- -".
UNRESOLVED: gdb.base/options.exp: test-frame-apply: frame apply all -- -
...
Fix this by detecting whether tui is supported, and skipping the tui-related
tests otherwise. Same in some gdb.tui test-cases.
Tested on x86_64-linux.
gdb/testsuite/ChangeLog:
2021-07-13 Tom de Vries <tdevries@suse.de>
* gdb.base/options.exp: Skip tui-related tests when tui is not
supported.
* gdb.python/tui-window-disabled.exp: Same.
* gdb.python/tui-window.exp: Same.
This commit adds initial support for catchpoints to the python
breakpoint API.
This commit adds a BP_CATCHPOINT constant which corresponds to
GDB's internal bp_catchpoint. The new constant is documented in the
manual.
The user can't create breakpoints with type BP_CATCHPOINT after this
commit, but breakpoints that already exist, obtained with the
`gdb.breakpoints` function, can now have this type. Additionally,
when a stop event is reported for hitting a catchpoint, GDB will now
report a BreakpointEvent with the attached breakpoint being of type
BP_CATCHPOINT - previously GDB would report a generic StopEvent in
this situation.
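For example, existing catchpoints can now be recognized like this
(sketch):

    for bp in gdb.breakpoints():
        if bp.type == gdb.BP_CATCHPOINT:
            print("catchpoint: %d" % bp.number)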
gdb/ChangeLog:
* NEWS: Mention Python BP_CATCHPOINT feature.
* python/py-breakpoint.c (pybp_codes): Add bp_catchpoint support.
(bppy_init): Likewise.
(gdbpy_breakpoint_created): Likewise.
gdb/doc/ChangeLog:
* python.texinfo (Breakpoints In Python): Add BP_CATCHPOINT
description.
gdb/testsuite/ChangeLog:
* gdb.python/py-breakpoint.c (do_throw): New function.
(main): Call do_throw.
* gdb.python/py-breakpoint.exp (test_catchpoints): New proc.
Add new methods to the PendingFrame and Frame classes to obtain the
stack frame level for each object.
The use of 'level' as the method name is consistent with the existing
attribute RecordFunctionSegment.level (though this is an attribute
rather than a method).
For Frame/PendingFrame I went with methods as these classes currently
only use methods, including for simple data like architecture, so I
want to be consistent with this interface.
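For example (sketch):

    frame = gdb.selected_frame()
    print(frame.level())  # 0 for the innermost frame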
gdb/ChangeLog:
* NEWS: Mention the two new methods.
* python/py-frame.c (frapy_level): New function.
(frame_object_methods): Register 'level' method.
* python/py-unwind.c (pending_framepy_level): New function.
(pending_frame_object_methods): Register 'level' method.
gdb/doc/ChangeLog:
* python.texi (Unwinding Frames in Python): Mention
PendingFrame.level.
(Frames In Python): Mention Frame.level.
gdb/testsuite/ChangeLog:
* gdb.python/py-frame.exp: Add Frame.level tests.
* gdb.python/py-pending-frame-level.c: New file.
* gdb.python/py-pending-frame-level.exp: New file.
* gdb.python/py-pending-frame-level.py: New file.
This patch came about because I wanted to write a frame unwinder that
would corrupt the backtrace in a particular way. In order to achieve
what I wanted I ended up trying to write an unwinder like this:
class FrameId(object):
    .... snip class definition ....

class TestUnwinder(Unwinder):
    def __init__(self):
        Unwinder.__init__(self, "some name")

    def __call__(self, pending_frame):
        pc_desc = pending_frame.architecture().registers().find("pc")
        pc = pending_frame.read_register(pc_desc)

        sp_desc = pending_frame.architecture().registers().find("sp")
        sp = pending_frame.read_register(sp_desc)

        # ... snip code to decide if this unwinder applies or not.

        fid = FrameId(pc, sp)
        unwinder = pending_frame.create_unwind_info(fid)
        unwinder.add_saved_register(pc_desc, pc)
        unwinder.add_saved_register(sp_desc, sp)
        return unwinder
The important things here are the two calls:
unwinder.add_saved_register(pc_desc, pc)
unwinder.add_saved_register(sp_desc, sp)
On x86-64 these would fail with an assertion error:
gdb/regcache.c:168: internal-error: int register_size(gdbarch*, int): Assertion `regnum >= 0 && regnum < gdbarch_num_cooked_regs (gdbarch)' failed.
What happens is that in unwind_infopy_add_saved_register (py-unwind.c)
we call register_size. As register_size should only be called on
cooked (real or pseudo) registers, and 'pc' and 'sp' are implemented
as user registers (at least on x86-64), we trigger the assertion.
A simple fix would be to check in unwind_infopy_add_saved_register
whether the register number we are handling is a cooked register or
not; if not, we can throw a 'Bad register' error back to the Python
code. However, I think we can do better.
Consider that at the CLI we can do this:
(gdb) set $pc=0x1234
This works because GDB first evaluates '$pc' to get a register value,
then evaluates '0x1234' to create a value encapsulating the
immediate. The contents of the immediate value are then copied back
to the location of the register value representing '$pc'.
The value location for a user-register will (usually) be the location
of the real register that was accessed, so on x86-64 we'd expect this
to be $rip.
So, in this patch I propose that in the unwinder code, when
add_saved_register is called, if it is passed a
user-register (i.e. non-cooked) then we first fetch the register,
extract the real register number from the value's location, and use
that new register number when handling the add_saved_register call.
If the value location that we get for the user-register is not that
of a cooked register then we can throw a 'Bad register' error back to
the Python code, but in most cases this will not happen.
gdb/ChangeLog:
* python/py-unwind.c (unwind_infopy_add_saved_register): Handle
saving user registers.
gdb/testsuite/ChangeLog:
* gdb.python/py-unwind-user-regs.c: New file.
* gdb.python/py-unwind-user-regs.exp: New file.
* gdb.python/py-unwind-user-regs.py: New file.
While reviewing a patch sent to the mailing list, I noticed there are
a few places where Python code checks if a variable is 'None' or not
by using the comparison operators '==' and '!='. PEP8 [1], which is
used as the coding standard in GDB's coding standards, recommends
using 'is' / 'is not' when comparing to a singleton such as 'None'.
This patch proposes to change the instances of '== None' to 'is None'
and '!= None' to 'is not None'.
[1] https://www.python.org/dev/peps/pep-0008/
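That is, comparisons of this form are rewritten (sketch):

    # Before:
    if val == None:
        do_something()

    # After:
    if val is None:
        do_something()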
gdb/doc/ChangeLog:
* python.texi (Writing a Pretty-Printer): Use 'is None' instead of
'== None'.
gdb/ChangeLog:
* python/lib/gdb/FrameDecorator.py (FrameDecorator): Use 'is None' instead of
'== None'.
(FrameVars): Use 'is not None' instead of '!= None'.
* python/lib/gdb/command/frame_filters.py (SetFrameFilterPriority): Use 'is None'
instead of '== None' and 'is not None' instead of '!= None'.
gdb/testsuite/ChangeLog:
* gdb.base/premature-dummy-frame-removal.py (TestUnwinder): Use
'is None' instead of '== None' and 'is not None' instead of
'!= None'.
* gdb.python/py-frame-args.py (lookup_function): Same.
* gdb.python/py-framefilter-invalidarg.py (Reverse_Function): Same.
* gdb.python/py-framefilter.py (Reverse_Function): Same.
* gdb.python/py-nested-maps.py (lookup_function): Same.
* gdb.python/py-objfile-script-gdb.py (lookup_function): Same.
* gdb.python/py-prettyprint.py (lookup_function): Same.
* gdb.python/py-section-script.py (lookup_function): Same.
* gdb.python/py-unwind-inline.py (dummy_unwinder): Same.
* gdb.python/python.exp: Same.
* gdb.rust/pp.py (lookup_function): Same.
Evaluating expressions from within an inferior exit event handler can
cause a crash:
echo "int main() { return 0; }" > repro.c
gcc -g repro.c -o repro
./gdb -q --ex "set language c++" --ex "python gdb.events.exited.connect(lambda _: gdb.execute('set \$_a=0'))" --ex "run" repro
Reading symbols from repro...
Starting program: /home/mhov/repos/binutils-gdb-master/install-bad/bin/repro
[Inferior 1 (process 1974779) exited normally]
../../gdb/thread.c:72: internal-error: thread_info* inferior_thread(): Assertion `current_thread_ != nullptr' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) [answered Y; input not from terminal]
This is a bug, please report it. For instructions, see:
<https://www.gnu.org/software/gdb/bugs/>.
Backtrace
0 in internal_error of ../../gdbsupport/errors.cc:51
1 in inferior_thread of ../../gdb/thread.c:72
2 in expression::evaluate of ../../gdb/eval.c:98
3 in evaluate_expression of ../../gdb/eval.c:115
4 in set_command of ../../gdb/printcmd.c:1502
5 in do_const_cfunc of ../../gdb/cli/cli-decode.c:101
6 in cmd_func of ../../gdb/cli/cli-decode.c:2181
7 in execute_command of ../../gdb/top.c:670
...
22 in python_inferior_exit of ../../gdb/python/py-inferior.c:182
In `expression::evaluate (...)' there is a call to `inferior_thread
()' that is guarded by `target_has_execution ()':
struct value *
expression::evaluate (struct type *expect_type, enum noside noside)
{
  gdb::optional<enable_thread_stack_temporaries> stack_temporaries;
  if (target_has_execution ()
      && language_defn->la_language == language_cplus
      && !thread_stack_temporaries_enabled_p (inferior_thread ()))
    stack_temporaries.emplace (inferior_thread ());
The `target_has_execution ()' guard maps onto `inf->pid' and the
`inferior_thread ()' call assumes that `current_thread_' is set to
something meaningful:
struct thread_info*
inferior_thread (void)
{
  gdb_assert (current_thread_ != nullptr);
  return current_thread_;
}
In other words, it is assumed that if `inf->pid' is set then
`current_thread_' must also be set. This does not hold at the point
where inferior exit observers are notified:
- `generic_mourn_inferior (...)'
  - `switch_to_no_thread ()'
    - `current_thread_ = nullptr;'
  - `exit_inferior (...)'
    - `gdb::observers::inferior_exit.notify (...)'
    - `inf->pid = 0'
The inferior exit notification means that a Python handler can get a
chance to run while `current_thread_' has been cleared and `inf->pid'
has not been cleared. Since the Python handler can call any GDB
command with `gdb.execute(...)' (in my case `gdb.execute("set
$_a=0")'), we can end up evaluating expressions and asserting in
`evaluate_subexp (...)'.
This patch adds a test in `evaluate_subexp (...)' to check the global
`inferior_ptid' which is reset at the same time as `current_thread_'.
Checking `inferior_ptid' at the same time as `target_has_execution ()'
seems to be a common pattern:
$ git grep -n -e inferior_ptid --and -e target_has_execution
gdb/breakpoint.c:2998: && (inferior_ptid == null_ptid || !target_has_execution ()))
gdb/breakpoint.c:3054: && (inferior_ptid == null_ptid || !target_has_execution ()))
gdb/breakpoint.c:4587: if (inferior_ptid == null_ptid || !target_has_execution ())
gdb/infcmd.c:360: if (inferior_ptid != null_ptid && target_has_execution ())
gdb/infcmd.c:2380: /* FIXME: This should not really be inferior_ptid (or target_has_execution).
gdb/infrun.c:3438: if (!target_has_execution () || inferior_ptid == null_ptid)
gdb/remote.c:11961: if (!target_has_execution () || inferior_ptid == null_ptid)
gdb/solib.c:725: if (target_has_execution () && inferior_ptid != null_ptid)
The testsuite has been run on 5.4.0-59-generic x86_64 GNU/Linux:
- Ubuntu 20.04.1 LTS
- gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
- DejaGnu version 1.6.2
- Expect version 5.45.4
- Tcl version 8.6
- Native configuration: x86_64-pc-linux-gnu
- Target: unix
Results show a few XFAIL in
gdb.threads/attach-many-short-lived-threads.exp. The existing
py-events.exp tests are skipped for native-gdbserver and fail for
native-extended-gdbserver, but the new tests pass with
native-extended-gdbserver when run without the existing tests.
gdb/ChangeLog:
2021-06-03 Magne Hov <mhov@undo.io>
PR python/27841
* eval.c (expression::evaluate): Check inferior_ptid.
gdb/testsuite/ChangeLog:
2021-06-03 Magne Hov <mhov@undo.io>
PR python/27841
* gdb.python/py-events.exp: Extend inferior exit tests.
* gdb.python/py-events.py: Print inferior exit PID.
It was removed (probably by mistake) in commit 51e78fc5fa.
gdb/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* python/py-symbol.c (gdbpy_initialize_symbols): Restore
gdb.SYMBOL_LABEL_DOMAIN constant.
gdb/testsuite/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* gdb.python/py-symbol.exp: Test symbol constants.
I noticed these files because they weren't considered by black for
reformatting, prior to adding pyproject.toml, because their extension is
not .py. I don't think they specifically need to be named .py.in, so I
suggest renaming them to .py. This will make it nicer to edit them, as
editors will recognize them more easily as Python files.
Perhaps this was needed before, when the testsuite didn't always put
output files in the output directory. Then, a different name for the
source and destination file might have been desirable to avoid
overwriting a file with itself (perhaps that wasn't well handled). But
in any case, it doesn't seem to cause any problems now.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-gdb.py.in: Rename to:
* gdb.python/py-framefilter-gdb.py: ... this.
* gdb.python/py-framefilter-invalidarg-gdb.py.in: Rename to:
* gdb.python/py-framefilter-invalidarg-gdb.py: ... this.
Change-Id: I63bb94010bbbc33434ee1c91a386c91fc1ff80bc
When running black to format Python files, files with extension .py.in
are ignored, because they don't end in .py. Add a pyproject.toml file
to instruct black to pick up these files too.
gdb/ChangeLog:
* pyproject.toml: New.
* gdb-gdb.py.in: Re-format.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-gdb.py.in: Re-format.
* gdb.python/py-framefilter-invalidarg-gdb.py.in: Re-format.
Change-Id: I9b88faec3360ea24788f44c8b89fe0b2a5f4eb97
Define a 'connection_num' attribute for Inferior objects. The
read-only attribute is the ID of the connection of an inferior, as
printed by "info inferiors". In GDB's internal terminology, that's
the process stratum target of the inferior. If the inferior has no
target connection, the attribute is None.
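For example (sketch):

    inf = gdb.selected_inferior()
    # An integer matching 'info inferiors', or None.
    print(inf.connection_num)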
gdb/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python/py-inferior.c (infpy_get_connection_num): New function.
(inferior_object_getset): Add a new element for 'connection_num'.
* NEWS: Mention the 'connection_num' attribute of Inferior objects.
gdb/doc/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python.texi (Inferiors In Python): Mention the 'connection_num'
attribute.
gdb/testsuite/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.python/py-inferior.exp: Add test cases for 'connection_num'.
The 'print max-depth' feature incorrectly causes GDB to skip printing
the string representation of pretty printed variables if the variable
is stored at a nested depth corresponding to the set max-depth value.
This change ensures that it is always printed before checking whether
the maximum print depth has been reached.
Regression tested with GCC 7.3.0 on x86_64, ppc64le, aarch64.
gdb/ChangeLog:
* cp-valprint.c (cp_print_value): Replaced duplicate code.
* guile/scm-pretty-print.c (ppscm_print_children): Check max_depth
just before printing child values.
(gdbscm_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
* python/py-prettyprint.c (print_children): Check max_depth just
before printing child values.
(gdbpy_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
gdb/testsuite/ChangeLog:
* gdb.python/py-format-string.c: Added a variable to test.
* gdb.python/py-format-string.exp: Check string representation is
printed at appropriate max_depth settings.
* gdb.python/py-nested-maps.exp: Likewise.
* gdb.guile/scm-pretty-print.exp: Add additional tests.
Re-format all Python files using black [1] version 21.4b0. The goal is
that from now on, we keep all Python files formatted using black. And
that we never have to discuss formatting during review (for these files
at least) ever again.
One change is needed in gdb.python/py-prettyprint.exp, because it
matches the string representation of an exception, which shows source
code. So the change in formatting must be replicated in the expected
regexp.
To document our usage of black I plan on adding this to the "GDB Python
Coding Standards" wiki page [2]:
--8<--
All Python source files under the `gdb/` directory must be formatted
using black version 21.4b0.
This specific version can be installed using:
$ pip3 install 'black == 21.4b0'
All you need to do to re-format files is run `black <file/directory>`,
and black will re-format any Python file it finds in there. It runs
quite fast, so the simplest is to do:
$ black gdb/
from the top-level.
If you notice that black produces changes unrelated to your patch, it's
probably because someone forgot to run it before you. In this case,
don't include unrelated hunks in your patch. Push an obvious patch
fixing the formatting and rebase your work on top of that.
-->8--
Once this is merged, I plan on setting a up an `ignoreRevsFile`
config so that git-blame ignores this commit, as described here:
https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame
I also plan on working on a git commit hook (checked in the repo) to
automatically check the formatting of the Python files on commit.
[1] https://pypi.org/project/black/
[2] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
gdb/ChangeLog:
* Re-format all Python files using black.
gdb/testsuite/ChangeLog:
* Re-format all Python files using black.
* gdb.python/py-prettyprint.exp (run_lang_tests): Adjust.
Change-Id: I28588a22c2406afd6bc2703774ddfff47cd61919
I noticed two errors in the Type.fields documentation:
1. It is possible to call `fields` on an array type, in which case it
returns one field representing the array's range. It is not
mentioned.
2. When calling `fields` on a type that doesn't have fields (by nature,
like an int), GDB raises a TypeError. It does not return an empty
sequence, as currently documented.
Fix these, and change the text into a bullet list. I find it easier to
read than one big paragraph.
The first issue is already tested in gdb.python/py-type.exp, but the
second one doesn't seem tested. Add a test in gdb.python/py-type.exp
for it.
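The two behaviours look roughly like this (sketch; 'arr' is assumed
to be an array-typed value):

    rng = arr.type.fields()[0]       # one field: the array's range
    print(rng.type.code == gdb.TYPE_CODE_RANGE)

    gdb.lookup_type("int").fields()  # raises TypeError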
gdb/doc/ChangeLog:
* python.texi (Types In Python): Re-organize Type.fields doc.
Mention handling of array types. Correct doc for when calling
the method on another type.
gdb/testsuite/ChangeLog:
* gdb.python/py-type.exp (test_fields): Test calling fields on
an int type.
Change-Id: I11c688177504cb070b81a4446ac91dec50b56a22
The `Type.range ()` tests in gdb.python/flexible-array-member.exp pass
when the test is compiled with gcc 9 or later, but not with gcc 8 or
earlier:
$ make check TESTS="gdb.python/flexible-array-member.exp" RUNTESTFLAGS="CC_FOR_TARGET='gcc-8'"
python print(zs['items'].type.range())^M
(0, 0)^M
(gdb) FAIL: gdb.python/flexible-array-member.exp: python print(zs['items'].type.range())
python print(zso['items'].type.range())^M
(0, 0)^M
(gdb) FAIL: gdb.python/flexible-array-member.exp: python print(zso['items'].type.range())
The value that we get for the upper bound of a flexible array member
declared with a "0" size is 0 with gcc <= 8 and is -1 for gcc >= 9.
This is due to different debug info. For this member, gcc 8 does:
0x000000d5:   DW_TAG_array_type
                DW_AT_type [DW_FORM_ref4]      (0x00000034 "int")
                DW_AT_sibling [DW_FORM_ref4]   (0x000000e4)

0x000000de:   DW_TAG_subrange_type
                DW_AT_type [DW_FORM_ref4]      (0x0000002d "long unsigned int")

For the same type, gcc 9 does:

0x000000d5:   DW_TAG_array_type
                DW_AT_type [DW_FORM_ref4]      (0x00000034 "int")
                DW_AT_sibling [DW_FORM_ref4]   (0x000000e5)

0x000000de:   DW_TAG_subrange_type
                DW_AT_type [DW_FORM_ref4]      (0x0000002d "long unsigned int")
                DW_AT_count [DW_FORM_data1]    (0x00)
Ideally, GDB would present a consistent and documented value for an
array member declared with size 0, regardless of what the debug info
looks like. But for now, just change the test to accept the two
values, to get rid of the failure and keep the test in sync with
GDB's current behaviour.
I also realized (by looking at the py-type.exp test) that calling the
fields method on an array type yields one field representing the "index"
of the array. The type of that field is of type range
(gdb.TYPE_CODE_RANGE). When calling `.range()` on that range type, it
yields the same range tuple as when calling `.range()` on the array type
itself. For completeness, add some tests to access the range tuple
through that range type as well.
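Roughly (sketch, using the test's 'zs' variable):

    arr_type = gdb.parse_and_eval("zs")["items"].type
    range_type = arr_type.fields()[0].type   # gdb.TYPE_CODE_RANGE
    print(range_type.range() == arr_type.range())  # True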
gdb/testsuite/ChangeLog:
* gdb.python/flexible-array-member.exp: Adjust expected range
value for member declared with 0 size. Test accessing range
tuple through range type.
Change-Id: Ie4e06d99fe9315527f04577888f48284d649ca4c
The test gdb.python/py-startup-opt.exp checks the behaviour of GDB's:
set python dont-write-bytecode on
This flag (when on) stops Python creating .pyc files. The test first
checks that .pyc files will be created, then turns this option on and
checks .pyc files will not be created.
However, if the user has PYTHONDONTWRITEBYTECODE set in their
environment then this will prevent Python from creating .pyc files;
as such, the first test, that .pyc files are being created, currently
fails.
We could unset PYTHONDONTWRITEBYTECODE; however, until Python 3.8
there is no way to control where Python writes the .pyc files. As the
GDB developer clearly doesn't want .pyc files created in their
file-system, it feels wrong to silently unset this environment
variable.
My proposal then, is that we just spot when this environment variable
is set and adjust the expected results. My hope is that across all
GDB developers some will be running with PYTHONDONTWRITEBYTECODE
unset, so this feature will be fully tested at least some of the time.
gdb/testsuite/ChangeLog:
PR testsuite/27788
* gdb.python/py-startup-opt.exp (test_python_settings): Change the
expected results when environment variable PYTHONDONTWRITEBYTECODE
is set.
Add two new commands to GDB that can be placed into the early
initialization to control how Python starts up. The new options are:
set python ignore-environment on|off
set python dont-write-bytecode auto|on|off
show python ignore-environment
show python dont-write-bytecode
These can be used from GDB's startup file to control how the Python
extension language behaves. These options are equivalent to the -E
and -B flags to python respectively, their descriptions from the
Python man page:
-E Ignore environment variables like PYTHONPATH and PYTHONHOME
that modify the behavior of the interpreter.
-B Don't write .pyc files on import.
gdb/ChangeLog:
* NEWS: Mention new commands.
* python/python.c (python_ignore_environment): New static global.
(show_python_ignore_environment): New function.
(set_python_ignore_environment): New function.
(python_dont_write_bytecode): New static global.
(show_python_dont_write_bytecode): New function.
(set_python_dont_write_bytecode): New function.
(_initialize_python): Register new commands.
gdb/doc/ChangeLog:
* python.texinfo (Python Commands): Mention new commands.
gdb/testsuite/ChangeLog:
* gdb.python/py-startup-opt.exp: New file.
Without any explicit dependencies specified, the observers attached
to the 'gdb::observers::new_objfile' observable are always notified
in the order in which they have been attached.
The new_objfile observer callback to auto-load scripts is attached in
'_initialize_auto_load'.
The new_objfile observer callback that propagates the new_objfile event
to the Python side is attached in 'gdbpy_initialize_inferior', which is
called via '_initialize_python'.
With '_initialize_python' happening before '_initialize_auto_load',
the consequence was that the new_objfile event was emitted on the Python
side before autoloaded scripts had been executed when a new objfile was
loaded.
As a result, trying to access the objfile's pretty printers (defined in
the autoloaded script) from a handler for the Python-side
'new_objfile' event would fail. Those would only be initialized later on
(when the 'auto_load_new_objfile' callback was called).
To make sure that the objfile passed to the Python event handler
is properly initialized (including its 'pretty_printers' member),
make sure that the 'auto_load_new_objfile' observer is notified
before the 'python_new_objfile' one that propagates the event
to the Python side.
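With this ordering, a handler like the following (sketch) sees the
autoloaded pretty printers already registered on the objfile:

    def new_objfile_handler(event):
        print(event.new_objfile.pretty_printers)

    gdb.events.new_objfile.connect(new_objfile_handler)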
To do this, make use of the mechanism to explicitly specify
dependencies between observers (introduced in a preparatory commit).
Add a corresponding testcase that involves a test library with an autoloaded
Python script and a handler for the Python 'new_objfile' event.
(The real world use case where I came across this issue was in an attempt
to extend handling for GDB pretty printers for dynamically loaded
objfiles in the Qt Creator IDE; see [1] and [2] for more background.)
[1] https://bugreports.qt.io/browse/QTCREATORBUG-25339
[2] https://codereview.qt-project.org/c/qt-creator/qt-creator/+/333857/1
Tested on x86_64-linux (Debian testing).
gdb/ChangeLog:
* gdb/auto-load.c (_initialize_auto_load): Specify token
when attaching the 'auto_load_new_objfile' observer, so
other observers can specify it as a dependency.
* gdb/auto-load.h (struct token): Declare
'auto_load_new_objfile_observer_token' as token to be used
for the 'auto_load_new_objfile' observer.
* gdb/python/py-inferior.c (gdbpy_initialize_inferior): Make
'python_new_objfile' observer depend on 'auto_load_new_objfile'
observer, so it gets notified after the latter.
gdb/testsuite/ChangeLog:
* gdb.python/libpy-autoloaded-pretty-printers-in-newobjfile-event.so-gdb.py: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.h: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-main.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.exp: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.py: New test.
Change-Id: I8275b3f4c3bec32e56dd7892f9a59d89544edf89
We don't want to execute this test if Python support is not compiled in
GDB, add the necessary check.
gdb/testsuite/ChangeLog:
* gdb.python/flexible-array-member.exp: Add check for Python
support.
Change-Id: I853b937d2a193a0bb216566bef1a35354264b1c5
As reported in bug 27757, we get an internal error when doing:
$ cat test.c
struct foo {
  int len;
  int items[];
};

struct foo *p;

int main() {
  return 0;
}
$ gcc test.c -g -O0 -o test
$ ./gdb -q -nx --data-directory=data-directory ./test -ex 'python gdb.parse_and_eval("p").type.target()["items"].type.range()'
Reading symbols from ./test...
/home/simark/src/binutils-gdb/gdb/gdbtypes.h:435: internal-error: LONGEST dynamic_prop::const_val() const: Assertion `m_kind == PROP_CONST' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n)
This is because the Python code (typy_range) blindly reads the high
bound of the type of `items` as a constant value. Since it is a
flexible array member, it has no high bound: the property is
undefined.
Since commit 8c2e4e0689 ("gdb: add accessors to struct dynamic_prop"),
the getters check that you are not getting a property value of the wrong
kind, so this causes a failed assertion.
Fix it by checking if the property is indeed a constant value before
accessing it as such. Otherwise, use 0. This restores the previous GDB
behavior: because the structure was zero-initialized, this is what was
returned before. But now this behavior is explicit and not accidental.
Add a test, gdb.python/flexible-array-member.exp, that is derived from
gdb.base/flexible-array-member.exp. It tests the same things, but
through the Python API. It also specifically tests getting the range
from the various kinds of flexible array member types (AFAIK it wasn't
possible to do the equivalent through the CLI).
gdb/ChangeLog:
PR gdb/27757
* python/py-type.c (typy_range): Check that bounds are constant
before accessing them as such.
* guile/scm-type.c (gdbscm_type_range): Likewise.
gdb/testsuite/ChangeLog:
PR gdb/27757
* gdb.python/flexible-array-member.c: New test.
* gdb.python/flexible-array-member.exp: New test.
* gdb.guile/scm-type.exp (test_range): Add test for flexible
array member.
* gdb.guile/scm-type.c (struct flex_member): New.
(main): Use it.
Change-Id: Ibef92ee5fd871ecb7c791db2a788f203dff2b841
Give a test a proper name in order to avoid including a path in the
test name.
gdb/testsuite/ChangeLog:
* gdb.python/py-parameter.exp: Give a test a proper name to avoid
including a path in the test name.
It was reported on IRC that using gdb.parameter('data-directory')
doesn't work correctly.
The problem is that the data directory is stored in 'gdb_datadir';
however, the set/show command is associated with a temporary
'staged_gdb_datadir'.
When the user does 'set data-directory VALUE', the VALUE is stored in
'staged_gdb_datadir' by GDB, then set_gdb_datadir is called. This in
turn calls set_gdb_data_directory to copy the value from
staged_gdb_datadir into gdb_datadir.
However, set_gdb_data_directory will resolve relative paths, so the
value stored in gdb_datadir might not match the value in
staged_gdb_datadir.
The Python gdb.parameter API fetches the parameter values by accessing
the variable associated with the show command, so in this case
staged_gdb_datadir. This causes two problems:
1. Initially staged_gdb_datadir is NULL, and remains as such until the
user does 'set data-directory VALUE' (which might never happen), but
gdb_datadir starts with GDB's default data-directory value. So
initially from Python gdb.parameter('data-directory') will return the
empty string, even though at GDB's CLI 'show data-directory' prints a
real path.
2. If the user does 'set data-directory ./some/relative/path', GDB
will resolve the relative path, thus, 'show data-directory' at the CLI
will print an absolute path. However, the value in staged_gdb_datadir
will still be the relative path, and gdb.parameter('data-directory')
from Python will return the relative path.
In this commit I fix both of these issues by:
1. Initialising the value in staged_gdb_datadir based on the initial
value in gdb_datadir, and
2. In set_gdb_datadir, after calling set_gdb_data_directory, I copy
the value in gdb_datadir back into staged_gdb_datadir.
With these two changes in place the value in staged_gdb_datadir should
always match the value in gdb_datadir, and accessing data-directory
from Python should now work correctly.
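That is, after this commit the following (sketch) prints the same
absolute path that 'show data-directory' reports:

    print(gdb.parameter('data-directory'))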
gdb/ChangeLog:
* top.c (staged_gdb_datadir): Update comment.
(set_gdb_datadir): Copy the value of gdb_datadir back into
staged_gdb_datadir.
(init_main): Initialise staged_gdb_datadir.
gdb/testsuite/ChangeLog:
* gdb.python/py-parameter.exp: Add test for reading data-directory
using gdb.parameter API.
This commit adds a couple of tests to the python pretty printer
testing.
I've added a test for the 'array' display hint. This display hint is
tested by gdb.python/py-mi.exp; however, the MI testing is done via
the varobj interface, and this code makes its own direct calls to the
Python pretty printers from gdb/varobj.c. What this means is that the
interface to the pretty printers in gdb/python/py-prettyprint.c is not
tested for the 'array' display hint path.
I also added a test for what happens when the display_hint method
raises an exception. There wasn't a bug that inspired this test;
just, while adding the previous test, I thought: I wonder what happens
if... The current behaviour of GDB seems reasonable: GDB displays the Python
exception, and then continues printing the value as if display_hint
had returned None. I added a test to lock in this behaviour.
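For reference, the relevant printer method looks something like this
(a simplified sketch of the test's ContainerPrinter):

    class ContainerPrinter:
        def __init__(self, val):
            self.val = val

        def to_string(self):
            return "container"

        def display_hint(self):
            # Returning 'array' selects the array display style;
            # raising here makes GDB print the exception and fall
            # back to the default (None) behaviour.
            return "array"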
gdb/testsuite/ChangeLog:
* gdb.python/py-prettyprint.c (struct container): Add 'is_array_p'
member.
(make_container): Initialise is_array_p.
* gdb.python/py-prettyprint.exp: Add new tests.
* gdb.python/py-prettyprint.py (ContainerPrinter.display_hint):
Check is_array_p and possibly return 'array'.