With test-case gdb.python/py-events.exp on ubuntu 18.04.5 we run into:
...
(gdb) info threads^M
Id Target Id Frame ^M
* 1 Thread 0x7ffff7fc3740 (LWP 31467) "py-events" do_nothing () at \
src/gdb/testsuite/gdb.python/py-events-shlib.c:19^M
(gdb) FAIL: gdb.python/py-events.exp: get current thread
...
The "info threads" command uses "Thread" instead of "process" because
libpthread is already loaded:
...
new objfile name: /lib/x86_64-linux-gnu/libdl.so.2^M
[Thread debugging using libthread_db enabled]^M
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".^M
event type: new_objfile^M
new objfile name: /lib/x86_64-linux-gnu/libpthread.so.0^M
...
and consequently thread_db_target::pid_to_str is used.
Fix this by parsing the "Thread" expression.
Tested on x86_64-linux.
The documentation of mi_gdb_test states that the command, pattern and message
arguments are mandatory:
...
# mi_gdb_test COMMAND PATTERN MESSAGE [IPATTERN] -- send a command to gdb;
# test the result.
...
However, this is not checked, and when mi_gdb_test is called with fewer
than 3 arguments, it passes or fails silently.
Fix this by using the following semantics:
- if there are 1 or 2 arguments, use the command as the message.
- if there is 1 argument, use ".*" as the pattern.
- if there are no arguments, or too many, error out.
Fix a PATH issue in gdb.mi/mi-logging.exp, introduced by using the command as
message. Fix a few other trivial-looking FAILs.
There are 11 less trivial-looking FAILs left in gdb.mi in test-cases:
- mi-nsmoribund.exp
- mi-breakpoint-changed.exp
- mi-break.exp.
Tested on x86_64-linux.
The guile API has (history-append! <value>) to add values into GDB's
history list. There is currently no equivalent in the Python API.
This commit adds gdb.add_history(<value>) to the Python API. This
function takes <value>, a gdb.Value (or anything that can be passed to
the constructor of gdb.Value), and adds the value it represents to
GDB's history list. The index of the newly added value is returned.
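A minimal usage sketch from the GDB prompt (the value appended here is
arbitrary):

  (gdb) python idx = gdb.add_history(gdb.Value(42))
  (gdb) python print("new history index: %d" % idx)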
As the breakpoint_modified observer is currently notified upon a
breakpoint stop before auto-disabling is handled (when the enable count
is reached), the observer is never notified of the disabling.
The problem affects:
- The enabled= value the MI interpreter reports in =breakpoint-modified
- A Python event handler for breakpoint_modified using the "enabled"
  member of its parameter
- Insight: the breakpoint GUI window is not properly updated upon
  auto-disable
This patch moves the observer notification after the auto-disabling
code and implements corresponding tests for the MI and Python cases.
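For the Python case, a handler along these lines (purely illustrative)
now sees the auto-disable reflected in the "enabled" member:

  def on_bp_modified(bp):
      # with the notification moved after the auto-disabling code,
      # bp.enabled is already False here once the enable count is used up
      print("breakpoint %d enabled: %s" % (bp.number, bp.enabled))

  gdb.events.breakpoint_modified.connect(on_bp_modified)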
Fixes https://sourceware.org/bugzilla/show_bug.cgi?id=23336
Change-Id: I0c50df4789334071e5390cb46b3ca0d4a7f83c61
The test steps into func2 and then does an up to get back to the previous
frame. The test checks that the line number you are at after the up command
is greater than the line where the function was called from. The
assembly/codegen for the powerpc target includes a NOP after the
branch-link.
func2 (); /* Break at func2 call site. */
10000694: 59 00 00 48 bl 100006ec
10000698: 00 00 00 60 nop
return 0; /* Break to end. */
1000069c: 00 00 20 39 li r9,0
The PC at the instruction following the branch-link is 0x10000698, which
gdb.find_pc_line() maps to the same line number as the bl instruction.
GDB did move past the branch-link location, thus making forward progress.
The following proposed fix adds an additional PC check to see if forward
progress was made. The line test is changed from greater than to greater
than or equal.
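In Python terms, the check is roughly equivalent to the following sketch
(call_site_pc and call_site_line would have been recorded before the up
command; the names are illustrative, and the actual test is written in
Tcl):

  frame = gdb.selected_frame()
  sal = gdb.find_pc_line(frame.pc())
  # forward progress was made if the PC moved past the call site, or if
  # the line is at or beyond the line of the call
  ok = frame.pc() > call_site_pc or sal.line >= call_site_line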
This patch addresses a design problem with the symbol_needs_eval_context
class. It exposes the problem by introducing two new testsuite test
cases.
To explain the issue, I first need to explain the dwarf_expr_context
class that the symbol_needs_eval_context class derives from.
The intention behind the dwarf_expr_context class is to commonize the
DWARF expression evaluation mechanism for different evaluation
contexts. Currently in gdb, the evaluation context can contain some or
all of the following information: architecture, object file, frame and
compilation unit.
Depending on the information needed to evaluate a given expression,
there are currently three distinct DWARF expression evaluators:
- Frame: designed to evaluate an expression in the context of call
frame information (dwarf_expr_executor class). This evaluator doesn't
need compilation unit information.
- Location description: designed to evaluate an expression in the
context of source-level information (dwarf_evaluate_loc_desc
class). This evaluator expects all information needed for the
evaluation of the given expression to be present.
- Symbol needs: designed to answer a question about the parts of the
context information required to evaluate a DWARF expression behind a
given symbol (symbol_needs_eval_context class). This evaluator
doesn't need frame information.
The functional difference between the symbol needs evaluator and the
others is that this evaluator is not meant to interact with the actual
target. Instead, it is supposed to check which parts of the context
information are needed for the given DWARF expression to be evaluated by
the location description evaluator.
The idea is to take advantage of the existing dwarf_expr_context
evaluation mechanism and to fake all required interactions with the
actual target, by returning back dummy values. The evaluation result is
returned as one of three possible values, based on operations found in a
given expression:
- SYMBOL_NEEDS_NONE,
- SYMBOL_NEEDS_REGISTERS and
- SYMBOL_NEEDS_FRAME.
The problem here is that faking results of target interactions can yield
an incorrect evaluation result.
For example, if we have a conditional DWARF expression, where the
condition depends on a value read from an actual target, and the true
branch of the condition requires frame information to be evaluated,
while the false branch doesn't, fake target reads could conclude that
frame information is not needed, when in fact it is. This wrong
information would then cause the expression to actually be evaluated (by
the location description evaluator) with missing frame information.
This would then crash the debugger.
The gdb.dwarf2/symbol_needs_eval_fail.exp test introduces this
scenario, with the following DWARF expression:
DW_OP_addr $some_variable
DW_OP_deref
# conditional jump to DW_OP_bregx
DW_OP_bra 4
DW_OP_lit0
# jump to DW_OP_stack_value
DW_OP_skip 3
DW_OP_bregx $dwarf_regnum 0
DW_OP_stack_value
This expression describes a case where some variable dictates the
location of another variable. Depending on the value of some_variable, the
variable whose location is described by this expression is either read
from a register or it is defined as a constant value 0. In both cases,
the value will be returned as an implicit location description on the
DWARF stack.
Currently, when the symbol needs evaluator fakes a memory read from the
address behind the some_variable variable, the constant value 0 is used
as the value of the described variable, and the check returns the
SYMBOL_NEEDS_NONE result.
This is clearly a wrong result and it causes the debugger to crash.
The scenario might sound strange to some people, but it comes from a
SIMD/SIMT architecture where $some_variable is an execution mask. In
any case, it is a valid DWARF expression, and GDB shouldn't crash while
evaluating it. Also, a similar example could be made based on a
condition of the frame base value, where if that value is concluded to
be 0, the variable location could be defaulted to a TLS based memory
address.
The gdb.dwarf2/symbol_needs_eval_timeout.exp test introduces a second
scenario. This scenario is a bit more abstract due to the DWARF
assembler lacking CFI support, but it exposes a different
manifestation of the same problem. Like in the previous scenario, the
DWARF expression used in the test is valid:
DW_OP_lit1
DW_OP_addr $some_variable
DW_OP_deref
# jump to DW_OP_fbreg
DW_OP_skip 4
DW_OP_drop
DW_OP_fbreg 0
DW_OP_dup
DW_OP_lit0
DW_OP_eq
# conditional jump to DW_OP_drop
DW_OP_bra -9
DW_OP_stack_value
Similarly to the previous scenario, the location of a variable A is an
implicit location description with a constant value that depends on a
value held by a global variable. The difference from the previous case
is that the DWARF expression contains a loop instead of just one branch. The
end condition of that loop depends on the expectation that a frame base
value is never zero. Currently, the act of faking the target reads will
cause the symbol needs evaluator to get stuck in an infinite loop.
Somebody could argue that we could change the fake reads to return
something else, but that would only hide the real problem.
The general impression seems to be that the desired design is to have
one class that deals with parsing of the DWARF expression, while there
are virtual methods that deal with specifics of some operations.
Using an evaluator mechanism here doesn't seem to be correct, because
the act of evaluation relies on accessing the data from the actual
target with the possibility of skipping the evaluation of some parts of
the expression.
To better explain the proposed solution for the issue, I first need to
explain a couple more details behind the current design:
There are multiple places in gdb that handle DWARF expression parsing
for different purposes. Some are in charge of converting the expression
to some other internal representation (decode_location_expression,
disassemble_dwarf_expression and dwarf2_compile_expr_to_ax), some are
analysing the expression for specific information
(compute_stack_depth_worker) and some are in charge of evaluating the
expression in a given context (dwarf_expr_context::execute_stack_op
and decode_locdesc).
The problem is that all those functions have a similar (large) switch
statement that handles each DWARF expression operation. The result of
this is code duplication and harder maintenance.
As a step in the right direction to solve this problem (at least for
the purpose of DWARF expression evaluation), the expression parsing was
commonized inside an evaluator base class (dwarf_expr_context). This
makes sense for all derived classes, except for the symbol needs
evaluator (symbol_needs_eval_context) class.
As described previously, the problem with this evaluator is that if the
evaluator is not allowed to access the actual target, it is not really
evaluating.
Instead, the desired function of a symbol needs evaluator seems to fall
more into the expression analysis category. This means that a more natural
fit for this evaluator is to be a symbol needs analysis, similar to the
existing compute_stack_depth_worker analysis.
Another problem is that using a heavyweight mechanism of an evaluator
to do expression analysis seems to be unneeded overhead. It also
requires a more complicated design of the parent class to support fake
target reads.
The reality is that the whole symbol_needs_eval_context class can be
replaced with a lightweight recursive analysis function that will give
a more correct result without compromising the design of the
dwarf_expr_context class. The analysis treats the expression byte
stream as a DWARF operation graph, where each graph node can be
visited only once and each operation can decide if the frame context
is needed for their evaluation.
The downside of this approach is the addition of one more similar switch
statement, but at least this way the new symbol needs analysis will be
a lightweight mechanism and it will provide a correct result for any
given DWARF expression.
A more desired long term design would be to have one class that deals
with parsing of the DWARF expression, while there would be virtual
methods that deal with specifics of some DWARF operations. Then that
class would be used as a base for all DWARF expression parsing mentioned
at the beginning.
This, however, requires far bigger changes that are out of the scope
of this patch series.
The new analysis requires the DWARF location description for the
argc argument of the main function to change in the assembly file
gdb.python/amd64-py-framefilter-invalidarg.S. Originally, the expression
ended with a 0 value byte, which was never reached by the symbol needs
evaluator, because it was detecting a stack underflow when evaluating
the operation before. The new approach does not simulate a DWARF
stack anymore, so the 0 value byte needs to be removed because it
makes the DWARF expression invalid.
gdb/ChangeLog:
* dwarf2/loc.c (class symbol_needs_eval_context): Remove.
(dwarf2_get_symbol_read_needs): New function.
(dwarf2_loc_desc_get_symbol_read_needs): Remove.
(locexpr_get_symbol_read_needs): Use
dwarf2_get_symbol_read_needs.
gdb/testsuite/ChangeLog:
* gdb.python/amd64-py-framefilter-invalidarg.S: Update argc
DWARF location expression.
* lib/dwarf.exp (_location): Handle DW_OP_fbreg.
* gdb.dwarf2/symbol_needs_eval.c: New file.
* gdb.dwarf2/symbol_needs_eval_fail.exp: New file.
* gdb.dwarf2/symbol_needs_eval_timeout.exp: New file.
PR varobj/28131 points out a crash in the varobj deletion code. It
took a while to reproduce this, but essentially what happens is that a
top-level varobj deletes its root object, then deletes the "dynamic"
object. However, deletion of the dynamic object may cause
~py_varobj_iter to run, which in turn uses gdbpy_enter_varobj:
gdbpy_enter_varobj::gdbpy_enter_varobj (const struct varobj *var)
: gdbpy_enter (var->root->exp->gdbarch, var->root->exp->language_defn)
{
}
However, because var->root has already been destroyed, this is
invalid.
I've added a new test case. This doesn't reliably crash, but the
problem can easily be seen under valgrind (and, I presume, with ASAN,
though I did not try this).
Tested on x86-64 Fedora 32. I also propose putting this on the GDB 11
branch, with a suitable ChangeLog entry of course.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=28131
Split the file into multiple independent test procs, where each proc
starts with a fresh GDB. I find it easier to understand what a test is
doing when each part of the test is isolated and self-contained. It
makes it easier to comment out some parts of the test while working /
debugging a specific part. It also makes it easier to add new things
(which a subsequent patch will do) without fear of impacting another part
of the test.
Change-Id: I8b4d52ac82b1492d79b679e13914ed177d8a836d
Not all systems have hardware breakpoint support. Add a check
to see if the system supports hardware breakpoints.
gdb/testsuite/ChangeLog
* gdb.python/py-breakpoint.exp (test_hardware_breakpoints): Add
check for hardware breakpoint support.
When building gdb with --disable-tui, we run into:
...
(gdb) frame apply all -- -^M
Undefined command: "-". Try "help".^M
(gdb) ERROR: Undefined command "frame apply all -- -".
UNRESOLVED: gdb.base/options.exp: test-frame-apply: frame apply all -- -
...
Fix this by detecting whether tui is supported, and skipping the tui-related
tests otherwise. Same in some gdb.tui test-cases.
Tested on x86_64-linux.
gdb/testsuite/ChangeLog:
2021-07-13 Tom de Vries <tdevries@suse.de>
* gdb.base/options.exp: Skip tui-related tests when tui is not
supported.
* gdb.python/tui-window-disabled.exp: Same.
* gdb.python/tui-window.exp: Same.
This commit adds initial support for catchpoints to the python
breakpoint API.
This commit adds a BP_CATCHPOINT constant which corresponds to
GDB's internal bp_catchpoint. The new constant is documented in the
manual.
The user can't create breakpoints with type BP_CATCHPOINT after this
commit, but breakpoints that already exist, obtained with the
`gdb.breakpoints` function, can now have this type. Additionally,
when a stop event is reported for hitting a catchpoint, GDB will now
report a BreakpointEvent with the attached breakpoint being of type
BP_CATCHPOINT - previously GDB would report a generic StopEvent in
this situation.
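A rough sketch of how this can be observed from a Python stop handler
(the handler itself is illustrative only):

  def on_stop(event):
      if (isinstance(event, gdb.BreakpointEvent)
              and event.breakpoint.type == gdb.BP_CATCHPOINT):
          print("stopped at a catchpoint")

  gdb.events.stop.connect(on_stop)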
gdb/ChangeLog:
* NEWS: Mention Python BP_CATCHPOINT feature.
* python/py-breakpoint.c (pybp_codes): Add bp_catchpoint support.
(bppy_init): Likewise.
(gdbpy_breakpoint_created): Likewise.
gdb/doc/ChangeLog:
* python.texinfo (Breakpoints In Python): Add BP_CATCHPOINT
description.
gdb/testsuite/ChangeLog:
* gdb.python/py-breakpoint.c (do_throw): New function.
(main): Call do_throw.
* gdb.python/py-breakpoint.exp (test_catchpoints): New proc.
Add new methods to the PendingFrame and Frame classes to obtain the
stack frame level for each object.
The use of 'level' as the method name is consistent with the existing
attribute RecordFunctionSegment.level (though this is an attribute
rather than a method).
For Frame/PendingFrame I went with methods as these classes currently
only use methods, including for simple data like architecture, so I
want to be consistent with this interface.
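A quick illustration from the CLI (within an unwinder the corresponding
call would be pending_frame.level()):

  (gdb) python print(gdb.selected_frame().level())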
gdb/ChangeLog:
* NEWS: Mention the two new methods.
* python/py-frame.c (frapy_level): New function.
(frame_object_methods): Register 'level' method.
* python/py-unwind.c (pending_framepy_level): New function.
(pending_frame_object_methods): Register 'level' method.
gdb/doc/ChangeLog:
* python.texi (Unwinding Frames in Python): Mention
PendingFrame.level.
(Frames In Python): Mention Frame.level.
gdb/testsuite/ChangeLog:
* gdb.python/py-frame.exp: Add Frame.level tests.
* gdb.python/py-pending-frame-level.c: New file.
* gdb.python/py-pending-frame-level.exp: New file.
* gdb.python/py-pending-frame-level.py: New file.
This patch came about because I wanted to write a frame unwinder that
would corrupt the backtrace in a particular way. In order to achieve
what I wanted I ended up trying to write an unwinder like this:
class FrameId(object):
.... snip class definition ....
class TestUnwinder(Unwinder):
    def __init__(self):
        Unwinder.__init__(self, "some name")

    def __call__(self, pending_frame):
        pc_desc = pending_frame.architecture().registers().find("pc")
        pc = pending_frame.read_register(pc_desc)
        sp_desc = pending_frame.architecture().registers().find("sp")
        sp = pending_frame.read_register(sp_desc)
        # ... snip code to decide if this unwinder applies or not.
        fid = FrameId(pc, sp)
        unwinder = pending_frame.create_unwind_info(fid)
        unwinder.add_saved_register(pc_desc, pc)
        unwinder.add_saved_register(sp_desc, sp)
        return unwinder
The important things here are the two calls:
unwinder.add_saved_register(pc_desc, pc)
unwinder.add_saved_register(sp_desc, sp)
On x86-64 these would fail with an assertion error:
gdb/regcache.c:168: internal-error: int register_size(gdbarch*, int): Assertion `regnum >= 0 && regnum < gdbarch_num_cooked_regs (gdbarch)' failed.
What happens is that in unwind_infopy_add_saved_register (py-unwind.c)
we call register_size. As register_size should only be called on
cooked (real or pseudo) registers, and 'pc' and 'sp' are implemented
as user registers (at least on x86-64), we trigger the assertion.
A simple fix would be to check in unwind_infopy_add_saved_register if
the register number we are handling is a cooked register or not, if
not we can throw a 'Bad register' error back to the Python code.
However, I think we can do better.
Consider that at the CLI we can do this:
(gdb) set $pc=0x1234
This works because GDB first evaluates '$pc' to get a register value,
then evaluates '0x1234' to create a value encapsulating the
immediate. The contents of the immediate value are then copied back
to the location of the register value representing '$pc'.
The value location for a user-register will (usually) be the location
of the real register that was accessed, so on x86-64 we'd expect this
to be $rip.
So, in this patch I propose that in the unwinder code, when
add_saved_register is called, if it is passed a
user-register (i.e. non-cooked) then we first fetch the register,
extract the real register number from the value's location, and use
that new register number when handling the add_saved_register call.
If the value location that we get for the user-register is not that of
a cooked register then we can still throw a 'Bad register' error back to
the Python code, but in most cases this will not happen.
gdb/ChangeLog:
* python/py-unwind.c (unwind_infopy_add_saved_register): Handle
saving user registers.
gdb/testsuite/ChangeLog:
* gdb.python/py-unwind-user-regs.c: New file.
* gdb.python/py-unwind-user-regs.exp: New file.
* gdb.python/py-unwind-user-regs.py: New file.
While reviewing a patch sent to the mailing list, I noticed there are a few
places where Python code checks if a variable is 'None' or not by using the
comparison operators '==' and '!='. PEP8 [1], which is used as the coding
standard in GDB's coding standards, recommends using 'is' / 'is not' when
comparing to a singleton such as 'None'.
This patch proposes to change the instances of '== None' to 'is None' and
'!= None' to 'is not None'.
[1] https://www.python.org/dev/peps/pep-0008/
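For instance (illustrative only):

  # before
  if symbol == None:
      return None
  # after
  if symbol is None:
      return None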
gdb/doc/ChangeLog:
* python.texi (Writing a Pretty-Printer): Use 'is None' instead of
'== None'.
gdb/ChangeLog:
* python/lib/gdb/FrameDecorator.py (FrameDecorator): Use 'is None' instead of
'== None'.
(FrameVars): Use 'is not None' instead of '!= None'.
* python/lib/gdb/command/frame_filters.py (SetFrameFilterPriority): Use 'is None'
instead of '== None' and 'is not None' instead of '!= None'.
gdb/testsuite/ChangeLog:
* gdb.base/premature-dummy-frame-removal.py (TestUnwinder): Use
'is None' instead of '== None' and 'is not None' instead of
'!= None'.
* gdb.python/py-frame-args.py (lookup_function): Same.
* gdb.python/py-framefilter-invalidarg.py (Reverse_Function): Same.
* gdb.python/py-framefilter.py (Reverse_Function): Same.
* gdb.python/py-nested-maps.py (lookup_function): Same.
* gdb.python/py-objfile-script-gdb.py (lookup_function): Same.
* gdb.python/py-prettyprint.py (lookup_function): Same.
* gdb.python/py-section-script.py (lookup_function): Same.
* gdb.python/py-unwind-inline.py (dummy_unwinder): Same.
* gdb.python/python.exp: Same.
* gdb.rust/pp.py (lookup_function): Same.
Evaluating expressions from within an inferior exit event handler can
cause a crash:
echo "int main() { return 0; }" > repro.c
gcc -g repro.c -o repro
./gdb -q --ex "set language c++" --ex "python gdb.events.exited.connect(lambda _: gdb.execute('set \$_a=0'))" --ex "run" repro
Reading symbols from repro...
Starting program: /home/mhov/repos/binutils-gdb-master/install-bad/bin/repro
[Inferior 1 (process 1974779) exited normally]
../../gdb/thread.c:72: internal-error: thread_info* inferior_thread(): Assertion `current_thread_ != nullptr' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) [answered Y; input not from terminal]
This is a bug, please report it. For instructions, see:
<https://www.gnu.org/software/gdb/bugs/>.
Backtrace
0 in internal_error of ../../gdbsupport/errors.cc:51
1 in inferior_thread of ../../gdb/thread.c:72
2 in expression::evaluate of ../../gdb/eval.c:98
3 in evaluate_expression of ../../gdb/eval.c:115
4 in set_command of ../../gdb/printcmd.c:1502
5 in do_const_cfunc of ../../gdb/cli/cli-decode.c:101
6 in cmd_func of ../../gdb/cli/cli-decode.c:2181
7 in execute_command of ../../gdb/top.c:670
...
22 in python_inferior_exit of ../../gdb/python/py-inferior.c:182
In `expression::evaluate (...)' there is a call to `inferior_thread
()' that is guarded by `target_has_execution ()':
struct value *
expression::evaluate (struct type *expect_type, enum noside noside)
{
  gdb::optional<enable_thread_stack_temporaries> stack_temporaries;
  if (target_has_execution ()
      && language_defn->la_language == language_cplus
      && !thread_stack_temporaries_enabled_p (inferior_thread ()))
    stack_temporaries.emplace (inferior_thread ());
The `target_has_execution ()' guard maps onto `inf->pid' and the
`inferior_thread ()' call assumes that `current_thread_' is set to
something meaningful:
struct thread_info*
inferior_thread (void)
{
  gdb_assert (current_thread_ != nullptr);
  return current_thread_;
}
In other words, it is assumed that if `inf->pid' is set then
`current_thread_' must also be set. This does not hold at the point
where inferior exit observers are notified:
- `generic_mourn_inferior (...)'
- `switch_to_no_thread ()'
- `current_thread_ = nullptr;'
- `exit_inferior (...)'
- `gdb::observers::inferior_exit.notify (...)'
- `inf->pid = 0'
The inferior exit notification means that a Python handler can get a
chance to run while `current_thread_' has been cleared and the
`inf->pid' has not been cleared. Since the Python handler can call any
GDB command with `gdb.execute(...)' (in my case `gdb.execute("set
$_a=0")'), we can end up evaluating expressions and asserting in
`evaluate_subexp (...)'.
This patch adds a test in `evaluate_subexp (...)' to check the global
`inferior_ptid' which is reset at the same time as `current_thread_'.
Checking `inferior_ptid' at the same time as `target_has_execution ()'
seems to be a common pattern:
$ git grep -n -e inferior_ptid --and -e target_has_execution
gdb/breakpoint.c:2998: && (inferior_ptid == null_ptid || !target_has_execution ()))
gdb/breakpoint.c:3054: && (inferior_ptid == null_ptid || !target_has_execution ()))
gdb/breakpoint.c:4587: if (inferior_ptid == null_ptid || !target_has_execution ())
gdb/infcmd.c:360: if (inferior_ptid != null_ptid && target_has_execution ())
gdb/infcmd.c:2380: /* FIXME: This should not really be inferior_ptid (or target_has_execution).
gdb/infrun.c:3438: if (!target_has_execution () || inferior_ptid == null_ptid)
gdb/remote.c:11961: if (!target_has_execution () || inferior_ptid == null_ptid)
gdb/solib.c:725: if (target_has_execution () && inferior_ptid != null_ptid)
The testsuite has been run on 5.4.0-59-generic x86_64 GNU/Linux:
- Ubuntu 20.04.1 LTS
- gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
- DejaGnu version 1.6.2
- Expect version 5.45.4
- Tcl version 8.6
- Native configuration: x86_64-pc-linux-gnu
- Target: unix
Results show a few XFAIL in
gdb.threads/attach-many-short-lived-threads.exp. The existing
py-events.exp tests are skipped for native-gdbserver and fail for
native-extended-gdbserver, but the new tests pass with
native-extended-gdbserver when run without the existing tests.
gdb/ChangeLog:
2021-06-03 Magne Hov <mhov@undo.io>
PR python/27841
* eval.c (expression::evaluate): Check inferior_ptid.
gdb/testsuite/ChangeLog:
2021-06-03 Magne Hov <mhov@undo.io>
PR python/27841
* gdb.python/py-events.exp: Extend inferior exit tests.
* gdb.python/py-events.py: Print inferior exit PID.
The gdb.SYMBOL_LABEL_DOMAIN constant was removed (probably by mistake)
in commit 51e78fc5fa.
gdb/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* python/py-symbol.c (gdbpy_initialize_symbols): Restore
gdb.SYMBOL_LABEL_DOMAIN constant.
gdb/testsuite/ChangeLog:
2021-06-03 Hannes Domani <ssbssa@yahoo.de>
* gdb.python/py-symbol.exp: Test symbol constants.
I noticed these files because they weren't considered by black for
reformatting, prior to adding pyproject.toml, because their extension is
not .py. I don't think they specifically need to be named .py.in, so I
suggest renaming them to .py. This will make it nicer to edit them, as
editors will recognize them more easily as Python files.
Perhaps this was needed before, when the testsuite didn't always put
output files in the output directory. Then, a different name for the
source and destination file might have been desirable to avoid
overwriting a file with itself (perhaps that wasn't well handled). But
in any case, it doesn't seem to cause any problems now.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-gdb.py.in: Rename to:
* gdb.python/py-framefilter-gdb.py: ... this.
* gdb.python/py-framefilter-invalidarg-gdb.py.in: Rename to:
* gdb.python/py-framefilter-invalidarg-gdb.py: ... this.
Change-Id: I63bb94010bbbc33434ee1c91a386c91fc1ff80bc
When running black to format Python files, files with extension .py.in
are ignored, because they don't end in .py. Add a pyproject.toml file
to instruct black to pick up these files too.
gdb/ChangeLog:
* pyproject.toml: New.
* gdb-gdb.py.in: Re-format.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-gdb.py.in: Re-format.
* gdb.python/py-framefilter-invalidarg-gdb.py.in: Re-format.
Change-Id: I9b88faec3360ea24788f44c8b89fe0b2a5f4eb97
Define a 'connection_num' attribute for Inferior objects. The
read-only attribute is the ID of the connection of an inferior, as
printed by "info inferiors". In GDB's internal terminology, that's
the process stratum target of the inferior. If the inferior has no
target connection, the attribute is None.
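For example, from the Python prompt:

  (gdb) python print(gdb.selected_inferior().connection_num)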
gdb/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python/py-inferior.c (infpy_get_connection_num): New function.
(inferior_object_getset): Add a new element for 'connection_num'.
* NEWS: Mention the 'connection_num' attribute of Inferior objects.
gdb/doc/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* python.texi (Inferiors In Python): Mention the 'connection_num'
attribute.
gdb/testsuite/ChangeLog:
2021-05-14 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.python/py-inferior.exp: Add test cases for 'connection_num'.
The 'print max-depth' feature incorrectly causes GDB to skip printing
the string representation of pretty printed variables if the variable
is stored at a nested depth corresponding to the set max-depth value.
This change ensures that it is always printed before checking whether
the maximum print depth has been reached.
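For a pretty printer along these lines (purely illustrative), the string
returned by to_string is now printed for a value sitting at the
max-depth boundary, rather than the value being elided entirely:

  class PointPrinter:
      def __init__(self, val):
          self.val = val

      def to_string(self):
          # this is the "string representation" that was being skipped
          return "a point"

      def children(self):
          yield "x", self.val["x"]
          yield "y", self.val["y"]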
Regression tested with GCC 7.3.0 on x86_64, ppc64le, aarch64.
gdb/ChangeLog:
* cp-valprint.c (cp_print_value): Replaced duplicate code.
* guile/scm-pretty-print.c (ppscm_print_children): Check max_depth
just before printing child values.
(gdbscm_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
* python/py-prettyprint.c (print_children): Check max_depth just
before printing child values.
(gdbpy_apply_val_pretty_printer): Don't check max_depth before
printing string representation.
gdb/testsuite/ChangeLog:
* gdb.python/py-format-string.c: Added a variable to test.
* gdb.python/py-format-string.exp: Check string representation is
printed at appropriate max_depth settings.
* gdb.python/py-nested-maps.exp: Likewise.
* gdb.guile/scm-pretty-print.exp: Add additional tests.
Re-format all Python files using black [1] version 21.4b0. The goal is
that from now on, we keep all Python files formatted using black. And
that we never have to discuss formatting during review (for these files
at least) ever again.
One change is needed in gdb.python/py-prettyprint.exp, because it
matches the string representation of an exception, which shows source
code. So the change in formatting must be replicated in the expected
regexp.
To document our usage of black I plan on adding this to the "GDB Python
Coding Standards" wiki page [2]:
--8<--
All Python source files under the `gdb/` directory must be formatted
using black version 21.4b0.
This specific version can be installed using:
$ pip3 install 'black == 21.4b0'
All you need to do to re-format files is run `black <file/directory>`,
and black will re-format any Python file it finds in there. It runs
quite fast, so the simplest is to do:
$ black gdb/
from the top-level.
If you notice that black produces changes unrelated to your patch, it's
probably because someone forgot to run it before you. In this case,
don't include unrelated hunks in your patch. Push an obvious patch
fixing the formatting and rebase your work on top of that.
-->8--
Once this is merged, I plan on setting up an `ignoreRevsFile`
config so that git-blame ignores this commit, as described here:
https://github.com/psf/black#migrating-your-code-style-without-ruining-git-blame
I also plan on working on a git commit hook (checked in the repo) to
automatically check the formatting of the Python files on commit.
[1] https://pypi.org/project/black/
[2] https://sourceware.org/gdb/wiki/Internals%20GDB-Python-Coding-Standards
gdb/ChangeLog:
* Re-format all Python files using black.
gdb/testsuite/ChangeLog:
* Re-format all Python files using black.
* gdb.python/py-prettyprint.exp (run_lang_tests): Adjust.
Change-Id: I28588a22c2406afd6bc2703774ddfff47cd61919
I noticed two errors in the Type.fields documentation:
1. It is possible to call `fields` on an array type, in which case it
returns one field representing the array's range. It is not
mentioned.
2. When calling `fields` on a type that doesn't have fields (by nature,
like an int), GDB raises a TypeError. It does not return an empty
sequence, as currently documented.
Fix these, and change the text into a bullet list. I find it easier to
read than one big paragraph.
The first issue is already tested in gdb.python/py-type.exp, but the
second one doesn't seem tested. Add a test in gdb.python/py-type.exp
for it.
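To illustrate both points from Python (a plain int array is enough):

  arr = gdb.lookup_type("int").array(9)
  # one field, representing the array's range
  print(arr.fields()[0].type.code == gdb.TYPE_CODE_RANGE)
  try:
      gdb.lookup_type("int").fields()
  except TypeError:
      print("int has no fields")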
gdb/doc/ChangeLog:
* python.texi (Types In Python): Re-organize Type.fields doc.
Mention handling of array types. Correct doc for when calling
the method on another type.
gdb/testsuite/ChangeLog:
* gdb.python/py-type.exp (test_fields): Test calling fields on
an int type.
Change-Id: I11c688177504cb070b81a4446ac91dec50b56a22
The `Type.range ()` tests in gdb.python/flexible-array-member.exp pass
when the test is compiled with gcc 9 or later, but not with gcc 8 or
earlier:
$ make check TESTS="gdb.python/flexible-array-member.exp" RUNTESTFLAGS="CC_FOR_TARGET='gcc-8'"
python print(zs['items'].type.range())^M
(0, 0)^M
(gdb) FAIL: gdb.python/flexible-array-member.exp: python print(zs['items'].type.range())
python print(zso['items'].type.range())^M
(0, 0)^M
(gdb) FAIL: gdb.python/flexible-array-member.exp: python print(zso['items'].type.range())
The value that we get for the upper bound of a flexible array member
declared with a "0" size is 0 with gcc <= 8 and is -1 for gcc >= 9.
This is due to different debug info. For this member, gcc 8 does:
0x000000d5: DW_TAG_array_type
DW_AT_type [DW_FORM_ref4] (0x00000034 "int")
DW_AT_sibling [DW_FORM_ref4] (0x000000e4)
0x000000de: DW_TAG_subrange_type
DW_AT_type [DW_FORM_ref4] (0x0000002d "long unsigned int")
For the same type, gcc 9 does:
0x000000d5: DW_TAG_array_type
DW_AT_type [DW_FORM_ref4] (0x00000034 "int")
DW_AT_sibling [DW_FORM_ref4] (0x000000e5)
0x000000de: DW_TAG_subrange_type
DW_AT_type [DW_FORM_ref4] (0x0000002d "long unsigned int")
DW_AT_count [DW_FORM_data1] (0x00)
Ideally, GDB would present a consistent and documented value for an
array member declared with size 0, regardless of what the debug info
looks like. But for now, just change the test to accept the two
values, to get rid of the failure and keep the test in sync with both
compiler behaviours.
I also realized (by looking at the py-type.exp test) that calling the
fields method on an array type yields one field representing the "index"
of the array. The type of that field is of type range
(gdb.TYPE_CODE_RANGE). When calling `.range()` on that range type, it
yields the same range tuple as when calling `.range()` on the array type
itself. For completeness, add some tests to access the range tuple
through that range type as well.
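Roughly, in Python terms (zs and its items member follow the test
source):

  zs = gdb.parse_and_eval("zs")
  items_type = zs["items"].type
  rng = items_type.range()    # (0, 0) or (0, -1) depending on the compiler
  # the array type's single field is its range; going through that
  # field's type yields the same tuple
  assert items_type.fields()[0].type.range() == rng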
gdb/testsuite/ChangeLog:
* gdb.python/flexible-array-member.exp: Adjust expected range
value for member declared with 0 size. Test accessing range
tuple through range type.
Change-Id: Ie4e06d99fe9315527f04577888f48284d649ca4c
The test gdb.python/py-startup-opt.exp checks the behaviour of GDB's:
set python dont-write-bytecode on
This flag (when on) stops Python creating .pyc files. The test first
checks that .pyc files will be created, then turns this option on and
checks .pyc files will not be created.
However, if the user has PYTHONDONTWRITEBYTECODE set in their
environment then this will prevent Python from creating .pyc files; as
such, the first test, which checks that .pyc files are being created,
currently fails.
We could unset PYTHONDONTWRITEBYTECODE; however, until Python 3.8
there is no way to control where Python writes the .pyc files. As the
GDB developer clearly doesn't want .pyc files created in their
file-system, it feels wrong to silently unset this environment
variable.
My proposal then, is that we just spot when this environment variable
is set and adjust the expected results. My hope is that across all
GDB developers some will be running with PYTHONDONTWRITEBYTECODE
unset, so this feature will be fully tested at least some of the time.
gdb/testsuite/ChangeLog:
PR testsuite/27788
* gdb.python/py-startup-opt.exp (test_python_settings): Change the
expected results when environment variable PYTHONDONTWRITEBYTECODE
is set.
Add two new commands to GDB that can be placed into the early
initialization to control how Python starts up. The new options are:
set python ignore-environment on|off
set python dont-write-bytecode auto|on|off
show python ignore-environment
show python dont-write-bytecode
These can be used from GDB's startup file to control how the Python
extension language behaves. These options are equivalent to the -E
and -B flags to python respectively, their descriptions from the
Python man page:
-E Ignore environment variables like PYTHONPATH and PYTHONHOME
that modify the behavior of the interpreter.
-B Don't write .pyc files on import.
gdb/ChangeLog:
* NEWS: Mention new commands.
* python/python.c (python_ignore_environment): New static global.
(show_python_ignore_environment): New function.
(set_python_ignore_environment): New function.
(python_dont_write_bytecode): New static global.
(show_python_dont_write_bytecode): New function.
(set_python_dont_write_bytecode): New function.
(_initialize_python): Register new commands.
gdb/doc/ChangeLog:
* python.texinfo (Python Commands): Mention new commands.
gdb/testsuite/ChangeLog:
* gdb.python/py-startup-opt.exp: New file.
Without any explicit dependencies specified, the observers attached
to the 'gdb::observers::new_objfile' observable are always notified
in the order in which they have been attached.
The new_objfile observer callback to auto-load scripts is attached in
'_initialize_auto_load'.
The new_objfile observer callback that propagates the new_objfile event
to the Python side is attached in 'gdbpy_initialize_inferior', which is
called via '_initialize_python'.
With '_initialize_python' happening before '_initialize_auto_load',
the consequence was that the new_objfile event was emitted on the Python
side before autoloaded scripts had been executed when a new objfile was
loaded.
As a result, trying to access the objfile's pretty printers (defined in
the autoloaded script) from a handler for the Python-side
'new_objfile' event would fail. Those would only be initialized later on
(when the 'auto_load_new_objfile' callback was called).
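A handler along these lines (illustrative; the real test registers
pretty printers from an autoloaded script for a test library) was
affected:

  def on_new_objfile(event):
      objfile = event.new_objfile
      # before this fix, the autoloaded <objfile>-gdb.py script had not
      # run yet at this point, so pretty_printers was still empty
      print(objfile.filename, len(objfile.pretty_printers))

  gdb.events.new_objfile.connect(on_new_objfile)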
To make sure that the objfile passed to the Python event handler
is properly initialized (including its 'pretty_printers' member),
make sure that the 'auto_load_new_objfile' observer is notified
before the 'python_new_objfile' one that propagates the event
to the Python side.
To do this, make use of the mechanism to explicitly specify
dependencies between observers (introduced in a preparatory commit).
Add a corresponding testcase that involves a test library with an autoloaded
Python script and a handler for the Python 'new_objfile' event.
(The real world use case where I came across this issue was in an attempt
to extend handling for GDB pretty printers for dynamically loaded
objfiles in the Qt Creator IDE; see [1] and [2] for more background.)
[1] https://bugreports.qt.io/browse/QTCREATORBUG-25339
[2] https://codereview.qt-project.org/c/qt-creator/qt-creator/+/333857/1
Tested on x86_64-linux (Debian testing).
gdb/ChangeLog:
* gdb/auto-load.c (_initialize_auto_load): Specify token
when attaching the 'auto_load_new_objfile' observer, so
other observers can specify it as a dependency.
* gdb/auto-load.h (struct token): Declare
'auto_load_new_objfile_observer_token' as token to be used
for the 'auto_load_new_objfile' observer.
* gdb/python/py-inferior.c (gdbpy_initialize_inferior): Make
'python_new_objfile' observer depend on 'auto_load_new_objfile'
observer, so it gets notified after the latter.
gdb/testsuite/ChangeLog:
* gdb.python/libpy-autoloaded-pretty-printers-in-newobjfile-event.so-gdb.py: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.h: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-main.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.exp: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.py: New test.
Change-Id: I8275b3f4c3bec32e56dd7892f9a59d89544edf89
We don't want to execute this test if Python support is not compiled in
GDB; add the necessary check.
gdb/testsuite/ChangeLog:
* gdb.python/flexible-array-member.exp: Add check for Python
support.
Change-Id: I853b937d2a193a0bb216566bef1a35354264b1c5
As reported in bug 27757, we get an internal error when doing:
$ cat test.c
struct foo {
int len;
int items[];
};
struct foo *p;
int main() {
return 0;
}
$ gcc test.c -g -O0 -o test
$ ./gdb -q -nx --data-directory=data-directory ./test -ex 'python gdb.parse_and_eval("p").type.target()["items"].type.range()'
Reading symbols from ./test...
/home/simark/src/binutils-gdb/gdb/gdbtypes.h:435: internal-error: LONGEST dynamic_prop::const_val() const: Assertion `m_kind == PROP_CONST' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n)
This is because the Python code (typy_range) blindly reads the high
bound of the type of `items` as a constant value. Since it is a
flexible array member, it has no high bound, so the property is undefined.
Since commit 8c2e4e0689 ("gdb: add accessors to struct dynamic_prop"),
the getters check that you are not getting a property value of the wrong
kind, so this causes a failed assertion.
Fix it by checking if the property is indeed a constant value before
accessing it as such. Otherwise, use 0. This restores the previous GDB
behavior: because the structure was zero-initialized, this is what was
returned before. But now this behavior is explicit and not accidental.
Add a test, gdb.python/flexible-array-member.exp, that is derived from
gdb.base/flexible-array-member.exp. It tests the same things, but
through the Python API. It also specifically tests getting the range
from the various kinds of flexible array member types (AFAIK it wasn't
possible to do the equivalent through the CLI).
gdb/ChangeLog:
PR gdb/27757
* python/py-type.c (typy_range): Check that bounds are constant
before accessing them as such.
* guile/scm-type.c (gdbscm_type_range): Likewise.
gdb/testsuite/ChangeLog:
PR gdb/27757
* gdb.python/flexible-array-member.c: New test.
* gdb.python/flexible-array-member.exp: New test.
* gdb.guile/scm-type.exp (test_range): Add test for flexible
array member.
* gdb.guile/scm-type.c (struct flex_member): New.
(main): Use it.
Change-Id: Ibef92ee5fd871ecb7c791db2a788f203dff2b841
Give a test a proper name in order to avoid including a path in the
test name.
gdb/testsuite/ChangeLog:
* gdb.python/py-parameter.exp: Give a test a proper name to avoid
including a path in the test name.
It was reported on IRC that using gdb.parameter('data-directory')
doesn't work correctly.
The problem is that the data directory is stored in 'gdb_datadir',
however the set/show command is associated with a temporary
'staged_gdb_datadir'.
When the user does 'set data-directory VALUE', the VALUE is stored in
'staged_gdb_datadir' by GDB, then set_gdb_datadir is called. This in
turn calls set_gdb_data_directory to copy the value from
staged_gdb_datadir into gdb_datadir.
However, set_gdb_data_directory will resolve relative paths, so the
value stored in gdb_datadir might not match the value in
staged_gdb_datadir.
The Python gdb.parameter API fetches the parameter values by accessing
the variable associated with the show command, so in this case
staged_gdb_datadir. This causes two problems:
1. Initially staged_gdb_datadir is NULL, and remains as such until the
user does 'set data-directory VALUE' (which might never happen), but
gdb_datadir starts with GDB's default data-directory value. So
initially from Python gdb.parameter('data-directory') will return the
empty string, even though at GDB's CLI 'show data-directory' prints a
real path.
2. If the user does 'set data-directory ./some/relative/path', GDB
will resolve the relative path, thus, 'show data-directory' at the CLI
will print an absolute path. However, the value is staged_gdb_datadir
will still be the relative path, and gdb.parameter('data-directory')
from Python will return the relative path.
In this commit I fix both of these issues by:
1. Initialising the value in staged_gdb_datadir based on the initial
value in gdb_datadir, and
2. In set_gdb_datadir, after calling set_gdb_data_directory, I copy
the value in gdb_datadir back into staged_gdb_datadir.
With these two changes in place the value in staged_gdb_datadir should
always match the value in gdb_datadir, and accessing data-directory
from Python should now work correctly.
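With the fix in place, the value reported at the CLI and the value seen
from Python agree, e.g.:

  (gdb) show data-directory
  (gdb) python print(gdb.parameter('data-directory'))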
gdb/ChangeLog:
* top.c (staged_gdb_datadir): Update comment.
(set_gdb_datadir): Copy the value of gdb_datadir back into
staged_gdb_datadir.
(init_main): Initialise staged_gdb_datadir.
gdb/testsuite/ChangeLog:
* gdb.python/py-parameter.exp: Add test for reading data-directory
using gdb.parameter API.
This commit adds a couple of tests to the python pretty printer
testing.
I've added a test for the 'array' display hint. This display hint is
tested by gdb.python/py-mi.exp, however, the MI testing is done via
the varobj interface, and this code makes its own direct calls to the
Python pretty printers from gdb/varobj.c. What this means is that the
interface to the pretty printers in gdb/python/py-prettyprint.c is not
tested for the 'array' display hint path.
I also added a test for what happens when the display_hint method
raises an exception. There wasn't a bug that inspired this test, just
while adding the previous test I thought, I wonder what happens if...
The current behaviour of GDB seems reasonable: GDB displays the Python
exception, and then continues printing the value as if display_hint
had returned None. I added a test to lock in this behaviour.
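The shape of printer being exercised is roughly as follows (illustrative;
the real printer lives in py-prettyprint.py):

  class ContainerPrinter:
      def __init__(self, val):
          self.val = val

      def display_hint(self):
          # returning 'array' here is the newly tested path; the other new
          # test has this method raise an exception instead
          return "array"

      def to_string(self):
          return "container"

      def children(self):
          for i in range(2):
              yield str(i), i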
gdb/testsuite/ChangeLog:
* gdb.python/py-prettyprint.c (struct container): Add 'is_array_p'
member.
(make_container): Initialise is_array_p.
* gdb.python/py-prettyprint.exp: Add new tests.
* gdb.python/py-prettyprint.py (ContainerPrinter.display_hint):
Check is_array_p and possibly return 'array'.
This commit:
commit d1cab9876d
Date: Tue Sep 15 11:08:56 2020 -0600
Don't use gdb_py_long_from_ulongest
Introduced a regression when GDB is compiled with Python 2. The frame
filter API expects the gdb.FrameDecorator.function () method to return
either a string (the name of a function) or an address, which GDB then
uses to lookup a msymbol.
If the address returned from gdb.FrameDecorator.function () comes from
gdb.Frame.pc () then before the above commit we would always expect to
see a PyLong object.
After the above commit we might (on Python 2) get a PyInt object.
The GDB code does not expect to see a PyInt, and only checks for a
PyLong, we then see an error message like:
RuntimeError: FrameDecorator.function: expecting a String, integer or None.
This commit just adds an additional call to PyInt_Check which handles
the missing case.
I had already written a test case to cover this issue before spotting
that the gdb.python/py-framefilter.exp test also triggers this
failure. As the new test case is slightly different I have kept it
in.
The new test forces the behaviour of gdb.FrameDecorator.function
returning an address. The reason the existing test case hits this is
due to the behaviour of the builtin gdb.FrameDecorator base class. If
the base class behaviour ever changed then the return an address case
would only be tested by the new test case.
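A decorator that forces the address-returning behaviour looks roughly
like this (illustrative only):

  from gdb.FrameDecorator import FrameDecorator

  class AddrDecorator(FrameDecorator):
      def function(self):
          # return the frame's PC as an integer; GDB then looks up the
          # msymbol for this address to obtain the function name
          return int(self.inferior_frame().pc())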
gdb/ChangeLog:
* python/py-framefilter.c (py_print_frame): Use PyInt_Check as
well as PyLong_Check for Python 2.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-addr.c: New file.
* gdb.python/py-framefilter-addr.exp: New file.
* gdb.python/py-framefilter-addr.py: New file.
The current mechanism by which the Python gdb.current_objfile is
maintained does not allow for nested auto-load events. It is assumed
that once an auto-load script has finished loading then the current
objfile should be set back to NULL. In a nested situation, we should
be restoring the previous value.
We already have an RAII class to handle save/restore type behaviour,
so let's just switch to using that.
The test is a little contrived, but is simple enough, and triggers the
bug. The real use case might involve the auto-load script calling
functions (either in the just-loaded object file, or in the main
executable), which in turn trigger further auto-loads to occur.
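The shape of the problem, in terms of the auto-loaded scripts (file
names here are illustrative, modelled on the new test files):

  # f1.o-gdb.py -- auto-loaded when the f1 objfile is loaded
  print("f1 script sees", gdb.current_objfile().filename)
  # something here causes f2 to be loaded, which runs f2.o-gdb.py nested;
  # before this fix, returning from that nested auto-load left
  # gdb.current_objfile() as None instead of restoring f1's objfile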
gdb/ChangeLog:
* python/python.c (gdbpy_source_objfile_script): Use
make_scoped_restore to restore gdbpy_current_objfile.
(gdbpy_execute_objfile_script): Likewise.
gdb/testsuite/ChangeLog:
* gdb.python/py-auto-load-chaining-f1.c: New file.
* gdb.python/py-auto-load-chaining-f1.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining-f2.c: New file.
* gdb.python/py-auto-load-chaining-f2.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining.c: New file.
* gdb.python/py-auto-load-chaining.exp: New file.
This commit resolves the remaining duplicate test names in the
gdb.python/ directory, there's 1 duplicate per test script. In each
case I have just extended some test names to make them more
descriptive.
gdb/testsuite/ChangeLog:
* gdb.python/py-bad-printers.exp: Extend test names to make them
unique.
* gdb.python/py-events.exp: Likewise.
* gdb.python/py-finish-breakpoint2.exp: Likewise.
* gdb.python/py-frame-inline.exp: Likewise.
* gdb.python/py-frame.exp: Likewise.
* gdb.python/py-infthread.exp: Likewise.
While squashing duplicate test names I spotted an actual duplicate
test; I suspect a copy & paste error in an earlier patch. I can see
no reason why we should need to duplicate this test, so I'm removing
one copy of it.
gdb/testsuite/ChangeLog:
* gdb.python/py-value-cc.exp: Remove a duplicate test.
While squashing duplicate test names I spotted what looked like a copy
& paste error. During this test a Python variable is created, and
then we call the type method on that variable. In one case we create
a variable and then call the type method on a variable created for a
previous test. I can see no reason why this should be what we want,
it doesn't line up with the comments in the test script, so I've
updated the test. Note, the expected result doesn't change, just the
command issued (the test relates to stripping typedefs).
gdb/testsuite/ChangeLog:
* gdb.python/lib-types.exp: Update the test to check the correct
python variable.
Add additional text to some test names to make them unique. In one
case, correct the test name (copy & paste error) to make it correctly
reflect what the test is doing.
gdb/testsuite/ChangeLog:
* gdb.python/py-explore-cc.exp: Extend test names to make them
unique.
I spotted a duplicate test name in this test script. Turns out it's
an actual duplicate test. Delete one copy of this test.
gdb/testsuite/ChangeLog:
* gdb.python/py-lookup-type.exp: Remove duplicate test.
Extend the test names with additional text to make them unique.
gdb/testsuite/ChangeLog:
* gdb.python/py-pp-maint.exp: Extend test names to make them
unique.
Add a with_test_prefix to make test names unique.
gdb/testsuite/ChangeLog:
* gdb.python/py-explore.exp: Add with_test_prefix to make test
names unique.
Make test names unique by just adding additional text to the test
names. As this is a Python test that repeatedly imports the Python
script I've just numbered the test names in this case rather than
trying to come up with anything better, hence we have:
import python scripts, 1
import python scripts, 2
...
import python scripts, 6
Not great, but hopefully good enough. Everything else has a slightly
more descriptive test name.
gdb/testsuite/ChangeLog:
* gdb.python/py-finish-breakpoint.exp: Make test names unique.
Wrap some code in `with_test_prefix` to make test names unique.
gdb/testsuite/ChangeLog:
* gdb.python/py-strfns.exp: Use with_test_prefix to make test
names unique.
Make use of `proc_with_prefix` for every test_* proc in order to make
the test names unique within this test file.
gdb/testsuite/ChangeLog:
* gdb.python/py-format-string.exp: Use proc_with_prefix to make
test names unique.
If the user implements a TUI window in Python, and this window
responds to GDB events and then redraws its window contents, then there
is currently an edge case which can lead to problems.
The Python API documentation suggests that calling methods like erase
or write on a TUI window (from Python code) will raise an exception if
the window is not valid.
And the description for is_valid says:
This method returns True when this window is valid. When the user
changes the TUI layout, windows no longer visible in the new layout
will be destroyed. At this point, the gdb.TuiWindow will no longer
be valid, and methods (and attributes) other than is_valid will
throw an exception.
From this I, as a user, would expect that if I did 'tui disable' to
switch back to CLI mode, then the window would no longer be valid.
However, this is not the case.
When the TUI is disabled the windows in the TUI are not deleted, they
are simply hidden. As such, currently, the is_valid method continues
to return true.
This means that if the user's Python code does something like:
def event_handler (e):
    global tui_window_object
    if tui_window_object.is_valid ():
        tui_window_object.erase ()
        tui_window_object.write ("Hello World")

gdb.events.stop.connect (event_handler)
Then when a stop event arrives GDB will try to draw the TUI window,
even when the TUI is disabled.
This exposes two bugs. First, is_valid should be returning false in
this case; second, if the user forgot to add the is_valid call, then I
believe the erase and write calls should be throwing an
exception (when the TUI is disabled).
The solution to both of these issues is I think bound together, as it
depends on having a working 'is_valid' check.
There's a rogue assert added into tui-layout.c as part of this
commit. While working on this commit I managed to break GDB such that
TUI_CMD_WIN was nullptr, which was causing GDB to abort. I'm leaving
the assert in as it might help people catch issues in the future.
This patch is inspired by the work done here:
https://sourceware.org/pipermail/gdb-patches/2020-December/174338.html
gdb/ChangeLog:
* python/py-tui.c (gdbpy_tui_window) <is_valid>: New member
function.
(REQUIRE_WINDOW): Call is_valid member function.
(REQUIRE_WINDOW_FOR_SETTER): New define.
(gdbpy_tui_is_valid): Call is_valid member function.
(gdbpy_tui_set_title): Call REQUIRE_WINDOW_FOR_SETTER instead.
* tui/tui-data.h (struct tui_win_info) <is_visible>: Check
tui_active too.
* tui/tui-layout.c (tui_apply_current_layout): Add an assert.
* tui/tui.c (tui_enable): Move setting of tui_active earlier in
the function.
gdb/doc/ChangeLog:
* python.texinfo (TUI Windows In Python): Extend description of
TuiWindow.is_valid.
gdb/testsuite/ChangeLog:
* gdb.python/tui-window-disabled.c: New file.
* gdb.python/tui-window-disabled.exp: New file.
* gdb.python/tui-window-disabled.py: New file.