Now that both Python and Guile are fully initialized from their
respective finish_initialization methods, the "finish" in the method
name doesn't really make sense; initialization starts _and_ finishes
with that method.
As such, this commit renames finish_initialization to just initialize.
There should be no user visible changes after this commit.
gdb/ChangeLog:
* extension-priv.h (struct extension_language_ops): Rename
'finish_initialization' to 'initialize'.
* extension.c (finish_ext_lang_initialization): Renamed to...
(ext_lang_initialization): ...this, update comment, and update
the calls to reflect the change in struct extension_language_ops.
* extension.h (finish_ext_lang_initialization): Renamed to...
(ext_lang_initialization): ...this.
* guile/guile.c (gdbscm_finish_initialization): Renamed to...
(gdbscm_initialize): ...this, update comment at definition.
(guile_extension_ops): Update.
* main.c (captured_main_1): Update call to
finish_ext_lang_initialization.
* python/python.c (gdbpy_finish_initialization): Rename to...
(gdbpy_initialize): ...this, update comment at definition, and
update call to do_finish_initialization.
(python_extension_ops): Update.
(do_finish_initialization): Rename to...
(do_initialize): ...this, and update comment.
Delay Python initialisation until gdbpy_finish_initialization.
This is mostly about splitting the existing gdbpy_initialize_*
functions in two: all the calls to register_objfile_data_with_cleanup,
gdbarch_data_register_post_init, etc. are moved into new _initialize_*
functions, while everything else is left in the gdbpy_initialize_*
functions.
Then the call to do_start_initialization (in python/python.c) is moved
from the _initialize_python function into gdbpy_finish_initialization.
There should be no user visible changes after this commit.
gdb/ChangeLog:
* python/py-arch.c (_initialize_py_arch): New function.
(gdbpy_initialize_arch): Move code to _initialize_py_arch.
* python/py-block.c (_initialize_py_block): New function.
(gdbpy_initialize_blocks): Move code to _initialize_py_block.
* python/py-inferior.c (_initialize_py_inferior): New function.
(gdbpy_initialize_inferior): Move code to _initialize_py_inferior.
* python/py-objfile.c (_initialize_py_objfile): New function.
(gdbpy_initialize_objfile): Move code to _initialize_py_objfile.
* python/py-progspace.c (_initialize_py_progspace): New function.
(gdbpy_initialize_pspace): Move code to _initialize_py_progspace.
* python/py-registers.c (_initialize_py_registers): New function.
(gdbpy_initialize_registers): Move code to
_initialize_py_registers.
* python/py-symbol.c (_initialize_py_symbol): New function.
(gdbpy_initialize_symbols): Move code to _initialize_py_symbol.
* python/py-symtab.c (_initialize_py_symtab): New function.
(gdbpy_initialize_symtabs): Move code to _initialize_py_symtab.
* python/py-type.c (_initialize_py_type): New function.
(gdbpy_initialize_types): Move code to _initialize_py_type.
* python/py-unwind.c (_initialize_py_unwind): New function.
(gdbpy_initialize_unwind): Move code to _initialize_py_unwind.
* python/python.c (_initialize_python): Move call to
do_start_initialization to gdbpy_finish_initialization.
(gdbpy_finish_initialization): Add call to
do_start_initialization.
Without any explicit dependencies specified, the observers attached
to the 'gdb::observers::new_objfile' observable are always notified
in the order in which they have been attached.
The new_objfile observer callback to auto-load scripts is attached in
'_initialize_auto_load'.
The new_objfile observer callback that propagates the new_objfile event
to the Python side is attached in 'gdbpy_initialize_inferior', which is
called via '_initialize_python'.
With '_initialize_python' happening before '_initialize_auto_load',
the consequence was that the new_objfile event was emitted on the Python
side before autoloaded scripts had been executed when a new objfile was
loaded.
As a result, trying to access the objfile's pretty printers (defined in
the autoloaded script) from a handler for the Python-side
'new_objfile' event would fail. Those would only be initialized later on
(when the 'auto_load_new_objfile' callback was called).
To make sure that the objfile passed to the Python event handler
is properly initialized (including its 'pretty_printers' member),
make sure that the 'auto_load_new_objfile' observer is notified
before the 'python_new_objfile' one that propagates the event
to the Python side.
To do this, make use of the mechanism to explicitly specify
dependencies between observers (introduced in a preparatory commit).
Add a corresponding testcase that involves a test library with an autoloaded
Python script and a handler for the Python 'new_objfile' event.
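For illustration, a handler along these lines (a minimal sketch, not
the actual test files; the handler name is made up) can now rely on the
auto-loaded script having run first:

import gdb

def on_new_objfile (event):
    # With 'auto_load_new_objfile' notified first, the objfile's
    # auto-loaded script has already run, so its pretty printers are
    # registered by the time we get here.
    count = len (event.new_objfile.pretty_printers)
    gdb.write ("%d pretty printer(s) registered\n" % count)

gdb.events.new_objfile.connect (on_new_objfile)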
(The real-world use case where I came across this issue was an attempt
to extend handling of GDB pretty printers for dynamically loaded
objfiles in the Qt Creator IDE; see [1] and [2] for more background.)
[1] https://bugreports.qt.io/browse/QTCREATORBUG-25339
[2] https://codereview.qt-project.org/c/qt-creator/qt-creator/+/333857/1
Tested on x86_64-linux (Debian testing).
gdb/ChangeLog:
* auto-load.c (_initialize_auto_load): Specify a token
when attaching the 'auto_load_new_objfile' observer, so
other observers can specify it as a dependency.
* auto-load.h (struct token): Declare
'auto_load_new_objfile_observer_token' as the token to be used
for the 'auto_load_new_objfile' observer.
* python/py-inferior.c (gdbpy_initialize_inferior): Make
'python_new_objfile' observer depend on 'auto_load_new_objfile'
observer, so it gets notified after the latter.
gdb/testsuite/ChangeLog:
* gdb.python/libpy-autoloaded-pretty-printers-in-newobjfile-event.so-gdb.py: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-lib.h: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event-main.cc: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.exp: New test.
* gdb.python/py-autoloaded-pretty-printers-in-newobjfile-event.py: New test.
Change-Id: I8275b3f4c3bec32e56dd7892f9a59d89544edf89
Give a name to each observer; this will help produce more meaningful
debug messages.
gdbsupport/ChangeLog:
* observable.h (class observable) <struct observer> <observer>:
Add name parameter.
<name>: New field.
<attach>: Add name parameter, update all callers.
Change-Id: Ie0cc4664925215b8d2b09e026011b7803549fba0
As reported in bug 27757, we get an internal error when doing:
$ cat test.c
struct foo {
  int len;
  int items[];
};
struct foo *p;
int main() {
  return 0;
}
$ gcc test.c -g -O0 -o test
$ ./gdb -q -nx --data-directory=data-directory ./test -ex 'python gdb.parse_and_eval("p").type.target()["items"].type.range()'
Reading symbols from ./test...
/home/simark/src/binutils-gdb/gdb/gdbtypes.h:435: internal-error: LONGEST dynamic_prop::const_val() const: Assertion `m_kind == PROP_CONST' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n)
This is because the Python code (typy_range) blindly reads the high
bound of the type of `items` as a constant value. Since it is a
flexible array member, it has no high bound; the property is undefined.
Since commit 8c2e4e0689 ("gdb: add accessors to struct dynamic_prop"),
the getters check that you are not getting a property value of the wrong
kind, so this causes a failed assertion.
Fix it by checking if the property is indeed a constant value before
accessing it as such. Otherwise, use 0. This restores the previous GDB
behavior: because the structure was zero-initialized, this is what was
returned before. But now this behavior is explicit and not accidental.
Add a test, gdb.python/flexible-array-member.exp, that is derived from
gdb.base/flexible-array-member.exp. It tests the same things, but
through the Python API. It also specifically tests getting the range
from the various kinds of flexible array member types (AFAIK it wasn't
possible to do the equivalent through the CLI).
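For reference, the crashing expression from above now behaves like this
(a sketch of what the new test checks):

# 'p' is the 'struct foo *' from the reproducer above.
items_type = gdb.parse_and_eval ("p").type.target ()["items"].type
low, high = items_type.range ()
# The high bound of a flexible array member is not a constant, so
# after this fix range () reports 0 for it instead of triggering the
# internal error.
print (low, high)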
gdb/ChangeLog:
PR gdb/27757
* python/py-type.c (typy_range): Check that bounds are constant
before accessing them as such.
* guile/scm-type.c (gdbscm_type_range): Likewise.
gdb/testsuite/ChangeLog:
PR gdb/27757
* gdb.python/flexible-array-member.c: New test.
* gdb.python/flexible-array-member.exp: New test.
* gdb.guile/scm-type.exp (test_range): Add test for flexible
array member.
* gdb.guile/scm-type.c (struct flex_member): New.
(main): Use it.
Change-Id: Ibef92ee5fd871ecb7c791db2a788f203dff2b841
The 'create_breakpoint' function takes a 'parse_extra' argument that
determines whether the condition, thread, and force-condition
specifiers should be parsed from the extra string or be used from the
function arguments. However, for the case when 'parse_extra' is
false, there is no way to pass the force-condition specifier. This
patch adds it as a new argument.
Also, in the case when parse_extra is false, the current behavior is
as if the condition is being forced. This is a bug. The default
behavior should reject the breakpoint. See below for a demo of this
incorrect behavior. (The MI command '-break-insert' uses the
'create_breakpoint' function with parse_extra=0.)
$ gdb -q --interpreter=mi3 /tmp/simple
=thread-group-added,id="i1"
=cmd-param-changed,param="history save",value="on"
=cmd-param-changed,param="auto-load safe-path",value="/"
~"Reading symbols from /tmp/simple...\n"
(gdb)
-break-insert -c junk -f main
&"warning: failed to validate condition at location 1, disabling:\n "
&"No symbol \"junk\" in current context.\n"
^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="<MULTIPLE>",cond="junk",times="0",original-location="main",locations=[{number="1.1",enabled="N",addr="0x000000000000114e",func="main",file="/tmp/simple.c",fullname="/tmp/simple.c",line="2",thread-groups=["i1"]}]}
(gdb)
break main if junk
&"break main if junk\n"
&"No symbol \"junk\" in current context.\n"
^error,msg="No symbol \"junk\" in current context."
(gdb)
break main -force-condition if junk
&"break main -force-condition if junk\n"
~"Note: breakpoint 1 also set at pc 0x114e.\n"
&"warning: failed to validate condition at location 1, disabling:\n "
&"No symbol \"junk\" in current context.\n"
~"Breakpoint 2 at 0x114e: file /tmp/simple.c, line 2.\n"
=breakpoint-created,bkpt={number="2",type="breakpoint",disp="keep",enabled="y",addr="<MULTIPLE>",cond="junk",times="0",original-location="main",locations=[{number="2.1",enabled="N",addr="0x000000000000114e",func="main",file="/tmp/simple.c",fullname="/tmp/simple.c",line="2",thread-groups=["i1"]}]}
^done
(gdb)
After applying this patch, we get the behavior below:
(gdb)
-break-insert -c junk -f main
^error,msg="No symbol \"junk\" in current context."
This restores the behavior that is present in the existing releases.
gdb/ChangeLog:
2021-04-21 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* breakpoint.h (create_breakpoint): Add a new parameter,
'force_condition'.
* breakpoint.c (create_breakpoint): Use the 'force_condition'
argument when 'parse_extra' is false to check if the condition
is invalid at all of the breakpoint locations.
Update the users below.
(break_command_1)
(dprintf_command)
(trace_command)
(ftrace_command)
(strace_command)
(create_tracepoint_from_upload): Update.
* guile/scm-breakpoint.c (gdbscm_register_breakpoint_x): Update.
* mi/mi-cmd-break.c (mi_cmd_break_insert_1): Update.
* python/py-breakpoint.c (bppy_init): Update.
* python/py-finishbreakpoint.c (bpfinishpy_init): Update.
gdb/testsuite/ChangeLog:
2021-04-21 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.mi/mi-break.exp: Extend with checks for invalid breakpoint
conditions.
Python 3.4 has deprecated the imp module in favour of importlib. This
patch avoids the DeprecationWarning. This warning is visible to users
whose libpython.so has been compiled with --with-pydebug.
Considering that even python 3.5 has reached end of life, would it be
better to just use importlib and drop support for python 3.0 to 3.3?
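The change boils down to a version-gated import along these lines (a
sketch of the idea rather than the exact hunk):

import sys

if sys.version_info >= (3, 4):
    # importlib.reload replaces the deprecated imp.reload on 3.4+.
    from importlib import reload
elif sys.version_info[0] > 2:
    from imp import reload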
2021-02-28 Boris Staletic <boris.staletic@gmail.com>
* gdb/python/lib/gdb/__init__.py: Use importlib on python 3.4+
to avoid deprecation warnings.
The small example for gdb.Parameter.get_set_string does not return a
string. The documentation is very clear that this method must return
a string, and indeed, inspecting the code in gdb/python/py-param.c
shows that a string return value is required (if an exception is not
thrown).
While inspecting the code in gdb/python/py-param.c I noticed that the
comment for the C++ code that invokes the Python get_set_string method
is wrong, so I updated that too.
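The corrected documentation example amounts to something like the
following (a sketch; the parameter name is made up):

class ExampleParam (gdb.Parameter):
    def __init__ (self):
        super (ExampleParam, self).__init__ ("example-param",
                                             gdb.COMMAND_DATA,
                                             gdb.PARAM_BOOLEAN)
        self.value = True

    def get_set_string (self):
        # Must return a string; an empty string means nothing is
        # printed when the user runs 'set example-param'.
        return ""

ExampleParam ()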
gdb/ChangeLog:
* python/py-param.c (get_set_value): Update header comment.
gdb/doc/ChangeLog:
* python.texinfo (Parameters In Python): Return empty string in
small example code.
This commit:
commit d1cab9876d
Date: Tue Sep 15 11:08:56 2020 -0600
Don't use gdb_py_long_from_ulongest
Introduced a regression when GDB is compiled with Python 2. The frame
filter API expects the gdb.FrameDecorator.function () method to return
either a string (the name of a function) or an address, which GDB then
uses to look up a msymbol.
If the address returned from gdb.FrameDecorator.function () comes from
gdb.Frame.pc () then before the above commit we would always expect to
see a PyLong object.
After the above commit we might (on Python 2) get a PyInt object.
The GDB code does not expect to see a PyInt, and only checks for a
PyLong, so we then see an error message like:
RuntimeError: FrameDecorator.function: expecting a String, integer or None.
This commit just adds an additional call to PyInt_Check which handles
the missing case.
I had already written a test case to cover this issue before spotting
that the gdb.python/py-framefilter.exp test also triggers this
failure. As the new test case is slightly different I have kept it
in.
The new test forces the behaviour of gdb.FrameDecorator.function
returning an address. The reason the existing test case hits this is
due to the behaviour of the builtin gdb.FrameDecorator base class. If
the base class behaviour ever changed then the return an address case
would only be tested by the new test case.
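The new test's frame decorator is essentially this (a sketch; the
class name is illustrative):

from gdb.FrameDecorator import FrameDecorator

class AddressDecorator (FrameDecorator):
    def function (self):
        # Return the frame's address rather than a name; GDB then
        # looks up the msymbol for it.  On Python 2 this value can
        # arrive as a PyInt rather than a PyLong.
        return self.inferior_frame ().pc ()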
gdb/ChangeLog:
* python/py-framefilter.c (py_print_frame): Use PyInt_Check as
well as PyLong_Check for Python 2.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter-addr.c: New file.
* gdb.python/py-framefilter-addr.exp: New file.
* gdb.python/py-framefilter-addr.py: New file.
The current mechanism by which the Python gdb.current_objfile is
maintained does not allow for nested auto-load events. It is assumed
that once an auto-load script has finished loading, the current
objfile should be set back to NULL. In a nested situation, we should
be restoring the previous value.
We already have an RAII class to handle save/restore type behaviour,
so let's just switch to using that.
The test is a little contrived, but is simple enough, and triggers the
bug. The real use case might involve the auto-load script calling
functions (either in the just-loaded object file, or in the main
executable), which in turn trigger further auto-loads to occur.
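An auto-loaded objfile script of this shape (a sketch, not the actual
test scripts; the file name is hypothetical) now sees the correct
current objfile even if it triggers another auto-load:

# foo.o-gdb.py
objfile = gdb.current_objfile ()
gdb.write ("auto-loading for %s\n" % objfile.filename)
# If anything done here causes another objfile (and its own -gdb.py
# script) to be loaded, gdb.current_objfile () is restored to this
# objfile once that nested load finishes.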
gdb/ChangeLog:
* python/python.c (gdbpy_source_objfile_script): Use
make_scoped_restore to restore gdbpy_current_objfile.
(gdbpy_execute_objfile_script): Likewise.
gdb/testsuite/ChangeLog:
* gdb.python/py-auto-load-chaining-f1.c: New file.
* gdb.python/py-auto-load-chaining-f1.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining-f2.c: New file.
* gdb.python/py-auto-load-chaining-f2.o-gdb.py: New file.
* gdb.python/py-auto-load-chaining.c: New file.
* gdb.python/py-auto-load-chaining.exp: New file.
If the user implements a TUI window in Python, and this window
responds to GDB events and then redraws its window contents, then there
is currently an edge case which can lead to problems.
The Python API documentation suggests that calling methods like erase
or write on a TUI window (from Python code) will raise an exception if
the window is not valid.
And the description for is_valid says:
This method returns True when this window is valid. When the user
changes the TUI layout, windows no longer visible in the new layout
will be destroyed. At this point, the gdb.TuiWindow will no longer
be valid, and methods (and attributes) other than is_valid will
throw an exception.
From this I, as a user, would expect that if I did 'tui disable' to
switch back to CLI mode, then the window would no longer be valid.
However, this is not the case.
When the TUI is disabled the windows in the TUI are not deleted; they
are simply hidden. As such, the is_valid method currently continues
to return true.
This means that if the user's Python code does something like:

def event_handler (e):
  global tui_window_object
  if tui_window_object.is_valid ():
    tui_window_object.erase ()
    tui_window_object.write ("Hello World")

gdb.events.stop.connect (event_handler)
Then when a stop event arrives GDB will try to draw the TUI window,
even when the TUI is disabled.
This exposes two bugs. First, is_valid should be returning false in
this case. Second, if the user forgot to add the is_valid call, then I
believe the erase and write calls should be throwing an
exception (when the TUI is disabled).
The solution to both of these issues is, I think, bound together, as it
depends on having a working 'is_valid' check.
There's a rogue assert added into tui-layout.c as part of this
commit. While working on this commit I managed to break GDB such that
TUI_CMD_WIN was nullptr, which was causing GDB to abort. I'm leaving
the assert in, as it might help people catch issues in the future.
This patch is inspired by the work done here:
https://sourceware.org/pipermail/gdb-patches/2020-December/174338.html
gdb/ChangeLog:
* python/py-tui.c (gdbpy_tui_window) <is_valid>: New member
function.
(REQUIRE_WINDOW): Call is_valid member function.
(REQUIRE_WINDOW_FOR_SETTER): New define.
(gdbpy_tui_is_valid): Call is_valid member function.
(gdbpy_tui_set_title): Call REQUIRE_WINDOW_FOR_SETTER instead.
* tui/tui-data.h (struct tui_win_info) <is_visible>: Check
tui_active too.
* tui/tui-layout.c (tui_apply_current_layout): Add an assert.
* tui/tui.c (tui_enable): Move setting of tui_active earlier in
the function.
gdb/doc/ChangeLog:
* python.texinfo (TUI Windows In Python): Extend description of
TuiWindow.is_valid.
gdb/testsuite/ChangeLog:
* gdb.python/tui-window-disabled.c: New file.
* gdb.python/tui-window-disabled.exp: New file.
* gdb.python/tui-window-disabled.py: New file.
There's a bug in the Python TUI API. If the user tries to delete the
window title attribute, then this will trigger undefined behaviour in
GDB due to a missing nullptr check.
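In user terms the problematic operation is simply deleting the
attribute, e.g. (sketch, where 'win' is a gdb.TuiWindow):

win.title = "Test"
del win.title   # previously hit the missing nullptr check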
gdb/ChangeLog:
* python/py-tui.c (gdbpy_tui_set_title): Check that the new value
for the title is not nullptr.
gdb/testsuite/ChangeLog:
* gdb.python/tui-window.exp: Add new tests.
* gdb.python/tui-window.py (TestWindow) <__init__>: Store
TestWindow object into global the_window.
<remove_title>: New method.
(delete_window_title): New function.
While working on another patch I noticed an oddly formatted error
message in the Python code.
When 'set python print-stack message' is in effect then consider this
Python script:
class TestCommand (gdb.Command):
  def __init__ (self):
    gdb.Command.__init__ (self, "test-cmd", gdb.COMMAND_DATA)

  def invoke(self, args, from_tty):
    raise RuntimeError ("bad")

TestCommand ()
And this GDB session:
(gdb) source path/to/python/script.py
(gdb) test-cmd
Python Exception <class 'RuntimeError'> bad:
Error occurred in Python: bad
The line 'Python Exception <class 'RuntimeError'> bad:' doesn't look
terrible in this situation; the colon at the end of the first line
makes sense given the second line.
However, there are places in GDB where there is no second line
printed, for example consider this python script:
def stop_listener (e):
  raise RuntimeError ("bad")

gdb.events.stop.connect (stop_listener)
Then this GDB session:
(gdb) file helloworld.exe
(gdb) start
Temporary breakpoint 1 at 0x40112a: file hello.c, line 6.
Starting program: helloworld.exe
Temporary breakpoint 1, main () at hello.c:6
6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'> bad:
(gdb) si
0x000000000040112f 6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'> bad:
In this case there is no auxiliary information displayed after the
warning, and the line ending in the colon looks weird to me.
A quick survey of the code seems to indicate that it is not uncommon
for there to be no auxiliary information line printed; it's not just
the one case I found above.
I propose that the line that currently looks like this:
Python Exception <class 'RuntimeError'> bad:
Be reformatted like this:
Python Exception <class 'RuntimeError'>: bad
I think this looks fine in either situation. The first now looks
like this:
(gdb) test-cmd
Python Exception <class 'RuntimeError'>: bad
Error occurred in Python: bad
And the second like this:
(gdb) si
0x000000000040112f 6 printf ("Hello World\n");
Python Exception <class 'RuntimeError'>: bad
There are just two tests that needed updating. Errors are checked for
in many more tests, but most of the time the pattern doesn't care
about the colon.
gdb/ChangeLog:
* python/python.c (gdbpy_print_stack): Reformat an error message.
gdb/testsuite/ChangeLog:
* gdb.python/py-framefilter.exp: Update expected results.
* gdb.python/python.exp: Update expected results.
The last frame in a corrupt stack stores the frame_id of the next frame,
so these two frames currently compare as equal.
So if you have a backtrace where the oldest frame is corrupt, this happens:
(gdb) py
>f = gdb.selected_frame()
>while f.older():
> f = f.older()
>print(f == f.newer())
>end
True
With this change, that same example returns False.
gdb/ChangeLog:
2021-02-07 Hannes Domani <ssbssa@yahoo.de>
* python/py-frame.c (frapy_richcompare): Compare frame_id_is_next.
I think this makes the names of the methods clearer, especially for the
arch. The type::arch method (which gets the arch owner, or NULL if the
type is not arch owned) is easily confused with the get_type_arch method
(which returns an arch no matter what). The name "arch_owner" will make
it intuitive that the method returns NULL if the type is not arch-owned.
Also, this frees the type::arch name, so we will be able to morph the
get_type_arch function into the type::arch method.
gdb/ChangeLog:
* gdbtypes.h (struct type) <arch>: Rename to...
<arch_owner>: ... this, update all users.
<objfile>: Rename to...
<objfile_owner>: ... this, update all users.
Change-Id: Ie7c28684c7b565adec05a7619c418c69429bd8c0
Change all users to use the type::objfile method instead.
gdb/ChangeLog:
* gdbtypes.h (TYPE_OBJFILE): Remove, change all users to use the
type::objfile method instead.
Change-Id: I6b3f580913fb1fb0cf986b176dba8db68e1fabf9
This allows the creation of hardware breakpoints in Python with
gdb.Breakpoint(type=gdb.BP_HARDWARE_BREAKPOINT)
And they are included in the sequence returned by gdb.breakpoints().
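For example (assuming the target supports hardware breakpoints):

hw_bp = gdb.Breakpoint ("main", type=gdb.BP_HARDWARE_BREAKPOINT)
print (hw_bp in gdb.breakpoints ())   # True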
gdb/ChangeLog:
2021-01-21 Hannes Domani <ssbssa@yahoo.de>
PR python/19151
* python/py-breakpoint.c (bppy_get_location): Handle
bp_hardware_breakpoint.
(bppy_init): Likewise.
(gdbpy_breakpoint_created): Likewise.
gdb/doc/ChangeLog:
2021-01-21 Hannes Domani <ssbssa@yahoo.de>
PR python/19151
* python.texi (Breakpoints In Python): Document
gdb.BP_HARDWARE_BREAKPOINT.
gdb/testsuite/ChangeLog:
2021-01-21 Hannes Domani <ssbssa@yahoo.de>
PR python/19151
* gdb.python/py-breakpoint.exp: Add tests for hardware breakpoints.
This commits the result of running gdb/copyright.py as per our Start
of New Year procedure...
gdb/ChangeLog
Update copyright year range in copyright header of all GDB files.
This commit removes some, but not all, uses of LA_PRINT_STRING. In
this commit I've removed those uses where there is an obvious language
object on which I can instead call the printstr method.
In the remaining 3 uses it is harder to know if the correct thing is
to call printstr on the current language, or on a specific language.
Obviously, we currently always call on the current language (as that's
what LA_PRINT_STRING does), and clearly this behaviour is good enough
right now, but is it "right"? I've left them for now and will give
them more thought in the future.
gdb/ChangeLog:
* expprint.c (print_subexp_standard): Replace uses of
LA_PRINT_STRING.
* f-valprint.c (f_language::value_print_inner): Likewise.
* guile/scm-pretty-print.c (ppscm_print_string_repr): Likewise.
* p-valprint.c (pascal_language::value_print_inner): Likewise.
* python/py-prettyprint.c (print_string_repr): Likewise.
This makes it possible to disable the address in the result string:
const char *str = "alpha";
(gdb) py print(gdb.parse_and_eval("str").format_string())
0x404000 "alpha"
(gdb) py print(gdb.parse_and_eval("str").format_string(address=False))
"alpha"
gdb/ChangeLog:
2020-12-18 Hannes Domani <ssbssa@yahoo.de>
* python/py-value.c (valpy_format_string): Implement address keyword.
gdb/doc/ChangeLog:
2020-12-18 Hannes Domani <ssbssa@yahoo.de>
* python.texi (Values From Inferior): Document the address keyword.
gdb/testsuite/ChangeLog:
2020-12-18 Hannes Domani <ssbssa@yahoo.de>
* gdb.python/py-format-string.exp: Add tests for address keyword.
Considering this example:
struct C
{
  int func() { return 1; }
} c;

int main()
{
  return c.func();
}
Accessing the fields of C::func, when requesting the function by its
type, works:
(gdb) py print(gdb.parse_and_eval('C::func').type.fields()[0].type)
C * const
But when trying to do the same via a class instance, it fails:
(gdb) py print(gdb.parse_and_eval('c')['func'].type.fields()[0].type)
Traceback (most recent call last):
File "<string>", line 1, in <module>
TypeError: Type is not a structure, union, enum, or function type.
Error while executing Python code.
The difference is that in the former the function type is TYPE_CODE_FUNC:
(gdb) py print(gdb.parse_and_eval('C::func').type.code == gdb.TYPE_CODE_FUNC)
True
And in the latter the function type is TYPE_CODE_METHOD:
(gdb) py print(gdb.parse_and_eval('c')['func'].type.code == gdb.TYPE_CODE_METHOD)
True
So this adds the functionality for TYPE_CODE_METHOD as well.
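With TYPE_CODE_METHOD handled, the instance-based lookup is expected to
mirror the TYPE_CODE_FUNC case above:
(gdb) py print(gdb.parse_and_eval('c')['func'].type.fields()[0].type)
C * const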
gdb/ChangeLog:
2020-12-18 Hannes Domani <ssbssa@yahoo.de>
* python/py-type.c (typy_get_composite): Add TYPE_CODE_METHOD.
gdb/testsuite/ChangeLog:
2020-12-18 Hannes Domani <ssbssa@yahoo.de>
* gdb.python/py-type.exp: Add tests for TYPE_CODE_METHOD.
This changes varobj_item::value to be a value_ref_ptr, removing some
manual management.
gdb/ChangeLog
2020-12-11 Tom Tromey <tom@tromey.com>
* varobj.c (install_dynamic_child, varobj_clear_saved_item)
(update_dynamic_varobj_children, create_child)
(create_child_with_value): Update.
* varobj-iter.h (struct varobj_item) <value>: Now a
value_ref_ptr.
* python/py-varobj.c (py_varobj_iter::next): Call release_value.
This changes varobj_iter::next to return a unique_ptr. This fits in
with the ongoing theme of trying to express these ownership transfers
via the type system.
gdb/ChangeLog
2020-12-11 Tom Tromey <tom@tromey.com>
* varobj.c (update_dynamic_varobj_children): Update.
* varobj-iter.h (struct varobj_iter) <next>: Change return type.
* python/py-varobj.c (struct py_varobj_iter) <next>: Change return
type.
(py_varobj_iter::next): Likewise.
This changes the varobj iteration code to use a C++ class rather than
a C struct with a separate "ops" structure. The only implementation
is updated to use inheritance. This simplifies the code quite nicely.
gdb/ChangeLog
2020-12-11 Tom Tromey <tom@tromey.com>
* varobj.c (update_dynamic_varobj_children, install_visualizer)
(varobj::~varobj): Update.
* varobj-iter.h (struct varobj_iter): Change to interface class.
(struct varobj_iter_ops): Remove.
(varobj_iter_next, varobj_iter_delete): Remove.
* python/py-varobj.c (struct py_varobj_iter): Derive from
varobj_iter. Add constructor, destructor. Rename members.
(py_varobj_iter::~py_varobj_iter): Rename from
py_varobj_iter_dtor.
(py_varobj_iter::next): Rename from py_varobj_iter_next.
(py_varobj_iter_ops): Remove.
(py_varobj_iter): Rename from py_varobj_iter_ctor.
(py_varobj_iter_new): Remove.
(py_varobj_get_iterator): Update.
I noticed that a few "#if"s could be removed from the Python code.
This patch is the result.
gdb/ChangeLog
2020-11-02 Tom Tromey <tromey@adacore.com>
* python/python.c: Consolidate two HAVE_PYTHON blocks.
(python_GdbModuleDef): Move earlier. Now static.
(do_start_initialization): Consolidate some IS_PY3K blocks.
The previous patch made it possible to define a condition if it's
valid at some locations. If the condition is invalid at all of the
locations, it's rejected. However, there may be cases where the user
knows the condition *will* be valid at a location in the future,
e.g. due to a shared library load.
To make it possible for such a condition to be defined, this patch
adds an optional '-force' flag to the 'condition' command and,
respectively, a '-force-condition' flag to the 'break' command. When
the force flag is passed, the condition is not rejected even when it
is invalid for all the current locations (note that all the locations
would be internally disabled in this case).
For instance:
(gdb) break test.c:5
Breakpoint 1 at 0x1155: file test.c, line 5.
(gdb) cond 1 foo == 42
No symbol "foo" in current context.
Defining the condition was not possible because 'foo' is not
available. The user can override this behavior with the '-force'
flag:
(gdb) cond -force 1 foo == 42
warning: failed to validate condition at location 1.1, disabling:
No symbol "foo" in current context.
(gdb) info breakpoints
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   <MULTIPLE>
        stop only if foo == 42
1.1                         N   0x0000000000001155 in main at test.c:5
Now the condition is accepted, but the location is automatically
disabled. If a future location has a context in which 'foo' is
available, that location would be enabled.
For the 'break' command, -force-condition has the same result:
(gdb) break test.c:5 -force-condition if foo == 42
warning: failed to validate condition at location 0x1169, disabling:
No symbol "foo" in current context.
Breakpoint 1 at 0x1169: file test.c, line 5.
gdb/ChangeLog:
2020-10-27 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* breakpoint.h (set_breakpoint_condition): Add a new bool parameter.
* breakpoint.c: Update the help text of the 'condition' and 'break'
commands.
(set_breakpoint_condition): Take a new bool parameter
to control whether condition definition should be forced even when
the condition expression is invalid in all of the current locations.
(condition_command): Update the call to 'set_breakpoint_condition'.
(find_condition_and_thread): Take the "-force-condition" flag into
account.
* linespec.c (linespec_keywords): Add "-force-condition" as an
element.
(FORCE_KEYWORD_INDEX): New #define.
(linespec_lexer_lex_keyword): Update to consider "-force-condition"
as a keyword.
* ada-lang.c (create_ada_exception_catchpoint): Ditto.
* guile/scm-breakpoint.c (gdbscm_set_breakpoint_condition_x): Ditto.
* python/py-breakpoint.c (bppy_set_condition): Ditto.
* NEWS: Mention the changes to the 'break' and 'condition' commands.
gdb/testsuite/ChangeLog:
2020-10-27 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.base/condbreak-multi-context.exp: Expand to test forcing
the condition.
* gdb.linespec/cpcompletion.exp: Update to consider the
'-force-condition' keyword.
* gdb.linespec/explicit.exp: Ditto.
* lib/completion-support.exp: Ditto.
gdb/doc/ChangeLog:
2020-10-27 Tankut Baris Aktemur <tankut.baris.aktemur@intel.com>
* gdb.texinfo (Set Breaks): Document the '-force-condition' flag
of the 'break' command.
* gdb.texinfo (Conditions): Document the '-force' flag of the
'condition' command.
This changes tui_py_window to create an inner curses window. This
greatly simplifies tui_py_window::output, because it no longer needs
to be careful to avoid overwriting the window's border. This patch
also makes it a bit easier for a later patch to rewrite
tui_copy_source_line.
gdb/ChangeLog
2020-09-27 Tom Tromey <tom@tromey.com>
* python/py-tui.c (class tui_py_window) <refresh_window>: New
method.
<erase>: Update.
<cursor_x, cursor_y>: Remove.
<m_inner_window>: New member.
(tui_py_window::rerender): Create inner window.
(tui_py_window::output): Write to inner window.
Avoid the use of PyInt_FromLong, preferring gdb_py_object_from_longest
instead. I found another spot that was incorrectly handling
errors (see gdbpy_create_ptid_object) while writing this patch; it is
fixed here.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python-internal.h (PyInt_FromLong): Remove define.
* python/py-value.c (convert_value_from_python): Use
gdb_py_object_from_longest.
* python/py-type.c (typy_get_code): Use
gdb_py_object_from_longest.
* python/py-symtab.c (salpy_get_line): Use
gdb_py_object_from_longest.
* python/py-symbol.c (sympy_get_addr_class, sympy_line): Use
gdb_py_object_from_longest.
* python/py-record.c (recpy_gap_reason_code): Use
gdb_py_object_from_longest.
* python/py-record-btrace.c (recpy_bt_insn_size)
(recpy_bt_func_level, btpy_list_count): Use
gdb_py_object_from_longest.
* python/py-infthread.c (gdbpy_create_ptid_object): Use
gdb_py_object_from_longest. Fix error handling.
* python/py-framefilter.c (bootstrap_python_frame_filters): Use
gdb_py_object_from_longest.
* python/py-frame.c (frapy_type, frapy_unwind_stop_reason): Use
gdb_py_object_from_longest.
* python/py-breakpoint.c (bppy_get_type, bppy_get_number)
(bppy_get_thread, bppy_get_task, bppy_get_hit_count)
(bppy_get_ignore_count): Use gdb_py_object_from_longest.
This changes gdb to avoid PyLong_FromUnsignedLong, preferring
gdb_py_object_from_ulongest instead.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python.c (gdbpy_parameter_value): Use
gdb_py_object_from_ulongest.
This changes gdb to avoid PyLong_FromLongLong, preferring to use
gdb_py_object_from_longest instead.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/py-infevents.c (create_register_changed_event_object):
Use gdb_py_object_from_longest.
* python/py-exitedevent.c (create_exited_event_object): Use
gdb_py_object_from_longest.
This changes gdb to avoid PyLong_FromLong, preferring to use
gdb_py_object_from_longest instead.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python.c (gdbpy_parameter_value): Use
gdb_py_object_from_longest.
* python/py-type.c (convert_field, typy_range): Use
gdb_py_object_from_longest.
* python/py-tui.c (gdbpy_tui_width, gdbpy_tui_height): Use
gdb_py_object_from_longest.
* python/py-lazy-string.c (stpy_get_length): Use
gdb_py_object_from_longest.
* python/py-infthread.c (thpy_get_num, thpy_get_global_num): Use
gdb_py_object_from_longest.
* python/py-infevents.c (create_memory_changed_event_object): Use
gdb_py_object_from_longest.
* python/py-inferior.c (infpy_get_num): Use
gdb_py_object_from_longest.
(infpy_get_pid): Likewise.
Remove the gdb_py_long_from_ulongest defines and change the Python
layer to prefer gdb_py_object_from_ulongest. While writing this I
noticed that the error handling in archpy_disassemble was incorrect --
it could call PyDict_SetItemString with a NULL value. This patch also
fixes this bug.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python-internal.h (gdb_py_long_from_ulongest): Remove
defines.
* python/py-value.c (valpy_long): Use
gdb_py_object_from_ulongest.
* python/py-symtab.c (salpy_get_pc): Use
gdb_py_object_from_ulongest.
(salpy_get_last): Likewise.
* python/py-record-btrace.c (recpy_bt_insn_pc): Use
gdb_py_object_from_ulongest.
* python/py-lazy-string.c (stpy_get_address): Use
gdb_py_object_from_ulongest.
* python/py-frame.c (frapy_pc): Use gdb_py_object_from_ulongest.
* python/py-arch.c (archpy_disassemble): Use
gdb_py_object_from_ulongest and gdb_py_object_from_longest. Fix
error handling.
Change the Python layer to avoid gdb_py_long_from_longest, and remove
the defines.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python-internal.h (gdb_py_long_from_longest): Remove
defines.
* python/py-value.c (valpy_long): Use gdb_py_object_from_longest.
* python/py-type.c (convert_field, typy_get_sizeof): Use
gdb_py_object_from_longest.
* python/py-record-btrace.c (btpy_list_index): Use
gdb_py_object_from_longest.
Change the Python layer to avoid PyInt_FromSsize_t, and remove the
compatibility define.
gdb/ChangeLog
2020-09-15 Tom Tromey <tromey@adacore.com>
* python/python-internal.h (PyInt_FromSsize_t): Remove define.
* python/py-record.c (recpy_element_number): Use
gdb_py_object_from_longest.
(recpy_gap_number): Likewise.
An internal test failed on a riscv64-elf cross build because
Inferior.search_memory returned a negative value. I tracked this down
to the use of PyLong_FromLong in infpy_search_memory. Then, I looked
at other conversions of CORE_ADDR to Python and fixed these as well.
I don't think there is a good way to write a test for this.
gdb/ChangeLog
2020-08-17 Tom Tromey <tromey@adacore.com>
* python/py-inferior.c (infpy_search_memory): Use
gdb_py_object_from_ulongest.
* python/py-infevents.c (create_inferior_call_event_object)
(create_memory_changed_event_object): Use
gdb_py_object_from_ulongest.
* python/py-linetable.c (ltpy_entry_get_pc): Use
gdb_py_object_from_ulongest.
This commit unifies all of the Python register lookup code (used by
Frame.read_register, PendingFrame.read_register, and
gdb.UnwindInfo.add_saved_register), and adds support for using a
gdb.RegisterDescriptor for register lookup.
Currently the register unwind code (PendingFrame and UnwindInfo) allow
registers to be looked up either by name, or by GDB's internal
number. I suspect the number was added for performance reasons: when
unwinding we don't want to repeatedly map from name to number for
every unwind. However, this kind-of sucks; it means Python scripts
could include GDB's internal register numbers, and if we ever change
this numbering in the future, users' scripts will break in unexpected
ways.
Meanwhile, the Frame.read_register method only supports accessing
registers using a string, the register name.
This commit unifies all of the register to register-number lookup code
in our Python bindings, and adds a third choice into the mix, the use
of gdb.RegisterDescriptor.
The register descriptors can be looked up by name, but once looked up,
they contain GDB's register number, and so provide all of the
performance benefits of using a register number directly. However, as
they are looked up by name we are no longer tightly binding the Python
API to GDB's internal numbering scheme.
As we may already have scripts in the wild that are using the register
numbers directly, I have kept support for this in the API, but I have
listed this method last in the manual, and I have tried to stress that
this is NOT a good method to use and that users should use either a
string or register descriptor approach.
After this commit all existing Python code should function as before,
but users now have new options for how to identify registers.
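After this commit the following are all ways to read a register via
Frame.read_register (a sketch; 'pc' is used as the example register):

frame = gdb.selected_frame ()

# 1. By name (string), as before:
pc = frame.read_register ("pc")

# 2. By gdb.RegisterDescriptor, looked up once and then reused:
arch = frame.architecture ()
pc_desc = next (d for d in arch.registers () if d.name == "pc")
pc = frame.read_register (pc_desc)

# 3. By GDB's internal register number (still accepted, but
#    discouraged, as the numbering is not a stable interface).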
gdb/ChangeLog:
* python/py-frame.c: Remove 'user-regs.h' include.
(frapy_read_register): Rewrite to make use of
gdbpy_parse_register_id.
* python/py-registers.c (gdbpy_parse_register_id): New function,
moved here from python/py-unwind.c. Updated the return type, and
also accepts register descriptor objects.
* python/py-unwind.c: Remove 'user-regs.h' include.
(pyuw_parse_register_id): Moved to python/py-registers.c.
(unwind_infopy_add_saved_register): Update to use
gdbpy_parse_register_id.
(pending_framepy_read_register): Likewise.
* python/python-internal.h (gdbpy_parse_register_id): Declare.
gdb/testsuite/ChangeLog:
* gdb.python/py-unwind.py: Update to make use of a register
descriptor.
gdb/doc/ChangeLog:
* python.texi (Unwinding Frames in Python): Update descriptions
for PendingFrame.read_register and
gdb.UnwindInfo.add_saved_register.
(Frames In Python): Update description of Frame.read_register.
Adds a new method 'find' to the gdb.RegisterDescriptorIterator class;
this allows gdb.RegisterDescriptor objects to be looked up directly by
register name rather than having to iterate over all registers.
This will be of use for a later commit.
I've documented the new function in the manual, but I don't think a
NEWS entry is required here, as, since the last release, the whole
register descriptor mechanism is new, and is already mentioned in the
NEWS file.
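Usage looks something like this (sketch):

arch = gdb.selected_frame ().architecture ()
sp_desc = arch.registers ().find ("sp")
if sp_desc is not None:
    print (sp_desc.name)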
gdb/ChangeLog:
* python/py-registers.c: Add 'user-regs.h' include.
(register_descriptor_iter_find): New function.
(register_descriptor_iterator_object_methods): New static global
methods array.
(register_descriptor_iterator_object_type): Add pointer to methods
array.
gdb/testsuite/ChangeLog:
* gdb.python/py-arch-reg-names.exp: Add additional tests.
gdb/doc/ChangeLog:
* python.texi (Registers In Python): Document new find function.
Pedro's review comments arrived after I'd already committed this
change:
commit f7306dac19
Date: Tue Jul 7 15:00:30 2020 +0100
gdb/python: Reuse gdb.RegisterDescriptor objects where possible
See:
https://sourceware.org/pipermail/gdb-patches/2020-July/170726.html
There should be no user visible changes after this commit.
gdb/ChangeLog:
* python/py-registers.c (gdbpy_register_object_data_init): Remove
redundant local variable.
(gdbpy_get_register_descriptor): Extract descriptor vector as a
reference, not pointer, update code accordingly.
Only create one gdb.RegisterGroup Python object for each of GDB's
reggroup objects.
I could have added a field into the reggroup object to hold the Python
object pointer for each reggroup; however, as reggroups are never
deleted within GDB and are global (not per-architecture), a simpler
solution seemed to be just to hold a single global map from reggroup
pointer to a Python object representing the reggroup. Then we can
reuse the objects out of this map.
After this commit it is possible for a user to tell that two
gdb.RegisterGroup objects are now identical when previously they were
unique; however, as both these objects are read-only, I don't think
this should be a problem.
There should be no other user visible changes after this commit.
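From Python this is observable roughly as follows (sketch):

arch = gdb.selected_frame ().architecture ()
a = list (arch.register_groups ())
b = list (arch.register_groups ())
# The same gdb.RegisterGroup objects are now handed out each time.
print (a[0] is b[0])   # True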
gdb/ChangeLog:
* python/py-registers.c: Add 'unordered_map' include.
(gdbpy_new_reggroup): Renamed to...
(gdbpy_get_reggroup): ...this. Update to only create register
group descriptors when needed.
(gdbpy_reggroup_iter_next): Update.
gdb/testsuite/ChangeLog:
* gdb.python/py-arch-reg-groups.exp: Additional tests.