Remove gdbarch_bits_big_endian

From what I can tell, set_gdbarch_bits_big_endian has never been used.
That is, every architecture since its introduction has simply used the
default, which checks the architecture's byte endianness.
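
Concretely, the default amounted to the predicate sketched below.  This
is a minimal reconstruction: the function name default_bits_big_endian
is illustrative, and only the comparison expression reflects the
default from gdbarch.sh.

    static int
    default_bits_big_endian (struct gdbarch *gdbarch)
    {
      /* Bit numbering simply followed byte order: big-endian bytes
         implied big-endian bit numbering.  */
      return gdbarch_byte_order (gdbarch) == BFD_ENDIAN_BIG;
    }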

Because this interferes with the scalar_storage_order code, this patch
removes the gdbarch setting entirely.  In some places, type_byte_order
is now used instead of querying the gdbarch directly.
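
The typical replacement pattern is sketched below.  The surrounding
statement is illustrative, not taken from any one file in the patch;
type_byte_order and get_type_arch are the real helpers.

    /* Before: bit order was derived from the architecture alone.  */
    if (gdbarch_bits_big_endian (get_type_arch (type)))
      bit_index = TARGET_CHAR_BIT - 1 - bit_index;

    /* After: bit order follows the type, so types carrying
       DW_AT_endianity / scalar_storage_order are honored.  */
    if (type_byte_order (type) == BFD_ENDIAN_BIG)
      bit_index = TARGET_CHAR_BIT - 1 - bit_index;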

gdb/ChangeLog
2019-12-04  Tom Tromey  <tromey@adacore.com>

	* ada-lang.c (decode_constrained_packed_array)
	(ada_value_assign, value_assign_to_component): Update.
	* dwarf2loc.c (rw_pieced_value, access_memory)
	(dwarf2_compile_expr_to_ax): Update.
	* dwarf2read.c (dwarf2_add_field): Update.
	* eval.c (evaluate_subexp_standard): Update.
	* gdbarch.c, gdbarch.h: Rebuild.
	* gdbarch.sh (bits_big_endian): Remove.
	* gdbtypes.h (union field_location): Update comment.
	* target-descriptions.c (make_gdb_type): Update.
	* valarith.c (value_bit_index): Update.
	* value.c (struct value) <bitpos>: Update comment.
	(unpack_bits_as_long, modify_field): Update.
	* value.h (value_bitpos): Update comment.

Change-Id: I379b5e0c408ec8742f7a6c6b721108e73ed1b018

@@ -1547,7 +1547,7 @@ evaluate_subexp_standard (struct type *expect_type,
 		  {
 		    int bit_index = (unsigned) range_low % TARGET_CHAR_BIT;
-		    if (gdbarch_bits_big_endian (exp->gdbarch))
+		    if (gdbarch_byte_order (exp->gdbarch) == BFD_ENDIAN_BIG)
 		      bit_index = TARGET_CHAR_BIT - 1 - bit_index;
 		    valaddr[(unsigned) range_low / TARGET_CHAR_BIT]
 		      |= 1 << bit_index;