The DBNZ instruction decrements its source register operand, and if
the result is non-zero it branches to the location defined by a signed
half-word displacement operand.
The DBNZ instruction is in the BRANCH class like other branch instructions
such as B, Bcc, etc. However, DBNZ is the only branch instruction
that stores its branch offset in the second operand. Thus it must
be placed in a distinct class and treated differently.
For example, the current logic of arc_insn_get_branch_target in GDB
assumes that the branch offset is always stored in the first operand
for the BRANCH class, which is wrong for DBNZ.
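A minimal illustrative sketch of the distinction (not the actual GDB
code; insn_class_t and the DBNZ enumerator come from
include/opcode/arc.h and are stubbed here so the snippet is
self-contained):

  /* Stub of the relevant part of include/opcode/arc.h.  */
  typedef enum { BRANCH, DBNZ } insn_class_t;

  /* Index of the operand holding the branch displacement: operand 1
     for DBNZ, operand 0 for ordinary BRANCH-class instructions.  */
  static int
  branch_offset_operand (insn_class_t cls)
  {
    return cls == DBNZ ? 1 : 0;
  }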
include/ChangeLog:
2024-02-14 Yuriy Kolerov <ykolerov@synopsys.com>
* opcode/arc.h (enum insn_class_t): Add DBNZ class.
opcodes/ChangeLog:
2024-02-14 Yuriy Kolerov <ykolerov@synopsys.com>
* arc-tbl.h (dbnz): Use "DBNZ" class.
* arc-dis.c (arc_opcode_to_insn_type): Handle "DBNZ" class.
gas/ChangeLog:
2024-02-14 Yuriy Kolerov <ykolerov@synopsys.com>
* config/tc-arc.c (is_br_jmp_insn_p): Add check against "DBNZ".
Don't wander into three_byte_table[] when REX2 is present.
While there, also eliminate related confusion when accessing
dis386_twobyte[]: there's nothing 3-byte-ish involved there. Dropping
the odd variable brings things more closely in sync with 1-byte handling
as well.
Interestingly, unlike VROUND{P,S}{S,D} and VPERM{F,I}128, they weren't
even present in the x86-64-apx-egpr-inval testcase, which is why I
overlooked that these can actually be encoded, (again) using suitable
AVX512 counterparts.
While there, also "modernize" the adjacent AVX/AVX2 entries.
Already the %bnd<N> registers used numbers beyond 127, and eGPR ones are
all out of reach for "signed char", at least when CHAR_BIT == 8. Switch to
"unsigned char", covering appropriately in places where the value
returned for "none" actually matters (in tc_x86_parse_to_dw2regnum()
this is actually achieved by altering how X_op is set).
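A tiny standalone illustration of the range point above (nothing
binutils-specific; assumes CHAR_BIT == 8):

  #include <limits.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Register numbers above 127 (e.g. the %bnd<N> and eGPR entries)
       don't fit in "signed char" but do fit in "unsigned char".  */
    printf ("SCHAR_MAX = %d, UCHAR_MAX = %d\n", SCHAR_MAX, UCHAR_MAX);
    return 0;
  }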
There are no legacy ldind or ldabs BPF instructions with BPF_SIZE_DW.
For some reason we were (incorrectly) supporting these. This patch
updates the opcodes so the instructions get removed and modifies the
GAS manual and testsuite accordingly.
See discussion at
https://lore.kernel.org/bpf/110aad7a-f8a3-46ed-9fda-2f8ee54dcb89@linux.dev
Tested on the bpf-unknown-none target, x86-64-linux-gnu host.
include/ChangeLog:
2024-01-29 Jose E. Marchesi <jose.marchesi@oracle.com>
* opcode/bpf.h (enum bpf_insn_id): Remove BPF_INSN_LDINDDW and
BPF_INSN_LDABSDW instructions.
opcodes/ChangeLog:
2024-01-29 Jose E. Marchesi <jose.marchesi@oracle.com>
* bpf-opc.c (bpf_opcodes): Remove BPF_INSN_LDINDDW and
BPF_INSN_LDABSDW instructions.
gas/ChangeLog:
2024-01-29 Jose E. Marchesi <jose.marchesi@oracle.com>
* doc/c-bpf.texi (BPF Instructions): There is no indirect 64-bit
load instruction.
(BPF Instructions): There is no absolute 64-bit load instruction.
* testsuite/gas/bpf/mem.s: Update test accordingly.
* testsuite/gas/bpf/mem-be-pseudoc.d: Likewise.
* testsuite/gas/bpf/mem-be.d: Likewise.
* testsuite/gas/bpf/mem-pseudoc.d: Likewise.
* testsuite/gas/bpf/mem-pseudoc.s: Likewise.
* testsuite/gas/bpf/mem.d: Likewise.
* testsuite/gas/bpf/mem.s: Likewise.
SHA512 instructions were added to the architecture at the same time as SHA3
instructions, but later than the SHA1 and SHA256 instructions. Furthermore,
implementations must support either both or neither of the SHA512 and SHA3
instruction sets. However, SHA512 instructions were originally (and
incorrectly) added to Binutils under the +sha2 flag.
This patch moves SHA512 instructions under the +sha3 flag, which matches the
architecture constraints and existing GCC and LLVM behaviour.
Re-using the entire VEX decode hierarchy for the respective major opcode
has led to those two also being decoded as-if valid. Follow the earlier
USE_X86_64_EVEX_{PFX,W}_TABLE approach to avoid this happening.
As suggested during review already, all such entries have their first
slot as Bad_Opcode, so by adding two more enumerators we can avoid doing
that decode step altogether.
With identical source and destination it can be covered by the NDD-to-
legacy conversion logic as well, even if in this case the original insn
doesn't use an NDD encoding. The size savings are even better here, as
the replacement (BSWAP) has no ModR/M byte.
GCC 14 will detect when the size and count arguments of calloc are
swapped.
binutils-gdb/opcodes/tic4x-dis.c: In function ‘tic4x_disassemble’:
binutils-gdb/opcodes/tic4x-dis.c:710:32: error: ‘xcalloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
710 | optab = xcalloc (sizeof (tic4x_inst_t *), (1 << TIC4X_HASH_SIZE));
| ^~~~~~~~~~~~
binutils-gdb/opcodes/tic4x-dis.c:710:32: note: earlier argument should specify number of elements, later size of each element
binutils-gdb/opcodes/tic4x-dis.c:712:40: error: ‘xcalloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
712 | optab_special = xcalloc (sizeof (tic4x_inst_t *), TIC4X_SPESOP_SIZE);
| ^~~~~~~~~~~~
binutils-gdb/opcodes/tic4x-dis.c:712:40: note: earlier argument should specify number of elements, later size of each element
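Presumably the corrected calls end up with the element count first and
the element size second, i.e. something like (identifiers taken from the
diagnostics above):

  optab = xcalloc ((1 << TIC4X_HASH_SIZE), sizeof (tic4x_inst_t *));
  optab_special = xcalloc (TIC4X_SPESOP_SIZE, sizeof (tic4x_inst_t *));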
opcodes/ChangeLog:
* tic4x-dis.c (tic4x_disassemble): Swap size and count xcalloc
arguments.
VRNDSCALE{P,S}{S,D} is the AVX512 generalization of these AVX insns. As
long as the immediate has the top 4 bits clear, they are equivalent to
the earlier VEX-encoded insns, and hence can be used to permit use of
eGPR-s in the memory operand. Since this is the normal way of using
these insns, also alter the resulting diagnostic to complain about the
immediate, not the eGPR use.
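A hedged sketch of the immediate check this implies (the helper name is
made up; the real test lives in the i386 assembler's encoding logic):

  #include <stdbool.h>

  /* A VRNDSCALE replacement only matches the VEX-encoded VROUND
     behaviour when the top 4 bits of the 8-bit immediate are clear.  */
  static bool
  vround_imm_fits_vrndscale (unsigned int imm8)
  {
    return (imm8 & 0xf0) == 0;
  }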
When there's a suitably disambiguating register operand, suffixes are
generally omitted (unless in suffix-always mode). All NDD insns have a
suitable register operand, so they shouldn't have suffixes by default.
Along with the relevant unit-tests, this adds the following rcpc3
instructions:
STL1 { <Vt>.D }[<index>], [<Xn|SP>]
LDAP1 { <Vt>.D }[<index>], [<Xn|SP>]
LDAPUR <Bt>, [<Xn|SP>{, #<simm>}]
LDAPUR <Ht>, [<Xn|SP>{, #<simm>}]
LDAPUR <St>, [<Xn|SP>{, #<simm>}]
LDAPUR <Dt>, [<Xn|SP>{, #<simm>}]
LDAPUR <Qt>, [<Xn|SP>{, #<simm>}]
STLUR <Bt>, [<Xn|SP>{, #<simm>}]
STLUR <Ht>, [<Xn|SP>{, #<simm>}]
STLUR <St>, [<Xn|SP>{, #<simm>}]
STLUR <Dt>, [<Xn|SP>{, #<simm>}]
STLUR <Qt>, [<Xn|SP>{, #<simm>}]
with `#<simm>' taking on a signed 8-bit integer value in the range
[-256,255] and `index' the values 0 or 1.
Co-authored-by: Srinath Parvathaneni <srinath.parvathaneni@arm.com>
Given the introduction of the new address operand types for rcpc3
instructions, this patch adds the necessary logic to teach
`general_constraint_met_p` how to properly handle these.
The particular choices of address indexing, along with their encoding
for RCPC3 instructions lead to the requirement of a new set of operand
descriptions, along with the relevant inserter/extractor set.
That is, for the integer load/stores, only a single indexing offset
quantity and offset mode is allowed - the value is always equivalent
to the amount of data read/stored by the operation, and the offset is
post-indexed for Load-Acquire RCpc and pre-indexed with writeback for
Store-Release insns.
This indexing quantity/mode pair is selected by the setting of a
single bit in the instruction. To represent these insns, we add the
following operand types:
- AARCH64_OPND_RCPC3_ADDR_OPT_POSTIND
- AARCH64_OPND_RCPC3_ADDR_OPT_PREIND_WB
In the case of loads and stores involving SIMD/FP registers, the
optional offset is encoded as an 8-bit signed immediate, but neither
post-indexing nor pre-indexing with writeback is available. This
created the need for an operand type similar to
AARCH64_OPND_ADDR_OFFSET, with the difference that FLD_index should
not be checked.
We thus introduce the AARCH64_OPND_RCPC3_ADDR_OFFSET operand, a
variant of AARCH64_OPND_ADDR_OFFSET, w/o the FLD_index bitfield.
Beyond the need to encode any registers involved in data transfer and
the address base register for load/stores, it is necessary to specify
the data register addressing mode and whether the address register is
to be pre/post-indexed, whereby loads may be post-indexed and stores
pre-indexed with write-back.
The use of a single bit to specify both the indexing mode and the
indexing value requires that a novel function be written to handle
address operand insertion in assembly and another for extraction in
disassembly, along with the definition of two insn fields for use with
these instructions.
This therefore defines the following functions:
- aarch64_ins_rcpc3_addr_opt_offset
- aarch64_ins_rcpc3_addr_offset
- aarch64_ext_rcpc3_addr_opt_offset
- aarch64_ext_rcpc3_addr_offset
It extends the `do_special_{encoding|decoding}' functions and defines
two rcpc3 instruction fields:
- FLD_opc2
- FLD_rcpc3_size
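To illustrate the single-bit selection described above, a simplified
sketch follows; the bit position, polarity and argument types are
placeholders rather than the actual field layout or the real
inserter/extractor signatures:

  #include <stdint.h>

  /* One bit chooses between "no indexing" and the single legal
     offset/mode pair: offset magnitude equal to the transfer size,
     post-indexed for loads, pre-indexed with writeback for stores.  */
  static int64_t
  rcpc3_implicit_offset (uint32_t insn, unsigned opt_bit,
                         int64_t transfer_size)
  {
    if (((insn >> opt_bit) & 1) == 0)  /* placeholder polarity */
      return 0;                        /* no indexing requested */
    return transfer_size;              /* applied per the insn's mode */
  }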
The allowed immediate offsets in the integer rcpc3 load/store
instructions are not encoded explicitly in the instruction itself;
rather, they are implicitly equivalent to the amount of data
loaded/stored by the instruction.
This leads to the requirement that this quantity be calculated based on
the number of registers involved in the transfer, either as data
source or destination registers and their respective qualifiers.
This is done via `calc_ldst_datasize (const aarch64_opnd_info *opnds)'
implemented here, using a cumulative sum of qualifier sizes preceding
the address operand in the OPNDS operand list argument.
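The idea can be sketched as follows, using simplified stand-ins for
aarch64_opnd_info and the qualifier size lookup (not the actual
implementation):

  #include <stddef.h>

  /* Stand-in operand record: qualifier_bytes directly encodes the
     operand's size in bytes.  */
  struct opnd { int is_address; size_t qualifier_bytes; };

  /* Sum the sizes of the data operands preceding the address operand,
     mirroring the description of calc_ldst_datasize above.  */
  static size_t
  ldst_datasize (const struct opnd *opnds, size_t n)
  {
    size_t total = 0;
    for (size_t i = 0; i < n && !opnds[i].is_address; i++)
      total += opnds[i].qualifier_bytes;
    return total;
  }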
Indicating the presence of the Armv8.2-a feature adding further
support for the Release Consistency Model, the `+rcpc3' architectural
extension flag is added to the list of possible `-march' options in
Binutils, together with the necessary macro for encoding rcpc3
instructions.
There are some tlbi operations that don't have a corresponding tlbip operation,
but we were incorrectly using the same list for both. Add the missing tlbi
*nxs operations, and use the F_REG_128 flag to filter tlbi operations that
don't have a tlbip analogue. For increased clarity, I have also used a macro
to reduce duplication between the 'nxs' and non-'nxs' variants, and added a
test to verify that no invalid combinations are accepted.
Additionally, fix two missing checks for AARCH64_OPND_SYSREG_TLBIP that were
preventing disassembly of tlbip instructions.
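A sketch of the kind of macro-based pairing described above (the names
and table layout are illustrative, not the actual aarch64-opc.c
definitions):

  struct tlbi_op { const char *name; unsigned flags; };
  #define F_HAS_128  0x1   /* stand-in for the F_REG_128 gating flag */

  /* Define the base operation and its 'nxs' variant together so the
     two lists cannot drift apart.  */
  #define TLBI_PAIR(nm, fl)  { nm, (fl) }, { nm "nxs", (fl) }

  static const struct tlbi_op tlbi_ops[] = {
    TLBI_PAIR ("vmalle1", 0),
    TLBI_PAIR ("vae1", F_HAS_128),
    /* ...example entries only; real flag assignments follow the
       architecture.  */
  };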
Hi,
This patch adds support for the SVE2.1 instructions ld1q,
ld2q, ld3q and ld4q, and st1q, st2q, st3q and st4q.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
Hi,
This patch adds support for the SVE2.1 instructions faddqv,
fmaxnmqv, fmaxqv, fminnmqv and fminqv.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
Hi,
This patch adds support for the SVE2.1 instructions dupq, eorqv and extq.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
Hi,
This patch adds support for the FEAT_SVE2p1 (SVE2.1 Extension) feature
along with the +sve2p1 optional flag to enable it.
Also, support for the following SVE2p1 instructions is added:
addqv, andqv, smaxqv, sminqv, umaxqv and uminqv.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
Hi,
This patch adds support for FEAT_SME2p1 and the "movaz" instructions,
along with the optional flag +sme2p1.
The following "movaz" instructions are added:
Move and zero two ZA tile slices to vector registers.
Move and zero four ZA tile slices to vector registers.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
Hi,
This patch adds support for the SVE2.1 and SME2.1 non-widening BFloat16
(FEAT_B16B16) instructions.
The predicated, unpredicated and indexed variants of the following
instructions are added in this patch:
bfadd, bfclamp, bfmax, bfmaxnm, bfmin, bfminnm, bfmla, bfmls, bfmul and bfsub.
Regression tested for the aarch64-none-elf target and found no regressions.
Ok for binutils-master?
Regards,
Srinath.
The ginsn representation keeps the DWARF register number of the
operands. The API ginsn_dw2_regnum relies on the relative ordering
of these register entries in the table. Add a comment to make it clear.
opcodes/
* i386-reg.tbl: Add a comment.
Some x86 instructions affect the stack pointer implicitly. Add a new
operand constraint to reflect this. This will be useful for the SCFI
implementation to ensure its correctness.
Mark all push, pop, call, ret, enter, leave, INT, iret instructions.
opcodes/
* i386-gen.c: Update opcode_modifiers.
* i386-opc.h: Add a new constraint.
* i386-opc.tbl: Update the affected instructions.
* i386-tbl.h: Regenerated.
Rex2 is currently an operand constraint. For the upcoming SCFI
implementation in GAS, we need to identify operations which implicitly
update the stack pointer. An operand constraint enumerator for implicit
stack op seems more appropriate than an attribute. However, two opcodes
currently necessitate both Rex2 and an implicit stack op marker; this
prompts revisiting the current representations a bit.
Make Rex2 a standalone attribute, so that later a new operand constraint
may be added for IMPLICIT_STACK_OP.
ChangeLog:
* gas/config/tc-i386.c (is_apx_rex2_encoding): Update the check.
* opcodes/i386-gen.c: Add a new BITFIELD for Rex2.
* opcodes/i386-opc.h (REX2_REQUIRED): Remove.
* opcodes/i386-opc.tbl: Remove Rex2 operand constraint.
* opcodes/i386-tbl.h: Regenerated.
Additionally, change FEAT_XS tlbi variants to be gated on "+xs" instead of
"+d128". This is an incremental improvement; there are still some FEAT_XS tlbi
variants that are gated incorrectly or missing entirely.