Make gdb shorthands such as 'pr' accept an argument, in addition to
implicitly taking GDB's '$' value as the thing to examine.
The 'eval ...' one-liners are used to work around GDB bug #22466.
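The general shape of such a command, as a sketch (using GDB's
$argc/$arg0 machinery; the actual gdbinit.in text differs because of
the 'eval' workaround mentioned above):

  define pr
    if $argc == 0
      call debug_rtx ($)
    else
      call debug_rtx ($arg0)
    end
  end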
* gdbinit.in (help-gcc-hooks): New command.
(pp, pr, prl, pt, pct, pgg, pgq, pgs, pge, pmz, ptc, pdn, ptn, pdd, prc,
pi, pbm, pel, trt): Take $arg0 instead of $ if supplied. Update
documentation.
I had mistakenly used a target macro that is not defined and is not the
relevant one to check anyway: TARGET_ARMV8_6 is not defined, and even if
it were, it is not the condition we want here. Check TARGET_F64MM
instead.
gcc/ChangeLog:
2020-01-17 Matthew Malcomson <matthew.malcomson@arm.com>
* config/aarch64/aarch64-sve.md (@aarch64_sve_ld1ro<mode>): Use
the correct target macro.
We take no action to ensure the SVE vector size is large enough. It is
left to the user to check that before compiling code that uses this
intrinsic, or before running such a program on a given machine.
The main difference between ld1ro and ld1rq is in the allowed offsets;
the implementation difference is that ld1ro is implemented using integer
modes, since there are no pre-existing vector modes of the relevant size.
Adding new vector modes simply for this intrinsic would make the code
less tidy.
Specifications can be found under the "Arm C Language Extensions for
Scalable Vector Extension" title at
https://developer.arm.com/architectures/system-architectures/software-standards/acle
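A usage sketch, assuming a compiler with this patch and something like
-march=armv8.2-a+f64mm (see the specification above for the exact offset
and vector-length constraints):

  #include <arm_sve.h>

  /* Load 32 bytes from base and replicate them to fill the whole
     vector, 256 bits at a time.  */
  svfloat32_t
  load_rep (svbool_t pg, const float *base)
  {
    return svld1ro_f32 (pg, base);
  }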
gcc/ChangeLog:
2020-01-17 Matthew Malcomson <matthew.malcomson@arm.com>
* config/aarch64/aarch64-protos.h
(aarch64_sve_ld1ro_operand_p): New.
* config/aarch64/aarch64-sve-builtins-base.cc
(class load_replicate): New.
(class svld1ro_impl): New.
(class svld1rq_impl): Change to inherit from load_replicate.
(svld1ro): New SVE intrinsic function base.
* config/aarch64/aarch64-sve-builtins-base.def (svld1ro):
New DEF_SVE_FUNCTION.
* config/aarch64/aarch64-sve-builtins-base.h
(svld1ro): New decl.
* config/aarch64/aarch64-sve-builtins.cc
(function_expander::add_mem_operand): Modify assert to allow
OImode.
* config/aarch64/aarch64-sve.md (@aarch64_sve_ld1ro<mode>): New
pattern.
* config/aarch64/aarch64.c
(aarch64_sve_ld1rq_operand_p): Implement in terms of ...
(aarch64_sve_ld1rq_ld1ro_operand_p): This.
(aarch64_sve_ld1ro_operand_p): New.
* config/aarch64/aarch64.md (UNSPEC_LD1RO): New unspec.
* config/aarch64/constraints.md (UOb,UOh,UOw,UOd): New.
* config/aarch64/predicates.md
(aarch64_sve_ld1ro_operand_{b,h,w,d}): New.
gcc/testsuite/ChangeLog:
2020-01-17 Matthew Malcomson <matthew.malcomson@arm.com>
* gcc.target/aarch64/sve/acle/asm/ld1ro_f16.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_f32.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_f64.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_s16.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_s32.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_s64.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_s8.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_u16.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_u32.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_u64.c: New test.
* gcc.target/aarch64/sve/acle/asm/ld1ro_u8.c: New test.
This patch is necessary for the sve-ld1ro intrinsic I posted in
https://gcc.gnu.org/ml/gcc-patches/2020-01/msg00466.html .
I had mistakenly thought this option was already enabled upstream.
This provides the option +f64mm, which turns on the 64-bit floating-point
matrix multiply extension. This extension is only available for
AArch64, and turning it on also turns on the SVE extension. The
extension is optional, and available from Armv8.2-A onward.
We also add the ACLE-defined macro for this extension.
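For example (illustrative command line):

  gcc -march=armv8.2-a+f64mm -c foo.c   # +f64mm also enables +sve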
gcc/ChangeLog:
2020-01-17 Matthew Malcomson <matthew.malcomson@arm.com>
* config/aarch64/aarch64-c.c (__ARM_FEATURE_MATMUL_FLOAT64):
Introduce this ACLE-specified predefined macro.
* config/aarch64/aarch64-option-extensions.def (f64mm): New.
(fp): Disabling this disables f64mm.
(simd): Disabling this disables f64mm.
(fp16): Disabling this disables f64mm.
(sve): Disabling this disables f64mm.
* config/aarch64/aarch64.h (AARCH64_FL_F64MM): New.
(AARCH64_ISA_F64MM): New.
(TARGET_F64MM): New.
* doc/invoke.texi (f64mm): Document new option.
gcc/testsuite/ChangeLog:
2020-01-17 Matthew Malcomson <matthew.malcomson@arm.com>
* gcc.target/aarch64/pragma_cpp_predefs_2.c: Check for f64mm
predef.
Enable the most basic form of compare-branch fusion since various CPUs
support it. This has no measurable effect on cores which don't support
branch fusion, but increases fusion opportunities on cores which do.
gcc/
* config/aarch64/aarch64.c (generic_tunings): Add branch fusion.
(neoversen1_tunings): Likewise.
This was failing because uses_template_parms didn't recognize LAMBDA_EXPR as
a kind of expression. Instead of trying to enumerate all the different
varieties of expression and then aborting if what's left isn't
error_mark_node, let's handle error_mark_node and then assume anything else
is an expression.
* pt.c (uses_template_parms): Don't try to enumerate all the
expression cases.
As the following testcase shows, when the deprecated attribute is on a
template, we would never print its message, if any, because the
attribute is not present on the TEMPLATE_DECL with which
warn_deprecated_use is called, but on its DECL_TEMPLATE_RESULT or its
type.
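A minimal example of the kind of code affected (a sketch, not the new
testcase itself):

  template <typename T>
  struct [[deprecated ("use T2 instead")]] T1 { };

  T1<int> t;  // warning should include the "use T2 instead" message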
2020-01-17 Jakub Jelinek <jakub@redhat.com>
PR c++/93228
* parser.c (cp_parser_template_name): Look up deprecated attribute
in DECL_TEMPLATE_RESULT or its type's attributes.
* g++.dg/cpp1y/attr-deprecated-3.C: New test.
The preprocessor evaluator has a skip_eval counter, but we weren't
checking it after parsing has_include(foo) and before looking for foo,
resulting in unnecessary I/O for 'FALSE_COND && has_include <foo>'.
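For example (a sketch; the built-in is spelled __has_include):

  #if 0 && __has_include (<foo.h>)
  /* Before the fix, the include path was still searched for foo.h
     even though this branch is skipped.  */
  #endif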
PR preprocessor/93306
* expr.c (parse_has_include): Refactor. Check skip_eval before
looking.
Various 32-bit targets show failures in gcc.dg/analyzer/data-model-1.c
with tests of the form:
__analyzer_eval (q[-2].x == 107024); /* { dg-warning "TRUE" } */
__analyzer_eval (q[-2].y == 107025); /* { dg-warning "TRUE" } */
where they emit UNKNOWN instead.
The root cause is that gimple has a byte-based two's-complement offset
of -16 expressed like this:
_55 = q_92 + 4294967280; (32-bit)
or:
_55 = q_92 + 18446744073709551600; (64-bit)
Within region_model::convert_byte_offset_to_array_index that unsigned
offset was being divided by the element size to get an offset within
an array.
This happened to work on a 64-bit target and host, but not elsewhere;
the offset needs to be converted to a signed type before the division
is meaningful.
This patch does so, fixing the failures.
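The pitfall in plain C, for illustration (not analyzer code):

  #include <stdio.h>

  int
  main (void)
  {
    unsigned int byte_offset = 4294967280u; /* two's-complement -16 */
    unsigned int elt_size = 8;

    /* Unsigned division gives 536870910, not the intended -2.  */
    printf ("%u\n", byte_offset / elt_size);

    /* Convert to a signed type first, as the patch does with
       ssizetype: -16 / 8 == -2.  */
    printf ("%d\n", (int) byte_offset / (int) elt_size);
    return 0;
  }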
gcc/analyzer/ChangeLog:
PR analyzer/93281
* region-model.cc
(region_model::convert_byte_offset_to_array_index): Convert to
ssizetype before dividing by byte_size. Use fold_binary rather
than fold_build2 to avoid needlessly constructing a tree for the
non-const case.
The separate shrinkwrapping pass may insert stores in the middle
of atomics loops which can cause issues on some implementations.
Avoid this by delaying splitting atomics patterns until after
prolog/epilog generation.
gcc/
PR target/92692
* config/aarch64/aarch64.c (aarch64_split_compare_and_swap):
Add assert to ensure the prologue has been emitted.
(aarch64_split_atomic_op): Likewise.
* config/aarch64/atomics.md (aarch64_compare_and_swap<mode>):
Use epilogue_completed rather than reload_completed.
(aarch64_atomic_exchange<mode>): Likewise.
(aarch64_atomic_<atomic_optab><mode>): Likewise.
(atomic_nand<mode>): Likewise.
(aarch64_atomic_fetch_<atomic_optab><mode>): Likewise.
(atomic_fetch_nand<mode>): Likewise.
(aarch64_atomic_<atomic_optab>_fetch<mode>): Likewise.
(atomic_nand_fetch<mode>): Likewise.
AIUI, the main purpose of REVERSE_CONDITION is to take advantage of
any integer vs. FP information encoded in the CC mode, particularly
when handling LT, LE, GE and GT. For integer comparisons we can
safely map LT->GE, LE->GT, GE->LT and GT->LE, but for float comparisons
this would usually be invalid without -ffinite-math-only.
The aarch64 definition of REVERSE_CONDITION used
reverse_condition_maybe_unordered for FP comparisons, which had the
effect of converting an unordered-signalling LT, LE, GE or GT into a
quiet UNGE, UNGT, UNLT or UNLE. And it would do the same in reverse:
convert a quiet UN* into an unordered-signalling comparison.
This would be safe in practice (although a little misleading) if we
always used a compare:CCFP or compare:CCFPE to do the comparison and
then used (gt (reg:CCFP/CCFPE CC_REGNUM) (const_int 0)) etc. to test
the result. In that case any signal is raised by the compare and the
choice of quiet vs. signalling relations doesn't matter when testing
the result. The problem is that we also want to use GT directly on
float registers, where any signal is raised by the comparison operation
itself and so must follow the normal rtl rules (GT signalling,
UNLE quiet).
I think the safest fix is to make REVERSIBLE_CC_MODE return false
for FP comparisons. We can then use the default REVERSE_CONDITION
for integer comparisons and the usual conservatively-correct
reversed_comparison_code_parts behaviour for FP comparisons.
Unfortunately reversed_comparison_code_parts doesn't yet handle
-ffinite-math-only, but that's probably GCC 11 material.
A downside is that:
int f (float x, float y) { return !(x < y); }
now generates:
fcmpe s0, s1
cset w0, mi
eor w0, w0, 1
ret
without -ffinite-math-only. Maybe for GCC 11 we should define rtx
codes for all IEEE comparisons, so that we don't have this kind of
representational gap.
Changing REVERSE_CONDITION itself is pretty easy. However, the macro
was also used in the ccmp handling, which relied on being able to
reverse all comparisons. The patch adds new reversed patterns for
cases in which the original condition needs to be kept.
The test is based on gcc.dg/torture/pr91323.c. It might well fail
on other targets that have similar bugs; please XFAIL as appropriate
if you don't want to fix the target for GCC 10.
2020-01-17 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* config/aarch64/aarch64.h (REVERSIBLE_CC_MODE): Return false
for FP modes.
(REVERSE_CONDITION): Delete.
* config/aarch64/iterators.md (CC_ONLY): New mode iterator.
(CCFP_CCFPE): Likewise.
(e): New mode attribute.
* config/aarch64/aarch64.md (ccmp<GPI:mode>): Rename to...
(@ccmp<CC_ONLY:mode><GPI:mode>): ...this, using CC_ONLY instead of CC.
(fccmp<GPF:mode>, fccmpe<GPF:mode>): Merge into...
(@ccmp<CCFP_CCFPE:mode><GPF:mode>): ...this combined pattern.
(@ccmp<CC_ONLY:mode><GPI:mode>_rev): New pattern.
(@ccmp<CCFP_CCFPE:mode><GPF:mode>_rev): Likewise.
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Update
name of generator from gen_ccmpdi to gen_ccmpccdi.
(aarch64_gen_ccmp_next): Use code_for_ccmp. If we want to reverse
the previous comparison but aren't able to, use the new ccmp_rev
patterns instead.
If a TARGET_EXPR has a poly-int size, the gimplifier would treat it
like a VLA and use gimplify_vla_decl. gimplify_vla_decl in turn
would use an alloca and expect all references to be gimplified
via the DECL_VALUE_EXPR. This caused confusion later in
gimplify_var_or_parm_decl_1 when we (correctly) had direct rather
than indirect references.
For completeness, the patch also fixes similar tests in the RETURN_EXPR
handling and OpenMP depend clauses.
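The shape of the predicate change, as a sketch (not the exact diff):

  /* Before: any size that is not an INTEGER_CST took the VLA path.  */
  if (TREE_CODE (DECL_SIZE (temp)) != INTEGER_CST)
    gimplify_vla_decl (temp, pre_p);

  /* After: poly-int sizes are still compile-time constants.  */
  if (!poly_int_tree_p (DECL_SIZE (temp)))
    gimplify_vla_decl (temp, pre_p);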
2020-01-17 Richard Sandiford <richard.sandiford@arm.com>
gcc/
* gimplify.c (gimplify_return_expr): Use poly_int_tree_p rather
than testing directly for INTEGER_CST.
(gimplify_target_expr, gimplify_omp_depend): Likewise.
gcc/testsuite/
* g++.target/aarch64/sve/acle/general-c++/gimplify_1.C: New test.
The use of -fno-automatic should not affect the save attribute of a
recursive procedure. The first test case checks unsaved variables
and the second checks saved variables.
The following testcase ICEs on powerpc64le-linux. The problem is that
get_vectype_for_scalar_type returns NULL, and while most places in
tree-vect-stmts.c handle that case, this spot doesn't: it punts only
if the result is non-NULL but has a different number of elts than
expected.
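The fix is essentially a strengthening of the existing punt, roughly
(a sketch with simplified variable names):

  vectype = get_vectype_for_scalar_type (TREE_TYPE (rhs1));
  /* Punt if no vector type exists at all, not just on a mismatch in
     the number of elements.  */
  if (!vectype
      || maybe_ne (TYPE_VECTOR_SUBPARTS (vectype), nunits))
    return false;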
2020-01-17 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/93292
* tree-vect-stmts.c (vectorizable_comparison): Punt also if
get_vectype_for_scalar_type returns NULL.
* g++.dg/opt/pr93292.C: New test.
Really old git versions (like 1.6.0) require
"git log --pretty=tformat:%p:%t:%H"
or else we see:
Updating GIT tree
Current branch master is up to date.
fatal: invalid --pretty format: %p:%t:%H
Adjusting file timestamps
Touching gcc/config.in...
Touching gcc/config/arm/arm-tune.md...
...and an empty revision in LAST_UPDATED and gcc/REVISION.
In its absence, for newer git versions, "tformat" is the default
qualifier, documented as the default for at least git-2.11.0.
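For example:

  # Newer git treats a bare format string as tformat by default:
  git log -n1 --pretty=%p:%t:%H
  # git 1.6.0 needs the qualifier spelled out:
  git log -n1 --pretty=tformat:%p:%t:%H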
Here we had been recursing in tsubst_copy_and_build if type2 was a TREE_LIST
because that function knew how to deal with pack expansions, and tsubst
didn't. But tsubst_copy_and_build expects to be dealing with expressions,
so we crash when trying to convert_from_reference a type.
* pt.c (tsubst) [TREE_LIST]: Handle pack expansion.
(tsubst_copy_and_build) [TRAIT_EXPR]: Always use tsubst for type2.
This patch fixes an ICE on invalid code, specifically files that have
conflict-marker-like signs before EOF.
PR c/92833
gcc/c/
* c-parser.c (c_parser_consume_token): Fix peeked token stack pop
to support 4 available tokens.
gcc/testsuite/
* c-c++-common/pr92833-1.c, c-c++-common/pr92833-2.c,
c-c++-common/pr92833-3.c, c-c++-common/pr92833-4.c: New tests.
While analyzing a code size regression in the SPEC2k GCC binary I noticed
that we make some inline decisions because we think the number of
executions is very high.
In particular there was a decision to inline gen_rtx_fmt_ee into
find_reloads in the belief that it is called 4 billion times. This turned
out to be an accumulation of round-off errors in propagate_freq, which
was somewhat mechanically updated from the original sreals to C++ sreals
and later to the new probabilities.
This led us to estimate that a loopback edge is reached with probability
2.3, which was capped to 1-1/10000, and since this happened in a nested
loop it quickly escalated to large values.
Originally, capping to REG_BR_PROB_BASE avoided such problems, but now we
have a much higher range.
This patch avoids going from probabilities to REG_BR_PROB_BASE so
precision is kept. In addition it makes the propagation estimate no more
than the max-predicted-iterations param allows. The first change makes
the cap no longer trigger on the GCC build, but it is still better to be
safe than sorry.
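The capping logic is roughly as follows (a sketch using GCC's sreal
class; the param accessor name is illustrative):

  /* Expected iterations = 1 / (1 - cyclic_prob), so keep cyclic_prob
     below the value corresponding to the parameter's iteration cap.  */
  sreal max_cyclic_prob
    = (sreal) 1 - (sreal) 1 / (sreal) param_max_predicted_iterations;
  if (cyclic_prob > max_cyclic_prob)
    cyclic_prob = max_cyclic_prob;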
* ipa-fnsummary.c (estimate_calls_size_and_time): Fix formatting of
dump.
* params.opt (max-predicted-iterations): Set bounds.
* predict.c (real_almost_one, real_br_prob_base,
real_inv_br_prob_base, real_one_half, real_bb_freq_max): Remove.
(propagate_freq): Add max_cyclic_prob parameter; cap cyclic
probabilities; do not truncate to reg_br_prob_bases.
(estimate_loops_at_level): Pass max_cyclic_prob.
(estimate_loops): Compute max_cyclic_prob.
(estimate_bb_frequencies): Do not initialize real_*; update calculation
of back edge prob.
* profile-count.c (profile_probability::to_sreal): New.
* profile-count.h (class sreal): Move up in file.
(profile_probability::to_sreal): Declare.
I recently added an assert to cp-gimplify to catch any
TARGET_EXPR_DIRECT_INIT_P being expanded without a target object, and this
testcase found one. We started out with a TARGET_EXPR around the
CONSTRUCTOR, which would normally mean that the member initializer would be
used to directly initialize the appropriate member of whatever object the
TARGET_EXPR ends up initializing. But then gimplify_modify_expr_rhs
stripped the TARGET_EXPR in order to assign directly from the elements of
the CONSTRUCTOR, leaving no object for the TARGET_EXPR_DIRECT_INIT_P to
initialize. I considered setting CONSTRUCTOR_PLACEHOLDER_BOUNDARY in that
case, which implies TARGET_EXPR_NO_ELIDE, but decided that there's no
particular reason the A initializer needs to initialize a member of a B
rather than a distinct A object, so let's only set TARGET_EXPR_DIRECT_INIT_P
when we're using the DMI in a constructor.
* init.c (get_nsdmi): Set TARGET_EXPR_DIRECT_INIT_P here.
* typeck2.c (digest_nsdmi_init): Not here.
This removes support for EOL versions of NetBSD and syncs the
definitions with patches from NetBSD upstream.
The only change here that isn't from upstream is to use _CTYPE_BL for
the isblank class, which is correct but wasn't previously done either in
FSF GCC or the NetBSD packages.
2020-01-16 Kai-Uwe Eckhardt <kuehro@gmx.de>
Matthew Bauer <mjbauer95@gmail.com>
Jonathan Wakely <jwakely@redhat.com>
PR bootstrap/64271 (partial)
* config/os/bsd/netbsd/ctype_base.h (ctype_base::mask): Change type
to unsigned short.
(ctype_base::alpha, ctype_base::digit, ctype_base::xdigit)
(ctype_base::print, ctype_base::graph, ctype_base::alnum): Sync
definitions with NetBSD upstream.
(ctype_base::blank): Use _CTYPE_BL.
* config/os/bsd/netbsd/ctype_configure_char.cc (_C_ctype_): Remove
declaration.
(ctype<char>::classic_table): Use _C_ctype_tab_ instead of _C_ctype_.
(ctype<char>::do_toupper, ctype<char>::do_tolower): Cast char
parameters to unsigned char.
* config/os/bsd/netbsd/ctype_inline.h (ctype<char>::is): Likewise.
gcc/ChangeLog:
2020-01-16 Stam Markianos-Wright <stam.markianos-wright@arm.com>
* config/arm/arm.c
(arm_invalid_conversion): New function for target hook.
(arm_invalid_unary_op): New function for target hook.
(arm_invalid_binary_op): New function for target hook.
gcc/testsuite/ChangeLog:
2020-01-16 Stam Markianos-Wright <stam.markianos-wright@arm.com>
* g++.target/arm/bfloat_cpp_typecheck.C: New test.
* gcc.target/arm/bfloat16_scalar_typecheck.c: New test.
* gcc.target/arm/bfloat16_vector_typecheck_1.c: New test.
* gcc.target/arm/bfloat16_vector_typecheck_2.c: New test.
The patch is straightforward: it redefines ARMv8_1m_main as having the
same features as ARMv8m_main (and thus as having the cmse feature) with
the extra features represented by armv8_1m_main. It also removes the
error for using -mcmse on Armv8.1-M Mainline.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm-cpus.in (ARMv8_1m_main): Redefine as an extension to
Armv8-M Mainline.
* config/arm/arm.c (arm_options_perform_arch_sanity_checks): Remove
error for using -mcmse when targeting Armv8.1-M Mainline.
This change to use BLXNS to call a nonsecure function from secure
directly (not using a libcall) is made in 2 steps:
- change nonsecure_call patterns to use blxns instead of calling
__gnu_cmse_nonsecure_call
- loosen requirement for function address to allow any register when
doing BLXNS.
The former is a straightforward check of whether instructions added in
Armv8.1-M Mainline are available, while the latter consists in making the
nonsecure call pattern accept any register by using match_operand and
changing the nonsecure_call_internal expander to not force r4 when
targeting Armv8.1-M Mainline.
The tricky bit is actually in the test update, specifically how to check
that register lists for CLRM have all registers except for the one
holding parameters (already done) and the one holding the address used
by BLXNS. This is achieved with 3 scan-assembler directives.
1) The first one lists all registers that can appear in CLRM but makes
each of them optional.
Property guaranteed: no wrong register is cleared and none appears
twice in the register list.
2) The second directive checks that the CLRM is made of a fixed number
of the right registers to be cleared. The number used is the number
of registers that could contain a secret minus one (the one used to
hold the address of the function to call).
Property guaranteed: the register list has the right number of registers.
Cumulated property guaranteed: only registers with a potential secret
are cleared and they are all listed but one.
3) The last directive checks that we cannot find a CLRM with a register
in it that also appears in BLXNS. This is checked via the use of a
back-reference on any of the allowed registers in CLRM, the
back-reference enforcing that whatever register matches in CLRM must be
the same in the BLXNS.
Property guaranteed: the register used for BLXNS is different from the
registers cleared in CLRM.
Some more care needs to happen for the gcc.target/arm/cmse/cmse-1.c
testcase, due to there being two CLRMs generated. To ensure the third
directive matches the right CLRM to the BLXNS, a negative lookahead is
used between the CLRM register list and the BLXNS. The way negative
lookahead works is by matching the *position* where a given regular
expression does not match. In this case, since it comes after the CLRM
register list, it requests that what comes after the register list does
not contain another CLRM followed by BLXNS. This guarantees that the
.*blxns that follows only matches a blxns without another CLRM before it.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm.md (nonsecure_call_internal): Do not force memory
address in r4 when targeting Armv8.1-M Mainline.
(nonsecure_call_value_internal): Likewise.
* config/arm/thumb2.md (nonsecure_call_reg_thumb2): Make memory address
a register match_operand again. Emit BLXNS when targeting
Armv8.1-M Mainline.
(nonsecure_call_value_reg_thumb2): Likewise.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/cmse-1.c: Add check for BLXNS when instructions
introduced in Armv8.1-M Mainline Security Extensions are available and
restrict checks for libcall to __gnu_cmse_nonsecure_call to Armv8-M
targets only. Adapt CLRM check to verify register used for BLXNS is
not in the CLRM register list.
* gcc.target/arm/cmse/cmse-14.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-4.c: Likewise and adapt
check for LSB clearing bit to be using the same register as BLXNS when
targeting Armv8.1-M Mainline.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-6.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-9.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-and-union.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/union-1.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/union-2.c: Likewise.
* gcc.target/arm/cmse/cmse-15.c: Count BLXNS when targeting Armv8.1-M
Mainline and restrict libcall count to Armv8-M.
This patch adds two new patterns for the VLSTM and VLLDM instructions.
cmse_nonsecure_call_inline_register_clear is then modified to
generate VLSTM and VLLDM respectively before and after calls to
functions with the cmse_nonsecure_call attribute in order to have lazy
saving, clearing and restoring of VFP registers. Since these
instructions do not do writeback of the base register, the stack is adjusted
prior to the lazy store and after the lazy load, with appropriate frame
debug notes to describe the effect on the CFA register.
As with CLRM, VSCCLRM and VSTR/VLDR, the instruction is modeled as an
unspecified operation to the memory pointed to by the base register.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm.c (arm_add_cfa_adjust_cfa_note): Declare early.
(cmse_nonsecure_call_inline_register_clear): Define new lazy_fpclear
variable as true when floating-point ABI is not hard. Replace
check against TARGET_HARD_FLOAT_ABI by checks against lazy_fpclear.
Generate VLSTM and VLLDM instructions respectively before and
after a function call to a cmse_nonsecure_call function.
* config/arm/unspecs.md (VUNSPEC_VLSTM): Define unspec.
(VUNSPEC_VLLDM): Likewise.
* config/arm/vfp.md (lazy_store_multiple_insn): New define_insn.
(lazy_load_multiple_insn): Likewise.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-13.c: Add check for VLSTM and
VLLDM.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-8.c: Likewise.
The patch is fairly straightforward in its approach and consists of the
following 3 logical changes:
- abstract the number of floating-point registers to clear into
max_fp_regno
- use max_fp_regno to decide how many registers to clear so that the
same code works for Armv8-M and Armv8.1-M Mainline
- emit vpush and vpop instructions respectively before and after a
nonsecure call
Note that, as in the patch to clear GPRs inline, debug information has to
be disabled for VPUSH and VPOP, due to VPOP adding a CFA adjustment note
for SP when R7 is sometimes used as the CFA.
ChangeLog entries are as follows:
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm.c (vfp_emit_fstmd): Declare early.
(arm_emit_vfp_multi_reg_pop): Likewise.
(cmse_nonsecure_call_inline_register_clear): Abstract number of VFP
registers to clear in max_fp_regno. Emit VPUSH and VPOP to save and
restore callee-saved VFP registers.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: Add check for
VPUSH and VPOP and update expectation for VSCCLRM.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: Likewise.
Besides changing the set of registers that need to be cleared inline,
this patch also generates the push and pop to save and restore
callee-saved registers without trusting the callee inline. To make the
code more future-proof, this (currently) Armv8.1-M specific behavior is
expressed in terms of clearing of callee-saved registers rather than
directly based on the targets.
The patch contains one subtlety:
Debug information is disabled for push and pop because the
REG_CFA_RESTORE notes used to describe popping of registers do not stack.
Instead, they just reset the debug state for the register to the one at
the beginning of the function, which is incorrect for a register that is
pushed twice (in prologue and before nonsecure call) and then popped for
the first time. In particular, this occasionally trips CFI note creation
code when there are two codepaths to the epilogue, one of which does not
go through the nonsecure call. Obviously this means that debugging
between the push and pop is not reliable.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm.c (arm_emit_multi_reg_pop): Declare early.
(cmse_nonsecure_call_clear_caller_saved): Rename into ...
(cmse_nonsecure_call_inline_register_clear): This. Save and clear
callee-saved GPRs as well as clear ip register before doing a nonsecure
call then restore callee-saved GPRs after it when targeting
Armv8.1-M Mainline.
(arm_reorg): Adapt to function rename.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/cmse-1.c: Add check for PUSH and POP and update
CLRM check.
* gcc.target/arm/cmse/cmse-14.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-4.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-6.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-9.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-and-union.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/union-1.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/union-2.c: Likewise.
This patch adds a new pattern for the VSCCLRM instruction.
cmse_clear_registers () is then modified to use the new VSCCLRM
instruction when targeting Armv8.1-M Mainline, thus making the Armv8-M
register clearing code specific to Armv8-M.
Since the VSCCLRM instruction mandates VPR in the register list, the
pattern is encoded with a parallel which only requires an unspecified
VUNSPEC_VSCCLRM_VPR constant modelling the VPR clearing. Other
expressions in the parallel are expected to be set expressions for
clearing the VFP registers.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm-protos.h (clear_operation_p): Adapt prototype.
* config/arm/arm.c (clear_operation_p): Extend to be able to check a
clear_vfp_multiple pattern based on a new vfp parameter.
(cmse_clear_registers): Generate VSCCLRM to clear VFP registers when
targeting Armv8.1-M Mainline.
(cmse_nonsecure_entry_clear_before_return): Clear VFP registers
unconditionally when targeting Armv8.1-M Mainline architecture. Check
whether VFP registers are available before looking at call_used_regs for a
VFP register.
* config/arm/predicates.md (clear_multiple_operation): Adapt to change
of prototype of clear_operation_p.
(clear_vfp_multiple_operation): New predicate.
* config/arm/unspecs.md (VUNSPEC_VSCCLRM_VPR): New volatile unspec.
* config/arm/vfp.md (clear_vfp_multiple): New define_insn.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/bitfield-1.c: Add check for VSCCLRM.
* gcc.target/arm/cmse/bitfield-2.c: Likewise.
* gcc.target/arm/cmse/bitfield-3.c: Likewise.
* gcc.target/arm/cmse/cmse-1.c: Likewise.
* gcc.target/arm/cmse/struct-1.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-5.c: Likewise.
This patch adds a new pattern for the CLRM instruction and guards the
current clearing code in output_return_instruction() and thumb_exit()
on Armv8.1-M Mainline instructions not being present.
cmse_clear_registers () is then modified to use the new CLRM instruction
when targeting Armv8.1-M Mainline while keeping Armv8-M register
clearing code for VFP registers.
For the CLRM instruction, which does not mandate APSR in the register
list, checking whether it is the right volatile unspec or a clearing
register is done in clear_operation_p.
Note that load/store multiple patterns were deemed sufficiently different
in terms of RTX structure from the CLRM pattern that a different function
is used to validate the match_parallel.
ChangeLog entries are as follows:
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm-protos.h (clear_operation_p): Declare.
* config/arm/arm.c (clear_operation_p): New function.
(cmse_clear_registers): Generate clear_multiple instruction pattern if
targeting Armv8.1-M Mainline or successor.
(output_return_instruction): Only output APSR register clearing if
Armv8.1-M Mainline instructions not available.
(thumb_exit): Likewise.
* config/arm/predicates.md (clear_multiple_operation): New predicate.
* config/arm/thumb2.md (clear_apsr): New define_insn.
(clear_multiple): Likewise.
* config/arm/unspecs.md (VUNSPEC_CLRM_APSR): New volatile unspec.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/bitfield-1.c: Add check for CLRM.
* gcc.target/arm/cmse/bitfield-2.c: Likewise.
* gcc.target/arm/cmse/bitfield-3.c: Likewise.
* gcc.target/arm/cmse/struct-1.c: Likewise.
* gcc.target/arm/cmse/cmse-14.c: Likewise.
* gcc.target/arm/cmse/cmse-1.c: Likewise. Restrict checks for Armv8-M
GPR clearing when CLRM is not available.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-4.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-6.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-9.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-13.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-5.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-7.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-8.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/union-1.c: Likewise.
* gcc.target/arm/cmse/mainline/8_1m/union-2.c: Likewise.
This patch consists mainly of creating 2 new instruction patterns to
push and pop special FP registers via vstr and vldr, and of using them in
the prologue and epilogue. The patterns are defined as push/pop with an
unspecified operation on the memory accessed, with an unspecified
constant indicating what special FP register is being saved/restored.
Other aspects of the patch include:
* defining the set of special registers that can be saved/restored and
their names
* reserving space in the stack frames for these pushes/pops
* preventing return via pop
* restricting the clearing of FPSCR to target architectures that do not
have Armv8.1-M Mainline instructions.
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm.c (fp_sysreg_names): Declare and define.
(use_return_insn): Also return false for Armv8.1-M Mainline.
(output_return_instruction): Skip FPSCR clearing if Armv8.1-M
Mainline instructions are available.
(arm_compute_frame_layout): Allocate space in frame for FPCXTNS
when targeting Armv8.1-M Mainline Security Extensions.
(arm_expand_prologue): Save FPCXTNS if this is an Armv8.1-M
Mainline entry function.
(cmse_nonsecure_entry_clear_before_return): Clear IP and r4 if
targeting Armv8.1-M Mainline or successor.
(arm_expand_epilogue): Fix indentation of caller-saved register
clearing. Restore FPCXTNS if this is an Armv8.1-M Mainline
entry function.
* config/arm/arm.h (TARGET_HAVE_FP_CMSE): New macro.
(FP_SYSREGS): Likewise.
(enum vfp_sysregs_encoding): Define enum.
(fp_sysreg_names): Declare.
* config/arm/unspecs.md (VUNSPEC_VSTR_VLDR): New volatile unspec.
* config/arm/vfp.md (push_fpsysreg_insn): New define_insn.
(pop_fpsysreg_insn): Likewise.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* gcc.target/arm/cmse/bitfield-1.c: Add checks for VSTR and VLDR.
* gcc.target/arm/cmse/bitfield-2.c: Likewise.
* gcc.target/arm/cmse/bitfield-3.c: Likewise.
* gcc.target/arm/cmse/cmse-1.c: Likewise.
* gcc.target/arm/cmse/struct-1.c: Likewise.
* gcc.target/arm/cmse/cmse.exp: Run existing Armv8-M Mainline tests
from mainline/8m subdirectory and new Armv8.1-M Mainline tests from
mainline/8_1m subdirectory.
* gcc.target/arm/cmse/mainline/bitfield-4.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-4.c: This.
* gcc.target/arm/cmse/mainline/bitfield-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-5.c: This.
* gcc.target/arm/cmse/mainline/bitfield-6.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-6.c: This.
* gcc.target/arm/cmse/mainline/bitfield-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-7.c: This.
* gcc.target/arm/cmse/mainline/bitfield-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-8.c: This.
* gcc.target/arm/cmse/mainline/bitfield-9.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-9.c: This.
* gcc.target/arm/cmse/mainline/bitfield-and-union-1.c: Move and rename
into ...
* gcc.target/arm/cmse/mainline/8m/bitfield-and-union.c: This.
* gcc.target/arm/cmse/mainline/hard-sp/cmse-13.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard-sp/cmse-13.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard-sp/cmse-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard-sp/cmse-5.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard-sp/cmse-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard-sp/cmse-7.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard-sp/cmse-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard-sp/cmse-8.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard/cmse-13.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard/cmse-13.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard/cmse-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard/cmse-5.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard/cmse-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard/cmse-7.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/hard/cmse-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/hard/cmse-8.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/soft/cmse-13.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/soft/cmse-13.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/soft/cmse-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/soft/cmse-5.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/soft/cmse-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/soft/cmse-7.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/soft/cmse-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/soft/cmse-8.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp-sp/cmse-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp-sp/cmse-5.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp-sp/cmse-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp-sp/cmse-7.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp-sp/cmse-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp-sp/cmse-8.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp/cmse-13.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp/cmse-13.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp/cmse-5.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp/cmse-5.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp/cmse-7.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp/cmse-7.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/softfp/cmse-8.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/softfp/cmse-8.c: This. Clean up
dg-skip-if directive for float ABI.
* gcc.target/arm/cmse/mainline/union-1.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/union-1.c: This.
* gcc.target/arm/cmse/mainline/union-2.c: Move into ...
* gcc.target/arm/cmse/mainline/8m/union-2.c: This.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-4.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-6.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-9.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/bitfield-and-union.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-13.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard-sp/cmse-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-13.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/hard/cmse-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-13.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/soft/cmse-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp-sp/cmse-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-13.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-5.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-7.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/softfp/cmse-8.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/union-1.c: New file.
* gcc.target/arm/cmse/mainline/8_1m/union-2.c: New file.
* lib/target-supports.exp (check_effective_target_arm_cmse_clear_ok):
New procedure.
Besides the expected enabling of the new value for the -march
command-line option (-march=armv8.1-m.main) and its extensions (see
below), this patch disables support of the Security Extensions for this
newly added architecture. This is done both by not including the cmse
bit in the architecture description and by emitting an error message
when the user requests Armv8.1-M Mainline Security Extensions. Note that
Armv8-M Baseline and Mainline Security Extensions are still enabled.
Only extensions for already supported instructions are implemented in
this patch. Other extensions (MVE integer and float) will be added in
separate patches. The following configurations are allowed for Armv8.1-M
Mainline with regard to the FPU and are implemented in this patch:
+ no FPU (+nofp)
+ single precision VFPv5 with FP16 (+fp)
+ double precision VFPv5 with FP16 (+fp.dp)
ChangeLog entries are as follows:
*** gcc/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/arm-cpus.in (armv8_1m_main): New feature.
(ARMv4, ARMv4t, ARMv5t, ARMv5te, ARMv5tej, ARMv6, ARMv6j, ARMv6k,
ARMv6z, ARMv6kz, ARMv6zk, ARMv6t2, ARMv6m, ARMv7, ARMv7a, ARMv7ve,
ARMv7r, ARMv7m, ARMv7em, ARMv8a, ARMv8_1a, ARMv8_2a, ARMv8_3a,
ARMv8_4a, ARMv8_5a, ARMv8m_base, ARMv8m_main, ARMv8r): Reindent.
(ARMv8_1m_main): New feature group.
(armv8.1-m.main): New architecture.
* config/arm/arm-tables.opt: Regenerate.
* config/arm/arm.c (arm_arch8_1m_main): Define and default initialize.
(arm_option_reconfigure_globals): Initialize arm_arch8_1m_main.
(arm_options_perform_arch_sanity_checks): Error out when targeting
Armv8.1-M Mainline Security Extensions.
* config/arm/arm.h (arm_arch8_1m_main): Declare.
*** gcc/testsuite/ChangeLog ***
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* lib/target-supports.exp
(check_effective_target_arm_arch_v8_1m_main_ok): Define.
(add_options_for_arm_arch_v8_1m_main): Likewise.
(check_effective_target_arm_arch_v8_1m_main_multilib): Likewise.
This patch is part of a patch series to add support for the Armv8.1-M
Mainline Security Extensions architecture.
The code to detect whether cmse.c can be built with -mcmse checks the
output of the host GCC when invoked with -mcmse. However, an error from
the compiler does not prevent some minimal output, so this check always
passed.
2020-01-16 Mihail-Calin Ionescu <mihail.ionescu@arm.com>
2020-01-16 Thomas Preud'homme <thomas.preudhomme@arm.com>
* config/arm/t-arm: Check return value of gcc rather than lack of
output.
Avoid comparing elements with operator== multiple times by replacing
uses of find and equal_range with equivalent inlined code that uses
operator== instead of the container's equality comparison predicate.
This is valid because the standard requires that operator== is a
refinement of the equality predicate.
Also replace the _S_is_permutation function with std::is_permutation,
which wasn't yet implemented when this code was first written.
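The idea in standalone form (a sketch, not libstdc++'s internal code):

  #include <algorithm>
  #include <vector>

  // Equality for unordered multiset-like containers may compare the
  // elements with operator== directly, because the standard requires
  // the container's equality predicate to be a refinement of it.
  bool
  multiset_like_equal (const std::vector<int> &a,
                       const std::vector<int> &b)
  {
    return a.size () == b.size ()
           && std::is_permutation (a.begin (), a.end (), b.begin ());
  }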
PR libstdc++/91263
* include/bits/hashtable.h (_Hashtable<>): Make _Equality<> friend.
* include/bits/hashtable_policy.h: Include <bits/stl_algo.h>.
(_Equality_base): Remove.
(_Equality<>::_M_equal): Review implementation. Use
std::is_permutation.
* testsuite/23_containers/unordered_multiset/operators/1.cc
(Hash, Equal, test02, test03): New.
* testsuite/23_containers/unordered_set/operators/1.cc
(Hash, Equal, test02, test03): New.
As discussed on IRC, this adds a couple more checks in the
customization setup for git. If the variables user.name and
user.email are not set anywhere in the git config hierarchy, we set
some local values. We always ask about the values we detect and if
the user gives an answer that is new, we save that in the local
config: this gives the opportunity to use different values from those
configured for the global space.
Also cleaned up a couple of minor niggles, such as using $(cmd) rather
than `cmd` for subshells and some quoting issues when using eval.
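The check is of this general shape (a sketch; $answer stands for the
value the user confirmed at the prompt):

  # Seed a local value only if nothing is set anywhere in the
  # git config hierarchy.
  if [ -z "$(git config user.name)" ]; then
      git config user.name "$answer"
  fi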
* gcc-git-customization.sh: Check that user.name and user.email
are set. Use $(cmd) instead of `cmd`. Fix variable quoting when
using eval.
While working on the bit-field lowering pass, I came across this bug.
The IR looks like:
VIEW_CONVERT_EXPR<unsigned long>(var1) = _12;
_1 = BIT_FIELD_REF <var1, 64, 0>;
where the BIT_FIELD_REF has REF_REVERSE_STORAGE_ORDER set on it
and var1's type has TYPE_REVERSE_STORAGE_ORDER set on it.
PRE/FRE would decide to propagate _12 into the BIT_FIELD_REF statement,
which would produce wrong code.
And yes, _12 already has the correct byte order; bit-field lowering
removes the implicit byte swaps in the IR and adds them explicitly,
to make it easier to optimize later on.
This patch adds a check for storage_order_barrier_p on the lhs tree,
which returns true in the case where we have a reverse order with a VCE.
ChangeLog:
* tree-ssa-sccvn.c (vn_reference_lookup_3): Check lhs for
!storage_order_barrier_p.
In struct _dep, there is an implicit padding of 4 bits. This
bit-field padding is uninitialized when init_dep_1 is called,
which means we access uninitialized memory but never use it for
anything. Adding an unused bit-field and initializing it
in init_dep_1 also improves code generation, as we now initialize
the whole 32 bits rather than just part of it.
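Illustration of the general issue (hypothetical field widths):

  struct dep_like
  {
    unsigned int real_fields : 28; /* the fields actually used */
    unsigned int unused : 4;       /* explicit padding: writing this in
                                      the init function initializes the
                                      whole 32-bit word  */
  };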
ChangeLog:
* sched-int.h (_dep): Add unused bit-field field for the padding.
* sched-deps.c (init_dep_1): Init unused field.
Commit g:f96bf49a0 added the target field to expand_operand,
but left it uninitialized when doing a full initialization
inside create_expand_operand. This fixes the problem and improves
the code generation inside create_expand_operand too.
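The shape of the fix, roughly (a sketch; the real function in optabs.h
has a longer parameter list):

  static inline void
  create_expand_operand (class expand_operand *op,
                         enum expand_operand_type type,
                         rtx value, machine_mode mode, bool unsigned_p)
  {
    op->type = type;
    op->unsigned_p = unsigned_p;
    op->unchanging_p = false;
    op->virtual_p = false;
    op->target = 0;  /* previously left uninitialized */
    op->mode = mode;
    op->value = value;
  }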
ChangeLog:
* optabs.h (create_expand_operand): Initialize target field also.