Fix libatomic build to support --disable-gnu-indirect-function on AArch64.
Always build atomic_16.S, and add aliases for the __atomic_ functions if !HAVE_IFUNC.
Include auto-config.h in atomic_16.S to avoid having to pass defines via
makefiles. Fix build if HWCAP_ATOMICS/CPUID are not defined.
libatomic:
PR target/113986
* Makefile.in: Regenerated.
* Makefile.am: Make atomic_16.S not depend on HAVE_IFUNC.
Remove predefine of HAVE_FEAT_LSE128.
* acinclude.m4: Remove ARCH_AARCH64_HAVE_LSE128.
* configure: Regenerated.
* config/linux/aarch64/atomic_16.S: Add __atomic_ alias if !HAVE_IFUNC.
* config/linux/aarch64/host-config.h: Correctly handle !HAVE_IFUNC.
Add defines for HWCAP_ATOMICS and HWCAP_CPUID.
The exception defines in <fenv.h> do not match the exception bits
in the FPU status register on hppa-linux and hppa64-hpux11.11. On
Linux, they match the trap enable bits. On 64-bit HP-UX, they match
the exception bits for IA64. The IA64 bits are in a different
order and location than on HPPA, so HP uses table lookups to reorder
the bits in code that tests and raises exceptions.
All the architectures that I looked at just pass the FPU status
register to __atomic_feraiseexcept(). The simplest approach for
hppa is to define FE_INEXACT, etc., to match the status register
and not include <fenv.h>.
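As a minimal sketch of the idea, with illustrative bit values (hppa's
real status-register encoding may differ):

/* Sketch only: define the FE_* macros to mirror the FPU status
   register's own bit layout (values here are illustrative), so the
   raw status word can be passed to __atomic_feraiseexcept with no
   bit reordering.  */
#define FE_INEXACT   0x01
#define FE_UNDERFLOW 0x02
#define FE_OVERFLOW  0x04
#define FE_DIVBYZERO 0x08
#define FE_INVALID   0x10
#define FE_ALL_EXCEPT \
  (FE_INEXACT | FE_UNDERFLOW | FE_OVERFLOW | FE_DIVBYZERO | FE_INVALID)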
2024-02-03 John David Anglin <danglin@gcc.gnu.org>
libatomic/ChangeLog:
PR target/59778
* configure.tgt (hppa*): Set ARCH.
* config/pa/fenv.c: New file.
At present, evaluating either `has_lse2(hwcap)' or
`has_lse128(hwcap)' may require issuing an `mrs' instruction to query
a system register. When issued from user space, this instruction
traps into the kernel, which then returns the value read from the
system register. Given the computational expense of the resulting
context switch, it is important to forgo the operation wherever
possible.
In light of this, and given that the architectural features serving
as prerequisites have long been assigned HWCAP bits by the kernel, we
can inexpensively query for their availability before attempting to
read any system registers. Where one of these early tests fails, we
can assert that the main feature of interest (be it LSE2 or LSE128)
cannot be present, allowing us to return from the function early and
skip the needlessly expensive kernel-mediated access to system
registers.
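A hedged sketch of the resulting early-exit shape (the real checks
differ in detail; `cpu_reports_lse2' is a hypothetical stand-in for
the mrs-based probe):

#include <stdbool.h>
#include <asm/hwcap.h>  /* HWCAP_ATOMICS, HWCAP_USCAT, HWCAP_CPUID */

/* Hypothetical stand-in for the trapped system-register read.  */
extern bool cpu_reports_lse2 (void);

static bool
has_lse2 (unsigned long hwcap)
{
  /* LSE is a prerequisite for LSE2, so a failed hwcap test lets us
     skip the mrs read outright.  */
  if (!(hwcap & HWCAP_ATOMICS))
    return false;
  /* The kernel may already advertise LSE2 directly.  */
  if (hwcap & HWCAP_USCAT)
    return true;
  /* Only pay for the kernel-mediated access as a last resort.  */
  if (!(hwcap & HWCAP_CPUID))
    return false;
  return cpu_reports_lse2 ();
}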
libatomic/ChangeLog:
* config/linux/aarch64/host-config.h (has_lse2): Add test for LSE.
(has_lse128): Add test for LSE2.
The Armv9.4-A architectural revision adds three new atomic operations
associated with the LSE128 feature:
* LDCLRP - Atomic AND NOT (bitclear) of a location with 128-bit
value held in a pair of registers, with original data loaded into
the same 2 registers.
* LDSETP - Atomic OR (bitset) of a location with 128-bit value held
in a pair of registers, with original data loaded into the same 2
registers.
* SWPP - Atomic swap of one 128-bit value with 128-bit value held
in a pair of registers.
It is worth noting that, in keeping with the existing 128-bit atomic
operations in `atomic_16.S', we have chosen to merge certain
less-restrictive orderings into more restrictive ones. This is done
to minimize the number of branches in the atomic functions, reducing
both the likelihood of branch mispredictions and, by keeping the code
small, the need for extra fetch cycles.
Past benchmarking has revealed that acquire is typically slightly
faster than release (5-10%), such that for the most frequently used
atomics (CAS and SWP) it makes sense to add support for acquire, as
well as release.
Likewise, it was identified that combining acquire and release typically
results in little to no penalty, such that there is negligible benefit
in distinguishing between release and acquire-release, making the
combined release/acq_rel/seq_cst mapping a worthwhile design choice.
This patch adds the logic required to make use of these instructions
when the architectural feature is present and a suitable assembler is
available.
In order to do this, the following changes are made:
1. Add a configure-time check for LSE128 support in the
assembler.
2. Edit host-config.h so that when N == 16, nifunc = 2.
3. Where available due to LSE128, implement the second ifunc, making
use of the novel instructions.
4. For atomic functions unable to make use of these new
instructions, define a new alias which causes the _i1 function
variant to point ahead to the corresponding _i2 implementation.
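A sketch of the host-config.h shape implied by items 2 and 4 above
(macro bodies are assumptions, not the committed code):

/* Assumed shape only.  With assembler support, 16-byte atomics get a
   second ifunc alternative; _i1 is the LSE128 variant and _i2 the
   LSE2 one, so functions without an LSE128 form alias _i1 to _i2.  */
#ifdef HAVE_FEAT_LSE128
# define IFUNC_COND_1  (has_lse128 (hwcap, features))
#endif
#define IFUNC_COND_2   (has_lse2 (hwcap, features))
#define IFUNC_NCOND(N) ((N) == 16 ? 2 : 1)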
libatomic/ChangeLog:
* Makefile.am (AM_CPPFLAGS): Add conditional setting of
-DHAVE_FEAT_LSE128.
* acinclude.m4 (LIBAT_TEST_FEAT_AARCH64_LSE128): New.
* config/linux/aarch64/atomic_16.S (LSE128): New macro
definition.
(libat_exchange_16): New LSE128 variant.
(libat_fetch_or_16): Likewise.
(libat_or_fetch_16): Likewise.
(libat_fetch_and_16): Likewise.
(libat_and_fetch_16): Likewise.
* config/linux/aarch64/host-config.h (IFUNC_COND_2): New.
(IFUNC_NCOND): Add operand size checking.
(has_lse2): Renamed from `ifunc1`.
(has_lse128): New.
(HWCAP2_LSE128): Likewise.
* configure.ac: Add call to
LIBAT_TEST_FEAT_AARCH64_LSE128.
* configure (ac_subst_vars): Regenerated via autoreconf.
* Makefile.in: Likewise.
* auto-config.h.in: Likewise.
With support for the new atomic features in Armv9.4-A being indicated by
HWCAP2 bits, libatomic's ifunc resolver must now query its second
argument, of type __ifunc_arg_t *.
We therefore make this argument known to libatomic, allowing us to
query hwcap2 bits in the following manner:
bool
resolver (unsigned long hwcap, const __ifunc_arg_t *features)
{
  return (features->hwcap2 & HWCAP2_<FEAT_NAME>);
}
libatomic/ChangeLog:
* config/linux/aarch64/host-config.h (__ifunc_arg_t):
Conditionally define if `sys/ifunc.h' is not found.
(_IFUNC_ARG_HWCAP): Likewise.
(IFUNC_COND_1): Pass __ifunc_arg_t argument to ifunc.
(ifunc1): Modify function signature to accept __ifunc_arg_t
argument.
* configure.tgt: Add second `const __ifunc_arg_t *features'
argument to IFUNC_RESOLVER_ARGS.
The introduction of further architectural-feature-dependent ifuncs
for AArch64 makes hard-coded ifunc `_i<n>' suffixes on functions
cumbersome to work with. It is awkward to remember which ifunc maps
onto which arch feature, and it makes the code harder to maintain when
new ifuncs are added and their suffixes possibly altered.
This patch uses pre-processor `#define' statements to map each suffix to
a descriptive feature name macro, for example:
#define LSE(NAME) NAME##_i1
Where we wish to generate ifunc names with the pre-processor's token
concatenation feature, we add a level of indirection to previous macro
calls. Where before we would have had `MACRO(<name>_i<n>)', we now have
`MACRO_FEAT(name, feature)'. Where we wish to refer to base
functionality (i.e., functions where ifunc suffixes are absent), the
original `MACRO(<name>)' may be used to bypass suffixing.
Consequently, for base functionality, where the ifunc suffix is
absent, the macro interface remains the same. For example, the entry
and endpoints of `libat_store_16' remain defined by:
ENTRY (libat_store_16)
and
END (libat_store_16)
For the LSE2 implementation of the same 16-byte atomic store, we now
have:
ENTRY_FEAT (libat_store_16, LSE2)
and
END_FEAT (libat_store_16, LSE2)
For the aliasing of function names, we define the following new
implementation of the ALIAS macro:
ALIAS (FN_BASE_NAME, FROM_SUFFIX, TO_SUFFIX)
Defining the `CORE(NAME)' macro to be the identity operator, returning
the base function name unaltered, allows us to alias
target-specific ifuncs to the corresponding base implementation.
For example, we'd alias the LSE2 `libat_exchange_16' to its base
implementation with:
ALIAS (libat_exchange_16, LSE2, CORE)
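Putting these together, a sketch of the indirection (definitions are
illustrative; the committed macros may differ in detail):

#define CORE(NAME)  NAME        /* identity: base implementation */
#define LSE2(NAME)  NAME##_i1   /* suffix for the LSE2 ifunc */

/* The ENTRY_FEAT1 level forces FEAT to be expanded as a macro call
   before token concatenation takes place.  */
#define ENTRY_FEAT(NAME, FEAT)   ENTRY_FEAT1 (FEAT, NAME)
#define ENTRY_FEAT1(FEAT, NAME)  ENTRY (FEAT (NAME))

/* ENTRY_FEAT (libat_store_16, LSE2) expands to ENTRY (libat_store_16_i1),
   while ENTRY_FEAT (libat_store_16, CORE) yields ENTRY (libat_store_16).  */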
libatomic/ChangeLog:
* config/linux/aarch64/atomic_16.S (CORE): New macro.
(LSE2): Likewise.
(ENTRY_FEAT): Likewise.
(ENTRY_FEAT1): Likewise.
(END_FEAT): Likewise.
(END_FEAT1): Likewise.
(ALIAS): Modify macro to take in `arch' arguments.
(ALIAS1): New.
Enable lock-free 128-bit atomics on AArch64. This is backwards compatible with
existing binaries (as for these GCC always calls into libatomic, so all 128-bit
atomic uses in a process are switched), gives better performance than locking
atomics and is what most users expect.
128-bit atomic loads use a load/store exclusive loop if LSE2 is not supported.
This results in an implicit store which is invisible to software as long as the
given address is writable (which will be true when using atomics in real code).
This doesn't yet change __atomic_is_lock_free even though all atomics are finally
lock-free on AArch64.
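As a runnable C analogue of the implicit store (illustrative only;
the actual implementation is the load/store exclusive loop in
atomic_16.S):

typedef unsigned __int128 u128;

/* Reading a 16-byte value by CASing the current contents over
   themselves mirrors the exclusive loop: the successful iteration
   performs a genuine, value-preserving store, which is why the
   location must be writable.  */
static u128
load16 (u128 *p)
{
  u128 v = 0;
  /* On failure, v is refreshed with the current contents.  */
  while (!__atomic_compare_exchange_n (p, &v, v, false,
                                       __ATOMIC_SEQ_CST,
                                       __ATOMIC_SEQ_CST))
    ;
  return v;
}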
libatomic:
* config/linux/aarch64/atomic_16.S: Implement lock-free ARMv8.0 atomics.
(libat_exchange_16): Merge RELEASE and ACQ_REL/SEQ_CST cases.
* config/linux/aarch64/host-config.h: Use atomic_16.S for baseline v8.0.
Add support for ifunc selection based on CPUID register. Neoverse N1 supports
atomic 128-bit load/store, so use the FEAT_USCAT ifunc like newer Neoverse
cores.
Reviewed-by: Kyrylo.Tkachov@arm.com
libatomic:
* config/linux/aarch64/host-config.h (ifunc1): Use CPUID in ifunc
selection.
The LSE2 ifunc for the 16-byte atomic load requires a barrier before the LDP -
without it, it effectively has Load-AcquirePC semantics similar to LDAPR,
which is less restrictive than what __ATOMIC_SEQ_CST requires. This patch
fixes that and adds comments to make it easier to see which sequence is
used for each case. Use a load/store exclusive loop for store to simplify
testing that the memory ordering is correct (it is slightly faster too).
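A conceptual C model of the fix (the committed code is assembly; the
fence placement is the point, not the exact instructions):

typedef unsigned __int128 u128;

/* Stands in for the bare LDP, which LSE2 makes single-copy atomic.  */
static inline u128
relaxed_ldp_16 (u128 *p)
{
  return __atomic_load_n (p, __ATOMIC_RELAXED);
}

static u128
load16_seq_cst (u128 *p)
{
  /* Without this leading barrier the sequence only has Load-AcquirePC
     semantics, which is weaker than SEQ_CST requires.  */
  __atomic_thread_fence (__ATOMIC_SEQ_CST);
  u128 v = relaxed_ldp_16 (p);
  __atomic_thread_fence (__ATOMIC_ACQUIRE);
  return v;
}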
libatomic/
PR libgcc/108891
* config/linux/aarch64/atomic_16.S: Fix libat_load_16_i1.
Add comments describing the memory order.
This is a follow-up to commit a4c6bd0821,
which introduced a runtime alignment check for 16-byte atomic
compare-exchange, load, and store.
libatomic/ChangeLog:
* config/s390/cas_n.c: New file.
* config/s390/load_n.c: New file.
* config/s390/store_n.c: New file.
Add support for AArch64 LSE and LSE2 to libatomic. Disable outline atomics,
and use LSE ifuncs for 1-8 byte atomics and LSE2 ifuncs for 16-byte atomics.
On Neoverse V1, 16-byte atomics are ~4x faster due to avoiding locks.
Note this is safe since we swap all 16-byte atomics using the same ifunc,
so they either use locks or LSE2 atomics, but never a mix. This also improves
ABI compatibility with LLVM: its inlined 16-byte atomics are compatible with
the new libatomic if LSE2 is supported.
libatomic/
* Makefile.in: Regenerated with automake 1.15.1.
* Makefile.am: Add atomic_16.S for AArch64.
* configure.tgt: Disable outline atomics in AArch64 build.
* config/linux/aarch64/atomic_16.S: New file - implementation of
ifuncs for 16-byte atomics.
* config/linux/aarch64/host-config.h: Enable ifuncs, use LSE
(HWCAP_ATOMICS) for 1-8-byte atomics and LSE2 (HWCAP_USCAT) for
16-byte atomics.
We got a response from AMD in
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104688#c10
so the following patch starts treating AMD CPUs with the AVX and CMPXCHG16B
ISAs like Intel ones, by using vmovdqa for atomic load/store in libatomic.
We still don't have confirmation from Zhaoxin and VIA (anything else
with CPUs featuring AVX and CX16?).
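A hedged sketch of the adjusted detection (shape assumed; the
constants come from GCC's cpuid.h):

#include <cpuid.h>
#include <stdbool.h>

/* Sketch only: after this change bit_AVX survives detection on both
   Intel and AMD CPUs; other vendors still have it cleared for now.  */
static unsigned int
feat1_detect (void)
{
  unsigned int eax, ebx, ecx, edx;
  __cpuid (0, eax, ebx, ecx, edx);
  bool avx_trusted
    = (ebx == signature_INTEL_ebx && ecx == signature_INTEL_ecx
       && edx == signature_INTEL_edx)
      || (ebx == signature_AMD_ebx && ecx == signature_AMD_ecx
          && edx == signature_AMD_edx);
  __cpuid (1, eax, ebx, ecx, edx);
  if (!avx_trusted)
    ecx &= ~bit_AVX;  /* fall back to the CMPXCHG16B-only ifunc */
  return ecx;
}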
2022-11-15 Jakub Jelinek <jakub@redhat.com>
PR target/104688
* config/x86/init.c (__libat_feat1_init): Don't clear
bit_AVX on AMD CPUs.
Similar to AArch64, the Arm implementation of 128-bit atomics is broken.
For 128-bit atomics we rely on pthread barriers to correctly guard the address
in the pointer to get correct memory ordering. However, for 128-bit atomics the
address under the lock is different from the original pointer.
This means that one of the values under the atomic operation is not protected
properly, and so we fail when the user has requested sequential
consistency, as there is no barrier to enforce this requirement.
As such users have resorted to adding an
#ifdef GCC
<emit barrier>
#endif
around the use of these atomics.
This corrects the issue by issuing a barrier only when __ATOMIC_SEQ_CST was
requested. I have hand-verified that the barriers are inserted
for atomic seq_cst.
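A sketch of the hooks named below (the exact definitions are
assumptions):

/* Assumed shape: emit a full barrier around the lock-protected region
   only when the caller asked for sequential consistency.  */
static inline void
pre_seq_barrier (int model)
{
  if (model == __ATOMIC_SEQ_CST)
    __atomic_thread_fence (__ATOMIC_SEQ_CST);
}

static inline void
post_seq_barrier (int model)
{
  pre_seq_barrier (model);  /* same barrier on the way out */
}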
libatomic/ChangeLog:
PR target/102218
* config/arm/host-config.h (pre_seq_barrier, post_seq_barrier,
pre_post_seq_barrier): Require barrier on __ATOMIC_SEQ_CST.
The AArch64 implementation of 128-bit atomics is broken.
For 128-bit atomics we rely on pthread barriers to correctly guard the address
in the pointer to get correct memory ordering. However, for 128-bit atomics the
address under the lock is different from the original pointer.
This means that one of the values under the atomic operation is not protected
properly, and so we fail when the user has requested sequential
consistency, as there is no barrier to enforce this requirement.
As such users have resorted to adding an
#ifdef GCC
<emit barrier>
#endif
around the use of these atomics.
This corrects the issue by issuing a barrier only when __ATOMIC_SEQ_CST was
requested. To remedy the performance hit, I think we should revisit using a
similar approach to outline atomics for the 128-bit atomics.
Note that I believe I need the empty file due to the include_next chain, but
I am not entirely sure. I have hand-verified that the barriers are inserted
for atomic seq_cst.
libatomic/ChangeLog:
PR target/102218
* config/aarch64/aarch64-config.h: New file.
* config/aarch64/host-config.h: New file.
As mentioned in the PR, the latest Intel SDM has added:
"Processors that enumerate support for Intel® AVX (by setting the feature flag CPUID.01H:ECX.AVX[bit 28])
guarantee that the 16-byte memory operations performed by the following instructions will always be
carried out atomically:
• MOVAPD, MOVAPS, and MOVDQA.
• VMOVAPD, VMOVAPS, and VMOVDQA when encoded with VEX.128.
• VMOVAPD, VMOVAPS, VMOVDQA32, and VMOVDQA64 when encoded with EVEX.128 and k0 (masking disabled).
(Note that these instructions require the linear addresses of their memory operands to be 16-byte
aligned.)"
The following patch deals with it just on the libatomic library side so far;
currently (since ~2017) we emit all the __atomic_* 16-byte builtins as
library calls, and this is something that we can hopefully backport.
The patch simply introduces yet another ifunc variant that takes priority
over the pure CMPXCHG16B one: it checks both the AVX and CMPXCHG16B bits and,
on non-Intel CPUs, clears the AVX bit during detection for now (if AMD comes
with the same guarantee, we could revert the config/x86/init.c hunk).
This variant implements 16-byte atomic load as vmovdqa and 16-byte atomic
store as vmovdqa followed by mfence.
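A sketch of those sequences using SSE intrinsics (when compiled with
-mavx these become vmovdqa and vmovdqa+mfence; the function names are
illustrative, not libatomic's):

#include <emmintrin.h>

/* Requires a 16-byte aligned address; per the SDM quote above, the
   AVX feature flag guarantees these 16-byte accesses are atomic.  */
static inline __m128i
load_16 (const __m128i *p)
{
  return _mm_load_si128 (p);  /* vmovdqa load */
}

static inline void
store_16 (__m128i *p, __m128i v)
{
  _mm_store_si128 (p, v);     /* vmovdqa store */
  _mm_mfence ();              /* seq_cst: make the store globally visible */
}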
2022-03-17 Jakub Jelinek <jakub@redhat.com>
PR target/104688
* Makefile.am (IFUNC_OPTIONS): Change on x86_64 to -mcx16 -mcx16.
(libatomic_la_LIBADD): Add $(addsuffix _16_2_.lo,$(SIZEOBJS)) for
x86_64.
* Makefile.in: Regenerated.
* config/x86/host-config.h (IFUNC_COND_1): For x86_64 define to
both AVX and CMPXCHG16B bits.
(IFUNC_COND_2): Define.
(IFUNC_NCOND): For x86_64 define to 2 * (N == 16).
(MAYBE_HAVE_ATOMIC_CAS_16, MAYBE_HAVE_ATOMIC_EXCHANGE_16,
MAYBE_HAVE_ATOMIC_LDST_16): Define to IFUNC_COND_2 rather than
IFUNC_COND_1.
(HAVE_ATOMIC_CAS_16): Redefine to 1 whenever IFUNC_ALT != 0.
(HAVE_ATOMIC_LDST_16): Redefine to 1 whenever IFUNC_ALT == 1.
(atomic_compare_exchange_n): Define whenever IFUNC_ALT != 0
on x86_64 for N == 16.
(__atomic_load_n, __atomic_store_n): Redefine whenever IFUNC_ALT == 1
on x86_64 for N == 16.
(atomic_load_n, atomic_store_n): New functions.
* config/x86/init.c (__libat_feat1_init): On x86_64 clear bit_AVX
if CPU vendor is not Intel.
Resolves:
PR bootstrap/101379 - libatomic arm build failure after r12-2132 due to -Warray-bounds on a constant address
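A hedged sketch of the workaround (the committed function may differ;
0xffff0ffc is the version word documented in the kernel's kuser
helpers ABI):

/* Sketch: wrap the fixed-address read in a function and go through a
   volatile pointer, one way to keep the optimizer from treating the
   dereference as an out-of-bounds constant-address access.  */
static inline unsigned int
__kernel_helper_version (void)
{
  unsigned int *volatile ptr = (unsigned int *) 0xffff0ffc;
  return *ptr;
}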
libatomic/ChangeLog:
PR bootstrap/101379
* config/linux/arm/host-config.h (__kernel_helper_version): New
function. Adjust shadow macro.
AIX caches shared objects in archives with read-other permission.
libgomp and libatomic might be in use during the build or testing, which
may cause archiver operations on them to fail. This patch adjusts the
Makefile fragments to delete the library archives before creating fresh
archives containing both the 32 bit and 64 bit shared objects.
libatomic/ChangeLog:
2020-10-11 Clement Chigot <clement.chigot@atos.net>
* config/t-aix: Delete and recreate libatomic before creating
FAT library.
libgomp/ChangeLog:
2020-10-11 Clement Chigot <clement.chigot@atos.net>
* config/t-aix: Delete and recreate libgomp before creating
FAT library.
AIX FAT libraries should be built with the version of AR chosen by configure.
The GNU Make $(AR) variable includes the AIX -X32_64 option needed
by the default Makefile rules to accept both 32 bit and 64 bit object files.
The -X32_64 option conflicts with ar archiving objects of the same name,
as is done when building FAT libraries.
This patch changes the Makefile fragments for AIX FAT libraries to use $(AR),
but strips the -X32_64 option from the Make variable.
libgcc/ChangeLog:
2020-09-27 Clement Chigot <clement.chigot@atos.net>
* config/rs6000/t-slibgcc-aix: Use $(AR) without -X32_64.
libatomic/ChangeLog:
2020-09-27 Clement Chigot <clement.chigot@atos.net>
* config/t-aix: Use $(AR) without -X32_64.
libgomp/ChangeLog:
2020-09-27 Clement Chigot <clement.chigot@atos.net>
* config/t-aix: Use $(AR) without -X32_64.
libstdc++-v3/ChangeLog:
2020-09-27 Clement Chigot <clement.chigot@atos.net>
* config/os/aix/t-aix: Use $(AR) without -X32_64.
libgfortran/ChangeLog:
2020-09-27 Clement Chigot <clement.chigot@atos.net>
* config/t-aix: Use $(AR) without -X32_64.
Add nvptx support to libatomic.
Given that atomic_test_and_set is not implemented for nvptx (PR96964), the
compiler translates __atomic_test_and_set by falling back on the "Failing all
else, assume a single threaded environment and simply perform the operation"
case in expand_atomic_test_and_set, so it doesn't map onto an actual atomic
operation.
Still, that counts as supported for the configure test of libatomic, so we
end up with HAVE_ATOMIC_TAS_1/2/4/8/16 == 1, and the corresponding
__atomic_test_and_set_1/2/4/8/16 in libatomic all using that non-atomic
implementation.
Fix this by adding an atomic_test_and_set expansion for nvptx that uses
libatomic's __atomic_test_and_set_1.
This again makes the configure tests for HAVE_ATOMIC_TAS_1/2/4/8/16 fail, so
instead we use this case in tas_n.c:
...
/* If this type is smaller than word-sized, fall back to a word-sized
compare-and-swap loop. */
bool
SIZE(libat_test_and_set) (UTYPE *mptr, int smodel)
...
which for __atomic_test_and_set_8 uses INVERT_MASK_8.
Add INVERT_MASK_8 in libatomic_i.h, as well as MASK_8.
Tested libatomic testsuite on nvptx.
gcc/ChangeLog:
PR target/96964
* config/nvptx/nvptx.md (define_expand "atomic_test_and_set"): New
expansion.
libatomic/ChangeLog:
PR target/96898
* configure.tgt: Add nvptx.
* libatomic_i.h (MASK_8, INVERT_MASK_8): New macro definition.
* config/nvptx/host-config.h: New file.
* config/nvptx/lock.c: New file.
The FAT library config fragments need to know which library is native
and which is a multilib in order to choose the correct multilib from which
to append the additional object file or shared object file. Testing the
top-level archive is fragile because it will fail if rebuilding. This
patch instead tests the compiler preprocessing macros for the 64-bit AIX
specific __64BIT__ to determine the native mode of the compiler in
MULTILIBTOP.
2020-07-14 David Edelsohn <dje.gcc@gmail.com>
libatomic/ChangeLog
* config/t-aix: Set BITS from compiler cpp macro.
libgcc/ChangeLog
* config/rs6000/t-slibgcc-aix: Set BITS from compiler cpp macro.
libgfortran/ChangeLog
* config/t-aix: Set BITS from compiler cpp macro.
libgomp/ChangeLog
* config/t-aix: Set BITS from compiler cpp macro.
libstdc++-v3/ChangeLog
* config/os/aix/t-aix: Set BITS from compiler cpp macro.
This patch adds the ability to configure GCC on AIX to build as a
64 bit application and to build target libraries as "FAT" libraries in both
32 bit and 64 bit mode.
The patch adds makefile fragment hooks to target libraries that allow
them to include target-specific rules. The target-specific rules for
AIX place both 32 bit and 64 bit objects and shared objects
in archives at the top level, not in multilib subdirectories. The
multilibs are built in subdirectories, but must be combined during the
last parts of the target library build process. Because of the way
that GCC bootstrap works, the libraries must be combined during the
multiple stages of GCC bootstrap, not solely when installed in the
final destination, so that the libraries are correct at the end of
each target library build stage, not solely as an install recipe.
gcc/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* config.gcc: Use t-aix64, biarch64 and default64 for cpu_is_64bit.
* config/rs6000/aix72.h (ASM_SPEC): Remove aix64 option.
(ASM_SPEC32): New.
(ASM_SPEC64): New.
(ASM_CPU_SPEC): Remove vsx and altivec options.
(CPP_SPEC_COMMON): Rename from CPP_SPEC.
(CPP_SPEC32): New.
(CPP_SPEC64): New.
(CPLUSPLUS_CPP_SPEC): Rename to CPLUSPLUS_CPP_SPEC_COMMON.
(TARGET_DEFAULT): Only define if not BIARCH.
(LIB_SPEC_COMMON): Rename from LIB_SPEC.
(LIB_SPEC32): New.
(LIB_SPEC64): New.
(LINK_SPEC_COMMON): Rename from LINK_SPEC.
(LINK_SPEC32): New.
(LINK_SPEC64): New.
(STARTFILE_SPEC): Add 64 bit version of crtcxa and crtdbase.
(ASM_SPEC): Define 32 and 64 bit alternatives using DEFAULT_ARCH64_P.
(CPP_SPEC): Same.
(CPLUSPLUS_CPP_SPEC): Same.
(LIB_SPEC): Same.
(LINK_SPEC): Same.
(SUBTARGET_EXTRA_SPECS): Add new 32/64 specs.
* config/rs6000/defaultaix64.h: New file.
* config/rs6000/t-aix64: New file.
libgcc/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* config.host (extra_parts): Add crtcxa_64 and crtdbase_64.
* config/rs6000/t-aix-cxa: Explicitly compile 32 bit with -maix32
and 64 bit with -maix64.
* config/rs6000/t-slibgcc-aix: Remove extra @multilib_dir@ level.
Build and install AIX-style FAT libraries.
libgomp/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* Makefile.am (tmake_file): Build and install AIX-style FAT libraries.
* Makefile.in: Regenerate
* configure.ac (tmake_file): Substitute.
* configure: Regenerate.
* configure.tgt (powerpc-ibm-aix*): Define tmake_file.
* config/t-aix: New file.
libstdc++-v3/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* Makefile.am (tmake_file): Build and install AIX-style FAT libraries.
* Makefile.in: Regenerate.
* configure.ac (tmake_file): Substitute.
* configure: Regenerate.
* configure.host (aix*): Define tmake_file.
* config/os/aix/t-aix: New file.
libatomic/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* Makefile.am (tmake_file): Build and install AIX-style FAT libraries.
* Makefile.in: Regenerate.
* configure.ac (tmake_file): Substitute.
* configure: Regenerate.
* configure.tgt (powerpc-ibm-aix*): Define tmake_file.
* config/t-aix: New file.
libgfortran/ChangeLog
2020-06-21 David Edelsohn <dje.gcc@gmail.com>
* Makefile.am (tmake_file): Build and install AIX-style FAT libraries.
* Makefile.in: Regenerate.
* configure.ac (tmake_file): Substitute.
* configure: Regenerate.
* configure.host: Add system configury stanza. Define tmake_file.
* config/t-aix: New file.
The Windows ABI (MinGW) differs from the Linux ABI when bitfields are involved.
The following patch adds __attribute__ ((gcc_struct)) to struct fenv in order
to match the layout of the x87 state image in memory.
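For illustration, the attribute as applied to a struct of this shape
(the fields follow the usual x87 state-image layout; the committed
struct may differ in detail):

/* gcc_struct forces GCC's Linux-compatible layout even on MinGW,
   where the default ms_struct rules would pack the __opcode/__unused4
   bitfields differently.  */
struct __attribute__ ((gcc_struct)) fenv
{
  unsigned short int __control_word;
  unsigned short int __unused1;
  unsigned short int __status_word;
  unsigned short int __unused2;
  unsigned short int __tags;
  unsigned short int __unused3;
  unsigned int __eip;
  unsigned short int __cs_selector;
  unsigned int __opcode:11;
  unsigned int __unused4:5;
  unsigned int __data_offset;
  unsigned short int __data_selector;
  unsigned short int __unused5;
};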
2020-06-01 Uroš Bizjak <ubizjak@gmail.com>
libatomic/ChangeLog:
* config/x86/fenv.c (struct fenv): Add __attribute__ ((gcc_struct)).
libgcc/ChangeLog:
* config/i386/sfp-exceptions.c (struct fenv):
Add __attribute__ ((gcc_struct)).
libgfortran/ChangeLog:
PR libfortran/95418
* config/fpu-387.h (struct fenv): Add __attribute__ ((gcc_struct)).
Introduce math_force_eval_div to use generic division to generate
INEXACT as well as INVALID and DIVZERO exceptions.
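A sketch of the technique (assumed shape; the committed macro differs
in detail):

/* Sketch only: evaluating num/den into a volatile temporary keeps the
   division alive, so the hardware itself sets INVALID (0/0), DIVZERO
   (x/0) or INEXACT (e.g. 1/3) in the status word as a side effect.  */
#define __math_force_eval_div(type, num, den)   \
  do                                            \
    {                                           \
      volatile type __r = (type) (num) / (den); \
      (void) __r;                               \
    }                                           \
  while (0)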
libgcc/ChangeLog:
* config/i386/sfp-exceptions.c (__math_force_eval): Remove.
(__math_force_eval_div): New define.
(__sfp_handle_exceptions): Use __math_force_eval_div to use
generic division to generate INVALID, DIVZERO and INEXACT
exceptions.
libatomic/ChangeLog:
* config/x86/fenv.c (__math_force_eval): Remove.
(__math_force_eval_div): New define.
(__atomic_feraiseexcept): Use __math_force_eval_div to use
generic division to generate INVALID, DIVZERO and INEXACT
exceptions.
libgfortran/ChangeLog:
* config/fpu-387.h (__math_force_eval): Remove.
(__math_force_eval_div): New define.
(local_feraiseexcept): Use __math_force_eval_div to use
generic division to generate INVALID, DIVZERO and INEXACT
exceptions.
(struct fenv): Define named struct instead of typedef.
Introduce math_force_eval to evaluate generic division to generate
INVALID and DIVZERO exceptions.
libgcc/ChangeLog:
* config/i386/sfp-exceptions.c (__math_force_eval): New define.
(__sfp_handle_exceptions): Use __math_force_eval to evaluate
generic division to generate INVALID and DIVZERO exceptions.
libatomic/ChangeLog:
* config/x86/fenv.c (__math_force_eval): New define.
(__atomic_feraiseexcept): Use __math_force_eval to evaluate
generic division to generate INVALID and DIVZERO exceptions.
libgfortran/ChangeLog:
* config/fpu-387.h (__math_force_eval): New define.
(local_feraiseexcept): Use __math_force_eval to evaluate
generic division to generate INVALID and DIVZERO exceptions.
According to "Intel 64 and IA32 Arch SDM, Vol. 3:
"Because SIMD floating-point exceptions are precise and occur immediately,
the situation does not arise where an x87 FPU instruction, a WAIT/FWAIT
instruction, or another SSE/SSE2/SSE3 instruction will catch a pending
unmasked SIMD floating-point exception."
Remove unneeded assignments to volatile memory.
libgcc/ChangeLog:
* config/i386/sfp-exceptions.c (__sfp_handle_exceptions) [__SSE_MATH__]:
Remove unneeded assignments to volatile memory.
libatomic/ChangeLog:
* config/x86/fenv.c (__atomic_feraiseexcept) [__SSE_MATH__]:
Remove unneeded assignments to volatile memory.
libgfortran/ChangeLog:
* config/fpu-387.h (local_feraiseexcept) [__SSE_MATH__]:
Remove unneeded assignments to volatile memory.
The compiler builtin will use the hardware instruction cdsg if the
memory operand is properly aligned and will fall back to the
library call otherwise.
In case the compiler is able to detect that the location is aligned
at one use but fails to do so at another, the hw instruction and the
sw fallback would be mixed on the same memory location. To avoid this,
the library fallback also has to use the hardware instruction if
possible.
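A hedged sketch of the resulting dispatch (illustrative, not the
committed file; the lock-protected path is reduced to a spinlock
here):

#include <stdbool.h>
#include <stdint.h>

typedef unsigned __int128 u128;

static volatile char lock;  /* stand-in for libatomic's lock table */

u128
exchange_16 (u128 *ptr, u128 newval)
{
  if (((uintptr_t) ptr & 15) == 0)
    {
      /* Aligned: use the builtin, which the compiler expands to cdsg,
         matching what inlined call sites do.  */
      u128 *aptr = __builtin_assume_aligned (ptr, 16);
      u128 old = *aptr;
      while (!__atomic_compare_exchange_n (aptr, &old, newval, false,
                                           __ATOMIC_SEQ_CST,
                                           __ATOMIC_SEQ_CST))
        ;
      return old;
    }
  /* Unaligned: serialize through a lock, as before.  */
  while (__atomic_test_and_set (&lock, __ATOMIC_ACQUIRE))
    ;
  u128 old = *ptr;
  *ptr = newval;
  __atomic_clear (&lock, __ATOMIC_RELEASE);
  return old;
}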
libatomic/ChangeLog:
2018-03-09 Andreas Krebbel <krebbel@linux.vnet.ibm.com>
* config/s390/exch_n.c: New file.
* configure.tgt: Add the config directory for s390.
From-SVN: r258384
gcc/
* builtins.c (fold_builtin_atomic_always_lock_free): Make "lock-free"
conditional on the existence of a fast atomic load.
* optabs-query.c (can_atomic_load_p): New function.
* optabs-query.h (can_atomic_load_p): Declare it.
* optabs.c (expand_atomic_exchange): Always delegate to libatomic if
no fast atomic load is available for the particular size of access.
(expand_atomic_compare_and_swap): Likewise.
(expand_atomic_load): Likewise.
(expand_atomic_store): Likewise.
(expand_atomic_fetch_op): Likewise.
* testsuite/lib/target-supports.exp
(check_effective_target_sync_int_128): Remove x86 because it provides
no fast atomic load.
(check_effective_target_sync_int_128_runtime): Likewise.
libatomic/
* acinclude.m4: Add #define FAST_ATOMIC_LDST_*.
* auto-config.h.in: Regenerate.
* config/x86/host-config.h (FAST_ATOMIC_LDST_16): Define to 0.
(atomic_compare_exchange_n): New.
* glfree.c (EXACT, LARGER): Change condition and add comments.
From-SVN: r245098
ARM libatomic inline asm uses the sel, uadd8 and uadd16 instructions,
which are only available if __ARM_FEATURE_SIMD32 is defined.
libatomic/
2017-01-30 Szabolcs Nagy <szabolcs.nagy@arm.com>
PR target/78945
* config/arm/exch_n.c (libat_exchange): Check __ARM_FEATURE_SIMD32.
From-SVN: r245023