avx512fintrin.h: Update VFIXUPIMM* intrinsics.
gcc/
2018-11-06  Wei Xiao  <wei3.xiao@intel.com>

	* config/i386/avx512fintrin.h: Update VFIXUPIMM* intrinsics.
	(_mm512_fixupimm_round_pd): Update parameters and builtin.
	(_mm512_maskz_fixupimm_round_pd): Ditto.
	(_mm512_fixupimm_round_ps): Ditto.
	(_mm512_maskz_fixupimm_round_ps): Ditto.
	(_mm_fixupimm_round_sd): Ditto.
	(_mm_maskz_fixupimm_round_sd): Ditto.
	(_mm_fixupimm_round_ss): Ditto.
	(_mm_maskz_fixupimm_round_ss): Ditto.
	(_mm512_fixupimm_pd): Ditto.
	(_mm512_maskz_fixupimm_pd): Ditto.
	(_mm512_fixupimm_ps): Ditto.
	(_mm512_maskz_fixupimm_ps): Ditto.
	(_mm_fixupimm_sd): Ditto.
	(_mm_maskz_fixupimm_sd): Ditto.
	(_mm_fixupimm_ss): Ditto.
	(_mm_maskz_fixupimm_ss): Ditto.
	(_mm512_mask_fixupimm_round_pd): Update builtin.
	(_mm512_mask_fixupimm_round_ps): Ditto.
	(_mm_mask_fixupimm_round_sd): Ditto.
	(_mm_mask_fixupimm_round_ss): Ditto.
	(_mm512_mask_fixupimm_pd): Ditto.
	(_mm512_mask_fixupimm_ps): Ditto.
	(_mm_mask_fixupimm_sd): Ditto.
	(_mm_mask_fixupimm_ss): Ditto.
	* config/i386/avx512vlintrin.h:
	(_mm256_fixupimm_pd): Update parameters and builtin.
	(_mm256_maskz_fixupimm_pd): Ditto.
	(_mm256_fixupimm_ps): Ditto.
	(_mm256_maskz_fixupimm_ps): Ditto.
	(_mm_fixupimm_pd): Ditto.
	(_mm_maskz_fixupimm_pd): Ditto.
	(_mm_fixupimm_ps): Ditto.
	(_mm_maskz_fixupimm_ps): Ditto.
	(_mm256_mask_fixupimm_pd): Update builtin.
	(_mm256_mask_fixupimm_ps): Ditto.
	(_mm_mask_fixupimm_pd): Ditto.
	(_mm_mask_fixupimm_ps): Ditto.
	* config/i386/i386-builtin-types.def: Add new types and remove
	useless ones.
	* config/i386/i386-builtin.def: Update builtin definitions.
	* config/i386/i386.c: Handle new builtin types and remove useless
	ones.
	* config/i386/sse.md: Update VFIXUPIMM* patterns.
	(<avx512>_fixupimm<mode>_maskz<round_saeonly_expand_name>): Update.
	(<avx512>_fixupimm<mode><sd_maskz_name><round_saeonly_name>): Update.
	(<avx512>_fixupimm<mode>_mask<round_saeonly_name>): Update.
	(avx512f_sfixupimm<mode>_maskz<round_saeonly_expand_name>): Update.
	(avx512f_sfixupimm<mode><sd_maskz_name><round_saeonly_name>): Update.
	(avx512f_sfixupimm<mode>_mask<round_saeonly_name>): Update.
	* config/i386/subst.md:
	(round_saeonly_sd_mask_operand4): Add new subst_attr.
	(round_saeonly_sd_mask_op4): Ditto.
	(round_saeonly_expand_operand5): Ditto.
	(round_saeonly_expand): Update.

gcc/testsuite/
2018-11-06  Wei Xiao  <wei3.xiao@intel.com>

	* gcc.target/i386/avx-1.c: Update tests for VFIXUPIMM* intrinsics.
	* gcc.target/i386/avx512f-vfixupimmpd-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmpd-2.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmps-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmsd-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmsd-2.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmss-1.c: Ditto.
	* gcc.target/i386/avx512f-vfixupimmss-2.c: Ditto.
	* gcc.target/i386/avx512vl-vfixupimmpd-1.c: Ditto.
	* gcc.target/i386/avx512vl-vfixupimmps-1.c: Ditto.
	* gcc.target/i386/sse-13.c: Ditto.
	* gcc.target/i386/sse-14.c: Ditto.
	* gcc.target/i386/sse-22.c: Ditto.
	* gcc.target/i386/sse-23.c: Ditto.
	* gcc.target/i386/testimm-10.c: Ditto.
	* gcc.target/i386/testround-1.c: Ditto.

From-SVN: r265827
parent 40228b24ee
commit ce2ad8cc8f
26 changed files with 554 additions and 463 deletions
@@ -6977,140 +6977,132 @@ _mm512_maskz_shuffle_pd (__mmask8 __U, __m512d __M, __m512d __V,
 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_fixupimm_round_pd (__m512d __A, __m512d __B, __m512i __C,
+_mm512_fixupimm_round_pd (__m512d __A, __m512i __B,
			  const int __imm, const int __R)
 {
-  return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+  return (__m512d) __builtin_ia32_fixupimmpd512 ((__v8df) __A,
+      (__v8di) __B,
       __imm,
-      (__mmask8) -1, __R);
+      __R);
 }

 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_mask_fixupimm_round_pd (__m512d __A, __mmask8 __U, __m512d __B,
-			       __m512i __C, const int __imm, const int __R)
+_mm512_mask_fixupimm_round_pd (__m512d __W, __mmask8 __U, __m512d __A,
+			       __m512i __B, const int __imm, const int __R)
 {
   return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+      (__v8di) __B,
       __imm,
+      (__v8df) __W,
       (__mmask8) __U, __R);
 }

 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_maskz_fixupimm_round_pd (__mmask8 __U, __m512d __A, __m512d __B,
-				__m512i __C, const int __imm, const int __R)
+_mm512_maskz_fixupimm_round_pd (__mmask8 __U, __m512d __A,
+				__m512i __B, const int __imm, const int __R)
 {
   return (__m512d) __builtin_ia32_fixupimmpd512_maskz ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+      (__v8di) __B,
       __imm,
       (__mmask8) __U, __R);
 }

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_fixupimm_round_ps (__m512 __A, __m512 __B, __m512i __C,
+_mm512_fixupimm_round_ps (__m512 __A, __m512i __B,
			  const int __imm, const int __R)
 {
-  return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+  return (__m512) __builtin_ia32_fixupimmps512 ((__v16sf) __A,
+      (__v16si) __B,
       __imm,
-      (__mmask16) -1, __R);
+      __R);
 }

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_mask_fixupimm_round_ps (__m512 __A, __mmask16 __U, __m512 __B,
-			       __m512i __C, const int __imm, const int __R)
+_mm512_mask_fixupimm_round_ps (__m512 __W, __mmask16 __U, __m512 __A,
+			       __m512i __B, const int __imm, const int __R)
 {
   return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+      (__v16si) __B,
       __imm,
+      (__v16sf) __W,
       (__mmask16) __U, __R);
 }

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_maskz_fixupimm_round_ps (__mmask16 __U, __m512 __A, __m512 __B,
-				__m512i __C, const int __imm, const int __R)
+_mm512_maskz_fixupimm_round_ps (__mmask16 __U, __m512 __A,
+				__m512i __B, const int __imm, const int __R)
 {
   return (__m512) __builtin_ia32_fixupimmps512_maskz ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+      (__v16si) __B,
       __imm,
       (__mmask16) __U, __R);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_fixupimm_round_sd (__m128d __A, __m128d __B, __m128i __C,
+_mm_fixupimm_round_sd (__m128d __A, __m128i __B,
		       const int __imm, const int __R)
 {
-  return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C, __imm,
-      (__mmask8) -1, __R);
+  return (__m128d) __builtin_ia32_fixupimmsd ((__v2df) __A,
+      (__v2di) __B, __imm,
+      __R);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_mask_fixupimm_round_sd (__m128d __A, __mmask8 __U, __m128d __B,
-			    __m128i __C, const int __imm, const int __R)
+_mm_mask_fixupimm_round_sd (__m128d __W, __mmask8 __U, __m128d __A,
+			    __m128i __B, const int __imm, const int __R)
 {
   return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C, __imm,
+      (__v2di) __B, __imm,
+      (__v2df) __W,
       (__mmask8) __U, __R);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_maskz_fixupimm_round_sd (__mmask8 __U, __m128d __A, __m128d __B,
-			     __m128i __C, const int __imm, const int __R)
+_mm_maskz_fixupimm_round_sd (__mmask8 __U, __m128d __A,
+			     __m128i __B, const int __imm, const int __R)
 {
   return (__m128d) __builtin_ia32_fixupimmsd_maskz ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C,
+      (__v2di) __B,
       __imm,
       (__mmask8) __U, __R);
 }

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_fixupimm_round_ss (__m128 __A, __m128 __B, __m128i __C,
+_mm_fixupimm_round_ss (__m128 __A, __m128i __B,
		       const int __imm, const int __R)
 {
-  return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
-      (__mmask8) -1, __R);
+  return (__m128) __builtin_ia32_fixupimmss ((__v4sf) __A,
+      (__v4si) __B, __imm,
+      __R);
 }

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_mask_fixupimm_round_ss (__m128 __A, __mmask8 __U, __m128 __B,
-			    __m128i __C, const int __imm, const int __R)
+_mm_mask_fixupimm_round_ss (__m128 __W, __mmask8 __U, __m128 __A,
+			    __m128i __B, const int __imm, const int __R)
 {
   return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
+      (__v4si) __B, __imm,
+      (__v4sf) __W,
       (__mmask8) __U, __R);
 }

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A, __m128 __B,
-			     __m128i __C, const int __imm, const int __R)
+_mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A,
+			     __m128i __B, const int __imm, const int __R)
 {
   return (__m128) __builtin_ia32_fixupimmss_maskz ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
+      (__v4si) __B, __imm,
       (__mmask8) __U, __R);
 }
@@ -7151,64 +7143,63 @@ _mm_maskz_fixupimm_round_ss (__mmask8 __U, __m128 __A, __m128 __B,
     (__v16sf)(__m512)_mm512_setzero_ps(),\
     (__mmask16)(U)))

-#define _mm512_fixupimm_round_pd(X, Y, Z, C, R) \
-  ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
-    (__mmask8)(-1), (R)))
+#define _mm512_fixupimm_round_pd(X, Y, C, R) \
+  ((__m512d)__builtin_ia32_fixupimmpd512 ((__v8df)(__m512d)(X), \
+    (__v8di)(__m512i)(Y), (int)(C), (R)))

-#define _mm512_mask_fixupimm_round_pd(X, U, Y, Z, C, R) \
+#define _mm512_mask_fixupimm_round_pd(W, U, X, Y, C, R) \
   ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
+    (__v8di)(__m512i)(Y), (int)(C), (__v8df)(__m512d)(W), \
     (__mmask8)(U), (R)))

-#define _mm512_maskz_fixupimm_round_pd(U, X, Y, Z, C, R) \
+#define _mm512_maskz_fixupimm_round_pd(U, X, Y, C, R) \
   ((__m512d)__builtin_ia32_fixupimmpd512_maskz ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
+    (__v8di)(__m512i)(Y), (int)(C), \
     (__mmask8)(U), (R)))

-#define _mm512_fixupimm_round_ps(X, Y, Z, C, R) \
-  ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
-    (__mmask16)(-1), (R)))
+#define _mm512_fixupimm_round_ps(X, Y, C, R) \
+  ((__m512)__builtin_ia32_fixupimmps512 ((__v16sf)(__m512)(X), \
+    (__v16si)(__m512i)(Y), (int)(C), \
+    (R)))

-#define _mm512_mask_fixupimm_round_ps(X, U, Y, Z, C, R) \
+#define _mm512_mask_fixupimm_round_ps(W, U, X, Y, C, R) \
   ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
-    (__mmask16)(U), (R)))
+    (__v16si)(__m512i)(Y), (int)(C), \
+    (__v16sf)(__m512)(W), (__mmask16)(U), (R)))

-#define _mm512_maskz_fixupimm_round_ps(U, X, Y, Z, C, R) \
+#define _mm512_maskz_fixupimm_round_ps(U, X, Y, C, R) \
   ((__m512)__builtin_ia32_fixupimmps512_maskz ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
+    (__v16si)(__m512i)(Y), (int)(C), \
     (__mmask16)(U), (R)))

-#define _mm_fixupimm_round_sd(X, Y, Z, C, R) \
-  ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
-    (__mmask8)(-1), (R)))
+#define _mm_fixupimm_round_sd(X, Y, C, R) \
+  ((__m128d)__builtin_ia32_fixupimmsd ((__v2df)(__m128d)(X), \
+    (__v2di)(__m128i)(Y), (int)(C), \
+    (R)))

-#define _mm_mask_fixupimm_round_sd(X, U, Y, Z, C, R) \
+#define _mm_mask_fixupimm_round_sd(W, U, X, Y, C, R) \
   ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
-    (__mmask8)(U), (R)))
+    (__v2di)(__m128i)(Y), (int)(C), \
+    (__v2df)(__m128d)(W), (__mmask8)(U), (R)))

-#define _mm_maskz_fixupimm_round_sd(U, X, Y, Z, C, R) \
+#define _mm_maskz_fixupimm_round_sd(U, X, Y, C, R) \
   ((__m128d)__builtin_ia32_fixupimmsd_maskz ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
+    (__v2di)(__m128i)(Y), (int)(C), \
     (__mmask8)(U), (R)))

-#define _mm_fixupimm_round_ss(X, Y, Z, C, R) \
-  ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
-    (__mmask8)(-1), (R)))
+#define _mm_fixupimm_round_ss(X, Y, C, R) \
+  ((__m128)__builtin_ia32_fixupimmss ((__v4sf)(__m128)(X), \
+    (__v4si)(__m128i)(Y), (int)(C), \
+    (R)))

-#define _mm_mask_fixupimm_round_ss(X, U, Y, Z, C, R) \
+#define _mm_mask_fixupimm_round_ss(W, U, X, Y, C, R) \
   ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
-    (__mmask8)(U), (R)))
+    (__v4si)(__m128i)(Y), (int)(C), \
+    (__v4sf)(__m128)(W), (__mmask8)(U), (R)))

-#define _mm_maskz_fixupimm_round_ss(U, X, Y, Z, C, R) \
+#define _mm_maskz_fixupimm_round_ss(U, X, Y, C, R) \
   ((__m128)__builtin_ia32_fixupimmss_maskz ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
+    (__v4si)(__m128i)(Y), (int)(C), \
     (__mmask8)(U), (R)))
 #endif
@@ -13215,37 +13206,34 @@ _mm512_maskz_cvtepu32_ps (__mmask16 __U, __m512i __A)
 #ifdef __OPTIMIZE__
 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_fixupimm_pd (__m512d __A, __m512d __B, __m512i __C, const int __imm)
+_mm512_fixupimm_pd (__m512d __A, __m512i __B, const int __imm)
 {
-  return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+  return (__m512d) __builtin_ia32_fixupimmpd512 ((__v8df) __A,
+      (__v8di) __B,
       __imm,
-      (__mmask8) -1,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_mask_fixupimm_pd (__m512d __A, __mmask8 __U, __m512d __B,
-			 __m512i __C, const int __imm)
+_mm512_mask_fixupimm_pd (__m512d __W, __mmask8 __U, __m512d __A,
+			 __m512i __B, const int __imm)
 {
   return (__m512d) __builtin_ia32_fixupimmpd512_mask ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+      (__v8di) __B,
       __imm,
+      (__v8df) __W,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m512d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A, __m512d __B,
-			  __m512i __C, const int __imm)
+_mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A,
+			  __m512i __B, const int __imm)
 {
   return (__m512d) __builtin_ia32_fixupimmpd512_maskz ((__v8df) __A,
-      (__v8df) __B,
-      (__v8di) __C,
+      (__v8di) __B,
       __imm,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
@@ -13253,37 +13241,34 @@ _mm512_maskz_fixupimm_pd (__mmask8 __U, __m512d __A, __m512d __B,

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_fixupimm_ps (__m512 __A, __m512 __B, __m512i __C, const int __imm)
+_mm512_fixupimm_ps (__m512 __A, __m512i __B, const int __imm)
 {
-  return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+  return (__m512) __builtin_ia32_fixupimmps512 ((__v16sf) __A,
+      (__v16si) __B,
       __imm,
-      (__mmask16) -1,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_mask_fixupimm_ps (__m512 __A, __mmask16 __U, __m512 __B,
-			 __m512i __C, const int __imm)
+_mm512_mask_fixupimm_ps (__m512 __W, __mmask16 __U, __m512 __A,
+			 __m512i __B, const int __imm)
 {
   return (__m512) __builtin_ia32_fixupimmps512_mask ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+      (__v16si) __B,
       __imm,
+      (__v16sf) __W,
       (__mmask16) __U,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m512
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A, __m512 __B,
-			  __m512i __C, const int __imm)
+_mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A,
+			  __m512i __B, const int __imm)
 {
   return (__m512) __builtin_ia32_fixupimmps512_maskz ((__v16sf) __A,
-      (__v16sf) __B,
-      (__v16si) __C,
+      (__v16si) __B,
       __imm,
       (__mmask16) __U,
       _MM_FROUND_CUR_DIRECTION);
@@ -13291,35 +13276,32 @@ _mm512_maskz_fixupimm_ps (__mmask16 __U, __m512 __A, __m512 __B,

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_fixupimm_sd (__m128d __A, __m128d __B, __m128i __C, const int __imm)
+_mm_fixupimm_sd (__m128d __A, __m128i __B, const int __imm)
 {
-  return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C, __imm,
-      (__mmask8) -1,
+  return (__m128d) __builtin_ia32_fixupimmsd ((__v2df) __A,
+      (__v2di) __B, __imm,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_mask_fixupimm_sd (__m128d __A, __mmask8 __U, __m128d __B,
-		      __m128i __C, const int __imm)
+_mm_mask_fixupimm_sd (__m128d __W, __mmask8 __U, __m128d __A,
+		      __m128i __B, const int __imm)
 {
   return (__m128d) __builtin_ia32_fixupimmsd_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C, __imm,
+      (__v2di) __B, __imm,
+      (__v2df) __W,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A, __m128d __B,
-		       __m128i __C, const int __imm)
+_mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A,
+		       __m128i __B, const int __imm)
 {
   return (__m128d) __builtin_ia32_fixupimmsd_maskz ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C,
+      (__v2di) __B,
       __imm,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
@@ -13327,97 +13309,94 @@ _mm_maskz_fixupimm_sd (__mmask8 __U, __m128d __A, __m128d __B,

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_fixupimm_ss (__m128 __A, __m128 __B, __m128i __C, const int __imm)
+_mm_fixupimm_ss (__m128 __A, __m128i __B, const int __imm)
 {
-  return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
-      (__mmask8) -1,
+  return (__m128) __builtin_ia32_fixupimmss ((__v4sf) __A,
+      (__v4si) __B, __imm,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_mask_fixupimm_ss (__m128 __A, __mmask8 __U, __m128 __B,
-		      __m128i __C, const int __imm)
+_mm_mask_fixupimm_ss (__m128 __W, __mmask8 __U, __m128 __A,
+		      __m128i __B, const int __imm)
 {
   return (__m128) __builtin_ia32_fixupimmss_mask ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
+      (__v4si) __B, __imm,
+      (__v4sf) __W,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
 }

 extern __inline __m128
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_maskz_fixupimm_ss (__mmask8 __U, __m128 __A, __m128 __B,
-		       __m128i __C, const int __imm)
+_mm_maskz_fixupimm_ss (__mmask8 __U, __m128 __A,
+		       __m128i __B, const int __imm)
 {
   return (__m128) __builtin_ia32_fixupimmss_maskz ((__v4sf) __A,
-      (__v4sf) __B,
-      (__v4si) __C, __imm,
+      (__v4si) __B, __imm,
       (__mmask8) __U,
       _MM_FROUND_CUR_DIRECTION);
 }
 #else
-#define _mm512_fixupimm_pd(X, Y, Z, C) \
-  ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
-    (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION))
+#define _mm512_fixupimm_pd(X, Y, C) \
+  ((__m512d)__builtin_ia32_fixupimmpd512((__v8df)(__m512d)(X), \
+    (__v8di)(__m512i)(Y), (int)(C), \
+    _MM_FROUND_CUR_DIRECTION))

-#define _mm512_mask_fixupimm_pd(X, U, Y, Z, C) \
+#define _mm512_mask_fixupimm_pd(W, U, X, Y, C) \
   ((__m512d)__builtin_ia32_fixupimmpd512_mask ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
+    (__v8di)(__m512i)(Y), (int)(C), (__v8df)(__m512d)(W), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm512_maskz_fixupimm_pd(U, X, Y, Z, C) \
+#define _mm512_maskz_fixupimm_pd(U, X, Y, C) \
   ((__m512d)__builtin_ia32_fixupimmpd512_maskz ((__v8df)(__m512d)(X), \
-    (__v8df)(__m512d)(Y), (__v8di)(__m512i)(Z), (int)(C), \
+    (__v8di)(__m512i)(Y), (int)(C), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm512_fixupimm_ps(X, Y, Z, C) \
-  ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
-    (__mmask16)(-1), _MM_FROUND_CUR_DIRECTION))
+#define _mm512_fixupimm_ps(X, Y, C) \
+  ((__m512)__builtin_ia32_fixupimmps512 ((__v16sf)(__m512)(X), \
+    (__v16si)(__m512i)(Y), (int)(C), \
+    _MM_FROUND_CUR_DIRECTION))

-#define _mm512_mask_fixupimm_ps(X, U, Y, Z, C) \
+#define _mm512_mask_fixupimm_ps(W, U, X, Y, C) \
   ((__m512)__builtin_ia32_fixupimmps512_mask ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
+    (__v16si)(__m512i)(Y), (int)(C), (__v16sf)(__m512)(W), \
     (__mmask16)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm512_maskz_fixupimm_ps(U, X, Y, Z, C) \
+#define _mm512_maskz_fixupimm_ps(U, X, Y, C) \
   ((__m512)__builtin_ia32_fixupimmps512_maskz ((__v16sf)(__m512)(X), \
-    (__v16sf)(__m512)(Y), (__v16si)(__m512i)(Z), (int)(C), \
+    (__v16si)(__m512i)(Y), (int)(C), \
     (__mmask16)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm_fixupimm_sd(X, Y, Z, C) \
-  ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
-    (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION))
+#define _mm_fixupimm_sd(X, Y, C) \
+  ((__m128d)__builtin_ia32_fixupimmsd ((__v2df)(__m128d)(X), \
+    (__v2di)(__m128i)(Y), (int)(C), \
+    _MM_FROUND_CUR_DIRECTION))

-#define _mm_mask_fixupimm_sd(X, U, Y, Z, C) \
+#define _mm_mask_fixupimm_sd(W, U, X, Y, C) \
   ((__m128d)__builtin_ia32_fixupimmsd_mask ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
+    (__v2di)(__m128i)(Y), (int)(C), (__v2df)(__m128d)(W), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm_maskz_fixupimm_sd(U, X, Y, Z, C) \
+#define _mm_maskz_fixupimm_sd(U, X, Y, C) \
   ((__m128d)__builtin_ia32_fixupimmsd_maskz ((__v2df)(__m128d)(X), \
-    (__v2df)(__m128d)(Y), (__v2di)(__m128i)(Z), (int)(C), \
+    (__v2di)(__m128i)(Y), (int)(C), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm_fixupimm_ss(X, Y, Z, C) \
-  ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
-    (__mmask8)(-1), _MM_FROUND_CUR_DIRECTION))
+#define _mm_fixupimm_ss(X, Y, C) \
+  ((__m128)__builtin_ia32_fixupimmss ((__v4sf)(__m128)(X), \
+    (__v4si)(__m128i)(Y), (int)(C), \
+    _MM_FROUND_CUR_DIRECTION))

-#define _mm_mask_fixupimm_ss(X, U, Y, Z, C) \
+#define _mm_mask_fixupimm_ss(W, U, X, Y, C) \
   ((__m128)__builtin_ia32_fixupimmss_mask ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
+    (__v4si)(__m128i)(Y), (int)(C), (__v4sf)(__m128)(W), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))

-#define _mm_maskz_fixupimm_ss(U, X, Y, Z, C) \
+#define _mm_maskz_fixupimm_ss(U, X, Y, C) \
   ((__m128)__builtin_ia32_fixupimmss_maskz ((__v4sf)(__m128)(X), \
-    (__v4sf)(__m128)(Y), (__v4si)(__m128i)(Z), (int)(C), \
+    (__v4si)(__m128i)(Y), (int)(C), \
     (__mmask8)(U), _MM_FROUND_CUR_DIRECTION))
 #endif
@@ -10242,143 +10242,131 @@ _mm256_maskz_shuffle_f32x4 (__mmask8 __U, __m256 __A, __m256 __B,

 extern __inline __m256d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_fixupimm_pd (__m256d __A, __m256d __B, __m256i __C,
+_mm256_fixupimm_pd (__m256d __A, __m256i __B,
		    const int __imm)
 {
-  return (__m256d) __builtin_ia32_fixupimmpd256_mask ((__v4df) __A,
-      (__v4df) __B,
-      (__v4di) __C,
-      __imm,
-      (__mmask8) -1);
+  return (__m256d) __builtin_ia32_fixupimmpd256 ((__v4df) __A,
+      (__v4di) __B,
+      __imm);
 }

 extern __inline __m256d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_mask_fixupimm_pd (__m256d __A, __mmask8 __U, __m256d __B,
-			 __m256i __C, const int __imm)
+_mm256_mask_fixupimm_pd (__m256d __W, __mmask8 __U, __m256d __A,
+			 __m256i __B, const int __imm)
 {
   return (__m256d) __builtin_ia32_fixupimmpd256_mask ((__v4df) __A,
-      (__v4df) __B,
-      (__v4di) __C,
+      (__v4di) __B,
       __imm,
+      (__v4df) __W,
       (__mmask8) __U);
 }

 extern __inline __m256d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_maskz_fixupimm_pd (__mmask8 __U, __m256d __A, __m256d __B,
-			  __m256i __C, const int __imm)
+_mm256_maskz_fixupimm_pd (__mmask8 __U, __m256d __A,
+			  __m256i __B, const int __imm)
 {
   return (__m256d) __builtin_ia32_fixupimmpd256_maskz ((__v4df) __A,
-      (__v4df) __B,
-      (__v4di) __C,
+      (__v4di) __B,
       __imm,
       (__mmask8) __U);
 }

 extern __inline __m256
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_fixupimm_ps (__m256 __A, __m256 __B, __m256i __C,
+_mm256_fixupimm_ps (__m256 __A, __m256i __B,
		    const int __imm)
 {
-  return (__m256) __builtin_ia32_fixupimmps256_mask ((__v8sf) __A,
-      (__v8sf) __B,
-      (__v8si) __C,
-      __imm,
-      (__mmask8) -1);
+  return (__m256) __builtin_ia32_fixupimmps256 ((__v8sf) __A,
+      (__v8si) __B,
+      __imm);
 }

 extern __inline __m256
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_mask_fixupimm_ps (__m256 __A, __mmask8 __U, __m256 __B,
-			 __m256i __C, const int __imm)
+_mm256_mask_fixupimm_ps (__m256 __W, __mmask8 __U, __m256 __A,
+			 __m256i __B, const int __imm)
 {
   return (__m256) __builtin_ia32_fixupimmps256_mask ((__v8sf) __A,
-      (__v8sf) __B,
-      (__v8si) __C,
+      (__v8si) __B,
       __imm,
+      (__v8sf) __W,
       (__mmask8) __U);
 }

 extern __inline __m256
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm256_maskz_fixupimm_ps (__mmask8 __U, __m256 __A, __m256 __B,
-			  __m256i __C, const int __imm)
+_mm256_maskz_fixupimm_ps (__mmask8 __U, __m256 __A,
+			  __m256i __B, const int __imm)
 {
   return (__m256) __builtin_ia32_fixupimmps256_maskz ((__v8sf) __A,
-      (__v8sf) __B,
-      (__v8si) __C,
+      (__v8si) __B,
       __imm,
       (__mmask8) __U);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_fixupimm_pd (__m128d __A, __m128d __B, __m128i __C,
+_mm_fixupimm_pd (__m128d __A, __m128i __B,
		 const int __imm)
 {
-  return (__m128d) __builtin_ia32_fixupimmpd128_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C,
-      __imm,
-      (__mmask8) -1);
+  return (__m128d) __builtin_ia32_fixupimmpd128 ((__v2df) __A,
+      (__v2di) __B,
+      __imm);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_mask_fixupimm_pd (__m128d __A, __mmask8 __U, __m128d __B,
-		      __m128i __C, const int __imm)
+_mm_mask_fixupimm_pd (__m128d __W, __mmask8 __U, __m128d __A,
+		      __m128i __B, const int __imm)
 {
   return (__m128d) __builtin_ia32_fixupimmpd128_mask ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C,
+      (__v2di) __B,
       __imm,
+      (__v2df) __W,
       (__mmask8) __U);
 }

 extern __inline __m128d
 __attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
-_mm_maskz_fixupimm_pd (__mmask8 __U, __m128d __A, __m128d __B,
-		       __m128i __C, const int __imm)
+_mm_maskz_fixupimm_pd (__mmask8 __U, __m128d __A,
+		       __m128i __B, const int __imm)
 {
   return (__m128d) __builtin_ia32_fixupimmpd128_maskz ((__v2df) __A,
-      (__v2df) __B,
-      (__v2di) __C,
+      (__v2di) __B,
       __imm,
       (__mmask8) __U);
 }

 extern __inline __m128
|
||||
__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
|
||||
_mm_fixupimm_ps (__m128 __A, __m128 __B, __m128i __C, const int __imm)
|
||||
_mm_fixupimm_ps (__m128 __A, __m128i __B, const int __imm)
|
||||
{
|
||||
return (__m128) __builtin_ia32_fixupimmps128_mask ((__v4sf) __A,
|
||||
(__v4sf) __B,
|
||||
(__v4si) __C,
|
||||
__imm,
|
||||
(__mmask8) -1);
|
||||
return (__m128) __builtin_ia32_fixupimmps128 ((__v4sf) __A,
|
||||
(__v4si) __B,
|
||||
__imm);
|
||||
}
|
||||
|
||||
extern __inline __m128
|
||||
__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
|
||||
_mm_mask_fixupimm_ps (__m128 __A, __mmask8 __U, __m128 __B,
|
||||
__m128i __C, const int __imm)
|
||||
_mm_mask_fixupimm_ps (__m128 __W, __mmask8 __U, __m128 __A,
|
||||
__m128i __B, const int __imm)
|
||||
{
|
||||
return (__m128) __builtin_ia32_fixupimmps128_mask ((__v4sf) __A,
|
||||
(__v4sf) __B,
|
||||
(__v4si) __C,
|
||||
(__v4si) __B,
|
||||
__imm,
|
||||
(__v4sf) __W,
|
||||
(__mmask8) __U);
|
||||
}
|
||||
|
||||
extern __inline __m128
|
||||
__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
|
||||
_mm_maskz_fixupimm_ps (__mmask8 __U, __m128 __A, __m128 __B,
|
||||
__m128i __C, const int __imm)
|
||||
_mm_maskz_fixupimm_ps (__mmask8 __U, __m128 __A,
|
||||
__m128i __B, const int __imm)
|
||||
{
|
||||
return (__m128) __builtin_ia32_fixupimmps128_maskz ((__v4sf) __A,
|
||||
(__v4sf) __B,
|
||||
(__v4si) __C,
|
||||
(__v4si) __B,
|
||||
__imm,
|
||||
(__mmask8) __U);
|
||||
}
|
||||
|
@@ -12657,78 +12645,74 @@ _mm256_permutex_pd (__m256d __X, const int __M)
 						(__v4sf)(__m128)_mm_setzero_ps (), \
 						(__mmask8)(U)))
 
-#define _mm256_fixupimm_pd(X, Y, Z, C)					\
+#define _mm256_fixupimm_pd(X, Y, C)					\
   ((__m256d)__builtin_ia32_fixupimmpd256_mask ((__v4df)(__m256d)(X),	\
-					       (__v4df)(__m256d)(Y),	\
-					       (__v4di)(__m256i)(Z), (int)(C), \
+					       (__v4di)(__m256i)(Y), (int)(C), \
 					       (__mmask8)(-1)))
 
-#define _mm256_mask_fixupimm_pd(X, U, Y, Z, C)				\
+#define _mm256_mask_fixupimm_pd(W, U, X, Y, C)				\
   ((__m256d)__builtin_ia32_fixupimmpd256_mask ((__v4df)(__m256d)(X),	\
-					       (__v4df)(__m256d)(Y),	\
-					       (__v4di)(__m256i)(Z), (int)(C), \
+					       (__v4di)(__m256i)(Y), (int)(C), \
+					       (__v4df)(__m256d)(W),	\
 					       (__mmask8)(U)))
 
-#define _mm256_maskz_fixupimm_pd(U, X, Y, Z, C)				\
+#define _mm256_maskz_fixupimm_pd(U, X, Y, C)				\
   ((__m256d)__builtin_ia32_fixupimmpd256_maskz ((__v4df)(__m256d)(X),	\
-						(__v4df)(__m256d)(Y),	\
-						(__v4di)(__m256i)(Z), (int)(C),\
+						(__v4di)(__m256i)(Y),	\
+						(int)(C),		\
 						(__mmask8)(U)))
 
-#define _mm256_fixupimm_ps(X, Y, Z, C)					\
+#define _mm256_fixupimm_ps(X, Y, C)					\
  ((__m256)__builtin_ia32_fixupimmps256_mask ((__v8sf)(__m256)(X),	\
-					     (__v8sf)(__m256)(Y),	\
-					     (__v8si)(__m256i)(Z), (int)(C), \
+					     (__v8si)(__m256i)(Y), (int)(C), \
 					     (__mmask8)(-1)))
 
-#define _mm256_mask_fixupimm_ps(X, U, Y, Z, C)				\
+#define _mm256_mask_fixupimm_ps(W, U, X, Y, C)				\
  ((__m256)__builtin_ia32_fixupimmps256_mask ((__v8sf)(__m256)(X),	\
-					     (__v8sf)(__m256)(Y),	\
-					     (__v8si)(__m256i)(Z), (int)(C), \
+					     (__v8si)(__m256i)(Y), (int)(C), \
+					     (__v8sf)(__m256)(W),	\
 					     (__mmask8)(U)))
 
-#define _mm256_maskz_fixupimm_ps(U, X, Y, Z, C)				\
+#define _mm256_maskz_fixupimm_ps(U, X, Y, C)				\
  ((__m256)__builtin_ia32_fixupimmps256_maskz ((__v8sf)(__m256)(X),	\
-					      (__v8sf)(__m256)(Y),	\
-					      (__v8si)(__m256i)(Z), (int)(C),\
+					      (__v8si)(__m256i)(Y),	\
+					      (int)(C),			\
 					      (__mmask8)(U)))
 
-#define _mm_fixupimm_pd(X, Y, Z, C)					\
+#define _mm_fixupimm_pd(X, Y, C)					\
   ((__m128d)__builtin_ia32_fixupimmpd128_mask ((__v2df)(__m128d)(X),	\
-					       (__v2df)(__m128d)(Y),	\
-					       (__v2di)(__m128i)(Z), (int)(C), \
+					       (__v2di)(__m128i)(Y), (int)(C), \
 					       (__mmask8)(-1)))
 
-#define _mm_mask_fixupimm_pd(X, U, Y, Z, C)				\
+#define _mm_mask_fixupimm_pd(W, U, X, Y, C)				\
   ((__m128d)__builtin_ia32_fixupimmpd128_mask ((__v2df)(__m128d)(X),	\
-					       (__v2df)(__m128d)(Y),	\
-					       (__v2di)(__m128i)(Z), (int)(C), \
+					       (__v2di)(__m128i)(Y), (int)(C), \
+					       (__v2df)(__m128d)(W),	\
 					       (__mmask8)(U)))
 
-#define _mm_maskz_fixupimm_pd(U, X, Y, Z, C)				\
+#define _mm_maskz_fixupimm_pd(U, X, Y, C)				\
   ((__m128d)__builtin_ia32_fixupimmpd128_maskz ((__v2df)(__m128d)(X),	\
-						(__v2df)(__m128d)(Y),	\
-						(__v2di)(__m128i)(Z), (int)(C),\
+						(__v2di)(__m128i)(Y),	\
+						(int)(C),		\
 						(__mmask8)(U)))
 
-#define _mm_fixupimm_ps(X, Y, Z, C)					\
+#define _mm_fixupimm_ps(X, Y, C)					\
   ((__m128)__builtin_ia32_fixupimmps128_mask ((__v4sf)(__m128)(X),	\
-					      (__v4sf)(__m128)(Y),	\
-					      (__v4si)(__m128i)(Z), (int)(C), \
+					      (__v4si)(__m128i)(Y), (int)(C), \
 					      (__mmask8)(-1)))
 
-#define _mm_mask_fixupimm_ps(X, U, Y, Z, C)				\
+#define _mm_mask_fixupimm_ps(W, U, X, Y, C)				\
   ((__m128)__builtin_ia32_fixupimmps128_mask ((__v4sf)(__m128)(X),	\
-					      (__v4sf)(__m128)(Y),	\
-					      (__v4si)(__m128i)(Z), (int)(C),\
+					      (__v4si)(__m128i)(Y), (int)(C),\
+					      (__v4sf)(__m128)(W),	\
 					      (__mmask8)(U)))
 
-#define _mm_maskz_fixupimm_ps(U, X, Y, Z, C)				\
+#define _mm_maskz_fixupimm_ps(U, X, Y, C)				\
   ((__m128)__builtin_ia32_fixupimmps128_maskz ((__v4sf)(__m128)(X),	\
-					       (__v4sf)(__m128)(Y),	\
-					       (__v4si)(__m128i)(Z), (int)(C),\
+					       (__v4si)(__m128i)(Y),	\
+					       (int)(C),		\
 					       (__mmask8)(U)))
 
 #define _mm256_mask_srli_epi32(W, U, A, B) \
@@ -444,9 +444,6 @@ DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, INT, V8DF, UQI)
 DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, INT, V8DF, QI, INT)
 DEF_FUNCTION_TYPE (V8DF, V8DF, INT, V8DF, UQI)
-DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, V8DI, INT)
-DEF_FUNCTION_TYPE (V4DF, V4DF, V4DF, V4DI, INT, UQI)
-DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, V2DI, INT, UQI)
 DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF, V8DI, INT, QI, INT)
 DEF_FUNCTION_TYPE (V8DF, V8DF, V8DF)
 DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT)
 DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT, V16SF, UHI)
@@ -454,11 +451,6 @@ DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, INT, V16SF, HI, INT)
 DEF_FUNCTION_TYPE (V16SF, V16SF, INT, V16SF, UHI)
 DEF_FUNCTION_TYPE (V16SI, V16SI, V4SI, INT, V16SI, UHI)
-DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, V16SI, INT)
-DEF_FUNCTION_TYPE (V16SF, V16SF, V16SF, V16SI, INT, HI, INT)
-DEF_FUNCTION_TYPE (V8SF, V8SF, V8SF, V8SI, INT, UQI)
-DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SI, INT, UQI)
-DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SI, INT, QI, INT)
-DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, V2DI, INT, QI, INT)
 DEF_FUNCTION_TYPE (V2DF, V2DF, V2DF, INT, V2DF, UQI, INT)
 DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, INT, V4SF, UQI, INT)
 DEF_FUNCTION_TYPE (V16SF, V16SF, V4SF, INT)
@@ -553,6 +545,9 @@ DEF_FUNCTION_TYPE (V4SF, V4SF, V4SF, V4SF, V4SF, V4SF, PCV4SF, V4SF, UQI)
 DEF_FUNCTION_TYPE (V16SI, V16SI, V16SI, V16SI, V16SI, V16SI, PCV4SI, V16SI, UHI)
 DEF_FUNCTION_TYPE (V16SI, V16SI, V16SI, V16SI, V16SI, V16SI, PCV4SI)
 
+DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT)
+
 # Instructions returning mask
 DEF_FUNCTION_TYPE (UCHAR, UQI, UQI, PUCHAR)
@@ -987,6 +982,15 @@ DEF_FUNCTION_TYPE (V8QI, QI, QI, QI, QI, QI, QI, QI, QI)
 DEF_FUNCTION_TYPE (UCHAR, UCHAR, UINT, UINT, PUNSIGNED)
 DEF_FUNCTION_TYPE (UCHAR, UCHAR, ULONGLONG, ULONGLONG, PULONGLONG)
 
+DEF_FUNCTION_TYPE (V4DF, V4DF, V4DI, INT, V4DF, UQI)
+DEF_FUNCTION_TYPE (V4DF, V4DF, V4DI, INT, UQI)
+DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT, V8SF, UQI)
+DEF_FUNCTION_TYPE (V8SF, V8SF, V8SI, INT, UQI)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, V2DF, UQI)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, UQI)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, V4SF, UQI)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, UQI)
+
 # Instructions with rounding
 DEF_FUNCTION_TYPE (UINT64, V2DF, INT)
 DEF_FUNCTION_TYPE (UINT64, V4SF, INT)
@@ -1120,6 +1124,19 @@ DEF_FUNCTION_TYPE (VOID, QI, V8DI, PCVOID, INT, INT)
 DEF_FUNCTION_TYPE (VOID, PV8QI, V8HI, UQI)
 DEF_FUNCTION_TYPE (VOID, PV16QI, V16HI, UHI)
 
+DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, INT)
+DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, V8DF, QI, INT)
+DEF_FUNCTION_TYPE (V8DF, V8DF, V8DI, INT, QI, INT)
+DEF_FUNCTION_TYPE (V16SF, V16SF, V16SI, INT, INT)
+DEF_FUNCTION_TYPE (V16SF, V16SF, V16SI, INT, V16SF, HI, INT)
+DEF_FUNCTION_TYPE (V16SF, V16SF, V16SI, INT, HI, INT)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, INT)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, V2DF, QI, INT)
+DEF_FUNCTION_TYPE (V2DF, V2DF, V2DI, INT, QI, INT)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, INT)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, V4SF, QI, INT)
+DEF_FUNCTION_TYPE (V4SF, V4SF, V4SI, INT, QI, INT)
+
 DEF_FUNCTION_TYPE_ALIAS (V2DF_FTYPE_V2DF, ROUND)
 DEF_FUNCTION_TYPE_ALIAS (V4DF_FTYPE_V4DF, ROUND)
 DEF_FUNCTION_TYPE_ALIAS (V8DF_FTYPE_V8DF, ROUND)
@@ -1797,14 +1797,18 @@ BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv8sf_mask, "__builtin_i
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv4df_mask, "__builtin_ia32_getexppd256_mask", IX86_BUILTIN_GETEXPPD256, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_UQI)
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv4sf_mask, "__builtin_ia32_getexpps128_mask", IX86_BUILTIN_GETEXPPS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_UQI)
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_getexpv2df_mask, "__builtin_ia32_getexppd128_mask", IX86_BUILTIN_GETEXPPD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_mask, "__builtin_ia32_fixupimmpd256_mask", IX86_BUILTIN_FIXUPIMMPD256_MASK, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_maskz, "__builtin_ia32_fixupimmpd256_maskz", IX86_BUILTIN_FIXUPIMMPD256_MASKZ, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_mask, "__builtin_ia32_fixupimmps256_mask", IX86_BUILTIN_FIXUPIMMPS256_MASK, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_maskz, "__builtin_ia32_fixupimmps256_maskz", IX86_BUILTIN_FIXUPIMMPS256_MASKZ, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_mask, "__builtin_ia32_fixupimmpd128_mask", IX86_BUILTIN_FIXUPIMMPD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_maskz, "__builtin_ia32_fixupimmpd128_maskz", IX86_BUILTIN_FIXUPIMMPD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_mask, "__builtin_ia32_fixupimmps128_mask", IX86_BUILTIN_FIXUPIMMPS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI)
-BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_maskz, "__builtin_ia32_fixupimmps128_maskz", IX86_BUILTIN_FIXUPIMMPS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df, "__builtin_ia32_fixupimmpd256", IX86_BUILTIN_FIXUPIMMPD256, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_mask, "__builtin_ia32_fixupimmpd256_mask", IX86_BUILTIN_FIXUPIMMPD256_MASK, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT_V4DF_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4df_maskz, "__builtin_ia32_fixupimmpd256_maskz", IX86_BUILTIN_FIXUPIMMPD256_MASKZ, UNKNOWN, (int) V4DF_FTYPE_V4DF_V4DI_INT_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf, "__builtin_ia32_fixupimmps256", IX86_BUILTIN_FIXUPIMMPS256, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_mask, "__builtin_ia32_fixupimmps256_mask", IX86_BUILTIN_FIXUPIMMPS256_MASK, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT_V8SF_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv8sf_maskz, "__builtin_ia32_fixupimmps256_maskz", IX86_BUILTIN_FIXUPIMMPS256_MASKZ, UNKNOWN, (int) V8SF_FTYPE_V8SF_V8SI_INT_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df, "__builtin_ia32_fixupimmpd128", IX86_BUILTIN_FIXUPIMMPD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_mask, "__builtin_ia32_fixupimmpd128_mask", IX86_BUILTIN_FIXUPIMMPD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_V2DF_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv2df_maskz, "__builtin_ia32_fixupimmpd128_maskz", IX86_BUILTIN_FIXUPIMMPD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf, "__builtin_ia32_fixupimmps128", IX86_BUILTIN_FIXUPIMMPS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_mask, "__builtin_ia32_fixupimmps128_mask", IX86_BUILTIN_FIXUPIMMPS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_V4SF_UQI)
+BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_avx512vl_fixupimmv4sf_maskz, "__builtin_ia32_fixupimmps128_maskz", IX86_BUILTIN_FIXUPIMMPS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_UQI)
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv4di2_mask, "__builtin_ia32_pabsq256_mask", IX86_BUILTIN_PABSQ256, UNKNOWN, (int) V4DI_FTYPE_V4DI_V4DI_UQI)
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv2di2_mask, "__builtin_ia32_pabsq128_mask", IX86_BUILTIN_PABSQ128, UNKNOWN, (int) V2DI_FTYPE_V2DI_V2DI_UQI)
 BDESC (OPTION_MASK_ISA_AVX512VL, CODE_FOR_absv8si2_mask, "__builtin_ia32_pabsd256_mask", IX86_BUILTIN_PABSD256_MASK, UNKNOWN, (int) V8SI_FTYPE_V8SI_V8SI_UQI)
@@ -2702,14 +2706,18 @@ BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse2_vmdivv2df3_round, "__builtin_ia32_
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse2_vmdivv2df3_mask_round, "__builtin_ia32_divsd_mask_round", IX86_BUILTIN_DIVSD_MASK_ROUND, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DF_UQI_INT)
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse_vmdivv4sf3_round, "__builtin_ia32_divss_round", IX86_BUILTIN_DIVSS_ROUND, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_INT)
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_sse_vmdivv4sf3_mask_round, "__builtin_ia32_divss_mask_round", IX86_BUILTIN_DIVSS_MASK_ROUND, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SF_UQI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_mask_round, "__builtin_ia32_fixupimmpd512_mask", IX86_BUILTIN_FIXUPIMMPD512_MASK, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_maskz_round, "__builtin_ia32_fixupimmpd512_maskz", IX86_BUILTIN_FIXUPIMMPD512_MASKZ, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_mask_round, "__builtin_ia32_fixupimmps512_mask", IX86_BUILTIN_FIXUPIMMPS512_MASK, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_maskz_round, "__builtin_ia32_fixupimmps512_maskz", IX86_BUILTIN_FIXUPIMMPS512_MASKZ, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_mask_round, "__builtin_ia32_fixupimmsd_mask", IX86_BUILTIN_FIXUPIMMSD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_maskz_round, "__builtin_ia32_fixupimmsd_maskz", IX86_BUILTIN_FIXUPIMMSD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_mask_round, "__builtin_ia32_fixupimmss_mask", IX86_BUILTIN_FIXUPIMMSS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT)
-BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_maskz_round, "__builtin_ia32_fixupimmss_maskz", IX86_BUILTIN_FIXUPIMMSS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_round, "__builtin_ia32_fixupimmpd512", IX86_BUILTIN_FIXUPIMMPD512, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_mask_round, "__builtin_ia32_fixupimmpd512_mask", IX86_BUILTIN_FIXUPIMMPD512_MASK, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_V8DF_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv8df_maskz_round, "__builtin_ia32_fixupimmpd512_maskz", IX86_BUILTIN_FIXUPIMMPD512_MASKZ, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DI_INT_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_round, "__builtin_ia32_fixupimmps512", IX86_BUILTIN_FIXUPIMMPS512, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_mask_round, "__builtin_ia32_fixupimmps512_mask", IX86_BUILTIN_FIXUPIMMPS512_MASK, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_V16SF_HI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_fixupimmv16sf_maskz_round, "__builtin_ia32_fixupimmps512_maskz", IX86_BUILTIN_FIXUPIMMPS512_MASKZ, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SI_INT_HI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_round, "__builtin_ia32_fixupimmsd", IX86_BUILTIN_FIXUPIMMSD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_mask_round, "__builtin_ia32_fixupimmsd_mask", IX86_BUILTIN_FIXUPIMMSD128_MASK, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_V2DF_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv2df_maskz_round, "__builtin_ia32_fixupimmsd_maskz", IX86_BUILTIN_FIXUPIMMSD128_MASKZ, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DI_INT_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_round, "__builtin_ia32_fixupimmss", IX86_BUILTIN_FIXUPIMMSS128, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_mask_round, "__builtin_ia32_fixupimmss_mask", IX86_BUILTIN_FIXUPIMMSS128_MASK, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_V4SF_QI_INT)
+BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sfixupimmv4sf_maskz_round, "__builtin_ia32_fixupimmss_maskz", IX86_BUILTIN_FIXUPIMMSS128_MASKZ, UNKNOWN, (int) V4SF_FTYPE_V4SF_V4SI_INT_QI_INT)
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_getexpv8df_mask_round, "__builtin_ia32_getexppd512_mask", IX86_BUILTIN_GETEXPPD512, UNKNOWN, (int) V8DF_FTYPE_V8DF_V8DF_QI_INT)
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_getexpv16sf_mask_round, "__builtin_ia32_getexpps512_mask", IX86_BUILTIN_GETEXPPS512, UNKNOWN, (int) V16SF_FTYPE_V16SF_V16SF_HI_INT)
 BDESC (OPTION_MASK_ISA_AVX512F, CODE_FOR_avx512f_sgetexpv2df_round, "__builtin_ia32_getexpsd128_round", IX86_BUILTIN_GETEXPSD128, UNKNOWN, (int) V2DF_FTYPE_V2DF_V2DF_INT)
@@ -34753,6 +34753,10 @@ ix86_expand_args_builtin (const struct builtin_description *d,
     case V32HI_FTYPE_V32HI_V32HI_INT:
     case V16SI_FTYPE_V16SI_V16SI_INT:
     case V8DI_FTYPE_V8DI_V8DI_INT:
+    case V4DF_FTYPE_V4DF_V4DI_INT:
+    case V8SF_FTYPE_V8SF_V8SI_INT:
+    case V2DF_FTYPE_V2DF_V2DI_INT:
+    case V4SF_FTYPE_V4SF_V4SI_INT:
       nargs = 3;
       nargs_constant = 1;
       break;
@@ -34908,6 +34912,10 @@ ix86_expand_args_builtin (const struct builtin_description *d,
       break;
     case UQI_FTYPE_V8DI_V8DI_INT_UQI:
     case UHI_FTYPE_V16SI_V16SI_INT_UHI:
+    case V4DF_FTYPE_V4DF_V4DI_INT_UQI:
+    case V8SF_FTYPE_V8SF_V8SI_INT_UQI:
+    case V2DF_FTYPE_V2DF_V2DI_INT_UQI:
+    case V4SF_FTYPE_V4SF_V4SI_INT_UQI:
       mask_pos = 1;
       nargs = 4;
       nargs_constant = 1;
@@ -34973,17 +34981,17 @@ ix86_expand_args_builtin (const struct builtin_description *d,
     case V8SI_FTYPE_V8SI_V4SI_INT_V8SI_UQI:
     case V4DI_FTYPE_V4DI_V2DI_INT_V4DI_UQI:
    case V4DF_FTYPE_V4DF_V2DF_INT_V4DF_UQI:
+    case V4DF_FTYPE_V4DF_V4DI_INT_V4DF_UQI:
+    case V8SF_FTYPE_V8SF_V8SI_INT_V8SF_UQI:
+    case V2DF_FTYPE_V2DF_V2DI_INT_V2DF_UQI:
+    case V4SF_FTYPE_V4SF_V4SI_INT_V4SF_UQI:
       nargs = 5;
       mask_pos = 2;
       nargs_constant = 1;
       break;
     case V8DI_FTYPE_V8DI_V8DI_V8DI_INT_UQI:
     case V16SI_FTYPE_V16SI_V16SI_V16SI_INT_UHI:
-    case V2DF_FTYPE_V2DF_V2DF_V2DI_INT_UQI:
-    case V4SF_FTYPE_V4SF_V4SF_V4SI_INT_UQI:
-    case V8SF_FTYPE_V8SF_V8SF_V8SI_INT_UQI:
     case V8SI_FTYPE_V8SI_V8SI_V8SI_INT_UQI:
-    case V4DF_FTYPE_V4DF_V4DF_V4DI_INT_UQI:
     case V4DI_FTYPE_V4DI_V4DI_V4DI_INT_UQI:
     case V4SI_FTYPE_V4SI_V4SI_V4SI_INT_UQI:
     case V2DI_FTYPE_V2DI_V2DI_V2DI_INT_UQI:
@@ -35447,6 +35455,10 @@ ix86_expand_round_builtin (const struct builtin_description *d,
       break;
     case V4SF_FTYPE_V4SF_V4SF_INT_INT:
     case V2DF_FTYPE_V2DF_V2DF_INT_INT:
+    case V8DF_FTYPE_V8DF_V8DI_INT_INT:
+    case V16SF_FTYPE_V16SF_V16SI_INT_INT:
+    case V2DF_FTYPE_V2DF_V2DI_INT_INT:
+    case V4SF_FTYPE_V4SF_V4SI_INT_INT:
       nargs_constant = 2;
       nargs = 4;
       break;
@@ -35472,6 +35484,10 @@ ix86_expand_round_builtin (const struct builtin_description *d,
     case UQI_FTYPE_V2DF_V2DF_INT_UQI_INT:
     case UHI_FTYPE_V16SF_V16SF_INT_UHI_INT:
     case UQI_FTYPE_V4SF_V4SF_INT_UQI_INT:
+    case V8DF_FTYPE_V8DF_V8DI_INT_QI_INT:
+    case V16SF_FTYPE_V16SF_V16SI_INT_HI_INT:
+    case V2DF_FTYPE_V2DF_V2DI_INT_QI_INT:
+    case V4SF_FTYPE_V4SF_V4SI_INT_QI_INT:
       nargs_constant = 3;
       nargs = 5;
       break;
@@ -35481,16 +35497,13 @@ ix86_expand_round_builtin (const struct builtin_description *d,
     case V2DF_FTYPE_V2DF_V2DF_INT_V2DF_QI_INT:
     case V2DF_FTYPE_V2DF_V2DF_INT_V2DF_UQI_INT:
     case V4SF_FTYPE_V4SF_V4SF_INT_V4SF_UQI_INT:
+    case V8DF_FTYPE_V8DF_V8DI_INT_V8DF_QI_INT:
+    case V16SF_FTYPE_V16SF_V16SI_INT_V16SF_HI_INT:
+    case V2DF_FTYPE_V2DF_V2DI_INT_V2DF_QI_INT:
+    case V4SF_FTYPE_V4SF_V4SI_INT_V4SF_QI_INT:
       nargs = 6;
       nargs_constant = 4;
       break;
-    case V8DF_FTYPE_V8DF_V8DF_V8DI_INT_QI_INT:
-    case V16SF_FTYPE_V16SF_V16SF_V16SI_INT_HI_INT:
-    case V2DF_FTYPE_V2DF_V2DF_V2DI_INT_QI_INT:
-    case V4SF_FTYPE_V4SF_V4SF_V4SI_INT_QI_INT:
-      nargs = 6;
-      nargs_constant = 3;
-      break;
     default:
       gcc_unreachable ();
     }
@@ -8812,29 +8812,27 @@
 (define_expand "<avx512>_fixupimm<mode>_maskz<round_saeonly_expand_name>"
   [(match_operand:VF_AVX512VL 0 "register_operand")
    (match_operand:VF_AVX512VL 1 "register_operand")
-   (match_operand:VF_AVX512VL 2 "register_operand")
-   (match_operand:<sseintvecmode> 3 "<round_saeonly_expand_nimm_predicate>")
-   (match_operand:SI 4 "const_0_to_255_operand")
-   (match_operand:<avx512fmaskmode> 5 "register_operand")]
+   (match_operand:<sseintvecmode> 2 "<round_saeonly_expand_nimm_predicate>")
+   (match_operand:SI 3 "const_0_to_255_operand")
+   (match_operand:<avx512fmaskmode> 4 "register_operand")]
   "TARGET_AVX512F"
 {
   emit_insn (gen_<avx512>_fixupimm<mode>_maskz_1<round_saeonly_expand_name> (
 	operands[0], operands[1], operands[2], operands[3],
-	operands[4], CONST0_RTX (<MODE>mode), operands[5]
-	<round_saeonly_expand_operand6>));
+	CONST0_RTX (<MODE>mode), operands[4]
+	<round_saeonly_expand_operand5>));
   DONE;
 })
 
 (define_insn "<avx512>_fixupimm<mode><sd_maskz_name><round_saeonly_name>"
   [(set (match_operand:VF_AVX512VL 0 "register_operand" "=v")
 	(unspec:VF_AVX512VL
-	  [(match_operand:VF_AVX512VL 1 "register_operand" "0")
-	   (match_operand:VF_AVX512VL 2 "register_operand" "v")
-	   (match_operand:<sseintvecmode> 3 "nonimmediate_operand" "<round_saeonly_constraint>")
-	   (match_operand:SI 4 "const_0_to_255_operand")]
+	  [(match_operand:VF_AVX512VL 1 "register_operand" "v")
+	   (match_operand:<sseintvecmode> 2 "nonimmediate_operand" "<round_saeonly_constraint>")
+	   (match_operand:SI 3 "const_0_to_255_operand")]
 	  UNSPEC_FIXUPIMM))]
   "TARGET_AVX512F"
-  "vfixupimm<ssemodesuffix>\t{%4, <round_saeonly_sd_mask_op5>%3, %2, %0<sd_mask_op5>|%0<sd_mask_op5>, %2, %3<round_saeonly_sd_mask_op5>, %4}";
+  "vfixupimm<ssemodesuffix>\t{%3, <round_saeonly_sd_mask_op4>%2, %1, %0<sd_mask_op4>|%0<sd_mask_op4>, %1, %2<round_saeonly_sd_mask_op4>, %3}";
   [(set_attr "prefix" "evex")
    (set_attr "mode" "<MODE>")])
 
@@ -8842,66 +8840,56 @@
   [(set (match_operand:VF_AVX512VL 0 "register_operand" "=v")
 	(vec_merge:VF_AVX512VL
 	  (unspec:VF_AVX512VL
-	    [(match_operand:VF_AVX512VL 1 "register_operand" "0")
-	     (match_operand:VF_AVX512VL 2 "register_operand" "v")
-	     (match_operand:<sseintvecmode> 3 "nonimmediate_operand" "<round_saeonly_constraint>")
-	     (match_operand:SI 4 "const_0_to_255_operand")]
+	    [(match_operand:VF_AVX512VL 1 "register_operand" "v")
+	     (match_operand:<sseintvecmode> 2 "nonimmediate_operand" "<round_saeonly_constraint>")
+	     (match_operand:SI 3 "const_0_to_255_operand")]
 	    UNSPEC_FIXUPIMM)
-	  (match_dup 1)
+	  (match_operand:VF_AVX512VL 4 "register_operand" "0")
 	  (match_operand:<avx512fmaskmode> 5 "register_operand" "Yk")))]
   "TARGET_AVX512F"
-  "vfixupimm<ssemodesuffix>\t{%4, <round_saeonly_op6>%3, %2, %0%{%5%}|%0%{%5%}, %2, %3<round_saeonly_op6>, %4}";
+  "vfixupimm<ssemodesuffix>\t{%3, <round_saeonly_op6>%2, %1, %0%{%5%}|%0%{%5%}, %1, %2<round_saeonly_op6>, %3}";
   [(set_attr "prefix" "evex")
    (set_attr "mode" "<MODE>")])
 
 (define_expand "avx512f_sfixupimm<mode>_maskz<round_saeonly_expand_name>"
   [(match_operand:VF_128 0 "register_operand")
    (match_operand:VF_128 1 "register_operand")
-   (match_operand:VF_128 2 "register_operand")
-   (match_operand:<sseintvecmode> 3 "<round_saeonly_expand_nimm_predicate>")
-   (match_operand:SI 4 "const_0_to_255_operand")
-   (match_operand:<avx512fmaskmode> 5 "register_operand")]
+   (match_operand:<sseintvecmode> 2 "<round_saeonly_expand_nimm_predicate>")
+   (match_operand:SI 3 "const_0_to_255_operand")
+   (match_operand:<avx512fmaskmode> 4 "register_operand")]
   "TARGET_AVX512F"
 {
   emit_insn (gen_avx512f_sfixupimm<mode>_maskz_1<round_saeonly_expand_name> (
 	operands[0], operands[1], operands[2], operands[3],
-	operands[4], CONST0_RTX (<MODE>mode), operands[5]
-	<round_saeonly_expand_operand6>));
+	CONST0_RTX (<MODE>mode), operands[4]
+	<round_saeonly_expand_operand5>));
   DONE;
 })
 
 (define_insn "avx512f_sfixupimm<mode><sd_maskz_name><round_saeonly_name>"
   [(set (match_operand:VF_128 0 "register_operand" "=v")
-	(vec_merge:VF_128
-	  (unspec:VF_128
-	    [(match_operand:VF_128 1 "register_operand" "0")
-	     (match_operand:VF_128 2 "register_operand" "v")
-	     (match_operand:<sseintvecmode> 3 "<round_saeonly_nimm_predicate>" "<round_saeonly_constraint>")
-	     (match_operand:SI 4 "const_0_to_255_operand")]
-	    UNSPEC_FIXUPIMM)
-	  (match_dup 1)
-	  (const_int 1)))]
+	(unspec:VF_128
+	  [(match_operand:VF_128 1 "register_operand" "v")
+	   (match_operand:<sseintvecmode> 2 "<round_saeonly_nimm_predicate>" "<round_saeonly_constraint>")
+	   (match_operand:SI 3 "const_0_to_255_operand")]
+	  UNSPEC_FIXUPIMM))]
   "TARGET_AVX512F"
-  "vfixupimm<ssescalarmodesuffix>\t{%4, <round_saeonly_sd_mask_op5>%3, %2, %0<sd_mask_op5>|%0<sd_mask_op5>, %2, %<iptr>3<round_saeonly_sd_mask_op5>, %4}";
+  "vfixupimm<ssescalarmodesuffix>\t{%3, <round_saeonly_sd_mask_op4>%2, %1, %0<sd_mask_op4>|%0<sd_mask_op4>, %1, %<iptr>2<round_saeonly_sd_mask_op4>, %3}";
   [(set_attr "prefix" "evex")
    (set_attr "mode" "<ssescalarmode>")])
 
 (define_insn "avx512f_sfixupimm<mode>_mask<round_saeonly_name>"
   [(set (match_operand:VF_128 0 "register_operand" "=v")
 	(vec_merge:VF_128
 	  (vec_merge:VF_128
 	    (unspec:VF_128
-	      [(match_operand:VF_128 1 "register_operand" "0")
-	       (match_operand:VF_128 2 "register_operand" "v")
-	       (match_operand:<sseintvecmode> 3 "<round_saeonly_nimm_predicate>" "<round_saeonly_constraint>")
-	       (match_operand:SI 4 "const_0_to_255_operand")]
+	      [(match_operand:VF_128 1 "register_operand" "v")
+	       (match_operand:<sseintvecmode> 2 "<round_saeonly_nimm_predicate>" "<round_saeonly_constraint>")
+	       (match_operand:SI 3 "const_0_to_255_operand")]
 	      UNSPEC_FIXUPIMM)
 	    (match_dup 1)
 	    (const_int 1))
-	  (match_dup 1)
+	  (match_operand:VF_128 4 "register_operand" "0")
 	  (match_operand:<avx512fmaskmode> 5 "register_operand" "Yk")))]
   "TARGET_AVX512F"
-  "vfixupimm<ssescalarmodesuffix>\t{%4, <round_saeonly_op6>%3, %2, %0%{%5%}|%0%{%5%}, %2, %<iptr>3<round_saeonly_op6>, %4}";
+  "vfixupimm<ssescalarmodesuffix>\t{%3, <round_saeonly_op6>%2, %1, %0%{%5%}|%0%{%5%}, %1, %<iptr>2<round_saeonly_op6>, %3}";
   [(set_attr "prefix" "evex")
    (set_attr "mode" "<ssescalarmode>")])
@@ -149,6 +149,7 @@
 (define_subst_attr "round_saeonly_mask_operand3" "mask" "%r3" "%r5")
 (define_subst_attr "round_saeonly_mask_operand4" "mask" "%r4" "%r6")
 (define_subst_attr "round_saeonly_mask_scalar_merge_operand4" "mask_scalar_merge" "%r4" "%r5")
 (define_subst_attr "round_saeonly_sd_mask_operand4" "sd" "%r4" "%r6")
+(define_subst_attr "round_saeonly_sd_mask_operand5" "sd" "%r5" "%r7")
 (define_subst_attr "round_saeonly_op2" "round_saeonly" "" "%r2")
 (define_subst_attr "round_saeonly_op3" "round_saeonly" "" "%r3")
@@ -160,6 +161,7 @@
 (define_subst_attr "round_saeonly_mask_op3" "round_saeonly" "" "<round_saeonly_mask_operand3>")
 (define_subst_attr "round_saeonly_mask_op4" "round_saeonly" "" "<round_saeonly_mask_operand4>")
 (define_subst_attr "round_saeonly_mask_scalar_merge_op4" "round_saeonly" "" "<round_saeonly_mask_scalar_merge_operand4>")
 (define_subst_attr "round_saeonly_sd_mask_op4" "round_saeonly" "" "<round_saeonly_sd_mask_operand4>")
+(define_subst_attr "round_saeonly_sd_mask_op5" "round_saeonly" "" "<round_saeonly_sd_mask_operand5>")
 (define_subst_attr "round_saeonly_mask_arg3" "round_saeonly" "" ", operands[<mask_expand_op3>]")
 (define_subst_attr "round_saeonly_constraint" "round_saeonly" "vm" "v")
@@ -212,23 +214,21 @@
 (define_subst_attr "round_saeonly_expand_name" "round_saeonly_expand" "" "_round")
 (define_subst_attr "round_saeonly_expand_nimm_predicate" "round_saeonly_expand" "nonimmediate_operand" "register_operand")
-(define_subst_attr "round_saeonly_expand_operand6" "round_saeonly_expand" "" ", operands[6]")
+(define_subst_attr "round_saeonly_expand_operand5" "round_saeonly_expand" "" ", operands[5]")
 
 (define_subst "round_saeonly_expand"
  [(match_operand:SUBST_V 0)
   (match_operand:SUBST_V 1)
-  (match_operand:SUBST_V 2)
-  (match_operand:SUBST_A 3)
-  (match_operand:SI 4)
-  (match_operand:SUBST_S 5)]
+  (match_operand:SUBST_A 2)
+  (match_operand:SI 3)
+  (match_operand:SUBST_S 4)]
  "TARGET_AVX512F"
  [(match_dup 0)
   (match_dup 1)
   (match_dup 2)
   (match_dup 3)
   (match_dup 4)
-  (match_dup 5)
-  (unspec [(match_operand:SI 6 "const48_operand")] UNSPEC_EMBEDDED_ROUNDING)])
+  (unspec [(match_operand:SI 5 "const48_operand")] UNSPEC_EMBEDDED_ROUNDING)])
 
 (define_subst_attr "mask_expand4_name" "mask_expand4" "" "_mask")
 (define_subst_attr "mask_expand4_args" "mask_expand4" "" ", operands[4], operands[5]")
@@ -1,3 +1,22 @@
+2018-11-06  Wei Xiao  <wei3.xiao@intel.com>
+
+	* gcc.target/i386/avx-1.c: Update tests for VFIXUPIMM* intrinsics.
+	* gcc.target/i386/avx512f-vfixupimmpd-1.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmpd-2.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmps-1.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmsd-1.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmsd-2.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmss-1.c: Ditto.
+	* gcc.target/i386/avx512f-vfixupimmss-2.c: Ditto.
+	* gcc.target/i386/avx512vl-vfixupimmpd-1.c: Ditto.
+	* gcc.target/i386/avx512vl-vfixupimmps-1.c: Ditto.
+	* gcc.target/i386/sse-13.c: Ditto.
+	* gcc.target/i386/sse-14.c: Ditto.
+	* gcc.target/i386/sse-22.c: Ditto.
+	* gcc.target/i386/sse-23.c: Ditto.
+	* gcc.target/i386/testimm-10.c: Ditto.
+	* gcc.target/i386/testround-1.c: Ditto.
+
 2018-11-05  Paul Koning  <ni1d@arrl.net>
 
 	* lib/target-supports.exp: Add check for "inf" effective target
@@ -214,14 +214,18 @@
 #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D)
-#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8)
+#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8)
+#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8)
 #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8)
@@ -550,14 +554,19 @@
 #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1)
-#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_maskz(B, C, F, E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1)
+
+#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1)
+#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1)
+#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1)
 #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D)
@@ -16,10 +16,10 @@ volatile __mmask8 m;
 void extern
 avx512f_test (void)
 {
-  x1 = _mm512_fixupimm_pd (x1, x2, y, 3);
+  x1 = _mm512_fixupimm_pd (x2, y, 3);
   x1 = _mm512_mask_fixupimm_pd (x1, m, x2, y, 3);
-  x1 = _mm512_maskz_fixupimm_pd (m, x1, x2, y, 3);
-  x1 = _mm512_fixupimm_round_pd (x1, x2, y, 3, _MM_FROUND_NO_EXC);
+  x1 = _mm512_maskz_fixupimm_pd (m, x2, y, 3);
+  x1 = _mm512_fixupimm_round_pd (x2, y, 3, _MM_FROUND_NO_EXC);
   x1 = _mm512_mask_fixupimm_round_pd (x1, m, x2, y, 3, _MM_FROUND_NO_EXC);
-  x1 = _mm512_maskz_fixupimm_round_pd (m, x1, x2, y, 3, _MM_FROUND_NO_EXC);
+  x1 = _mm512_maskz_fixupimm_round_pd (m, x2, y, 3, _MM_FROUND_NO_EXC);
 }
@@ -99,9 +99,9 @@ TEST (void)
       CALC (&res_ref[j], s1.a[j], s2.a[j]);
     }
 
-  res1.x = INTRINSIC (_fixupimm_pd) (res1.x, s1.x, s2.x, 0);
+  res1.x = INTRINSIC (_fixupimm_pd) (s1.x, s2.x, 0);
   res2.x = INTRINSIC (_mask_fixupimm_pd) (res2.x, mask, s1.x, s2.x, 0);
-  res3.x = INTRINSIC (_maskz_fixupimm_pd) (mask, res3.x, s1.x, s2.x, 0);
+  res3.x = INTRINSIC (_maskz_fixupimm_pd) (mask, s1.x, s2.x, 0);
 
   if (UNION_CHECK (AVX512F_LEN, d) (res1, res_ref))
     abort ();
@@ -16,10 +16,10 @@ volatile __mmask16 m;
 void extern
 avx512f_test (void)
 {
-  x1 = _mm512_fixupimm_ps (x1, x2, y, 3);
+  x1 = _mm512_fixupimm_ps (x2, y, 3);
   x1 = _mm512_mask_fixupimm_ps (x1, m, x2, y, 3);
-  x1 = _mm512_maskz_fixupimm_ps (m, x1, x2, y, 3);
-  x1 = _mm512_fixupimm_round_ps (x1, x2, y, 3, _MM_FROUND_NO_EXC);
+  x1 = _mm512_maskz_fixupimm_ps (m, x2, y, 3);
+  x1 = _mm512_fixupimm_round_ps (x2, y, 3, _MM_FROUND_NO_EXC);
   x1 = _mm512_mask_fixupimm_round_ps (x1, m, x2, y, 3, _MM_FROUND_NO_EXC);
-  x1 = _mm512_maskz_fixupimm_round_ps (m, x1, x2, y, 3, _MM_FROUND_NO_EXC);
+  x1 = _mm512_maskz_fixupimm_round_ps (m, x2, y, 3, _MM_FROUND_NO_EXC);
 }
@@ -104,9 +104,9 @@ TEST (void)
       CALC (&res_ref[j], s1.a[j], s2.a[j]);
     }
 
-  res1.x = INTRINSIC (_fixupimm_ps) (res1.x, s1.x, s2.x, 0);
+  res1.x = INTRINSIC (_fixupimm_ps) (s1.x, s2.x, 0);
   res2.x = INTRINSIC (_mask_fixupimm_ps) (res2.x, mask, s1.x, s2.x, 0);
-  res3.x = INTRINSIC (_maskz_fixupimm_ps) (mask, res3.x, s1.x, s2.x, 0);
+  res3.x = INTRINSIC (_maskz_fixupimm_ps) (mask, s1.x, s2.x, 0);
 
   if (UNION_CHECK (AVX512F_LEN,) (res1, res_ref))
     abort ();
@@ -16,10 +16,10 @@ volatile __mmask8 m;
 void extern
 avx512f_test (void)
 {
-  x = _mm_fixupimm_sd (x, x, y, 3);
+  x = _mm_fixupimm_sd (x, y, 3);
   x = _mm_mask_fixupimm_sd (x, m, x, y, 3);
-  x = _mm_maskz_fixupimm_sd (m, x, x, y, 3);
-  x = _mm_fixupimm_round_sd (x, x, y, 3, _MM_FROUND_NO_EXC);
+  x = _mm_maskz_fixupimm_sd (m, x, y, 3);
+  x = _mm_fixupimm_round_sd (x, y, 3, _MM_FROUND_NO_EXC);
   x = _mm_mask_fixupimm_round_sd (x, m, x, y, 3, _MM_FROUND_NO_EXC);
-  x = _mm_maskz_fixupimm_round_sd (m, x, x, y, 3, _MM_FROUND_NO_EXC);
+  x = _mm_maskz_fixupimm_round_sd (m, x, y, 3, _MM_FROUND_NO_EXC);
 }
@@ -100,9 +100,9 @@ avx512f_test (void)
       s2.a[0] = controls[j];
       compute_fixupimmpd (&res_ref[0], s1.a[0], s2.a[0]);
 
-      res1.x = _mm_fixupimm_sd (res1.x, s1.x, s2.x, 0);
+      res1.x = _mm_fixupimm_sd (s1.x, s2.x, 0);
       res2.x = _mm_mask_fixupimm_sd (res2.x, mask, s1.x, s2.x, 0);
-      res3.x = _mm_maskz_fixupimm_sd (mask, res3.x, s1.x, s2.x, 0);
+      res3.x = _mm_maskz_fixupimm_sd (mask, s1.x, s2.x, 0);
 
       if (check_union128d (res1, res_ref))
 	abort ();
@@ -16,10 +16,10 @@ volatile __mmask8 m;
 void extern
 avx512f_test (void)
 {
-  x = _mm_fixupimm_ss (x, x, y, 3);
+  x = _mm_fixupimm_ss (x, y, 3);
   x = _mm_mask_fixupimm_ss (x, m, x, y, 3);
-  x = _mm_maskz_fixupimm_ss (m, x, x, y, 3);
-  x = _mm_fixupimm_round_ss (x, x, y, 3, _MM_FROUND_NO_EXC);
+  x = _mm_maskz_fixupimm_ss (m, x, y, 3);
+  x = _mm_fixupimm_round_ss (x, y, 3, _MM_FROUND_NO_EXC);
   x = _mm_mask_fixupimm_round_ss (x, m, x, y, 3, _MM_FROUND_NO_EXC);
-  x = _mm_maskz_fixupimm_round_ss (m, x, x, y, 3, _MM_FROUND_NO_EXC);
+  x = _mm_maskz_fixupimm_round_ss (m, x, y, 3, _MM_FROUND_NO_EXC);
 }
@@ -101,9 +101,9 @@ avx512f_test (void)
       s2.a[0] = controls[j];
      compute_fixupimmps (&res_ref[0], s1.a[0], s2.a[0]);
 
-      res1.x = _mm_fixupimm_ss (res1.x, s1.x, s2.x, 0);
+      res1.x = _mm_fixupimm_ss (s1.x, s2.x, 0);
       res2.x = _mm_mask_fixupimm_ss (res2.x, mask, s1.x, s2.x, 0);
-      res3.x = _mm_maskz_fixupimm_ss (mask, res3.x, s1.x, s2.x, 0);
+      res3.x = _mm_maskz_fixupimm_ss (mask, s1.x, s2.x, 0);
 
       if (check_union128 (res1, res_ref))
 	abort ();
@@ -16,10 +16,10 @@ volatile __mmask8 m;
 void extern
 avx512vl_test (void)
 {
-  xx = _mm256_fixupimm_pd (xx, xx, yy, 3);
+  xx = _mm256_fixupimm_pd (xx, yy, 3);
   xx = _mm256_mask_fixupimm_pd (xx, m, xx, yy, 3);
-  xx = _mm256_maskz_fixupimm_pd (m, xx, xx, yy, 3);
-  x2 = _mm_fixupimm_pd (x2, x2, y2, 3);
+  xx = _mm256_maskz_fixupimm_pd (m, xx, yy, 3);
+  x2 = _mm_fixupimm_pd (x2, y2, 3);
   x2 = _mm_mask_fixupimm_pd (x2, m, x2, y2, 3);
-  x2 = _mm_maskz_fixupimm_pd (m, x2, x2, y2, 3);
+  x2 = _mm_maskz_fixupimm_pd (m, x2, y2, 3);
 }
@@ -16,10 +16,10 @@ volatile __mmask8 m;
 void extern
 avx512vl_test (void)
 {
-  xx = _mm256_fixupimm_ps (xx, xx, yy, 3);
+  xx = _mm256_fixupimm_ps (xx, yy, 3);
   xx = _mm256_mask_fixupimm_ps (xx, m, xx, yy, 3);
-  xx = _mm256_maskz_fixupimm_ps (m, xx, xx, yy, 3);
-  x2 = _mm_fixupimm_ps (x2, x2, y2, 3);
+  xx = _mm256_maskz_fixupimm_ps (m, xx, yy, 3);
+  x2 = _mm_fixupimm_ps (x2, y2, 3);
   x2 = _mm_mask_fixupimm_ps (x2, m, x2, y2, 3);
-  x2 = _mm_maskz_fixupimm_ps (m, x2, x2, y2, 3);
+  x2 = _mm_maskz_fixupimm_ps (m, x2, y2, 3);
 }
@@ -231,14 +231,18 @@
 #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D)
-#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8)
+#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8)
+#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8)
 #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8)
@@ -567,14 +571,19 @@
 #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1)
-#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_maskz(B, C, F, E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1)
+
+#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1)
+#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1)
+#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1)
 #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D)
@@ -444,8 +444,8 @@ test_3v (_mm512_i64scatter_pd, void *, __m512i, __m512d, 1)
 test_3v (_mm512_i64scatter_ps, void *, __m512i, __m256, 1)
 test_3x (_mm512_mask_roundscale_round_pd, __m512d, __m512d, __mmask8, __m512d, 1, 8)
 test_3x (_mm512_mask_roundscale_round_ps, __m512, __m512, __mmask16, __m512, 1, 8)
-test_3x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128d, __m128i, 1, 8)
-test_3x (_mm_fixupimm_round_ss, __m128, __m128, __m128, __m128i, 1, 8)
+test_2x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128i, 1, 8)
+test_2x (_mm_fixupimm_round_ss, __m128, __m128, __m128i, 1, 8)
 test_3x (_mm_mask_cmp_round_sd_mask, __mmask8, __mmask8, __m128d, __m128d, 1, 8)
 test_3x (_mm_mask_cmp_round_ss_mask, __mmask8, __mmask8, __m128, __m128, 1, 8)
 test_4 (_mm512_mask3_fmadd_round_pd, __m512d, __m512d, __m512d, __m512d, __mmask8, 9)
@@ -544,12 +544,12 @@ test_4v (_mm512_mask_i64scatter_pd, void *, __mmask8, __m512i, __m512d, 1)
 test_4v (_mm512_mask_i64scatter_ps, void *, __mmask8, __m512i, __m256, 1)
 test_4x (_mm512_mask_fixupimm_round_pd, __m512d, __m512d, __mmask8, __m512d, __m512i, 1, 8)
 test_4x (_mm512_mask_fixupimm_round_ps, __m512, __m512, __mmask16, __m512, __m512i, 1, 8)
-test_4x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512d, __m512i, 1, 8)
-test_4x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512, __m512i, 1, 8)
+test_3x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512i, 1, 8)
+test_3x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512i, 1, 8)
 test_4x (_mm_mask_fixupimm_round_sd, __m128d, __m128d, __mmask8, __m128d, __m128i, 1, 8)
 test_4x (_mm_mask_fixupimm_round_ss, __m128, __m128, __mmask8, __m128, __m128i, 1, 8)
-test_4x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128d, __m128i, 1, 8)
-test_4x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128, __m128i, 1, 8)
+test_3x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128i, 1, 8)
+test_3x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128i, 1, 8)
 
 /* avx512pfintrin.h */
 test_2vx (_mm512_prefetch_i32gather_ps, __m512i, void const *, 1, _MM_HINT_T0)
@@ -555,8 +555,8 @@ test_3x (_mm512_mask_roundscale_round_pd, __m512d, __m512d, __mmask8, __m512d, 1
 test_3x (_mm512_mask_roundscale_round_ps, __m512, __m512, __mmask16, __m512, 1, 8)
 test_3x (_mm512_mask_cmp_round_pd_mask, __mmask8, __mmask8, __m512d, __m512d, 1, 8)
 test_3x (_mm512_mask_cmp_round_ps_mask, __mmask16, __mmask16, __m512, __m512, 1, 8)
-test_3x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128d, __m128i, 1, 8)
-test_3x (_mm_fixupimm_round_ss, __m128, __m128, __m128, __m128i, 1, 8)
+test_2x (_mm_fixupimm_round_sd, __m128d, __m128d, __m128i, 1, 8)
+test_2x (_mm_fixupimm_round_ss, __m128, __m128, __m128i, 1, 8)
 test_3x (_mm_mask_cmp_round_sd_mask, __mmask8, __mmask8, __m128d, __m128d, 1, 8)
 test_3x (_mm_mask_cmp_round_ss_mask, __mmask8, __mmask8, __m128, __m128, 1, 8)
 test_4 (_mm512_mask3_fmadd_round_pd, __m512d, __m512d, __m512d, __m512d, __mmask8, 9)
@@ -643,12 +643,12 @@ test_4v (_mm512_mask_i64scatter_pd, void *, __mmask8, __m512i, __m512d, 1)
 test_4v (_mm512_mask_i64scatter_ps, void *, __mmask8, __m512i, __m256, 1)
 test_4x (_mm512_mask_fixupimm_round_pd, __m512d, __m512d, __mmask8, __m512d, __m512i, 1, 8)
 test_4x (_mm512_mask_fixupimm_round_ps, __m512, __m512, __mmask16, __m512, __m512i, 1, 8)
-test_4x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512d, __m512i, 1, 8)
-test_4x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512, __m512i, 1, 8)
+test_3x (_mm512_maskz_fixupimm_round_pd, __m512d, __mmask8, __m512d, __m512i, 1, 8)
+test_3x (_mm512_maskz_fixupimm_round_ps, __m512, __mmask16, __m512, __m512i, 1, 8)
 test_4x (_mm_mask_fixupimm_round_sd, __m128d, __m128d, __mmask8, __m128d, __m128i, 1, 8)
 test_4x (_mm_mask_fixupimm_round_ss, __m128, __m128, __mmask8, __m128, __m128i, 1, 8)
-test_4x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128d, __m128i, 1, 8)
-test_4x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128, __m128i, 1, 8)
+test_3x (_mm_maskz_fixupimm_round_sd, __m128d, __mmask8, __m128d, __m128i, 1, 8)
+test_3x (_mm_maskz_fixupimm_round_ss, __m128, __mmask8, __m128, __m128i, 1, 8)
 
 /* avx512pfintrin.h */
 test_2vx (_mm512_prefetch_i32gather_ps, __m512i, void const *, 1, _MM_HINT_T0)
@@ -232,14 +232,18 @@
 #define __builtin_ia32_extractf64x4_mask(A, E, C, D) __builtin_ia32_extractf64x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti32x4_mask(A, E, C, D) __builtin_ia32_extracti32x4_mask(A, 1, C, D)
 #define __builtin_ia32_extracti64x4_mask(A, E, C, D) __builtin_ia32_extracti64x4_mask(A, 1, C, D)
-#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmpd512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmps512_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmsd_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, C, 1, E, 8)
-#define __builtin_ia32_fixupimmss_maskz(A, B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(A, B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmpd512(A, B, C, I) __builtin_ia32_fixupimmpd512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmpd512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmpd512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmpd512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmpd512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmps512(A, B, C, I) __builtin_ia32_fixupimmps512(A, B, 1, 8)
+#define __builtin_ia32_fixupimmps512_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmps512_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmps512_maskz(B, C, I, E, F) __builtin_ia32_fixupimmps512_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmsd(A, B, C, I) __builtin_ia32_fixupimmsd(A, B, 1, 8)
+#define __builtin_ia32_fixupimmsd_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmsd_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmsd_maskz(B, C, I, E, F) __builtin_ia32_fixupimmsd_maskz(B, C, 1, E, 8)
+#define __builtin_ia32_fixupimmss(A, B, C, I) __builtin_ia32_fixupimmss(A, B, 1, 8)
+#define __builtin_ia32_fixupimmss_mask(A, B, C, I, E, F) __builtin_ia32_fixupimmss_mask(A, B, 1, I, E, 8)
+#define __builtin_ia32_fixupimmss_maskz(B, C, I, E, F) __builtin_ia32_fixupimmss_maskz(B, C, 1, E, 8)
 #define __builtin_ia32_gatherdiv8df(A, B, C, D, F) __builtin_ia32_gatherdiv8df(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv8di(A, B, C, D, F) __builtin_ia32_gatherdiv8di(A, B, C, D, 8)
 #define __builtin_ia32_gatherdiv16sf(A, B, C, D, F) __builtin_ia32_gatherdiv16sf(A, B, C, D, 8)
@@ -566,14 +570,19 @@
 #define __builtin_ia32_gather3div4df(A, B, C, D, F) __builtin_ia32_gather3div4df(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2di(A, B, C, D, F) __builtin_ia32_gather3div2di(A, B, C, D, 1)
 #define __builtin_ia32_gather3div2df(A, B, C, D, F) __builtin_ia32_gather3div2df(A, B, C, D, 1)
-#define __builtin_ia32_fixupimmps256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmps128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_maskz(A, B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(A, B, C, 1, E)
-#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_maskz(B, C, F, E) __builtin_ia32_fixupimmps256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps256_mask(A, B, C, F, E) __builtin_ia32_fixupimmps256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps256(A, B, C) __builtin_ia32_fixupimmps256(A, B, 1)
+
+#define __builtin_ia32_fixupimmps128_maskz(B, C, F, E) __builtin_ia32_fixupimmps128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmps128_mask(A, B, C, F, E) __builtin_ia32_fixupimmps128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmps128(A, B, C) __builtin_ia32_fixupimmps128(A, B, 1)
+#define __builtin_ia32_fixupimmpd256_maskz(B, C, F, E) __builtin_ia32_fixupimmpd256_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd256_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd256_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd256(A, B, C) __builtin_ia32_fixupimmpd256(A, B, 1)
+#define __builtin_ia32_fixupimmpd128_maskz(B, C, F, E) __builtin_ia32_fixupimmpd128_maskz(B, C, 1, E)
+#define __builtin_ia32_fixupimmpd128_mask(A, B, C, F, E) __builtin_ia32_fixupimmpd128_mask(A, B, 1, F, E)
+#define __builtin_ia32_fixupimmpd128(A, B, C) __builtin_ia32_fixupimmpd128(A, B, 1)
 #define __builtin_ia32_extracti32x4_256_mask(A, E, C, D) __builtin_ia32_extracti32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_extractf32x4_256_mask(A, E, C, D) __builtin_ia32_extractf32x4_256_mask(A, 1, C, D)
 #define __builtin_ia32_cmpq256_mask(A, B, E, D) __builtin_ia32_cmpq256_mask(A, B, 1, D)
@@ -69,21 +69,21 @@ test8bit (void)
  m512 = _mm512_mask_shuffle_ps (m512, mmask16, m512, m512, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */
  m512 = _mm512_maskz_shuffle_ps (mmask16, m512, m512, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */

  m512d = _mm512_fixupimm_pd (m512d, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512d = _mm512_fixupimm_pd (m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512d = _mm512_mask_fixupimm_pd (m512d, mmask8, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512d = _mm512_maskz_fixupimm_pd (mmask8, m512d, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512d = _mm512_maskz_fixupimm_pd (mmask8, m512d, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */

  m512 = _mm512_fixupimm_ps (m512, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512 = _mm512_fixupimm_ps (m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512 = _mm512_mask_fixupimm_ps (m512, mmask16, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512 = _mm512_maskz_fixupimm_ps (mmask16, m512, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m512 = _mm512_maskz_fixupimm_ps (mmask16, m512, m512i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */

  m128d = _mm_fixupimm_sd (m128d, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128d = _mm_fixupimm_sd (m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128d = _mm_mask_fixupimm_sd (m128d, mmask8, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128d = _mm_maskz_fixupimm_sd (mmask8, m128d, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128d = _mm_maskz_fixupimm_sd (mmask8, m128d, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */

  m128 = _mm_fixupimm_ss (m128, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128 = _mm_fixupimm_ss (m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128 = _mm_mask_fixupimm_ss (m128, mmask8, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128 = _mm_maskz_fixupimm_ss (mmask8, m128, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */
  m128 = _mm_maskz_fixupimm_ss (mmask8, m128, m128i, 256); /* { dg-error "the immediate argument must be an 8-bit immediate" } */

  m512i = _mm512_rol_epi32 (m512i, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */
  m512i = _mm512_mask_rol_epi32 (m512i, mmask16, m512i, 256); /* { dg-error "the last argument must be an 8-bit immediate" } */

@@ -220,18 +220,18 @@ test_round (void)
  m512i = _mm512_mask_cvtt_roundps_epu32 (m512i, mmask16, m512, 7); /* { dg-error "incorrect rounding operand" } */
  m512i = _mm512_maskz_cvtt_roundps_epu32 (mmask16, m512, 7); /* { dg-error "incorrect rounding operand" } */

  m512d = _mm512_fixupimm_round_pd (m512d, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_fixupimm_round_pd (m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_mask_fixupimm_round_pd (m512d, mmask8, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_fixupimm_round_ps (m512, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_fixupimm_round_ps (m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_mask_fixupimm_round_ps (m512, mmask16, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_fixupimm_round_sd (m128d, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_fixupimm_round_sd (m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_mask_fixupimm_round_sd (m128d, mmask8, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_fixupimm_round_ss (m128, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_fixupimm_round_ss (m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_mask_fixupimm_round_ss (m128, mmask8, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128i, 4, 7); /* { dg-error "incorrect rounding operand" } */

  ui = _mm_cvtt_roundss_u32 (m128, 7); /* { dg-error "incorrect rounding operand" } */
  i = _mm_cvtt_roundss_i32 (m128, 7); /* { dg-error "incorrect rounding operand" } */

@@ -503,18 +503,18 @@ test_sae_only (void)
  m512i = _mm512_mask_cvtt_roundps_epu32 (m512i, mmask16, m512, 3); /* { dg-error "incorrect rounding operand" } */
  m512i = _mm512_maskz_cvtt_roundps_epu32 (mmask16, m512, 3); /* { dg-error "incorrect rounding operand" } */

  m512d = _mm512_fixupimm_round_pd (m512d, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_fixupimm_round_pd (m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_mask_fixupimm_round_pd (m512d, mmask8, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_fixupimm_round_ps (m512, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512d = _mm512_maskz_fixupimm_round_pd (mmask8, m512d, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_fixupimm_round_ps (m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_mask_fixupimm_round_ps (m512, mmask16, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_fixupimm_round_sd (m128d, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m512 = _mm512_maskz_fixupimm_round_ps (mmask16, m512, m512i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_fixupimm_round_sd (m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_mask_fixupimm_round_sd (m128d, mmask8, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_fixupimm_round_ss (m128, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128d = _mm_maskz_fixupimm_round_sd (mmask8, m128d, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_fixupimm_round_ss (m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_mask_fixupimm_round_ss (m128, mmask8, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */
  m128 = _mm_maskz_fixupimm_round_ss (mmask8, m128, m128i, 4, 3); /* { dg-error "incorrect rounding operand" } */

  ui = _mm_cvtt_roundss_u32 (m128, 3); /* { dg-error "incorrect rounding operand" } */
  i = _mm_cvtt_roundss_i32 (m128, 3); /* { dg-error "incorrect rounding operand" } */