Middle-end _BitInt support [PR102989]

The following patch introduces the middle-end part of the _BitInt
support: a new BITINT_TYPE and its handling where needed, except for the
lowering pass and sanitizer support.
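As a concrete illustration (not part of the patch; it requires a target
implementing the new bitint_type_info hook), the kind of C23 code this
enables is:

/* Large _BitInt multiplication will be lowered (by the follow-up
   lowering pass) via the new MULBITINT internal function and the
   __mulbitint3 libgcc routine added here.  */
unsigned _BitInt(239)
umul (unsigned _BitInt(239) a, unsigned _BitInt(239) b)
{
  return a * b;
}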

2023-09-06  Jakub Jelinek  <jakub@redhat.com>

	PR c/102989
	* tree.def (BITINT_TYPE): New type.
	* tree.h (TREE_CHECK6, TREE_NOT_CHECK6): Define.
	(NUMERICAL_TYPE_CHECK, INTEGRAL_TYPE_P): Include
	BITINT_TYPE.
	(BITINT_TYPE_P): Define.
	(CONSTRUCTOR_BITFIELD_P): Return true even for BLKmode bit-fields
	if they have BITINT_TYPE.
	(tree_check6, tree_not_check6): New inline functions.
	(any_integral_type_check): Include BITINT_TYPE.
	(build_bitint_type): Declare.
	* tree.cc (tree_code_size, wide_int_to_tree_1, cache_integer_cst,
	build_zero_cst, type_hash_canon_hash, type_cache_hasher::equal,
	type_hash_canon): Handle BITINT_TYPE.
	(bitint_type_cache): New variable.
	(build_bitint_type): New function.
	(signed_or_unsigned_type_for, verify_type_variant, verify_type):
	Handle BITINT_TYPE.
	(tree_cc_finalize): Free bitint_type_cache.
	* builtins.cc (type_to_class): Handle BITINT_TYPE.
	(fold_builtin_unordered_cmp): Handle BITINT_TYPE like INTEGER_TYPE.
	* cfgexpand.cc (expand_debug_expr): Punt on BLKmode BITINT_TYPE
	INTEGER_CSTs.
	* convert.cc (convert_to_pointer_1, convert_to_real_1,
	convert_to_complex_1): Handle BITINT_TYPE like INTEGER_TYPE.
	(convert_to_integer_1): Likewise.  For BITINT_TYPE don't check
	GET_MODE_PRECISION (TYPE_MODE (type)).
	* doc/generic.texi (BITINT_TYPE): Document.
	* doc/tm.texi.in (TARGET_C_BITINT_TYPE_INFO): New.
	* doc/tm.texi: Regenerated.
	* dwarf2out.cc (base_type_die, is_base_type, modified_type_die,
	gen_type_die_with_usage): Handle BITINT_TYPE.
	(rtl_for_decl_init): Punt on BLKmode BITINT_TYPE INTEGER_CSTs or
	handle those which fit into shwi.
	* expr.cc (expand_expr_real_1): Define EXTEND_BITINT macro, reduce
	reads from BITINT_TYPE vars, parameters or memory locations to
	bit-field precision.  Expand large/huge BITINT_TYPE INTEGER_CSTs
	into memory.
	* fold-const.cc (fold_convert_loc, make_range_step): Handle
	BITINT_TYPE.
	(extract_muldiv_1): For BITINT_TYPE use TYPE_PRECISION rather than
	GET_MODE_SIZE (SCALAR_INT_TYPE_MODE).
	(native_encode_int, native_interpret_int, native_interpret_expr):
	Handle BITINT_TYPE.
	* gimple-expr.cc (useless_type_conversion_p): Make conversions
	between BITINT_TYPE and other integral types non-useless.
	* gimple-fold.cc (gimple_fold_builtin_memset): Punt for BITINT_TYPE.
	(clear_padding_unit): Mention in comment that _BitInt types don't need
	to fit either.
	(clear_padding_bitint_needs_padding_p): New function.
	(clear_padding_type_may_have_padding_p): Handle BITINT_TYPE.
	(clear_padding_type): Likewise.
	* internal-fn.cc (expand_mul_overflow): For unsigned non-mode
	precision operands force pos_neg? to 1.
	(expand_MULBITINT, expand_DIVMODBITINT, expand_FLOATTOBITINT,
	expand_BITINTTOFLOAT): New functions.
	* internal-fn.def (MULBITINT, DIVMODBITINT, FLOATTOBITINT,
	BITINTTOFLOAT): New internal functions.
	* internal-fn.h (expand_MULBITINT, expand_DIVMODBITINT,
	expand_FLOATTOBITINT, expand_BITINTTOFLOAT): Declare.
	* match.pd (non-equality compare simplifications from fold_binary):
	Punt if TYPE_MODE (arg1_type) is BLKmode.
	* pretty-print.h (pp_wide_int): Handle printing of large precision
	wide_ints which would overflow digit_buffer.
	* stor-layout.cc (finish_bitfield_representative): For bit-fields
	with BITINT_TYPE, prefer representatives with precisions that are
	multiples of the limb precision.
	(layout_type): Handle BITINT_TYPE.  Handle COMPLEX_TYPE with BLKmode
	element type and assert it is BITINT_TYPE.
	* target.def (bitint_type_info): New C target hook.
	* target.h (struct bitint_info): New type.
	* targhooks.cc (default_bitint_type_info): New function.
	* targhooks.h (default_bitint_type_info): Declare.
	* tree-pretty-print.cc (dump_generic_node): Handle BITINT_TYPE.
	Handle printing large wide_ints which would overflow digit_buffer.
	* tree-ssa-sccvn.cc: Include target.h.
	(eliminate_dom_walker::eliminate_stmt): Punt for large/huge
	BITINT_TYPE.
	* tree-switch-conversion.cc (jump_table_cluster::emit): For
	BITINT_TYPE wider than 64 bits, subtract the low bound from the
	controlling expression and the case labels, and cast both to a
	64-bit integer type.
	* typeclass.h (enum type_class): Add bitint_type_class enumerator.
	* varasm.cc (output_constant): Handle BITINT_TYPE INTEGER_CSTs.
	* vr-values.cc (check_for_binary_op_overflow): Use widest2_int rather
	than widest_int.
	(simplify_using_ranges::simplify_internal_call_using_ranges): Use
	unsigned_type_for rather than build_nonstandard_integer_type.

diff --git a/gcc/builtins.cc b/gcc/builtins.cc

@ -1876,6 +1876,7 @@ type_to_class (tree type)
? string_type_class : array_type_class);
case LANG_TYPE: return lang_type_class;
case OPAQUE_TYPE: return opaque_type_class;
case BITINT_TYPE: return bitint_type_class;
default: return no_type_class;
}
}
@ -9423,9 +9424,11 @@ fold_builtin_unordered_cmp (location_t loc, tree fndecl, tree arg0, tree arg1,
/* Choose the wider of two real types. */
cmp_type = TYPE_PRECISION (type0) >= TYPE_PRECISION (type1)
? type0 : type1;
else if (code0 == REAL_TYPE && code1 == INTEGER_TYPE)
else if (code0 == REAL_TYPE
&& (code1 == INTEGER_TYPE || code1 == BITINT_TYPE))
cmp_type = type0;
else if (code0 == INTEGER_TYPE && code1 == REAL_TYPE)
else if ((code0 == INTEGER_TYPE || code0 == BITINT_TYPE)
&& code1 == REAL_TYPE)
cmp_type = type1;
arg0 = fold_convert_loc (loc, cmp_type, arg0);

diff --git a/gcc/cfgexpand.cc b/gcc/cfgexpand.cc

@ -4524,6 +4524,10 @@ expand_debug_expr (tree exp)
/* Fall through. */
case INTEGER_CST:
if (TREE_CODE (TREE_TYPE (exp)) == BITINT_TYPE
&& TYPE_MODE (TREE_TYPE (exp)) == BLKmode)
return NULL;
/* FALLTHRU */
case REAL_CST:
case FIXED_CST:
op0 = expand_expr (exp, NULL_RTX, mode, EXPAND_INITIALIZER);

diff --git a/gcc/convert.cc b/gcc/convert.cc

@ -77,6 +77,7 @@ convert_to_pointer_1 (tree type, tree expr, bool fold_p)
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
{
/* If the input precision differs from the target pointer type
precision, first convert the input expression to an integer type of
@ -316,6 +317,7 @@ convert_to_real_1 (tree type, tree expr, bool fold_p)
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
return build1 (FLOAT_EXPR, type, expr);
case FIXED_POINT_TYPE:
@ -660,6 +662,7 @@ convert_to_integer_1 (tree type, tree expr, bool dofold)
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case OFFSET_TYPE:
case BITINT_TYPE:
/* If this is a logical operation, which just returns 0 or 1, we can
change the type of the expression. */
@ -701,7 +704,9 @@ convert_to_integer_1 (tree type, tree expr, bool dofold)
type corresponding to its mode, then do a nop conversion
to TYPE. */
else if (TREE_CODE (type) == ENUMERAL_TYPE
|| maybe_ne (outprec, GET_MODE_PRECISION (TYPE_MODE (type))))
|| (TREE_CODE (type) != BITINT_TYPE
&& maybe_ne (outprec,
GET_MODE_PRECISION (TYPE_MODE (type)))))
{
expr
= convert_to_integer_1 (lang_hooks.types.type_for_mode
@ -1000,6 +1005,7 @@ convert_to_complex_1 (tree type, tree expr, bool fold_p)
case INTEGER_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
return build2 (COMPLEX_EXPR, type, convert (subtype, expr),
convert (subtype, integer_zero_node));

diff --git a/gcc/doc/generic.texi b/gcc/doc/generic.texi

@ -290,6 +290,7 @@ The elements are indexed from zero.
@tindex INTEGER_TYPE
@tindex TYPE_MIN_VALUE
@tindex TYPE_MAX_VALUE
@tindex BITINT_TYPE
@tindex REAL_TYPE
@tindex FIXED_POINT_TYPE
@tindex COMPLEX_TYPE
@ -449,6 +450,14 @@ integer that may be represented by this type. Similarly, the
@code{TYPE_MAX_VALUE} is an @code{INTEGER_CST} for the largest integer
that may be represented by this type.
@item BITINT_TYPE
Used to represent bit-precise integer types, @code{_BitInt(@var{N})}.
These types are similar to @code{INTEGER_TYPE}, but can have arbitrary
user-selected precisions and can have different alignment and different
function argument and return value passing conventions.
Larger @code{BITINT_TYPE}s can have @code{BLKmode} @code{TYPE_MODE} and
need to be lowered by a special BITINT_TYPE lowering pass.
@item REAL_TYPE
Used to represent the @code{float}, @code{double}, and @code{long
double} types. The number of bits in the floating-point representation

diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi

@ -1020,6 +1020,21 @@ Return a value, with the same meaning as the C99 macro
@code{FLT_EVAL_METHOD} that describes which excess precision should be
applied.
@deftypefn {Target Hook} bool TARGET_C_BITINT_TYPE_INFO (int @var{n}, struct bitint_info *@var{info})
This target hook returns true if @code{_BitInt(@var{N})} is supported and
provides details on it.  @code{_BitInt(@var{N})} is to be represented as a
series of @code{CEIL (@var{N}, GET_MODE_PRECISION (info->limb_mode))}
limbs of mode @code{info->limb_mode}, ordered from least significant to
most significant if @code{!info->big_endian}, otherwise from most
significant to least significant.  If @code{info->extended} is false, the
bits at or above @var{N} are undefined when stored in a register or
memory, otherwise they are zero or sign extended depending on whether the
type is @code{unsigned _BitInt(@var{N})} or @code{_BitInt(@var{N})} /
@code{signed _BitInt(@var{N})}.  Alignment of the type is
@code{GET_MODE_ALIGNMENT (info->limb_mode)}.
@end deftypefn
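A hypothetical implementation of the hook for a little-endian target could
look like this (a sketch for illustration only; no target is converted in
this commit, and the mode choices below are assumptions):

static bool
example_bitint_type_info (int n, struct bitint_info *info)
{
  if (n <= 8)
    info->limb_mode = QImode;
  else if (n <= 16)
    info->limb_mode = HImode;
  else if (n <= 32)
    info->limb_mode = SImode;
  else
    info->limb_mode = DImode;	/* 64-bit limbs for larger precisions.  */
  info->big_endian = false;	/* Least significant limb first.  */
  info->extended = false;	/* Bits at or above N are undefined.  */
  return true;
}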
@deftypefn {Target Hook} machine_mode TARGET_PROMOTE_FUNCTION_MODE (const_tree @var{type}, machine_mode @var{mode}, int *@var{punsignedp}, const_tree @var{funtype}, int @var{for_return})
Like @code{PROMOTE_MODE}, but it is applied to outgoing function arguments or
function return values. The target hook should return the new mode

diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in

@ -936,6 +936,8 @@ Return a value, with the same meaning as the C99 macro
@code{FLT_EVAL_METHOD} that describes which excess precision should be
applied.
@hook TARGET_C_BITINT_TYPE_INFO
@hook TARGET_PROMOTE_FUNCTION_MODE
@defmac PARM_BOUNDARY

diff --git a/gcc/dwarf2out.cc b/gcc/dwarf2out.cc

@ -13298,6 +13298,14 @@ base_type_die (tree type, bool reverse)
encoding = DW_ATE_boolean;
break;
case BITINT_TYPE:
/* C23 _BitInt(N). */
if (TYPE_UNSIGNED (type))
encoding = DW_ATE_unsigned;
else
encoding = DW_ATE_signed;
break;
default:
/* No other TREE_CODEs are Dwarf fundamental types. */
gcc_unreachable ();
@ -13308,6 +13316,8 @@ base_type_die (tree type, bool reverse)
add_AT_unsigned (base_type_result, DW_AT_byte_size,
int_size_in_bytes (type));
add_AT_unsigned (base_type_result, DW_AT_encoding, encoding);
if (TREE_CODE (type) == BITINT_TYPE)
add_AT_unsigned (base_type_result, DW_AT_bit_size, TYPE_PRECISION (type));
if (need_endianity_attribute_p (reverse))
add_AT_unsigned (base_type_result, DW_AT_endianity,
@ -13392,6 +13402,7 @@ is_base_type (tree type)
case FIXED_POINT_TYPE:
case COMPLEX_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
return true;
case VOID_TYPE:
@ -13990,12 +14001,24 @@ modified_type_die (tree type, int cv_quals, bool reverse,
name = DECL_NAME (name);
add_name_attribute (mod_type_die, IDENTIFIER_POINTER (name));
}
/* This probably indicates a bug. */
else if (mod_type_die && mod_type_die->die_tag == DW_TAG_base_type)
{
name = TYPE_IDENTIFIER (type);
add_name_attribute (mod_type_die,
name ? IDENTIFIER_POINTER (name) : "__unknown__");
if (TREE_CODE (type) == BITINT_TYPE)
{
char name_buf[sizeof ("unsigned _BitInt(2147483647)")];
snprintf (name_buf, sizeof (name_buf),
"%s_BitInt(%d)", TYPE_UNSIGNED (type) ? "unsigned " : "",
TYPE_PRECISION (type));
add_name_attribute (mod_type_die, name_buf);
}
else
{
/* This probably indicates a bug. */
name = TYPE_IDENTIFIER (type);
add_name_attribute (mod_type_die,
name
? IDENTIFIER_POINTER (name) : "__unknown__");
}
}
if (qualified_type && !reverse_base_type)
@ -20523,6 +20546,17 @@ rtl_for_decl_init (tree init, tree type)
return NULL;
}
/* Large _BitInt BLKmode INTEGER_CSTs would yield a MEM. */
if (TREE_CODE (init) == INTEGER_CST
&& TREE_CODE (TREE_TYPE (init)) == BITINT_TYPE
&& TYPE_MODE (TREE_TYPE (init)) == BLKmode)
{
if (tree_fits_shwi_p (init))
return GEN_INT (tree_to_shwi (init));
else
return NULL;
}
rtl = expand_expr (init, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
/* If expand_expr returns a MEM, it wasn't immediate. */
@ -26361,6 +26395,7 @@ gen_type_die_with_usage (tree type, dw_die_ref context_die,
case FIXED_POINT_TYPE:
case COMPLEX_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
/* No DIEs needed for fundamental types. */
break;
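Taken together, the DIE emitted for, say, unsigned _BitInt(193) with
64-bit limbs would carry roughly the following attributes (reconstructed
from the code above, not an actual debugger dump):

/* DW_TAG_base_type
     DW_AT_name       "unsigned _BitInt(193)"
     DW_AT_encoding   DW_ATE_unsigned
     DW_AT_byte_size  32   (four 64-bit limbs)
     DW_AT_bit_size   193  */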

diff --git a/gcc/expr.cc b/gcc/expr.cc

@ -10650,6 +10650,25 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
tree ssa_name = NULL_TREE;
gimple *g;
/* Some ABIs define padding bits in _BitInt uninitialized.  Normally, RTL
expansion sign/zero extends integral types with less than mode precision
when reading from bit-fields and after arithmetic operations (see
REDUCE_BIT_FIELD in expand_expr_real_2), and on subsequent loads relies
on those extensions having already been performed; but because of the
above, for _BitInt they need to be sign/zero extended when reading from
locations that could be exposed to ABI boundaries (when loading from
objects in memory, or from function arguments or return values).  Because
we internally extend after arithmetic operations, we can avoid doing that
when reading from SSA_NAMEs of vars. */
#define EXTEND_BITINT(expr) \
((TREE_CODE (type) == BITINT_TYPE \
&& reduce_bit_field \
&& mode != BLKmode \
&& modifier != EXPAND_MEMORY \
&& modifier != EXPAND_WRITE \
&& modifier != EXPAND_CONST_ADDRESS) \
? reduce_to_bit_field_precision ((expr), NULL_RTX, type) : (expr))
type = TREE_TYPE (exp);
mode = TYPE_MODE (type);
unsignedp = TYPE_UNSIGNED (type);
@ -10823,6 +10842,13 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
ssa_name = exp;
decl_rtl = get_rtx_for_ssa_name (ssa_name);
exp = SSA_NAME_VAR (ssa_name);
/* Optimize and avoid EXTEND_BITINT doing anything if it is an
SSA_NAME computed within the current function.  In that case the
value has already been extended before.  But if it is a function
parameter, result or some memory location, we need to be prepared
for some other compiler leaving the bits uninitialized. */
if (!exp || VAR_P (exp))
reduce_bit_field = false;
goto expand_decl_rtl;
case VAR_DECL:
@ -10956,7 +10982,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
MEM_ALIGN (temp), NULL_RTX, NULL);
return temp;
return EXTEND_BITINT (temp);
}
if (exp)
@ -11002,13 +11028,35 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
temp = gen_lowpart_SUBREG (mode, decl_rtl);
SUBREG_PROMOTED_VAR_P (temp) = 1;
SUBREG_PROMOTED_SET (temp, unsignedp);
return temp;
return EXTEND_BITINT (temp);
}
return decl_rtl;
return EXTEND_BITINT (decl_rtl);
case INTEGER_CST:
{
if (TREE_CODE (type) == BITINT_TYPE)
{
unsigned int prec = TYPE_PRECISION (type);
struct bitint_info info;
gcc_assert (targetm.c.bitint_type_info (prec, &info));
scalar_int_mode limb_mode
= as_a <scalar_int_mode> (info.limb_mode);
unsigned int limb_prec = GET_MODE_PRECISION (limb_mode);
if (prec > limb_prec)
{
scalar_int_mode arith_mode
= (targetm.scalar_mode_supported_p (TImode)
? TImode : DImode);
if (prec > GET_MODE_PRECISION (arith_mode))
{
/* Emit large/huge _BitInt INTEGER_CSTs into memory. */
exp = tree_output_constant_def (exp);
return expand_expr (exp, target, VOIDmode, modifier);
}
}
}
/* Given that TYPE_PRECISION (type) is not always equal to
GET_MODE_PRECISION (TYPE_MODE (type)), we need to extend from
the former to the latter according to the signedness of the
@ -11187,7 +11235,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
&& align < GET_MODE_ALIGNMENT (mode))
temp = expand_misaligned_mem_ref (temp, mode, unsignedp,
align, NULL_RTX, NULL);
return temp;
return EXTEND_BITINT (temp);
}
case MEM_REF:
@ -11258,7 +11306,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
? NULL_RTX : target, alt_rtl);
if (reverse)
temp = flip_storage_order (mode, temp);
return temp;
return EXTEND_BITINT (temp);
}
case ARRAY_REF:
@ -11810,6 +11858,8 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
&& modifier != EXPAND_WRITE)
op0 = flip_storage_order (mode1, op0);
op0 = EXTEND_BITINT (op0);
if (mode == mode1 || mode1 == BLKmode || mode1 == tmode
|| modifier == EXPAND_CONST_ADDRESS
|| modifier == EXPAND_INITIALIZER)
@ -12155,6 +12205,7 @@ expand_expr_real_1 (tree exp, rtx target, machine_mode tmode,
return expand_expr_real_2 (&ops, target, tmode, modifier);
}
}
#undef EXTEND_BITINT
/* Subroutine of above: reduce EXP to the precision of TYPE (in the
signedness of TYPE), possibly returning the result in TARGET.

diff --git a/gcc/fold-const.cc b/gcc/fold-const.cc

@ -2558,7 +2558,7 @@ fold_convert_loc (location_t loc, tree type, tree arg)
/* fall through */
case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
case OFFSET_TYPE:
case OFFSET_TYPE: case BITINT_TYPE:
if (TREE_CODE (arg) == INTEGER_CST)
{
tem = fold_convert_const (NOP_EXPR, type, arg);
@ -2598,7 +2598,7 @@ fold_convert_loc (location_t loc, tree type, tree arg)
switch (TREE_CODE (orig))
{
case INTEGER_TYPE:
case INTEGER_TYPE: case BITINT_TYPE:
case BOOLEAN_TYPE: case ENUMERAL_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
return fold_build1_loc (loc, FLOAT_EXPR, type, arg);
@ -2633,6 +2633,7 @@ fold_convert_loc (location_t loc, tree type, tree arg)
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case REAL_TYPE:
case BITINT_TYPE:
return fold_build1_loc (loc, FIXED_CONVERT_EXPR, type, arg);
case COMPLEX_TYPE:
@ -2646,7 +2647,7 @@ fold_convert_loc (location_t loc, tree type, tree arg)
case COMPLEX_TYPE:
switch (TREE_CODE (orig))
{
case INTEGER_TYPE:
case INTEGER_TYPE: case BITINT_TYPE:
case BOOLEAN_TYPE: case ENUMERAL_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
case REAL_TYPE:
@ -5325,6 +5326,8 @@ make_range_step (location_t loc, enum tree_code code, tree arg0, tree arg1,
equiv_type
= lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type),
TYPE_SATURATING (arg0_type));
else if (TREE_CODE (arg0_type) == BITINT_TYPE)
equiv_type = arg0_type;
else
equiv_type
= lang_hooks.types.type_for_mode (TYPE_MODE (arg0_type), 1);
@ -6851,10 +6854,19 @@ extract_muldiv_1 (tree t, tree c, enum tree_code code, tree wide_type,
{
tree type = TREE_TYPE (t);
enum tree_code tcode = TREE_CODE (t);
tree ctype = (wide_type != 0
&& (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
> GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
? wide_type : type);
tree ctype = type;
if (wide_type)
{
if (TREE_CODE (type) == BITINT_TYPE
|| TREE_CODE (wide_type) == BITINT_TYPE)
{
if (TYPE_PRECISION (wide_type) > TYPE_PRECISION (type))
ctype = wide_type;
}
else if (GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (wide_type))
> GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type)))
ctype = wide_type;
}
tree t1, t2;
bool same_p = tcode == code;
tree op0 = NULL_TREE, op1 = NULL_TREE;
@ -7715,7 +7727,29 @@ static int
native_encode_int (const_tree expr, unsigned char *ptr, int len, int off)
{
tree type = TREE_TYPE (expr);
int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
int total_bytes;
if (TREE_CODE (type) == BITINT_TYPE)
{
struct bitint_info info;
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
&info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
{
total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
/* More work is needed when adding _BitInt support to PDP endian
if limb is smaller than word, or if _BitInt limb ordering doesn't
match target endianity here. */
gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
&& (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
|| (GET_MODE_SIZE (limb_mode)
>= UNITS_PER_WORD)));
}
else
total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
}
else
total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
int byte, offset, word, words;
unsigned char value;
@ -8623,7 +8657,29 @@ native_encode_initializer (tree init, unsigned char *ptr, int len,
static tree
native_interpret_int (tree type, const unsigned char *ptr, int len)
{
int total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
int total_bytes;
if (TREE_CODE (type) == BITINT_TYPE)
{
struct bitint_info info;
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
&info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) > GET_MODE_PRECISION (limb_mode))
{
total_bytes = tree_to_uhwi (TYPE_SIZE_UNIT (type));
/* More work is needed when adding _BitInt support to PDP endian
if limb is smaller than word, or if _BitInt limb ordering doesn't
match target endianity here. */
gcc_checking_assert (info.big_endian == WORDS_BIG_ENDIAN
&& (BYTES_BIG_ENDIAN == WORDS_BIG_ENDIAN
|| (GET_MODE_SIZE (limb_mode)
>= UNITS_PER_WORD)));
}
else
total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
}
else
total_bytes = GET_MODE_SIZE (SCALAR_INT_TYPE_MODE (type));
if (total_bytes > len
|| total_bytes * BITS_PER_UNIT > HOST_BITS_PER_DOUBLE_INT)
@ -8825,6 +8881,7 @@ native_interpret_expr (tree type, const unsigned char *ptr, int len)
case POINTER_TYPE:
case REFERENCE_TYPE:
case OFFSET_TYPE:
case BITINT_TYPE:
return native_interpret_int (type, ptr, len);
case REAL_TYPE:

diff --git a/gcc/gimple-expr.cc b/gcc/gimple-expr.cc

@ -111,6 +111,15 @@ useless_type_conversion_p (tree outer_type, tree inner_type)
&& TYPE_PRECISION (outer_type) != 1)
return false;
/* Preserve conversions to/from BITINT_TYPE. While we don't
need to care that much about such conversions within a function's
body, we need to prevent changing BITINT_TYPE to INTEGER_TYPE
of the same precision or vice versa when passed to functions,
especially for varargs. */
if ((TREE_CODE (inner_type) == BITINT_TYPE)
!= (TREE_CODE (outer_type) == BITINT_TYPE))
return false;
/* We don't need to preserve changes in the types minimum or
maximum value in general as these do not generate code
unless the types precisions are different. */
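The varargs hazard this guards against can be seen in plain C (an
illustrative example, not a testcase from the patch):

#include <stdarg.h>

/* If the ABI passes _BitInt(32) differently from int, replacing one
   type with the other in the va_arg below would break the contract
   with the caller even though both are 32-bit integral types.  */
int
first_arg (int n, ...)
{
  va_list ap;
  va_start (ap, n);
  _BitInt(32) v = va_arg (ap, _BitInt(32));
  va_end (ap);
  return (int) v;
}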

diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc

@ -1475,8 +1475,9 @@ gimple_fold_builtin_memset (gimple_stmt_iterator *gsi, tree c, tree len)
if (TREE_CODE (etype) == ARRAY_TYPE)
etype = TREE_TYPE (etype);
if (!INTEGRAL_TYPE_P (etype)
&& !POINTER_TYPE_P (etype))
if ((!INTEGRAL_TYPE_P (etype)
&& !POINTER_TYPE_P (etype))
|| TREE_CODE (etype) == BITINT_TYPE)
return NULL_TREE;
if (! var_decl_component_p (var))
@ -4102,8 +4103,8 @@ gimple_fold_builtin_realloc (gimple_stmt_iterator *gsi)
return false;
}
/* Number of bytes into which any type but aggregate or vector types
should fit. */
/* Number of bytes into which any type but aggregate, vector or
_BitInt types should fit. */
static constexpr size_t clear_padding_unit
= MAX_BITSIZE_MODE_ANY_MODE / BITS_PER_UNIT;
/* Buffer size on which __builtin_clear_padding folding code works. */
@ -4594,6 +4595,26 @@ clear_padding_real_needs_padding_p (tree type)
&& (fmt->signbit_ro == 79 || fmt->signbit_ro == 95));
}
/* A _BitInt has padding bits if it isn't extended in the ABI and its
precision is smaller than the number of bits in its limb, or in the
corresponding number of limbs. */
static bool
clear_padding_bitint_needs_padding_p (tree type)
{
struct bitint_info info;
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type), &info));
if (info.extended)
return false;
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) < GET_MODE_PRECISION (limb_mode))
return true;
else if (TYPE_PRECISION (type) == GET_MODE_PRECISION (limb_mode))
return false;
else
return (((unsigned) TYPE_PRECISION (type))
% GET_MODE_PRECISION (limb_mode)) != 0;
}
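For example, with 64-bit limbs and !info.extended this returns:

/* _BitInt(7)   -> true   (57 padding bits in the single limb)
   _BitInt(64)  -> false  (exactly one full limb)
   _BitInt(193) -> true   (193 % 64 == 1, 63 padding bits in top limb)
   _BitInt(256) -> false  (four full limbs)  */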
/* Return true if TYPE might contain any padding bits. */
bool
@ -4610,6 +4631,8 @@ clear_padding_type_may_have_padding_p (tree type)
return clear_padding_type_may_have_padding_p (TREE_TYPE (type));
case REAL_TYPE:
return clear_padding_real_needs_padding_p (type);
case BITINT_TYPE:
return clear_padding_bitint_needs_padding_p (type);
default:
return false;
}
@ -4854,6 +4877,57 @@ clear_padding_type (clear_padding_struct *buf, tree type,
memset (buf->buf + buf->size, ~0, sz);
buf->size += sz;
break;
case BITINT_TYPE:
{
struct bitint_info info;
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type), &info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
{
gcc_assert ((size_t) sz <= clear_padding_unit);
if ((unsigned HOST_WIDE_INT) sz + buf->size
> clear_padding_buf_size)
clear_padding_flush (buf, false);
if (!info.extended
&& TYPE_PRECISION (type) < GET_MODE_PRECISION (limb_mode))
{
int tprec = GET_MODE_PRECISION (limb_mode);
int prec = TYPE_PRECISION (type);
tree t = build_nonstandard_integer_type (tprec, 1);
tree cst = wide_int_to_tree (t, wi::mask (prec, true, tprec));
int len = native_encode_expr (cst, buf->buf + buf->size, sz);
gcc_assert (len > 0 && (size_t) len == (size_t) sz);
}
else
memset (buf->buf + buf->size, 0, sz);
buf->size += sz;
break;
}
tree limbtype
= build_nonstandard_integer_type (GET_MODE_PRECISION (limb_mode), 1);
fldsz = int_size_in_bytes (limbtype);
nelts = int_size_in_bytes (type) / fldsz;
for (HOST_WIDE_INT i = 0; i < nelts; i++)
{
if (!info.extended
&& i == (info.big_endian ? 0 : nelts - 1)
&& (((unsigned) TYPE_PRECISION (type))
% TYPE_PRECISION (limbtype)) != 0)
{
int tprec = GET_MODE_PRECISION (limb_mode);
int prec = (((unsigned) TYPE_PRECISION (type)) % tprec);
tree cst = wide_int_to_tree (limbtype,
wi::mask (prec, true, tprec));
int len = native_encode_expr (cst, buf->buf + buf->size,
fldsz);
gcc_assert (len > 0 && (size_t) len == (size_t) fldsz);
buf->size += fldsz;
}
else
clear_padding_type (buf, limbtype, fldsz, for_auto_init);
}
break;
}
default:
gcc_assert ((size_t) sz <= clear_padding_unit);
if ((unsigned HOST_WIDE_INT) sz + buf->size > clear_padding_buf_size)

diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc

@ -1647,6 +1647,12 @@ expand_mul_overflow (location_t loc, tree lhs, tree arg0, tree arg1,
int pos_neg0 = get_range_pos_neg (arg0);
int pos_neg1 = get_range_pos_neg (arg1);
/* Unsigned types with precision smaller than that of their mode, even if
they have the most significant bit set, are still zero-extended. */
if (uns0_p && TYPE_PRECISION (TREE_TYPE (arg0)) < GET_MODE_PRECISION (mode))
pos_neg0 = 1;
if (uns1_p && TYPE_PRECISION (TREE_TYPE (arg1)) < GET_MODE_PRECISION (mode))
pos_neg1 = 1;
/* s1 * u2 -> ur */
if (!uns0_p && uns1_p && unsr_p)
@ -4906,3 +4912,104 @@ expand_MASK_CALL (internal_fn, gcall *)
/* This IFN should only exist between ifcvt and vect passes. */
gcc_unreachable ();
}
void
expand_MULBITINT (internal_fn, gcall *stmt)
{
rtx_mode_t args[6];
for (int i = 0; i < 6; i++)
args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
(i & 1) ? SImode : ptr_mode);
rtx fun = init_one_libfunc ("__mulbitint3");
emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 6, args);
}
void
expand_DIVMODBITINT (internal_fn, gcall *stmt)
{
rtx_mode_t args[8];
for (int i = 0; i < 8; i++)
args[i] = rtx_mode_t (expand_normal (gimple_call_arg (stmt, i)),
(i & 1) ? SImode : ptr_mode);
rtx fun = init_one_libfunc ("__divmodbitint4");
emit_library_call_value_1 (0, fun, NULL_RTX, LCT_NORMAL, VOIDmode, 8, args);
}
void
expand_FLOATTOBITINT (internal_fn, gcall *stmt)
{
machine_mode mode = TYPE_MODE (TREE_TYPE (gimple_call_arg (stmt, 2)));
rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
rtx arg2 = expand_normal (gimple_call_arg (stmt, 2));
const char *mname = GET_MODE_NAME (mode);
unsigned mname_len = strlen (mname);
int len = 12 + mname_len;
if (DECIMAL_FLOAT_MODE_P (mode))
len += 4;
char *libfunc_name = XALLOCAVEC (char, len);
char *p = libfunc_name;
const char *q;
if (DECIMAL_FLOAT_MODE_P (mode))
{
#if ENABLE_DECIMAL_BID_FORMAT
memcpy (p, "__bid_fix", 9);
#else
memcpy (p, "__dpd_fix", 9);
#endif
p += 9;
}
else
{
memcpy (p, "__fix", 5);
p += 5;
}
for (q = mname; *q; q++)
*p++ = TOLOWER (*q);
memcpy (p, "bitint", 7);
rtx fun = init_one_libfunc (libfunc_name);
emit_library_call (fun, LCT_NORMAL, VOIDmode, arg0, ptr_mode, arg1,
SImode, arg2, mode);
}
void
expand_BITINTTOFLOAT (internal_fn, gcall *stmt)
{
tree lhs = gimple_call_lhs (stmt);
if (!lhs)
return;
machine_mode mode = TYPE_MODE (TREE_TYPE (lhs));
rtx arg0 = expand_normal (gimple_call_arg (stmt, 0));
rtx arg1 = expand_normal (gimple_call_arg (stmt, 1));
const char *mname = GET_MODE_NAME (mode);
unsigned mname_len = strlen (mname);
int len = 14 + mname_len;
if (DECIMAL_FLOAT_MODE_P (mode))
len += 4;
char *libfunc_name = XALLOCAVEC (char, len);
char *p = libfunc_name;
const char *q;
if (DECIMAL_FLOAT_MODE_P (mode))
{
#if ENABLE_DECIMAL_BID_FORMAT
memcpy (p, "__bid_floatbitint", 17);
#else
memcpy (p, "__dpd_floatbitint", 17);
#endif
p += 17;
}
else
{
memcpy (p, "__floatbitint", 13);
p += 13;
}
for (q = mname; *q; q++)
*p++ = TOLOWER (*q);
*p = '\0';
rtx fun = init_one_libfunc (libfunc_name);
rtx target = expand_expr (lhs, NULL_RTX, VOIDmode, EXPAND_WRITE);
rtx val = emit_library_call_value (fun, target, LCT_PURE, mode,
arg0, ptr_mode, arg1, SImode);
if (val != target)
emit_move_insn (target, val);
}
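For concreteness, the name construction in the two functions above yields,
for example (derived from the code, assuming the standard mode names):

/* FLOATTOBITINT from SFmode:  __fixsfbitint
   FLOATTOBITINT from TDmode:  __bid_fixtdbitint (or __dpd_fixtdbitint)
   BITINTTOFLOAT to DFmode:    __floatbitintdf
   BITINTTOFLOAT to XFmode:    __floatbitintxf  */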

diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def

@ -561,6 +561,12 @@ DEF_INTERNAL_FN (ASSUME, ECF_CONST | ECF_LEAF | ECF_NOTHROW
/* For if-conversion of inbranch SIMD clones. */
DEF_INTERNAL_FN (MASK_CALL, ECF_NOVOPS, NULL)
/* _BitInt support. */
DEF_INTERNAL_FN (MULBITINT, ECF_LEAF | ECF_NOTHROW, ". O . R . R . ")
DEF_INTERNAL_FN (DIVMODBITINT, ECF_LEAF, ". O . O . R . R . ")
DEF_INTERNAL_FN (FLOATTOBITINT, ECF_LEAF | ECF_NOTHROW, ". O . . ")
DEF_INTERNAL_FN (BITINTTOFLOAT, ECF_PURE | ECF_LEAF, ". R . ")
#undef DEF_INTERNAL_INT_FN
#undef DEF_INTERNAL_FLT_FN
#undef DEF_INTERNAL_FLT_FLOATN_FN

diff --git a/gcc/internal-fn.h b/gcc/internal-fn.h

@ -257,6 +257,10 @@ extern void expand_SPACESHIP (internal_fn, gcall *);
extern void expand_TRAP (internal_fn, gcall *);
extern void expand_ASSUME (internal_fn, gcall *);
extern void expand_MASK_CALL (internal_fn, gcall *);
extern void expand_MULBITINT (internal_fn, gcall *);
extern void expand_DIVMODBITINT (internal_fn, gcall *);
extern void expand_FLOATTOBITINT (internal_fn, gcall *);
extern void expand_BITINTTOFLOAT (internal_fn, gcall *);
extern bool vectorized_internal_fn_supported_p (internal_fn, tree);

diff --git a/gcc/match.pd b/gcc/match.pd

@ -6772,6 +6772,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
- 1)); }))))
(if (wi::to_wide (cst) == signed_max
&& TYPE_UNSIGNED (arg1_type)
&& TYPE_MODE (arg1_type) != BLKmode
/* We will flip the signedness of the comparison operator
associated with the mode of @1, so the sign bit is
specified by this mode. Check that @1 is the signed

diff --git a/gcc/pretty-print.h b/gcc/pretty-print.h

@ -336,8 +336,23 @@ pp_get_prefix (const pretty_printer *pp) { return pp->prefix; }
#define pp_wide_int(PP, W, SGN) \
do \
{ \
print_dec (W, pp_buffer (PP)->digit_buffer, SGN); \
pp_string (PP, pp_buffer (PP)->digit_buffer); \
const wide_int_ref &pp_wide_int_ref = (W); \
unsigned int pp_wide_int_prec \
= pp_wide_int_ref.get_precision (); \
if ((pp_wide_int_prec + 3) / 4 \
> sizeof (pp_buffer (PP)->digit_buffer) - 3) \
{ \
char *pp_wide_int_buf \
= XALLOCAVEC (char, (pp_wide_int_prec + 3) / 4 + 3);\
print_dec (pp_wide_int_ref, pp_wide_int_buf, SGN); \
pp_string (PP, pp_wide_int_buf); \
} \
else \
{ \
print_dec (pp_wide_int_ref, \
pp_buffer (PP)->digit_buffer, SGN); \
pp_string (PP, pp_buffer (PP)->digit_buffer); \
} \
} \
while (0)
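One subtlety behind the (prec + 3) / 4 bound above (an observation, not
text from the patch): print_dec only prints decimal for values that fit in
a host wide int and falls back to print_hex otherwise, so at most one
digit per four bits is produced, plus room for the "0x" prefix and the
terminating NUL.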
#define pp_vrange(PP, R) \

diff --git a/gcc/stor-layout.cc b/gcc/stor-layout.cc

@ -2148,6 +2148,22 @@ finish_bitfield_representative (tree repr, tree field)
|| GET_MODE_BITSIZE (mode) > maxbitsize
|| GET_MODE_BITSIZE (mode) > MAX_FIXED_MODE_SIZE)
{
if (TREE_CODE (TREE_TYPE (field)) == BITINT_TYPE)
{
struct bitint_info info;
unsigned prec = TYPE_PRECISION (TREE_TYPE (field));
gcc_assert (targetm.c.bitint_type_info (prec, &info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
unsigned lprec = GET_MODE_PRECISION (limb_mode);
if (prec > lprec)
{
/* For middle/large/huge _BitInt prefer bitsize being a multiple
of limb precision. */
unsigned HOST_WIDE_INT bsz = CEIL (bitsize, lprec) * lprec;
if (bsz <= maxbitsize)
bitsize = bsz;
}
}
/* We really want a BLKmode representative only as a last resort,
considering the member b in
struct { int a : 7; int b : 17; int c; } __attribute__((packed));
@ -2393,6 +2409,64 @@ layout_type (tree type)
break;
}
case BITINT_TYPE:
{
struct bitint_info info;
int cnt;
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type), &info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
{
SET_TYPE_MODE (type, limb_mode);
cnt = 1;
}
else
{
SET_TYPE_MODE (type, BLKmode);
cnt = CEIL (TYPE_PRECISION (type), GET_MODE_PRECISION (limb_mode));
}
TYPE_SIZE (type) = bitsize_int (cnt * GET_MODE_BITSIZE (limb_mode));
TYPE_SIZE_UNIT (type) = size_int (cnt * GET_MODE_SIZE (limb_mode));
SET_TYPE_ALIGN (type, GET_MODE_ALIGNMENT (limb_mode));
if (cnt > 1)
{
/* Use same mode as compute_record_mode would use for a structure
containing cnt limb_mode elements. */
machine_mode mode = mode_for_size_tree (TYPE_SIZE (type),
MODE_INT, 1).else_blk ();
if (mode == BLKmode)
break;
finalize_type_size (type);
SET_TYPE_MODE (type, mode);
if (STRICT_ALIGNMENT
&& !(TYPE_ALIGN (type) >= BIGGEST_ALIGNMENT
|| TYPE_ALIGN (type) >= GET_MODE_ALIGNMENT (mode)))
{
/* If this is the only reason this type is BLKmode, then
don't force containing types to be BLKmode. */
TYPE_NO_FORCE_BLK (type) = 1;
SET_TYPE_MODE (type, BLKmode);
}
if (TYPE_NEXT_VARIANT (type) || type != TYPE_MAIN_VARIANT (type))
for (tree variant = TYPE_MAIN_VARIANT (type);
variant != NULL_TREE;
variant = TYPE_NEXT_VARIANT (variant))
{
SET_TYPE_MODE (variant, mode);
if (STRICT_ALIGNMENT
&& !(TYPE_ALIGN (variant) >= BIGGEST_ALIGNMENT
|| (TYPE_ALIGN (variant)
>= GET_MODE_ALIGNMENT (mode))))
{
TYPE_NO_FORCE_BLK (variant) = 1;
SET_TYPE_MODE (variant, BLKmode);
}
}
return;
}
break;
}
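A worked example of the layout above, assuming 64-bit limbs (DImode):

/* _BitInt(60):  TYPE_MODE = DImode, cnt = 1,
                 TYPE_SIZE = 64, TYPE_SIZE_UNIT = 8.
   _BitInt(193): cnt = CEIL (193, 64) = 4, TYPE_SIZE = 256,
                 TYPE_SIZE_UNIT = 32, alignment 64; TYPE_MODE stays
                 BLKmode unless the target has a 256-bit integer mode.  */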
case REAL_TYPE:
{
/* Allow the caller to choose the type mode, which is how decimal
@ -2417,6 +2491,18 @@ layout_type (tree type)
case COMPLEX_TYPE:
TYPE_UNSIGNED (type) = TYPE_UNSIGNED (TREE_TYPE (type));
if (TYPE_MODE (TREE_TYPE (type)) == BLKmode)
{
gcc_checking_assert (TREE_CODE (TREE_TYPE (type)) == BITINT_TYPE);
SET_TYPE_MODE (type, BLKmode);
TYPE_SIZE (type)
= int_const_binop (MULT_EXPR, TYPE_SIZE (TREE_TYPE (type)),
bitsize_int (2));
TYPE_SIZE_UNIT (type)
= int_const_binop (MULT_EXPR, TYPE_SIZE_UNIT (TREE_TYPE (type)),
bitsize_int (2));
break;
}
SET_TYPE_MODE (type,
GET_MODE_COMPLEX_MODE (TYPE_MODE (TREE_TYPE (type))));

diff --git a/gcc/target.def b/gcc/target.def

@ -6249,6 +6249,25 @@ when @var{type} is @code{EXCESS_PRECISION_TYPE_STANDARD},\n\
enum flt_eval_method, (enum excess_precision_type type),
default_excess_precision)
/* Return true if _BitInt(N) is supported and fill details about it into
*INFO. */
DEFHOOK
(bitint_type_info,
"This target hook returns true if @code{_BitInt(@var{N})} is supported and\n\
provides details on it. @code{_BitInt(@var{N})} is to be represented as\n\
series of @code{info->limb_mode}\n\
@code{CEIL (@var{N}, GET_MODE_PRECISION (info->limb_mode))} limbs,\n\
ordered from least significant to most significant if\n\
@code{!info->big_endian}, otherwise from most significant to least\n\
significant. If @code{info->extended} is false, the bits above or equal to\n\
@var{N} are undefined when stored in a register or memory, otherwise they\n\
are zero or sign extended depending on if it is\n\
@code{unsigned _BitInt(@var{N})} or one of @code{_BitInt(@var{N})} or\n\
@code{signed _BitInt(@var{N})}. Alignment of the type is\n\
@code{GET_MODE_ALIGNMENT (info->limb_mode)}.",
bool, (int n, struct bitint_info *info),
default_bitint_type_info)
HOOK_VECTOR_END (c)
/* Functions specific to the C++ frontend. */

diff --git a/gcc/target.h b/gcc/target.h

@ -68,6 +68,20 @@ union cumulative_args_t { void *p; };
#endif /* !CHECKING_P */
/* Target properties of the _BitInt(N) type.  _BitInt(N) is to be
represented as a series of CEIL (N, GET_MODE_PRECISION (limb_mode)) limbs
of mode limb_mode, ordered from least significant to most significant if
!big_endian, otherwise from most significant to least significant.  If
extended is false, the bits at or above N are undefined when stored in a
register or memory, otherwise they are zero or sign extended depending on
whether the type is unsigned _BitInt(N) or _BitInt(N) / signed _BitInt(N). */
struct bitint_info {
machine_mode limb_mode;
bool big_endian;
bool extended;
};
/* Types of memory operation understood by the "by_pieces" infrastructure.
Used by the TARGET_USE_BY_PIECES_INFRASTRUCTURE_P target hook and
internally by the functions in expr.cc. */

diff --git a/gcc/targhooks.cc b/gcc/targhooks.cc

@ -2597,6 +2597,14 @@ default_excess_precision (enum excess_precision_type ATTRIBUTE_UNUSED)
return FLT_EVAL_METHOD_PROMOTE_TO_FLOAT;
}
/* Return true if _BitInt(N) is supported and fill details about it into
*INFO. */
bool
default_bitint_type_info (int, struct bitint_info *)
{
return false;
}
/* Default implementation for
TARGET_STACK_CLASH_PROTECTION_ALLOCA_PROBE_RANGE. */
HOST_WIDE_INT

diff --git a/gcc/targhooks.h b/gcc/targhooks.h

@ -284,6 +284,7 @@ extern unsigned int default_min_arithmetic_precision (void);
extern enum flt_eval_method
default_excess_precision (enum excess_precision_type ATTRIBUTE_UNUSED);
extern bool default_bitint_type_info (int, struct bitint_info *);
extern HOST_WIDE_INT default_stack_clash_protection_alloca_probe_range (void);
extern void default_select_early_remat_modes (sbitmap);
extern tree default_preferred_else_value (unsigned, tree, unsigned, tree *);

diff --git a/gcc/tree-pretty-print.cc b/gcc/tree-pretty-print.cc

@ -1929,6 +1929,7 @@ dump_generic_node (pretty_printer *pp, tree node, int spc, dump_flags_t flags,
case VECTOR_TYPE:
case ENUMERAL_TYPE:
case BOOLEAN_TYPE:
case BITINT_TYPE:
case OPAQUE_TYPE:
{
unsigned int quals = TYPE_QUALS (node);
@ -2043,6 +2044,14 @@ dump_generic_node (pretty_printer *pp, tree node, int spc, dump_flags_t flags,
pp_decimal_int (pp, TYPE_PRECISION (node));
pp_greater (pp);
}
else if (TREE_CODE (node) == BITINT_TYPE)
{
if (TYPE_UNSIGNED (node))
pp_string (pp, "unsigned ");
pp_string (pp, "_BitInt(");
pp_decimal_int (pp, TYPE_PRECISION (node));
pp_right_paren (pp);
}
else if (TREE_CODE (node) == VOID_TYPE)
pp_string (pp, "void");
else
@ -2239,8 +2248,18 @@ dump_generic_node (pretty_printer *pp, tree node, int spc, dump_flags_t flags,
pp_minus (pp);
val = -val;
}
print_hex (val, pp_buffer (pp)->digit_buffer);
pp_string (pp, pp_buffer (pp)->digit_buffer);
unsigned int prec = val.get_precision ();
if ((prec + 3) / 4 > sizeof (pp_buffer (pp)->digit_buffer) - 3)
{
char *buf = XALLOCAVEC (char, (prec + 3) / 4 + 3);
print_hex (val, buf);
pp_string (pp, buf);
}
else
{
print_hex (val, pp_buffer (pp)->digit_buffer);
pp_string (pp, pp_buffer (pp)->digit_buffer);
}
}
if ((flags & TDF_GIMPLE)
&& ! (POINTER_TYPE_P (TREE_TYPE (node))

diff --git a/gcc/tree-ssa-sccvn.cc b/gcc/tree-ssa-sccvn.cc

@ -77,6 +77,7 @@ along with GCC; see the file COPYING3. If not see
#include "alloc-pool.h"
#include "symbol-summary.h"
#include "ipa-prop.h"
#include "target.h"
/* This algorithm is based on the SCC algorithm presented by Keith
Cooper and L. Taylor Simpson in "SCC-Based Value numbering"
@ -7001,8 +7002,14 @@ eliminate_dom_walker::eliminate_stmt (basic_block b, gimple_stmt_iterator *gsi)
|| !DECL_BIT_FIELD_TYPE (TREE_OPERAND (lhs, 1)))
&& !type_has_mode_precision_p (TREE_TYPE (lhs)))
{
if (TREE_CODE (lhs) == COMPONENT_REF
|| TREE_CODE (lhs) == MEM_REF)
if (TREE_CODE (TREE_TYPE (lhs)) == BITINT_TYPE
&& (TYPE_PRECISION (TREE_TYPE (lhs))
> (targetm.scalar_mode_supported_p (TImode)
? GET_MODE_PRECISION (TImode)
: GET_MODE_PRECISION (DImode))))
lookup_lhs = NULL_TREE;
else if (TREE_CODE (lhs) == COMPONENT_REF
|| TREE_CODE (lhs) == MEM_REF)
{
tree ltype = build_nonstandard_integer_type
(TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (lhs))),

diff --git a/gcc/tree-switch-conversion.cc b/gcc/tree-switch-conversion.cc

@ -1143,32 +1143,89 @@ jump_table_cluster::emit (tree index_expr, tree,
tree default_label_expr, basic_block default_bb,
location_t loc)
{
unsigned HOST_WIDE_INT range = get_range (get_low (), get_high ());
tree low = get_low ();
unsigned HOST_WIDE_INT range = get_range (low, get_high ());
unsigned HOST_WIDE_INT nondefault_range = 0;
bool bitint = false;
gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
/* For large/huge _BitInt, subtract low from index_expr, cast to an
unsigned DImode type (get_range doesn't support ranges larger than
64 bits) and subtract low from all case values as well. */
if (TREE_CODE (TREE_TYPE (index_expr)) == BITINT_TYPE
&& TYPE_PRECISION (TREE_TYPE (index_expr)) > GET_MODE_PRECISION (DImode))
{
bitint = true;
tree this_low = low, type;
gimple *g;
gimple_seq seq = NULL;
if (!TYPE_OVERFLOW_WRAPS (TREE_TYPE (index_expr)))
{
type = unsigned_type_for (TREE_TYPE (index_expr));
index_expr = gimple_convert (&seq, type, index_expr);
this_low = fold_convert (type, this_low);
}
this_low = const_unop (NEGATE_EXPR, TREE_TYPE (this_low), this_low);
index_expr = gimple_build (&seq, PLUS_EXPR, TREE_TYPE (index_expr),
index_expr, this_low);
type = build_nonstandard_integer_type (GET_MODE_PRECISION (DImode), 1);
g = gimple_build_cond (GT_EXPR, index_expr,
fold_convert (TREE_TYPE (index_expr),
TYPE_MAX_VALUE (type)),
NULL_TREE, NULL_TREE);
gimple_seq_add_stmt (&seq, g);
gimple_seq_set_location (seq, loc);
gsi_insert_seq_after (&gsi, seq, GSI_NEW_STMT);
edge e1 = split_block (m_case_bb, g);
e1->flags = EDGE_FALSE_VALUE;
e1->probability = profile_probability::likely ();
edge e2 = make_edge (e1->src, default_bb, EDGE_TRUE_VALUE);
e2->probability = e1->probability.invert ();
gsi = gsi_start_bb (e1->dest);
seq = NULL;
index_expr = gimple_convert (&seq, type, index_expr);
gimple_seq_set_location (seq, loc);
gsi_insert_seq_after (&gsi, seq, GSI_NEW_STMT);
}
/* For a jump table we just emit a new gswitch statement that will
later be lowered to a jump table. */
auto_vec <tree> labels;
labels.create (m_cases.length ());
make_edge (m_case_bb, default_bb, 0);
basic_block case_bb = gsi_bb (gsi);
make_edge (case_bb, default_bb, 0);
for (unsigned i = 0; i < m_cases.length (); i++)
{
labels.quick_push (unshare_expr (m_cases[i]->m_case_label_expr));
make_edge (m_case_bb, m_cases[i]->m_case_bb, 0);
tree lab = unshare_expr (m_cases[i]->m_case_label_expr);
if (bitint)
{
CASE_LOW (lab)
= fold_convert (TREE_TYPE (index_expr),
const_binop (MINUS_EXPR,
TREE_TYPE (CASE_LOW (lab)),
CASE_LOW (lab), low));
if (CASE_HIGH (lab))
CASE_HIGH (lab)
= fold_convert (TREE_TYPE (index_expr),
const_binop (MINUS_EXPR,
TREE_TYPE (CASE_HIGH (lab)),
CASE_HIGH (lab), low));
}
labels.quick_push (lab);
make_edge (case_bb, m_cases[i]->m_case_bb, 0);
}
gswitch *s = gimple_build_switch (index_expr,
unshare_expr (default_label_expr), labels);
gimple_set_location (s, loc);
gimple_stmt_iterator gsi = gsi_start_bb (m_case_bb);
gsi_insert_after (&gsi, s, GSI_NEW_STMT);
/* Set up even probabilities for all cases. */
for (unsigned i = 0; i < m_cases.length (); i++)
{
simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
edge case_edge = find_edge (case_bb, sc->m_case_bb);
unsigned HOST_WIDE_INT case_range
= sc->get_range (sc->get_low (), sc->get_high ());
nondefault_range += case_range;
@ -1184,7 +1241,7 @@ jump_table_cluster::emit (tree index_expr, tree,
for (unsigned i = 0; i < m_cases.length (); i++)
{
simple_cluster *sc = static_cast<simple_cluster *> (m_cases[i]);
edge case_edge = find_edge (m_case_bb, sc->m_case_bb);
edge case_edge = find_edge (case_bb, sc->m_case_bb);
case_edge->probability
= profile_probability::always ().apply_scale ((intptr_t)case_edge->aux,
range);
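In GIMPLE terms, the transformation for a large _BitInt controlling
expression amounts to the following (a sketch, with low the smallest case
value and N the _BitInt precision):

/* switch (x) { case c0: ... case c1: ... }   becomes

   utmp = (unsigned _BitInt(N)) x + (unsigned _BitInt(N)) -low;
   if (utmp > 0xffffffffffffffff)
     goto default;
   switch ((unsigned long long) utmp)
     { case c0 - low: ... case c1 - low: ... }  */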

diff --git a/gcc/tree.cc b/gcc/tree.cc

@ -991,6 +991,7 @@ tree_code_size (enum tree_code code)
case VOID_TYPE:
case FUNCTION_TYPE:
case METHOD_TYPE:
case BITINT_TYPE:
case LANG_TYPE: return sizeof (tree_type_non_common);
default:
gcc_checking_assert (code >= NUM_TREE_CODES);
@ -1732,6 +1733,7 @@ wide_int_to_tree_1 (tree type, const wide_int_ref &pcst)
case INTEGER_TYPE:
case OFFSET_TYPE:
case BITINT_TYPE:
if (TYPE_SIGN (type) == UNSIGNED)
{
/* Cache [0, N). */
@ -1915,6 +1917,7 @@ cache_integer_cst (tree t, bool might_duplicate ATTRIBUTE_UNUSED)
case INTEGER_TYPE:
case OFFSET_TYPE:
case BITINT_TYPE:
if (TYPE_UNSIGNED (type))
{
/* Cache 0..N */
@ -2637,7 +2640,7 @@ build_zero_cst (tree type)
{
case INTEGER_TYPE: case ENUMERAL_TYPE: case BOOLEAN_TYPE:
case POINTER_TYPE: case REFERENCE_TYPE:
case OFFSET_TYPE: case NULLPTR_TYPE:
case OFFSET_TYPE: case NULLPTR_TYPE: case BITINT_TYPE:
return build_int_cst (type, 0);
case REAL_TYPE:
@ -6053,7 +6056,16 @@ type_hash_canon_hash (tree type)
hstate.add_object (TREE_INT_CST_ELT (t, i));
break;
}
case BITINT_TYPE:
{
unsigned prec = TYPE_PRECISION (type);
unsigned uns = TYPE_UNSIGNED (type);
hstate.add_object (prec);
hstate.add_int (uns);
break;
}
case REAL_TYPE:
case FIXED_POINT_TYPE:
{
@ -6136,6 +6148,11 @@ type_cache_hasher::equal (type_hash *a, type_hash *b)
|| tree_int_cst_equal (TYPE_MIN_VALUE (a->type),
TYPE_MIN_VALUE (b->type))));
case BITINT_TYPE:
if (TYPE_PRECISION (a->type) != TYPE_PRECISION (b->type))
return false;
return TYPE_UNSIGNED (a->type) == TYPE_UNSIGNED (b->type);
case FIXED_POINT_TYPE:
return TYPE_SATURATING (a->type) == TYPE_SATURATING (b->type);
@ -6236,7 +6253,7 @@ type_hash_canon (unsigned int hashcode, tree type)
/* Free also min/max values and the cache for integer
types. This can't be done in free_node, as LTO frees
those on its own. */
if (TREE_CODE (type) == INTEGER_TYPE)
if (TREE_CODE (type) == INTEGER_TYPE || TREE_CODE (type) == BITINT_TYPE)
{
if (TYPE_MIN_VALUE (type)
&& TREE_TYPE (TYPE_MIN_VALUE (type)) == type)
@ -7154,6 +7171,44 @@ build_nonstandard_boolean_type (unsigned HOST_WIDE_INT precision)
return type;
}
static GTY(()) vec<tree, va_gc> *bitint_type_cache;
/* Builds a signed or unsigned _BitInt(PRECISION) type. */
tree
build_bitint_type (unsigned HOST_WIDE_INT precision, int unsignedp)
{
tree itype, ret;
if (unsignedp)
unsignedp = MAX_INT_CACHED_PREC + 1;
if (bitint_type_cache == NULL)
vec_safe_grow_cleared (bitint_type_cache, 2 * MAX_INT_CACHED_PREC + 2);
if (precision <= MAX_INT_CACHED_PREC)
{
itype = (*bitint_type_cache)[precision + unsignedp];
if (itype)
return itype;
}
itype = make_node (BITINT_TYPE);
TYPE_PRECISION (itype) = precision;
if (unsignedp)
fixup_unsigned_type (itype);
else
fixup_signed_type (itype);
inchash::hash hstate;
inchash::add_expr (TYPE_MAX_VALUE (itype), hstate);
ret = type_hash_canon (hstate.end (), itype);
if (precision <= MAX_INT_CACHED_PREC)
(*bitint_type_cache)[precision + unsignedp] = ret;
return ret;
}
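Usage parallels build_nonstandard_integer_type (sketch):

tree s193 = build_bitint_type (193, /*unsignedp=*/0); /* _BitInt(193) */
tree u193 = build_bitint_type (193, /*unsignedp=*/1); /* unsigned _BitInt(193) */
/* Small precisions are served from bitint_type_cache; everything else is
   canonicalized through type_hash_canon, so repeated calls share nodes.  */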
/* Create a range of some discrete type TYPE (an INTEGER_TYPE, ENUMERAL_TYPE
or BOOLEAN_TYPE) with low bound LOWVAL and high bound HIGHVAL. If SHARED
is true, reuse such a type that has already been constructed. */
@ -11041,6 +11096,8 @@ signed_or_unsigned_type_for (int unsignedp, tree type)
else
return NULL_TREE;
if (TREE_CODE (type) == BITINT_TYPE)
return build_bitint_type (bits, unsignedp);
return build_nonstandard_integer_type (bits, unsignedp);
}
@ -13462,6 +13519,7 @@ verify_type_variant (const_tree t, tree tv)
if ((TREE_CODE (t) == ENUMERAL_TYPE && COMPLETE_TYPE_P (t))
|| TREE_CODE (t) == INTEGER_TYPE
|| TREE_CODE (t) == BOOLEAN_TYPE
|| TREE_CODE (t) == BITINT_TYPE
|| SCALAR_FLOAT_TYPE_P (t)
|| FIXED_POINT_TYPE_P (t))
{
@ -14201,6 +14259,7 @@ verify_type (const_tree t)
}
else if (TREE_CODE (t) == INTEGER_TYPE
|| TREE_CODE (t) == BOOLEAN_TYPE
|| TREE_CODE (t) == BITINT_TYPE
|| TREE_CODE (t) == OFFSET_TYPE
|| TREE_CODE (t) == REFERENCE_TYPE
|| TREE_CODE (t) == NULLPTR_TYPE
@ -14260,6 +14319,7 @@ verify_type (const_tree t)
}
if (TREE_CODE (t) != INTEGER_TYPE
&& TREE_CODE (t) != BOOLEAN_TYPE
&& TREE_CODE (t) != BITINT_TYPE
&& TREE_CODE (t) != OFFSET_TYPE
&& TREE_CODE (t) != REFERENCE_TYPE
&& TREE_CODE (t) != NULLPTR_TYPE
@ -15035,6 +15095,7 @@ void
tree_cc_finalize (void)
{
clear_nonstandard_integer_type_cache ();
vec_free (bitint_type_cache);
}
#if CHECKING_P

diff --git a/gcc/tree.def b/gcc/tree.def

@ -113,7 +113,7 @@ DEFTREECODE (BLOCK, "block", tcc_exceptional, 0)
/* The ordering of the following codes is optimized for the checking
macros in tree.h. Changing the order will degrade the speed of the
compiler. OFFSET_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, INTEGER_TYPE,
REAL_TYPE, POINTER_TYPE. */
BITINT_TYPE, REAL_TYPE, POINTER_TYPE. */
/* An offset is a pointer relative to an object.
The TREE_TYPE field is the type of the object at the offset.
@ -144,6 +144,13 @@ DEFTREECODE (BOOLEAN_TYPE, "boolean_type", tcc_type, 0)
and TYPE_PRECISION (number of bits used by this type). */
DEFTREECODE (INTEGER_TYPE, "integer_type", tcc_type, 0)
/* Bit-precise integer type.  These are similar to INTEGER_TYPEs, but
can have arbitrary user-selected precisions and can have different
alignment and different function argument and return value passing
conventions.  Larger BITINT_TYPEs can have BLKmode TYPE_MODE and need
to be lowered by a special BITINT_TYPE lowering pass. */
DEFTREECODE (BITINT_TYPE, "bitint_type", tcc_type, 0)
/* C's float and double. Different floating types are distinguished
by machine mode and by the TYPE_SIZE and the TYPE_PRECISION. */
DEFTREECODE (REAL_TYPE, "real_type", tcc_type, 0)

diff --git a/gcc/tree.h b/gcc/tree.h

@ -363,6 +363,14 @@ code_helper::is_builtin_fn () const
(tree_not_check5 ((T), __FILE__, __LINE__, __FUNCTION__, \
(CODE1), (CODE2), (CODE3), (CODE4), (CODE5)))
#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
(tree_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
(CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) \
(tree_not_check6 ((T), __FILE__, __LINE__, __FUNCTION__, \
(CODE1), (CODE2), (CODE3), (CODE4), (CODE5), (CODE6)))
#define CONTAINS_STRUCT_CHECK(T, STRUCT) \
(contains_struct_check ((T), (STRUCT), __FILE__, __LINE__, __FUNCTION__))
@ -485,6 +493,8 @@ extern void omp_clause_range_check_failed (const_tree, const char *, int,
#define TREE_NOT_CHECK4(T, CODE1, CODE2, CODE3, CODE4) (T)
#define TREE_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
#define TREE_NOT_CHECK5(T, CODE1, CODE2, CODE3, CODE4, CODE5) (T)
#define TREE_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
#define TREE_NOT_CHECK6(T, CODE1, CODE2, CODE3, CODE4, CODE5, CODE6) (T)
#define TREE_CLASS_CHECK(T, CODE) (T)
#define TREE_RANGE_CHECK(T, CODE1, CODE2) (T)
#define EXPR_CHECK(T) (T)
@ -528,8 +538,8 @@ extern void omp_clause_range_check_failed (const_tree, const char *, int,
TREE_CHECK2 (T, ARRAY_TYPE, INTEGER_TYPE)
#define NUMERICAL_TYPE_CHECK(T) \
TREE_CHECK5 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
FIXED_POINT_TYPE)
TREE_CHECK6 (T, INTEGER_TYPE, ENUMERAL_TYPE, BOOLEAN_TYPE, REAL_TYPE, \
FIXED_POINT_TYPE, BITINT_TYPE)
/* Here is how primitive or already-canonicalized types' hash codes
are made. */
@ -603,7 +613,8 @@ extern void omp_clause_range_check_failed (const_tree, const char *, int,
#define INTEGRAL_TYPE_P(TYPE) \
(TREE_CODE (TYPE) == ENUMERAL_TYPE \
|| TREE_CODE (TYPE) == BOOLEAN_TYPE \
|| TREE_CODE (TYPE) == INTEGER_TYPE)
|| TREE_CODE (TYPE) == INTEGER_TYPE \
|| TREE_CODE (TYPE) == BITINT_TYPE)
/* Nonzero if TYPE represents an integral type, including complex
and vector integer types. */
@ -614,6 +625,10 @@ extern void omp_clause_range_check_failed (const_tree, const char *, int,
|| VECTOR_TYPE_P (TYPE)) \
&& INTEGRAL_TYPE_P (TREE_TYPE (TYPE))))
/* Nonzero if TYPE is a bit-precise integer type. */
#define BITINT_TYPE_P(TYPE) (TREE_CODE (TYPE) == BITINT_TYPE)
/* Nonzero if TYPE represents a non-saturating fixed-point type. */
#define NON_SAT_FIXED_POINT_TYPE_P(TYPE) \
@ -1244,7 +1259,9 @@ extern void omp_clause_range_check_failed (const_tree, const char *, int,
/* True if NODE, a FIELD_DECL, is to be processed as a bitfield for
constructor output purposes. */
#define CONSTRUCTOR_BITFIELD_P(NODE) \
(DECL_BIT_FIELD (FIELD_DECL_CHECK (NODE)) && DECL_MODE (NODE) != BLKmode)
(DECL_BIT_FIELD (FIELD_DECL_CHECK (NODE)) \
&& (DECL_MODE (NODE) != BLKmode \
|| TREE_CODE (TREE_TYPE (NODE)) == BITINT_TYPE))
/* True if NODE is a clobber right hand side, an expression of indeterminate
value that clobbers the LHS in a copy instruction. We use a volatile
@ -3686,6 +3703,38 @@ tree_not_check5 (tree __t, const char *__f, int __l, const char *__g,
return __t;
}
inline tree
tree_check6 (tree __t, const char *__f, int __l, const char *__g,
enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
{
if (TREE_CODE (__t) != __c1
&& TREE_CODE (__t) != __c2
&& TREE_CODE (__t) != __c3
&& TREE_CODE (__t) != __c4
&& TREE_CODE (__t) != __c5
&& TREE_CODE (__t) != __c6)
tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
0);
return __t;
}
inline tree
tree_not_check6 (tree __t, const char *__f, int __l, const char *__g,
enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
{
if (TREE_CODE (__t) == __c1
|| TREE_CODE (__t) == __c2
|| TREE_CODE (__t) == __c3
|| TREE_CODE (__t) == __c4
|| TREE_CODE (__t) == __c5
|| TREE_CODE (__t) == __c6)
tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
__c6, 0);
return __t;
}
inline tree
contains_struct_check (tree __t, const enum tree_node_structure_enum __s,
const char *__f, int __l, const char *__g)
@ -3824,7 +3873,7 @@ any_integral_type_check (tree __t, const char *__f, int __l, const char *__g)
{
if (!ANY_INTEGRAL_TYPE_P (__t))
tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
INTEGER_TYPE, 0);
INTEGER_TYPE, BITINT_TYPE, 0);
return __t;
}
@ -3942,6 +3991,38 @@ tree_not_check5 (const_tree __t, const char *__f, int __l, const char *__g,
return __t;
}
inline const_tree
tree_check6 (const_tree __t, const char *__f, int __l, const char *__g,
enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
{
if (TREE_CODE (__t) != __c1
&& TREE_CODE (__t) != __c2
&& TREE_CODE (__t) != __c3
&& TREE_CODE (__t) != __c4
&& TREE_CODE (__t) != __c5
&& TREE_CODE (__t) != __c6)
tree_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5, __c6,
0);
return __t;
}
inline const_tree
tree_not_check6 (const_tree __t, const char *__f, int __l, const char *__g,
enum tree_code __c1, enum tree_code __c2, enum tree_code __c3,
enum tree_code __c4, enum tree_code __c5, enum tree_code __c6)
{
if (TREE_CODE (__t) == __c1
|| TREE_CODE (__t) == __c2
|| TREE_CODE (__t) == __c3
|| TREE_CODE (__t) == __c4
|| TREE_CODE (__t) == __c5
|| TREE_CODE (__t) == __c6)
tree_not_check_failed (__t, __f, __l, __g, __c1, __c2, __c3, __c4, __c5,
__c6, 0);
return __t;
}
inline const_tree
contains_struct_check (const_tree __t, const enum tree_node_structure_enum __s,
const char *__f, int __l, const char *__g)
@ -4050,7 +4131,7 @@ any_integral_type_check (const_tree __t, const char *__f, int __l,
{
if (!ANY_INTEGRAL_TYPE_P (__t))
tree_check_failed (__t, __f, __l, __g, BOOLEAN_TYPE, ENUMERAL_TYPE,
INTEGER_TYPE, 0);
INTEGER_TYPE, BITINT_TYPE, 0);
return __t;
}
@ -5582,6 +5663,7 @@ extern void build_common_builtin_nodes (void);
extern void tree_cc_finalize (void);
extern tree build_nonstandard_integer_type (unsigned HOST_WIDE_INT, int);
extern tree build_nonstandard_boolean_type (unsigned HOST_WIDE_INT);
extern tree build_bitint_type (unsigned HOST_WIDE_INT, int);
extern tree build_range_type (tree, tree, tree);
extern tree build_nonshared_range_type (tree, tree, tree);
extern bool subrange_type_for_debug_p (const_tree, tree *, tree *);

diff --git a/gcc/typeclass.h b/gcc/typeclass.h

@ -37,7 +37,8 @@ enum type_class
function_type_class, method_type_class,
record_type_class, union_type_class,
array_type_class, string_type_class,
lang_type_class, opaque_type_class
lang_type_class, opaque_type_class,
bitint_type_class
};
#endif /* GCC_TYPECLASS_H */

diff --git a/gcc/varasm.cc b/gcc/varasm.cc

@ -5281,6 +5281,61 @@ output_constant (tree exp, unsigned HOST_WIDE_INT size, unsigned int align,
reverse, false);
break;
case BITINT_TYPE:
if (TREE_CODE (exp) != INTEGER_CST)
error ("initializer for %<_BitInt(%d)%> value is not an integer "
"constant", TYPE_PRECISION (TREE_TYPE (exp)));
else
{
struct bitint_info info;
tree type = TREE_TYPE (exp);
gcc_assert (targetm.c.bitint_type_info (TYPE_PRECISION (type),
&info));
scalar_int_mode limb_mode = as_a <scalar_int_mode> (info.limb_mode);
if (TYPE_PRECISION (type) <= GET_MODE_PRECISION (limb_mode))
{
cst = expand_expr (exp, NULL_RTX, VOIDmode, EXPAND_INITIALIZER);
if (reverse)
cst = flip_storage_order (TYPE_MODE (TREE_TYPE (exp)), cst);
if (!assemble_integer (cst, MIN (size, thissize), align, 0))
error ("initializer for integer/fixed-point value is too "
"complicated");
break;
}
int prec = GET_MODE_PRECISION (limb_mode);
int cnt = CEIL (TYPE_PRECISION (type), prec);
tree limb_type = build_nonstandard_integer_type (prec, 1);
int elt_size = GET_MODE_SIZE (limb_mode);
unsigned int nalign = MIN (align, GET_MODE_ALIGNMENT (limb_mode));
thissize = 0;
if (prec == HOST_BITS_PER_WIDE_INT)
for (int i = 0; i < cnt; i++)
{
int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
tree c;
if (idx >= TREE_INT_CST_EXT_NUNITS (exp))
c = build_int_cst (limb_type,
tree_int_cst_sgn (exp) < 0 ? -1 : 0);
else
c = build_int_cst (limb_type,
TREE_INT_CST_ELT (exp, idx));
output_constant (c, elt_size, nalign, reverse, false);
thissize += elt_size;
}
else
for (int i = 0; i < cnt; i++)
{
int idx = (info.big_endian ^ reverse) ? cnt - 1 - i : i;
wide_int w = wi::rshift (wi::to_wide (exp), idx * prec,
TYPE_SIGN (TREE_TYPE (exp)));
tree c = wide_int_to_tree (limb_type,
wide_int::from (w, prec, UNSIGNED));
output_constant (c, elt_size, nalign, reverse, false);
thissize += elt_size;
}
}
break;
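For instance, on a little-endian target with 64-bit limbs the initializer
below is emitted least significant limb first (illustrative; the actual
directives depend on the assembler):

/* unsigned _BitInt(193) x = ((unsigned _BitInt(193)) 1 << 128) | 3;
   emits four limbs, e.g. on x86-64:
     .quad 3
     .quad 0
     .quad 1
     .quad 0   <- top limb: 1 value bit, 63 undefined padding bits  */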
case ARRAY_TYPE:
case VECTOR_TYPE:
switch (TREE_CODE (exp))

diff --git a/gcc/vr-values.cc b/gcc/vr-values.cc

@ -111,21 +111,21 @@ check_for_binary_op_overflow (range_query *query,
{
/* So far we found that there is an overflow on the boundaries.
That doesn't prove that there is an overflow even for all values
in between the boundaries. For that compute widest_int range
in between the boundaries. For that compute widest2_int range
of the result and see if it doesn't overlap the range of
type. */
widest_int wmin, wmax;
widest_int w[4];
widest2_int wmin, wmax;
widest2_int w[4];
int i;
signop sign0 = TYPE_SIGN (TREE_TYPE (op0));
signop sign1 = TYPE_SIGN (TREE_TYPE (op1));
w[0] = widest_int::from (vr0.lower_bound (), sign0);
w[1] = widest_int::from (vr0.upper_bound (), sign0);
w[2] = widest_int::from (vr1.lower_bound (), sign1);
w[3] = widest_int::from (vr1.upper_bound (), sign1);
w[0] = widest2_int::from (vr0.lower_bound (), sign0);
w[1] = widest2_int::from (vr0.upper_bound (), sign0);
w[2] = widest2_int::from (vr1.lower_bound (), sign1);
w[3] = widest2_int::from (vr1.upper_bound (), sign1);
for (i = 0; i < 4; i++)
{
widest_int wt;
widest2_int wt;
switch (subcode)
{
case PLUS_EXPR:
@ -153,10 +153,10 @@ check_for_binary_op_overflow (range_query *query,
}
/* The result of op0 CODE op1 is known to be in range
[wmin, wmax]. */
widest_int wtmin
= widest_int::from (irange_val_min (type), TYPE_SIGN (type));
widest_int wtmax
= widest_int::from (irange_val_max (type), TYPE_SIGN (type));
widest2_int wtmin
= widest2_int::from (irange_val_min (type), TYPE_SIGN (type));
widest2_int wtmax
= widest2_int::from (irange_val_max (type), TYPE_SIGN (type));
/* If all values in [wmin, wmax] are smaller than
[wtmin, wtmax] or all are larger than [wtmin, wtmax],
the arithmetic operation will always overflow. */
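Why the double-width type matters (an observation, not text from the
patch): with BITINT_TYPE the operand bounds can already occupy the full
widest_int precision, and the products and sums computed above may need up
to twice as many bits; widest2_int represents those intermediates exactly,
so the overflow test itself cannot wrap.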
@ -1760,12 +1760,11 @@ simplify_using_ranges::simplify_internal_call_using_ranges
g = gimple_build_assign (gimple_call_lhs (stmt), subcode, op0, op1);
else
{
int prec = TYPE_PRECISION (type);
tree utype = type;
if (ovf
|| !useless_type_conversion_p (type, TREE_TYPE (op0))
|| !useless_type_conversion_p (type, TREE_TYPE (op1)))
utype = build_nonstandard_integer_type (prec, 1);
utype = unsigned_type_for (type);
if (TREE_CODE (op0) == INTEGER_CST)
op0 = fold_convert (utype, op0);
else if (!useless_type_conversion_p (utype, TREE_TYPE (op0)))