My r16-3810-g6456da6bab8a2c changes broke bootstrap for targets that use
the mutex-based atomic helpers. This fixes it by casting away the
unnecessary volatile qualification from the _Atomic_word* before passing
it to __exchange_and_add_single.
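A minimal sketch of the call (parameter names __mem and __val assumed; in
C++, const_cast removes volatile qualification as well as const):

  // Strip the volatile qualifier before calling the non-volatile helper.
  return __exchange_and_add_single(const_cast<_Atomic_word*>(__mem), __val);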
libstdc++-v3/ChangeLog:
* config/cpu/generic/atomicity_mutex/atomicity.h
(__exchange_and_add): Use const_cast to remove volatile.
Given a sequence such as
  int foo ()
  {
  #pragma GCC unroll 4
    for (int i = 0; i < N; i++)
      if (a[i] == 124)
        return 1;
    return 0;
  }
where a[i] is long long, we will unroll the loop and use an OR reduction
for the early break on Adv. SIMD. The reduction is then followed by a
compression sequence that compresses the 128-bit vectors into 64 bits for
use by the branch.
However, if we have support for add halving and narrowing, then instead of
using an OR we can use an ADDHN, which does the combining and the
narrowing in one instruction.
Note that for now I only replace the last OR; if we have more than one
level of unrolling we could technically chain them. I will revisit this in
an upcoming early break series, but an unroll of 2 is fairly common.
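The trick behind ADDHN, shown as a hedged intrinsics sketch (not the
vectorizer's actual output): for mask vectors whose elements are all-ones
or all-zeros, the high half of the sum of two elements is nonzero iff at
least one input element is nonzero, so a single ADDHN replaces both the OR
and the narrowing step:

  #include <arm_neon.h>

  /* Combine two 64-bit-element mask vectors into one 32-bit-element
     vector whose lane i is nonzero iff lane i of either input is set.  */
  uint32x2_t
  combine_masks (uint64x2_t m0, uint64x2_t m1)
  {
    return vaddhn_u64 (m0, m1);   /* addhn v0.2s, v0.2d, v1.2d */
  }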
gcc/ChangeLog:
* internal-fn.def (VEC_TRUNC_ADD_HIGH): New.
* doc/generic.texi: Document it.
* optabs.def (vec_trunc_add_high): New.
* doc/md.texi: Document it.
* tree-vect-stmts.cc (vectorizable_early_exit): Use addhn if supported.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/vect-early-break-addhn_1.c: New test.
* gcc.target/aarch64/vect-early-break-addhn_2.c: New test.
* gcc.target/aarch64/vect-early-break-addhn_3.c: New test.
* gcc.target/aarch64/vect-early-break-addhn_4.c: New test.
This implements the new vector optab vec_<su>addh_narrow<mode>,
adding support for its use by the vectorizer for early break.
gcc/ChangeLog:
* config/aarch64/aarch64-simd.md (vec_addh_narrow<mode>): New.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/vect-addhn_1.c: New test.
If the user has requested loop unrolling through pragma GCC unroll, then at
the moment we only set LOOP_VINFO_USER_UNROLL if the vectorizer has not
overridden the unroll factor (through backend costing) or if the VF made
the requested unroll factor be 1.
When we have a loop of, say, int and a pragma unroll 4, if the vectorizer
picks V4SI as the mode, the requested unroll ends up exactly matching the
VF. As such the residual requested unroll is 1 and we don't clear the
pragma.
So we did honor the requested unroll factor. However, since we didn't set
the unroll amount back and left it at 4, the RTL unroller won't use the RTL
cost model at all and will just unroll the vector loop 4 times.
But all of these events are costing related, and so it stands to reason
that we should mark LOOP_VINFO_USER_UNROLL as handled, so that we return
the RTL unroller to using the backend costing for any further unrolling.
gcc/ChangeLog:
* tree-vect-loop.cc (vect_analyze_loop_1): If the unroll pragma was set
mark it as handled.
* doc/extend.texi (pragma GCC unroll): Update documentation.
Documentation for `__cmpsf2` and similar functions currently indicates a
return type of `int`. This is not correct, however: the `libgcc`
functions return `CMPtype`, the size of which is determined by the
`libgcc_cmp_return` mode.
Update the documentation to use `CMPtype` and indicate that this is
target-dependent, also mentioning the usual modes.
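For illustration, the documented shape is (CMPtype typically ends up as a
word-sized integer, per the target's libgcc_cmp_return mode):

  /* Three-way comparison of a and b; the exact width of the CMPtype
     return type is target-dependent.  */
  CMPtype __cmpsf2 (float a, float b);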
Reported-by: beetrees <b@beetr.ee>
Fixes: https://github.com/rust-lang/compiler-builtins/issues/919#issuecomment-2905347318
Signed-off-by: Trevor Gross <tmgross@umich.edu>
* doc/libgcc.texi (Comparison functions): Document functions as
returning CMPtype.
PR fortran/121616
gcc/fortran/ChangeLog:
* primary.cc (gfc_variable_attr): Properly set dimension attribute
from a component ref.
gcc/testsuite/ChangeLog:
* gfortran.dg/alloc_comp_assign_17.f90: New test.
This adds checks when incrementing the shared count and the weak count and
will trap if they would be incremented past their maximum. The maximum
value is the value at which incrementing it produces an invalid
use_count(). So that is either the maximum positive value of
_Atomic_word, or, for targets where we now allow the counters to wrap
around to negative values, the "maximum" value is -1, because that is
the value at which one more increment overflows the usable range and
resets the counter to zero.
For the weak count the maximum is always -1, as we always allow that
count to use negative values, so we only trap if it wraps all the way
back to zero.
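A hedged sketch of the shape of the check (__max here is a stand-in for
the per-count maximum described above, not a real name):

  // Trap if the count is already at the value where one more increment
  // would produce an invalid use_count().
  static void
  _S_chk(_Atomic_word __count)
  {
    if (__builtin_expect(__count == __max, false))
      __builtin_trap();
  }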
libstdc++-v3/ChangeLog:
PR libstdc++/71945
* include/bits/shared_ptr_base.h (_Sp_counted_base::_S_chk):
Trap if a reference count cannot be incremented any higher.
(_Sp_counted_base::_M_add_ref_copy): Use _S_chk.
(_Sp_counted_base::_M_add_weak_ref): Likewise.
(_Sp_counted_base<_S_mutex>::_M_add_ref_lock_nothrow): Likewise.
(_Sp_counted_base<_S_atomic>::_M_add_ref_lock_nothrow): Likewise.
(_Sp_counted_base<_S_single>::_M_add_ref_copy): Use _S_chk.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
This change doubles the effective range of the std::shared_ptr and
std::weak_ptr reference counts for most 64-bit targets.
The counter type, _Atomic_word, is usually a signed 32-bit int (except
on Solaris v9 where it is a signed 64-bit long). The return type of
std::shared_ptr::use_count() is long. For targets where long is wider
than _Atomic_word (most 64-bit targets) we can treat the _Atomic_word
reference counts as unsigned and allow them to wrap around from their
most positive value to their most negative value without any problems.
The logic that operates on the counts only cares if they are zero or
non-zero, and never performs relational comparisons. The atomic
fetch_add operations on integers are required by the standard to behave
like unsigned types, so that overflow is well-defined:
"the result is as if the object value and parameters were converted to
their corresponding unsigned types, the computation performed on those
types, and the result converted back to the signed type."
So if we allow the counts to wrap around to negative values, all we need
to do is cast the value to make_unsigned_t<_Atomic_word> before
returning it as long from the use_count() function.
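A hedged sketch of that conversion (member name _M_use_count assumed, and
a plain load stands in for the real atomic one; make_unsigned_t is from
<type_traits>):

  long
  _M_get_use_count() const noexcept
  {
    // Cast to the unsigned counterpart so a counter that has wrapped to
    // a negative _Atomic_word value reads back as a large positive long.
    return static_cast<std::make_unsigned_t<_Atomic_word>>(_M_use_count);
  }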
In practice even exceeding INT_MAX is extremely unlikely, as it would
require billions of shared_ptr or weak_ptr objects to have been
constructed and never destroyed. However, if that happens we now have
double the range before the count returns to zero and causes problems.
Some of the member functions for the _Sp_counted_base<_S_single>
specialization are adjusted to use the __atomic_add_single and
__exchange_and_add_single helpers instead of plain ++ and -- operations.
This is done because those helpers use unsigned arithmetic, where the
plain increments and decrements would have undefined behaviour on
overflow.
libstdc++-v3/ChangeLog:
PR libstdc++/71945
* include/bits/shared_ptr_base.h
(_Sp_counted_base::_M_get_use_count): Cast _M_use_count to
unsigned before returning as long.
(_Sp_counted_base<_S_single>::_M_add_ref_copy): Use atomic
helper function to adjust ref count using unsigned arithmetic.
(_Sp_counted_base<_S_single>::_M_weak_release): Likewise.
(_Sp_counted_base<_S_single>::_M_get_use_count): Cast
_M_use_count to unsigned before returning as long.
(_Sp_counted_base<_S_single>::_M_add_ref_lock_nothrow): Use
_M_add_ref_copy to do increment using unsigned arithmetic.
(_Sp_counted_base<_S_single>::_M_release): Use atomic helper and
_M_weak_release to do decrements using unsigned arithmetic.
(_Sp_counted_base<_S_mutex>::_M_release): Add comment.
(_Sp_counted_base<_S_single>::_M_weak_add_ref): Remove
specialization.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
The standard requires that std::atomic<integral-type>::fetch_add does
not have undefined behaviour for signed overflow, instead it wraps like
unsigned integers. The compiler ensures this is true for the atomic
built-ins that std::atomic uses, but it's not currently true for the
__gnu_cxx::__exchange_and_add and __gnu_cxx::__atomic_add functions
defined in libstdc++, which operate on type _Atomic_word.
For the inline __exchange_and_add_single function (used when there's
only one thread in the process), we can copy the value to an unsigned
long and do the addition on that, then assign it back to the
_Atomic_word variable.
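A hedged sketch of that approach (parameter names assumed; the narrowing
store back to _Atomic_word is well-defined on GCC):

  _Atomic_word
  __exchange_and_add_single(_Atomic_word* __mem, int __val)
  {
    _Atomic_word __result = *__mem;
    // Do the addition on an unsigned type so overflow wraps instead of
    // being undefined, then assign the wrapped value back.
    unsigned long __tmp = __result;
    __tmp += __val;
    *__mem = static_cast<_Atomic_word>(__tmp);
    return __result;
  }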
The __exchange_and_add in config/cpu/generic/atomicity_mutex/atomicity.h
locks a mutex and then performs exactly the same steps as
__exchange_and_add_single. Calling __exchange_and_add_single instead of
duplicating its code means it benefits from the fix just made to
__exchange_and_add_single.
For the remaining config/cpu/$arch/atomicity.h implementations, they
either use inline assembly which uses wrapping instructions (so no
changes needed), or we can fix them by compiling with -fwrapv.
After this change, UBsan no longer gives an error for:

  _Atomic_word i = INT_MAX;
  __gnu_cxx::__exchange_and_add_dispatch(&i, 1);

which previously reported:

  /usr/include/c++/14/ext/atomicity.h:85:12: runtime error: signed integer overflow: 2147483647 + 1 cannot be represented in type 'int'
libstdc++-v3/ChangeLog:
PR libstdc++/121148
* config/cpu/generic/atomicity_mutex/atomicity.h
(__exchange_and_add): Call __exchange_and_add_single.
* include/ext/atomicity.h (__exchange_and_add_single): Use an
unsigned type for the addition.
* libsupc++/Makefile.am (atomicity.o): Compile with -fwrapv.
* libsupc++/Makefile.in: Regenerate.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
-mno-direct-extern-access is used to disable direct access to external
symbols from the executable, with and without PIE, for x86. Require PIE
and pass -fPIE to disable direct access to external symbols for other
targets.
PR fortran/107421
PR testsuite/121848
* gfortran.dg/gomp/pr107421.f90: Require PIE and pass -fPIE for
non-x86 targets.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
These _S_noexcept() functions are only used in noexcept-specifiers and
never need to be called at runtime. They can be immediate functions,
i.e. consteval.
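A hedged, self-contained illustration of the pattern (the names here are
made up; the library helpers have the same shape):

  #include <utility>

  template<typename _Tp>
    consteval bool
    _S_noexcept()  // only ever evaluated at compile time
    { return noexcept(++std::declval<_Tp&>()); }

  template<typename _Tp>
    void
    increment(_Tp& __t) noexcept(_S_noexcept<_Tp>())
    { ++__t; }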
libstdc++-v3/ChangeLog:
* include/bits/iterator_concepts.h (_IterMove::_S_noexcept)
(_IterSwap::_S_noexcept): Change constexpr to consteval.
* include/bits/ranges_base.h (_Begin::_S_noexcept)
(_End::_S_noexcept, _RBegin::_S_noexcept, _REnd::_S_noexcept)
(_Size::_S_noexcept, _Empty::_S_noexcept, _Data::_S_noexcept):
Likewise.
* include/std/concepts (_Swap::_S_noexcept): Likewise.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Most of the basic operations for ranges such as ranges::begin and
ranges::next have trivial one-line function bodies, so they can be made
always_inline to reduce the abstraction penalty for -O0 code.
Now that we no longer need to support the -fconcepts-ts grammar, we can
also move some [[nodiscard]] attributes to the more natural position
before the function declaration, instead of between the declarator-id
and the function parameters, e.g. we can use:

  template<typename T> requires C<T> [[nodiscard]] auto operator()(T&&)

instead of:

  template<typename T> requires C<T> auto operator() [[nodiscard]] (T&&)
The latter form was necessary because -fconcepts-ts used a different
grammar for the requires-clause, parsing 'C<T>[[x]]' as a subscripting
operator with an ill-formed argument '[x]'. In the C++20 grammar you
would need to use parentheses to use a subscript in a constraint, so
without parentheses it's parsed as an attribute.
libstdc++-v3/ChangeLog:
* include/bits/ranges_base.h (__detail::__to_unsigned_like)
(__access::__possible_const_range, __access::__as_const)
(__distance_fn::operator(), __next_fn::operator())
(__prev_fn::operator()): Add always_inline attribute.
(_Begin::operator(), _End::operator(), _RBegin::operator())
(_REnd::operator(), _Size::operator(), _SSize::operator())
(_Empty::operator(), _Data::operator(), _SSize::operator()):
Likewise. Move nodiscard attribute to start of declaration.
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Both the C and C++ front ends should set a tentative TLS model in
grokvardecl and update the TLS model with the default TLS access model
after a TLS variable has been fully processed, if the default TLS access
model is stronger.
PR c/107419
PR c++/107393
* c-c++-common/tls-attr-common.c: New test.
* c-c++-common/tls-attr-le-pic.c: Likewise.
* c-c++-common/tls-attr-le-pie.c: Likewise.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Converting a weak_ptr<Derived> to a weak_ptr<Base> requires calling
lock() on the source object in the general case.
Although the source weak_ptr<Derived> does contain a raw pointer to
Derived, we can't just get it and (up)cast it to Base, as that will
dereference the pointer in case Base is a virtual base class of Derived.
We don't know if the managed object is still alive, and therefore whether
this operation is safe to do; so we temporarily lock() the source
weak_ptr, do the cast using the resulting shared_ptr, and then discard
this shared_ptr. Simply checking the strong counter isn't sufficient,
because if multiple threads are involved then we'd have a race / TOCTOU
problem; the object may get destroyed after we check the strong counter
and before we cast the pointer.
However lock() is not necessary if we know that Base is *not* a virtual
base class of Derived; in this case we can avoid the relatively
expensive call to lock() and just cast the pointer. This commit uses
the newly added builtin to detect this case and optimize std::weak_ptr's
converting constructors and assignment operations.
Apart from non-virtual bases, there are a couple of other interesting
cases where we can also avoid locking. Specifically:
1) converting a weak_ptr<T[N]> to a weak_ptr<T cv[]>;
2) converting a weak_ptr<T*> to a weak_ptr<T const * const> or similar.
Since this logic is going to be used by multiple places, I've
centralized it in a new static helper.
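A hedged fragment of the idea behind _S_safe_upcast (the member name
_M_ptr and the exact call shape are assumptions; the builtin is the newly
added __builtin_is_virtual_base_of):

  // If _Yp's base _Tp is not virtual, the upcast is a compile-time
  // pointer adjustment and needs no lock; otherwise lock() first.
  if constexpr (!__builtin_is_virtual_base_of(_Tp, _Yp))
    _M_ptr = __r._M_ptr;           // plain upcast, no locking needed
  else
    _M_ptr = __r.lock().get();     // cast via a temporary shared_ptr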
libstdc++-v3/ChangeLog:
* include/bits/shared_ptr_base.h (__weak_ptr): Avoid calling
lock() when converting or assigning a weak_ptr<Derived> to
a weak_ptr<Base> in case Base is not a virtual base of Derived.
This logic is centralized in _S_safe_upcast, called by the
various converting constructors/assignment operators.
(_S_safe_upcast): New helper function.
* testsuite/20_util/weak_ptr/cons/virtual_bases.cc: New test.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Reviewed-by: Tomasz Kamiński <tkaminsk@redhat.com>
Signed-off-by: Giuseppe D'Angelo <giuseppe.dangelo@kdab.com>
Don't upgrade TLS model when cplus_decl_attributes is called on a thread
local variable whose TLS model isn't set yet.
gcc/cp/
PR c++/121889
* decl2.cc (cplus_decl_attributes): Don't upgrade TLS model if
TLS model isn't set yet.
gcc/testsuite/
PR c++/121889
* g++.dg/tls/pr121889.C: New test.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Add an expander for isfinite using integer arithmetic. This is
typically faster and avoids generating spurious exceptions on
signaling NaNs. This fixes part of PR66462.
  int isfinite1 (float x) { return __builtin_isfinite (x); }

Before:

        fabs    s0, s0
        mov     w0, 2139095039
        fmov    s31, w0
        fcmp    s0, s31
        cset    w0, hi
        eor     w0, w0, 1
        ret

After:

        fmov    w1, s0
        mov     w0, -16777216
        cmp     w0, w1, lsl 1
        cset    w0, hi
        ret
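The integer test being implemented, as a hedged C sketch (the expander
emits the equivalent RTL directly): shift out the sign bit, then check
that the exponent field is not all-ones:

  int
  isfinite_bits (float x)
  {
    unsigned int i;
    __builtin_memcpy (&i, &x, sizeof i);  /* bit-cast float to int */
    /* After shifting out the sign bit, any value below 0xff000000 has
       an exponent other than all-ones, i.e. the value is finite.  */
    return (i << 1) < 0xff000000u;
  }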
gcc:
PR middle-end/66462
* config/aarch64/aarch64.md (isfinite<mode>2): Add new expander.
gcc/testsuite:
PR middle-end/66462
* gcc.target/aarch64/pr66462.c: Add tests for isfinite.
With -fno-trapping-math it is safe to optimize fabs(a + 0.0) to
fabs (a).
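A hedged example of affected code (with trapping math the addition must
stay, since a signaling NaN operand would raise an invalid-operation
exception):

  double
  f (double a)
  {
    return __builtin_fabs (a + 0.0);  /* folds to fabs (a) with
                                         -fno-trapping-math */
  }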
PR tree-optimization/121595
* match.pd (fabs(a + 0.0) -> fabs (a)): Optimization pattern limited to
the -fno-trapping-math case.
* gcc.dg/fabs-plus-zero-1.c: New testcase.
* gcc.dg/fabs-plus-zero-2.c: Likewise.
Signed-off-by: Matteo Nicoli <matteo.nicoli001@gmail.com>
Reviewed-by: Andrew Pinski <andrew.pinski@oss.qualcomm.com>
Enable these tests so we won't make any more stupid mistakes in the 16B
atomic implementation.
All these tests pass on a Loongson 3C6000/S except
atomic-other-int128.c. With GDB patched to support sc.q
(https://sourceware.org/pipermail/gdb-patches/2025-August/220034.html)
this test also XPASSes.
gcc/testsuite/ChangeLog:
* lib/target-supports.exp
(check_effective_target_loongarch_scq_hw): New.
(check_effective_target_sync_int_128_runtime): Return 1 on
loongarch64-*-* if hardware supports both LSX and SCQ.
* gcc.dg/atomic-compare-exchange-5.c: Pass -mlsx -mscq for
loongarch64-*-*.
* gcc.dg/atomic-exchange-5.c: Likewise.
* gcc.dg/atomic-load-5.c: Likewise.
* gcc.dg/atomic-op-5.c: Likewise.
* gcc.dg/atomic-store-5.c: Likewise.
* gcc.dg/atomic-store-6.c: Likewise.
* gcc.dg/simulate-thread/atomic-load-int128.c: Likewise.
* gcc.dg/simulate-thread/atomic-other-int128.c: Likewise.
(dg-final): xfail on loongarch64-*-* because gdb does not
handle sc.q properly yet.
In a CAS operation, even if expected != *memory we still need to do an
atomic load of *memory into the output. But I made a mistake in the
initial implementation, causing the output to contain junk in this
situation.
Like a normal atomic load, the atomic load embedded in the CAS semantics
is required to work on a read-only page. Thus we cannot rely on sc.q to
ensure the atomicity of the load. Use LSX to perform the load instead,
and also use LSX to compare the 16B values to keep the ll-sc loop body
short.
gcc/ChangeLog:
* config/loongarch/sync.md (atomic_compare_and_swapti_scq):
Require LSX. Change the operands for the output, the memory,
and the expected value to LSX vector modes. Add a FCCmode
output to indicate if CAS has written the desired value into
memory. Use LSX to atomically load both words of the 16B value
in memory.
(atomic_compare_and_swapti): Pun the modes to satisfy
the new atomic_compare_and_swapti_scq implementation. Read the
bool return value from the FCC instead of performing a
comparison.
This modifier is intended to output $r0 for (const_int 0), but the
logic:

  GET_MODE (op) != TImode || (op != CONST0_RTX (TImode) && code != REG)

rejects (const_int 0), because (const_int 0) does not actually have a
mode and GET_MODE returns VOIDmode for it.
Use reg_or_0_operand instead to fix the issue.
gcc/ChangeLog:
* config/loongarch/loongarch.cc (loongarch_print_operand): Call
reg_or_0_operand for checking the sanity of %t.
The PR reports

  vectorizer.h:276:3: runtime error: load of value 32695, which is not a valid value for type 'internal_fn'

which I believe comes from

  slp_node->data = new vect_load_store_data (std::move (ls));

where 'ls' can be partly uninitialized (and that data will not be used,
but of course the move constructor doesn't know this). The following
tries to fix that by using value-initialization of 'ls'.
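A minimal illustration of the difference (assuming the type has no
user-provided default constructor; the two lines are alternatives):

  vect_load_store_data ls;    // default-init: members left indeterminate
  vect_load_store_data ls{};  // value-init: members zero-initialized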
PR tree-optimization/121703
* tree-vect-stmts.cc (vectorizable_store): Value-initialize ls.
(vectorizable_load): Likewise.
In general, tail call optimization requires that the callee's saved
registers are a superset of the caller's.
The Standard Vector Calling Convention Variant (assembler: .variant_cc)
requires that a function with this calling convention preserves vector
registers v1-v7 and v24-v31 across calls (i.e. callee-saved). However,
the same set of registers are (function-local) temporary registers
(i.e. caller-saved) on the normal (non-vector) calling convention.
Even if a function with this calling convention variant calls another
function with a non-vector calling convention, those vector registers
are correctly clobbered -- except when the sibling (tail) call
optimization occurs as it violates the general rule mentioned above.
If this happens, the following function body:
1. Save v1-v7 and v24-v31 for clobbering
2. Call another function with a non-vector calling convention
(which may destroy v1-v7 and/or v24-v31)
3. Restore v1-v7 and v24-v31
4. Return.
may be incorrectly optimized into the following sequence:
1. Save v1-v7 and v24-v31 for clobbering
2. Restore v1-v7 and v24-v31 (?!)
3. Jump to another function with a non-vector calling convention
(which may destroy v1-v7 and/or v24-v31).
This commit suppresses cross-CC sibling call optimization from
the vector calling convention variant.
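A hedged C illustration of the hazard (attribute spelling as used by the
RISC-V ports; shown only to make the rule concrete):

  void normal_callee (void);

  __attribute__ ((riscv_vector_cc))
  void vector_cc_caller (void)
  {
    /* Must NOT become a sibcall: normal_callee may clobber v1-v7 and
       v24-v31, which this function's calling convention must preserve,
       so the restore has to happen after the call returns.  */
    normal_callee ();
  }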
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_function_ok_for_sibcall):
Suppress cross calling convention sibcall optimization from
the vector calling convention variant.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/abi-call-variant_cc-sibcall.c: New test.
* gcc.target/riscv/rvv/base/abi-call-variant_cc-sibcall-indirect-1.c: Ditto.
* gcc.target/riscv/rvv/base/abi-call-variant_cc-sibcall-indirect-2.c: Ditto.
When the vectorizer removes a forwarder created earlier by split_edge
it uses redirect_edge_pred for convenience and efficiency. That breaks
down when the split edge originates from an asm goto, as that is a
jump that needs the adjustments done by redirect_edge_and_branch. The
following factors out a simple vect_remove_forwarder handling this
situation appropriately.
PR tree-optimization/121829
* cfgloopmanip.cc (create_preheader): Ensure we can insert
at the end of a preheader.
* gcc.dg/torture/pr121829.c: New testcase.
When a dead EH or abnormal edge makes a call queued for noreturn fixup
unreachable, just skip processing it.
PR tree-optimization/121870
* tree-ssa-propagate.cc
(substitute_and_fold_engine::substitute_and_fold): Skip
removed stmts from noreturn fixup.
* g++.dg/torture/pr121870.C: New testcase.
This is a follow-up to a recent change, where a warning was implemented
for huge library-level objects. However, the warning is not given if the
objects are imported, although an indirection is also added for them
under the hood to match the export side.
gcc/ada/ChangeLog:
* gcc-interface/decl.cc (gnat_to_gnu_entity) <E_Variable>: Give a
warning for huge imported objects as well.
The TYPE_ALIGN_OK flag had originally been a GCC flag tested in the RTL
expander and was at some point kicked out of the middle-end to become a
pure Gigi flag. But it's only set for tagged types and CW-equivalent
types, and can be replaced by an explicit predicate without too much work.
gcc/ada/ChangeLog:
* gcc-interface/ada-tree.h (TYPE_ALIGN_OK): Delete.
* gcc-interface/decl.cc (gnat_to_gnu_entity): Do not set it.
* gcc-interface/gigi.h (standard_datatypes): Add ADT_tag_name_id.
(tag_name_id): New macro.
(type_is_tagged_or_cw_equivalent): New inline predicate.
* gcc-interface/trans.cc (gigi): Initialize tag_name_id.
(gnat_to_gnu) <N_Unchecked_Type_Conversion>: Replace tests on
TYPE_ALIGN_OK with calls to type_is_tagged_or_cw_equivalent.
(addressable_p): Likewise.
* gcc-interface/utils.cc (convert): Likewise.
* gcc-interface/utils2.cc (build_binary_op): Likewise.
This happens when the object is declared in another compilation unit.
gcc/ada/ChangeLog:
* gcc-interface/misc.cc (gnat_get_array_descr_info): In the record
type case, bail out if the original array type cannot be retrieved.
The implementation is essentially mirrored from the one for signed types.
gcc/ada/ChangeLog:
* gcc-interface/gigi.h (standard_datatypes): Add ADT_uns_mulv64_decl
and ADT_uns_mulv128_decl.
(uns_mulv64_decl): New macro.
(uns_mulv128_decl): Likewise.
* gcc-interface/trans.cc (gigi): Create the uns_mulv64_decl and
uns_mulv128_decl declarations.
(gnat_to_gnu) <N_Op_Add>: Perform an overflow check for unsigned
integer addition, subtraction and multiplication if required.
<N_Op_Minus>: Perform an overflow check for unsigned integer
negation if required.
(build_unary_op_trapv): Add support for unsigned types.
(build_binary_op_trapv): Likewise.
<MINUS_EXPR>: Perform the check if the LHS is zero in the signed
case as well.
In the case of a call to a subprogram that has an out (or in-out) parameter
that is passed by copy, the caller performs copy-back after the call returns.
If the actual parameter is a view conversion to a subtype that has an enabled
predicate, then the predicate check performed at that point should be
performed before, not after, the operand of the view conversion is updated.
gcc/ada/ChangeLog:
* exp_ch6.adb (Expand_Actuals): After building the tree for a
predicate check, call Prepend_To instead of Append_To so that the
check is performed before, instead of after, the corresponding
parameter copy-back.
Create a ghost region for pragma Annotate so that we are able to analyze
the entity references inside the pragma correctly.
gcc/ada/ChangeLog:
* sem_prag.adb: Create a ghost region for pragma Annotate before
analyzing its arguments.
Since we do not analyze the policy errors for expanded code, we need to
check the functions specified in the Iterable aspect whenever we are
analyzing an iterator specification with that aspect.
gcc/ada/ChangeLog:
* sem_ch5.adb (Analyze_Iterator_Specification): Check ghost context
of Iterable functions when handling iterator specifications with an
Iterable aspect.
It is OK to define a checked ghost type with an Iterable aspect
that has ignored Iterable functions.
gcc/ada/ChangeLog:
* ghost.adb (Check_Ghost_Policy): Avoid triggering a ghost
policy error if the policy is referenced within the Iterable
aspect.
Check that entities on the RHS are ghost level dependent on the
entities on the LHS of the assignment.
gcc/ada/ChangeLog:
* ghost.adb (Is_OK_Statement): Check that the levels of the
assignee and the levels of the referenced entities are ghost
level dependent.
(Check_Assignement_Levels): New function for checking the level
dependencies.
When the frontend is operating in GNATprove mode (where the expander is
disabled), it should check the ghost policy for assignment statements just
like it does for other statements. This is because we want ghost policy
errors to be reported not just by GNAT, but also by GNATprove.
Additionally, we need to perform the checks for the valid location of
ghost assignments based on the region around the assignment, before we
create the region for the assignment itself.
gcc/ada/ChangeLog:
* ghost.adb (Mark_And_Set_Ghost_Assignment): Create a ghost region
for an assignment regardless of whether the expander is active.
Relocate the assignment validity checks from Is_OK_Statement to
this subprogram.
The compiler blows up on a container aggregate with a container element
association that has a key_choice given by a nonstatic key expression.
This happens in the size computation for the aggregate due to calling
Update_Choices with the nonstatic expression. The fix is simply to
condition the call to Update_Choices on whether the choice expression
is static.
gcc/ada/ChangeLog:
* exp_aggr.adb (Build_Container_Aggr_Code.Build_Size_Expr): In the case
of an association with a single choice, only call Update_Choices when
the choice expression is static.
This patch fixes the following bug:
If the right-hand side of an expression contains a target name
(i.e. "@"), and also contains a reference to a user-defined operator
that is directly visible because of a "use type" clause on a renaming of
the package where the operator is declared, the compiler gives an
incorrect error saying that the renamed package is not visible.
It turns out that setting Entity of resolved nodes is unnecessary
and wrong; the fix is to simply remove that code.
gcc/ada/ChangeLog:
* exp_ch5.adb
(Expand_Assign_With_Target_Names.Replace_Target):
Remove code setting Entity to Empty.
* sinfo.ads (Has_Target_Names):
Improve comment: add "@" to clarify what "target name"
means, and remove the content-free phrase "and must
be expanded accordingly."
Recent changes "Fix regression in Root_Type" and
"Crash on b3a1004 with assertions enabled" are partially
redundant; they are addressing the same bug.
This patch adjusts the former in the case of Root_Type.
But we leave Root_Type_If_Set alone; debugging printouts
should survive bugs when possible.
gcc/ada/ChangeLog:
* einfo-utils.adb (Root_Type): Do not deal with missing Etype.
Previous change, "Make pp and friends more robust (base type only)",
introduced a bug in Root_Type. Etype (T) can, in fact, be Empty
(but only in case of errors). This patch fixes it.
gcc/ada/ChangeLog:
* einfo-utils.adb (Root_Type): Deal with missing Etype.
(Root_Type_If_Set): Likewise.
The compilation of files b3a10041.ads and b3a10042.adb crashes when
the compiler is built with assertions enabled.
gcc/ada/ChangeLog:
* freeze.adb (Freeze_Entity): Protect call to Associated_Storage_Pool
since it cannot be used when the Etype is not set.
* sem_ch3.adb (Access_Type_Declaration): Ditto.
* sem_aux.adb (Is_Derived_Type): Protect call to Root_Type since it
cannot be used when the Etype is not set.
For Implicit_Packing, do not require the Size clause to exactly match
the packed size.
For example, an array of 7 Booleans will fit in
7 bits if packed, or 7*8=56 bits if not packed.
This patch allows "for T'Size use 8;" to force packing
in Implicit_Packing mode; previously, the compiler
ignored Implicit_Packing unless it was exactly "use 7".
Apparently, customers have that sort of code, and the
whole point of Implicit_Packing is to allow such legacy
code to work.
We already do the right thing for records, at least in
cases tested.
We deliberately avoid changing the error messages given here.
They could possibly use some work, but there are subtle interactions
with the messages given in Sem_Ch13 for the same thing.
gcc/ada/ChangeLog:
* freeze.adb (Freeze_Entity): Change "=" to ">=" in
size comparison for Implicit_Packing mode.
Keep it as "=" for giving error messages.
* opt.ads (Implicit_Packing): Minor: correct obsolete
comment.
A compiler built with assertions enabled crashes processing
a null aggregate of multidimensional type.
gcc/ada/ChangeLog:
* sem_aggr.adb (Report_Null_Array_Constraint_Error): Adjust code
for reporting the error on enumeration types.
(Resolve_Null_Array_Aggregate): On multidimensional arrays, avoid
reporting the same error several times. Flag the node as raising
constraint error when the bounds are known and some of them are
known to raise constraint error.
Prior to this fix, if pp(N) tried to print a "base type only" field, and
Base_Type(N) was not yet set, it would raise an exception, which was
confusing. This patch makes it simply ignore such fields. Similarly
for Impl_Base_Type_Only and Root_Type_Only fields.
We do this by having alternative versions of Base_Type,
Implementation_Base_Type, and Root_Type that return Empty
in error cases, and calling these alternatives from Treepr.
We don't want Base_Type and friends to return Empty;
we want them to blow up when called from anywhere but
Treepr.
gcc/ada/ChangeLog:
* atree.ads (Node_To_Fetch_From_If_Set): Alternative to
Node_To_Fetch_From that returns Empty in error cases.
For use only in Treepr.
* treepr.adb (Print_Entity_Field): Avoid printing field
if Node_To_Fetch_From_If_Set returns Empty.
* einfo-utils.ads (Base_Type_If_Set): Alternative to
Base_Type that returns Empty in error cases.
(Implementation_Base_Type_If_Set): Likewise.
(Root_Type_If_Set): Likewise.
(Underlying_Type): Use more accurate result subtype.
* einfo-utils.adb (Base_Type): Add Asserts.
(Implementation_Base_Type): Add Assert; minor cleanup.
(Root_Type): Add Assert; minor cleanup. Remove Assert that
is redundant with predicate.
(Base_Type_If_Set): Body of new function.
(Implementation_Base_Type_If_Set): Body of new function.
(Root_Type_If_Set): Body of new function.