r13-3299 changed our internal declaration of __dynamic_cast to reside
inside the abi/__cxxabiv1:: namespace instead of the global namespace,
matching the real declaration. This inadvertently made us now attempt
constexpr evaluation of user-written calls to abi::__dynamic_cast since
cxx_dynamic_cast_fn_p now also returns true for them, but we're not
prepared to handle arbitrary calls to __dynamic_cast, and therefore ICE.
This patch restores cxx_dynamic_cast_fn_p to return true only for
synthesized calls to __dynamic_cast, which can be distinguished by
DECL_ARTIFICIAL, since apparently the synthesized declaration of
__dynamic_cast doesn't get merged with the actual declaration.
PR c++/120620
gcc/cp/ChangeLog:
* constexpr.cc (cxx_dynamic_cast_fn_p): Return true only
for synthesized __dynamic_cast.
gcc/testsuite/ChangeLog:
* g++.dg/cpp2a/constexpr-dynamic19.C: New test.
* g++.dg/cpp2a/constexpr-dynamic1a.C: New test.
Reviewed-by: Jason Merrill <jason@redhat.com>
The old GET_MODE_SIZE (DImode) (i.e. 64) made sense before
64-bitters. Now the default is just a trap: when using the
default 64, things like TImode (128 bits) still mostly work,
but corner cases related to computing the sizes of large
objects, like (1 << 64)/8, break, as exposed by
gcc.dg/pr105094.c.
So, keep the floor at 64 for 32-bitters and smaller targets, but
for larger targets, make it 2 * BITS_PER_WORD. Also, express it
more directly with focus on BITS_PER_WORD, not the size of a
mode. Add "by GCC internally" in an attempt to convey that this
is when gcc cooks something up, not when plain input uses a type
with such a mode.
* defaults.h (MAX_FIXED_MODE_SIZE): Default to 2 * BITS_PER_WORD
for larger-than-32-bitters.
* doc/tm.texi.in (MAX_FIXED_MODE_SIZE): Adjust accordingly. Tweak
wording.
* doc/tm.texi: Regenerate.
On Sat, Aug 02, 2025 at 09:05:07PM +0200, Jakub Jelinek wrote:
> Wonder how to automatically discover other missing exports (like in PR121373
> std::byteswap), maybe one could dig that stuff somehow from the raw
> dump (look for identifiers in std namespace (and perhaps inlined namespaces
> thereof at least) which don't start with underscore.
To answer that question, I wrote a simple plugin which just dumps the names
(which do not start with underscore) in std namespace (and its inlined
namespaces) and for non-inline namespaces in there which do not start with
underscore also recurses on those namespaces.
Plugin source in
https://gcc.gnu.org/pipermail/libstdc++/2025-August/062859.html
I went through it all now, using cppreference as a quick check for stuff
removed in C++17/C++20 and for everything added verified it is in
corresponding eel.is/c++-draft/*.syn etc. and looked it up in the libstdc++
headers for guarding macros.
After all the additions I've compiled std.cc with -std=c++20, -std=c++23 and
-std=c++26; the first one revealed that std::ranges::shift_{left,right} emitted
an error in that case, so the patch fixes that too.
2025-08-04 Jakub Jelinek <jakub@redhat.com>
hexne <printfne@gmail.com>
PR libstdc++/121373
* src/c++23/std.cc.in (std::ranges::shift_left,
std::ranges::shift_right): Only export for C++23 and later.
(std::ranges::fold_left_first_with_iter_result,
std::ranges::fold_left_with_iter_result): Export.
(std::byteswap): Export for C++23 and later.
(std::ranges::iter_move, std::ranges::iter_swap): Export.
(std::projected_value_t): Export for C++26 and later.
(std::out_ptr_t, std::inout_ptr_t): Export.
(std::ranges::iota_result): Export.
(std::regex_constants): Export a lot of constants.
(std::is_scoped_enum, std::is_scoped_enum_v): Export.
Modernization; no functional change intended.
gcc/ChangeLog:
* dump-context.h: Convert "enum optinfo_item_kind" into
"enum class kind" within class optinfo_item.
* dumpfile.cc: Likewise. Use "auto" in a few places.
Convert "enum optinfo_kind" to "enum class kind" within
class optinfo.
* opt-problem.cc: Likewise.
* optinfo-emit-json.cc: Likewise.
* optinfo.cc: Likewise.
* optinfo.h: Likewise.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This patch adds support to sarif-replay for "nestingLevel"
from "P3358R0 SARIF for Structured Diagnostics"
https://wg21.link/P3358R0
Doing so revealed a bug where libgdiagnostics was always
creating new location_t values (and thus also
diagnostic_physical_location instances), rather than reusing
existing location_t values, leading to excess source printing.
The patch also fixes this bug, adding a new flag to libgdiagnostics
for debugging physical locations, and exposing this in sarif-replay
via a new "-fdebug-physical-locations" maintainer option.
Finally, the patch adds test coverage for the HTML sink's output
of nested diagnostics (both from a GCC plugin, and from sarif-replay).
gcc/ChangeLog:
PR diagnostics/116253
* diagnostics/context.cc (context::set_nesting_level): New.
* diagnostics/context.h (context::set_nesting_level): New decl.
* doc/libgdiagnostics/topics/compatibility.rst
(LIBGDIAGNOSTICS_ABI_5): New.
* doc/libgdiagnostics/topics/physical-locations.rst
(diagnostic_manager_set_debug_physical_locations): New.
* libgdiagnostics++.h (manager::set_debug_physical_locations):
New.
* libgdiagnostics-private.h
(private_diagnostic_set_nesting_level): New decl.
* libgdiagnostics.cc (diagnostic_manager::diagnostic_manager):
Initialize m_debug_physical_locations.
(diagnostic_manager::new_location_from_file_and_line): Add debug
printing.
(diagnostic_manager::new_location_from_file_line_column):
Likewise.
(diagnostic_manager::new_location_from_range): Likewise.
(diagnostic_manager::set_debug_physical_locations): New.
(diagnostic_manager::ensure_linemap_for_file_and_line): Avoid
redundant calls to linemap_add.
(diagnostic_manager::new_location): Add debug printing.
(diagnostic_manager::m_debug_physical_locations): New field.
(diagnostic::diagnostic): Initialize m_nesting_level.
(diagnostic::get_nesting_level): New accessor.
(diagnostic::set_nesting_level): New.
(diagnostic::m_nesting_level): New field.
(diagnostic_manager::emit_va): Set and reset the nesting level
of the context from that of the diagnostic.
(diagnostic_manager_set_debug_physical_locations): New.
(private_diagnostic_set_nesting_level): New.
* libgdiagnostics.h
(diagnostic_manager_set_debug_physical_locations): New decl.
* libgdiagnostics.map (LIBGDIAGNOSTICS_ABI_5): New.
* libsarifreplay.cc (sarif_replayer::handle_result_obj): Support
the "nestingLevel" property.
* libsarifreplay.h (replay_options::m_debug_physical_locations):
New field.
* sarif-replay.cc: Add -fdebug-physical-locations.
gcc/testsuite/ChangeLog:
PR diagnostics/116253
* gcc.dg/plugin/diagnostic-test-nesting-html.c: New test.
* gcc.dg/plugin/diagnostic-test-nesting-html.py: New test script.
* gcc.dg/plugin/plugin.exp: Add it.
* libgdiagnostics.dg/test-multiple-lines.c: Update expected output
to show fix-it hint.
* sarif-replay.dg/2.1.0-valid/nested-diagnostics-1.sarif: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
For the common case where a diagnostic has no metadata, sarif-replay's
html output was adding a stray space followed by an empty <div> for
the metadata.
Fixed thusly.
gcc/ChangeLog:
PR diagnostics/116792
* diagnostics/html-sink.cc
(html_builder::make_element_for_diagnostic): Don't add the
metadata element if it's empty.
(html_builder::make_element_for_metadata): Return null rather than
an empty element.
gcc/testsuite/ChangeLog:
PR diagnostics/116792
* gcc.dg/plugin/diagnostic-test-graphs-html.py: Remove trailing
space from expected text of message.
* sarif-replay.dg/2.1.0-valid/embedded-links-check-html.py:
Likewise.
* sarif-replay.dg/2.1.0-valid/graphs-check-html.py: Likewise.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
gcc/ChangeLog:
* diagnostics/context.h: Move struct counters to its own header
and include it.
* diagnostics/counters.h: New file, from the above.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
No functional change intended.
gcc/ChangeLog:
* diagnostics/context.h: Split struct source_printing_options out
into "diagnostics/source-printing-options.h" and include it.
* diagnostics/source-printing-options.h: New file, from the above.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
This patch splits out class option_manager to its own header,
and renames it to class option_id_manager to better describe its
purpose.
No functional change intended.
gcc/ChangeLog:
* diagnostics/context.cc: Update for renaming of option_manager to
option_id_manager and of context::m_option_mgr to
context::m_option_id_mgr.
* diagnostics/context.h: Likewise, moving class declaration to a
new diagnostics/option-id-manager.h.
* diagnostics/lazy-paths.cc: Likewise.
* diagnostics/option-id-manager.h: New file, from material in
diagnostics/context.h.
* lto-wrapper.cc: Update for renaming of option_manager to
option_id_manager.
* opts-common.cc: Likewise.
* opts-diagnostic.h: Likewise.
* opts.cc: Likewise.
* toplev.cc: Likewise.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
gcc/ChangeLog:
* diagnostics/buffering.h: Update comment to refer to output sinks
rather than output formats.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
We were calling `is_store_forwarding` with a NULL value for `off_val`,
which was causing a null pointer dereference in `is_constant`, leading
to an ICE.
This patch updates the call to `is_constant` in `is_store_forwarding`
and adds a check that `off_val` is non-null before updating it with
the right value.
Bootstrapped/regtested on AArch64 and x86_64.
PR rtl-optimization/121303
gcc/ChangeLog:
* avoid-store-forwarding.cc (is_store_forwarding): Add check
for `off_val` in `is_store_forwarding`.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr121303.c: New test.
Contrary to what the paper says, I think for #line directives we diagnose
everything we should (sure, some diagnostics are pedwarns).
2025-08-04 Jakub Jelinek <jakub@redhat.com>
PR preprocessor/120778
* g++.dg/DRs/dr2580.C: New test.
Previously, in GNATProve_Mode the frontend would overwrite all of
the assertion policies to Check in order to force the generation
of all of the assertions.
This, however, prevents GNATProve from performing policy-related
checks in the tool, since the policies are all artificially changed
to Check.
This patch removes the modifications to the applicable assertion
policies and instead prevents code from ignored entities from being
removed in GNATProve_Mode.
gcc/ada/ChangeLog:
* contracts.adb: Use Is_Ignored_In_Codegen instead of just
using Is_Ignored.
* exp_ch6.adb: Likewise.
* exp_prag.adb: Likewise.
* exp_util.adb: Likewise.
* frontend.adb: Avoid removal of ignored nodes in GNATProve_Mode.
* gnat1drv.adb: Avoid forcing Assertions_Enabled in GNATProve_Mode.
* lib-writ.adb (Write_With_File_Names): Avoid early exit
with ignored entities in GNATProve_Mode.
* lib-xref.adb: Likewise.
* opt.adb: Remove check for Assertions_Enabled.
* sem_attr.adb: Use Is_Ignored_In_Codegen instead of Is_Ignored.
* sem_ch13.adb: Likewise. Additionally always add predicates in
GNATProve_Mode.
* sem_prag.adb: Likewise. Additionally remove modifications
to applied policies in GNATProve_Mode.
* sem_util.adb (Is_Ignored_In_Codegen): New function that overrides
Is_Ignored in GNATProve_Mode and Codepeer_Mode.
(Is_Ignored_Ghost_Pragma_In_Codegen): Likewise for
Is_Ignored_Ghost_Pragma.
(Is_Ignored_Ghost_Entity_In_Codegen): Likewise for
Is_Ignored_Ghost_Entity.
(Policy_In_List): Remove overriding of policies in GNATProve_Mode.
* sem_util.ads: Add specs for new functions.
(Predicates_Enabled): Always generate predicates in
GNATProve_Mode.
Print_Node_Ref, which is called by pp, sometimes calls
Compile_Time_Known_Value, which blows up if Entity (N)
is empty. Rearrange the tests here, and test for
Present (Entity (N)) before calling Compile_Time_Known_Value.
Remove test "Nkind (N) in N_Subexpr", which is redundant with other
tests.
We don't want to make Compile_Time_Known_Value more
robust; you shouldn't call it on half-baked nodes.
But ideally pp should be able to print such nodes.
This change fixes one of many such cases.
gcc/ada/ChangeLog:
* treepr.adb (Print_Node_Ref): Protect against
Entity (N) being empty before calling
Compile_Time_Known_Value.
gcc/ada/ChangeLog:
* sem_prag.adb (Validate_Compile_Time_Warning_Errors):
Check if the original compile time pragma was replaced and
validate the original node instead.
Simplify the creation of the control characters in
Validate_Compile_Time_Warning_Or_Error.
gcc/ada/ChangeLog:
* sem_prag.adb (Validate_Compile_Time_Warning_Or_Error):
Simplify the implementation.
If a function result type has an access discriminant, then we already
generate a run-time accessibility check for a return statement. But if
we know statically that the check (if executed) is going to fail, then
that should be rejected at compile-time as a violation of RM 6.5(5.9).
Add this additional compile-time check.
gcc/ada/ChangeLog:
* exp_ch6.adb (Apply_Access_Discrims_Accessibility_Check): If the
accessibility level being checked is known statically, then
statically check it against the level of the function being
returned from.
Simplify the storing process for ghost mode related variables and
make the process more extendable if new ghost mode related features
are added.
gcc/ada/ChangeLog:
* atree.adb: Update references to Ghost_Mode.
* exp_ch3.adb: Use a structure type to store all of the existing
ghost mode related state variables.
* exp_disp.adb: Likewise.
* exp_spark.adb: Likewise.
* exp_util.adb: Likewise.
* expander.adb: Likewise.
* freeze.adb: Likewise, and replace references to existing ghost
mode variables.
* ghost.adb (Install_Ghost_Region): Install the changes of
the region into the new Ghost_Config structure.
(Restore_Ghost_Region): Use the new Ghost_Config instead.
In general replace all references to the existing ghost mode
variables with the new structure equivalent.
* ghost.ads (Restore_Ghost_Region): Update the spec.
* opt.ads (Ghost_Config_Type): A new type that has two of the
previous ghost code related global variables as members:
Ghost_Mode and Ignored_Ghost_Region.
(Ghost_Config): New variable to store the previous Ghost_Mode and
Ignored_Ghost_Region info.
* rtsfind.adb: Replace references to existing ghost mode variables.
* sem.adb: Likewise.
* sem_ch12.adb: Likewise.
* sem_ch13.adb: Likewise.
* sem_ch3.adb: Likewise.
* sem_ch5.adb: Likewise.
* sem_ch6.adb: Likewise.
* sem_ch7.adb: Likewise.
* sem_prag.adb: Likewise.
* sem_util.adb: Likewise.
Do not generate a warning stating that the size of a formal parameter
is 8 bits unless the size of the formal parameter is 8 bits.
gcc/ada/ChangeLog:
* freeze.adb (Freeze_Profile): Do not emit a warning stating that
a formal parameter's size is 8 if the parameter's size is not 8.
gcc/ada/ChangeLog:
* table.adb (Max): Move variable to the body and initialize
it with the same value as in the Init function.
* table.ads (Max): Likewise.
...which might make it easier to deal with incorrectly shared
subtrees created during parsing.
There were several Idents arrays, with duplicated code and commentary.
And the related code had somewhat diverged -- different comments,
different index subtypes (Pos vs. Int), etc.
DRY: Move at least some of the code into Par.Util. Raise
Program_Error if the array overflows; there is really no
reason not to check, along with several comments saying
we don't check. In the unlikely event that the array
overflows, the compiler will now crash, which seems better
than erroneous execution (which could conceivably cause
bad code to be generated).
Move the block comments titled
"Handling Semicolon Used in Place of IS" and
"Handling IS Used in Place of Semicolon" so they
are together, which seems obviously desirable.
Rewrite the latter comment.
No need to denigrate other parsers.
gcc/ada/ChangeLog:
* par.adb: Move and rewrite some comments.
(Util): Shared code and comments for dealing with
defining_identifier_lists.
* par-util.adb (Append): Shared code for appending
one identifier onto Defining_Identifiers.
(P_Def_Ids): Shared code for parsing a defining_identifier_list.
Unfortunately, this is not used in all cases, because some of
them mix in sophisticated error recovery, which we do not
modify here.
* par-ch12.adb (P_Formal_Object_Declarations):
Use Defining_Identifiers and related code.
* par-ch3.adb (P_Identifier_Declarations): Likewise.
(P_Known_Discriminant_Part_Opt): Likewise.
(P_Component_Items): Likewise.
* par-ch6.adb (P_Formal_Part): Likewise.
The following makes us fail earlier when parts of the SLP build fails.
Currently we rely on hybrid stmt detection later to discover not all
stmts are covered by SLP, but this code should go away. I've also
seen a case of a missed gcond SLP build that went undetected. So
the following makes us fail during vect_analyze_slp if any of the
SLP instances we expect to discover fails.
* tree-vect-slp.cc (vect_analyze_slp): When analyzing a loop
and slp instance discovery fails, immediately fail the whole
process.
This is another case which changed from compile time undefined behavior
to ill-formed, diagnostic required. Now, we warn on this, so pedantically
that is good enough, maybe all we need is a testcase, but the following
patch changes it to a pedwarn for C++26.
2025-08-04 Jakub Jelinek <jakub@redhat.com>
PR preprocessor/120778
* macro.cc (stringify_arg): For C++26 emit a pedwarn instead of a
warning for \ at the end of stringification.
* g++.dg/DRs/dr2578.C: New test.
For rvalues the _Self parameter deduces a non-reference type. Consequently,
((_Self)__self) moved the object to a temporary, which was then destroyed on
function exit.
This patch fixes this by using a C-style cast of __self to (const indirect&).
This not only resolves the above issue but also correctly handles types that
are derived (publicly and privately) from indirect. Allocator requirements in
[allocator.requirements.general] p22 guarantee that dereferencing const _M_objp
works with equivalent semantics to dereferencing _M_objp.
PR libstdc++/121128
libstdc++-v3/ChangeLog:
* include/bits/indirect.h (indirect::operator*):
Cast __self to appropriately qualified indirect.
* testsuite/std/memory/indirect/access.cc: New test.
* testsuite/std/memory/polymorphic/access.cc: New test.
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Signed-off-by: Tomasz Kamiński <tkaminsk@redhat.com>
After previous patches, we should always get a VNx16BI result
for ACLE intrinsics that return svbool_t. This patch adds
an assert that checks a more general condition than that.
gcc/
* config/aarch64/aarch64-sve-builtins.cc
(function_expander::expand): Assert that the return value
has an appropriate mode.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the predicate forms of svdupq.
The general predicate expansion builds an equivalent integer vector
and then compares it with zero. This patch therefore relies on
the earlier patches to the comparison patterns.
gcc/
* config/aarch64/aarch64-protos.h
(aarch64_convert_sve_data_to_pred): Remove the mode argument.
* config/aarch64/aarch64.cc
(aarch64_sve_emit_int_cmp): Allow PRED_MODE to be VNx16BI or
the natural predicate mode for the data mode.
(aarch64_convert_sve_data_to_pred): Remove the mode argument
and instead always create a VNx16BI result.
(aarch64_expand_sve_const_pred): Update call accordingly.
* config/aarch64/aarch64-sve-builtins-base.cc
(svdupq_impl::expand): Likewise, ensuring that the result
has mode VNx16BI.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/dupq_13.c: New test.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the predicate forms of svdup.
gcc/
* config/aarch64/aarch64-protos.h
(aarch64_emit_sve_pred_vec_duplicate): Declare.
* config/aarch64/aarch64.cc
(aarch64_emit_sve_pred_vec_duplicate): New function.
* config/aarch64/aarch64-sve.md (vec_duplicate<PRED_ALL:mode>): Use it.
* config/aarch64/aarch64-sve-builtins-base.cc
(svdup_impl::expand): Handle boolean values specially. Check for
constants and fall back on aarch64_emit_sve_pred_vec_duplicate
for the variable case, ensuring that the result has mode VNx16BI.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/dup_1.c: New test.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the svpnext* intrinsics.
gcc/
* config/aarch64/iterators.md (PNEXT_ONLY): New int iterator.
* config/aarch64/aarch64-sve.md
(@aarch64_sve_<sve_pred_op><mode>): Restrict SVE_PITER pattern
to VNx16BI_ONLY.
(@aarch64_sve_<sve_pred_op><mode>): New PNEXT_ONLY pattern for
PRED_HSD.
(*aarch64_sve_<sve_pred_op><mode>): Likewise.
(*aarch64_sve_<sve_pred_op><mode>_cc): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/pnext_3.c: New test.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the svmatch* and svnmatch*
intrinsics.
gcc/
* config/aarch64/aarch64-sve2.md (@aarch64_pred_<sve_int_op><mode>):
Split SVE2_MATCH pattern into a VNx16QI_ONLY define_insn and a
VNx8HI_ONLY define_expand. Use a VNx16BI destination for the latter.
(*aarch64_pred_<sve_int_op><mode>): New SVE2_MATCH pattern for
VNx8HI_ONLY.
(*aarch64_pred_<sve_int_op><mode>_cc): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve2/acle/general/match_4.c: New test.
* gcc.target/aarch64/sve2/acle/general/nmatch_1.c: Likewise.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the svac* intrinsics (floating-
point compare absolute).
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_pred_fac<cmp_op><mode>):
Replace with...
(@aarch64_pred_fac<cmp_op><mode>_acle): ...this new expander.
(*aarch64_pred_fac<cmp_op><mode>_strict_acle): New pattern.
* config/aarch64/aarch64-sve-builtins-base.cc
(svac_impl::expand): Update accordingly.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/acge_1.c: New test.
* gcc.target/aarch64/sve/acle/general/acgt_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/acle_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/aclt_1.c: Likewise.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the floating-point forms of svcmp*.
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_pred_fcm<cmp_op><mode>_acle)
(*aarch64_pred_fcm<cmp_op><mode>_acle, @aarch64_pred_fcmuo<mode>_acle)
(*aarch64_pred_fcmuo<mode>_acle): New patterns.
* config/aarch64/aarch64-sve-builtins-base.cc
(svcmp_impl::expand, svcmpuo_impl::expand): Use them.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/cmpeq_6.c: New test.
* gcc.target/aarch64/sve/acle/general/cmpge_9.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_9.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_9.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_9.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpne_5.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpuo_1.c: Likewise.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the svcmp*_wide intrinsics.
Since the only uses of these patterns are for ACLE intrinsics,
there didn't seem much point adding an "_acle" suffix.
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_pred_cmp<cmp_op><mode>_wide):
Split into VNx16QI_ONLY and SVE_FULL_HSI patterns. Use VNx16BI
results for both.
(*aarch64_pred_cmp<cmp_op><mode>_wide): New pattern.
(*aarch64_pred_cmp<cmp_op><mode>_wide_cc): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/cmpeq_5.c: New test.
* gcc.target/aarch64/sve/acle/general/cmpge_7.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpge_8.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_7.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_8.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_7.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_8.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_7.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_8.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpne_4.c: Likewise.
Patterns that fuse a predicate operation P with a PTEST use
aarch64_sve_same_pred_for_ptest_p to test whether the governing
predicates of P and the PTEST are compatible. Most patterns were also
written as define_insn_and_rewrites, with the rewrite replacing P's
original governing predicate with PTEST's. This ensures that we don't,
for example, have both a .H PTRUE for the PTEST and a .B PTRUE for a
comparison that feeds the PTEST.
The svcmp_wide* patterns were missing this rewrite, meaning that we did
have redundant PTRUEs.
gcc/
* config/aarch64/aarch64-sve.md
(*aarch64_pred_cmp<cmp_op><mode>_wide_cc): Turn into a
define_insn_and_rewrite and rewrite the governing predicate
of the comparison so that it is identical to the PTEST's.
(*aarch64_pred_cmp<cmp_op><mode>_wide_ptest): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/cmpeq_1.c: Check the number
of PTRUEs.
* gcc.target/aarch64/sve/acle/general/cmpge_5.c: New test.
* gcc.target/aarch64/sve/acle/general/cmpge_6.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_5.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_6.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_5.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_6.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_5.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_6.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpne_3.c: Likewise.
The patterns for the svcmp_wide intrinsics used a VNx16BI
input predicate for all modes, instead of the usual <VPRED>.
That unnecessarily made some input bits significant, but more
importantly, it triggered an ICE in aarch64_sve_same_pred_for_ptest_p
when testing whether a comparison pattern could be fused with a PTEST.
A later patch will add tests for other comparisons.
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_pred_cmp<cmp_op><mode>_wide)
(*aarch64_pred_cmp<cmp_op><mode>_wide_cc): Use <VPRED> instead of
VNx16BI for the governing predicate.
(*aarch64_pred_cmp<cmp_op><mode>_wide_ptest): Likewise.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/cmpeq_1.c: Add more tests.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the non-widening integer forms
of svcmp*. The handling of the PTEST patterns is similar to that
for the earlier svwhile* patch.
Unfortunately, on its own, this triggers a failure in the
pred_clobber_*.c tests. The problem is that, after the patch,
we have a comparison instruction followed by a move into p0.
Combine combines the instructions together, so that the destination
of the comparison is the hard register p0 rather than a pseudo.
This defeats IRA's make_early_clobber_and_input_conflicts, which
requires the source and destination to be pseudo registers.
Before the patch, there was a subreg move between the comparison
and the move into p0, so it was that subreg move that ended up
with a hard register destination.
Arguably the fix for PR87600 should be extended to destination
registers as well as source registers, but in the meantime,
the patch just disables combine for these tests. The tests are
really testing the constraints and register allocation.
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_pred_cmp<cmp_op><mode>_acle)
(*aarch64_pred_cmp<cmp_op><mode>_acle, *cmp<cmp_op><mode>_acle_cc)
(*cmp<cmp_op><mode>_acle_and): New patterns that yield VNx16BI
results for all element types.
* config/aarch64/aarch64-sve-builtins-base.cc
(svcmp_impl::expand): Use them.
(svcmp_wide_impl::expand): Likewise when implementing an svcmp_wide
against an in-range constant.
gcc/testsuite/
* gcc.target/aarch64/sve/pred_clobber_1.c: Disable combine.
* gcc.target/aarch64/sve/pred_clobber_2.c: Likewise.
* gcc.target/aarch64/sve/pred_clobber_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpeq_2.c: Add more cases.
* gcc.target/aarch64/sve/acle/general/cmpeq_4.c: New test.
* gcc.target/aarch64/sve/acle/general/cmpge_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpge_2.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpge_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpge_4.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_2.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpgt_4.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_2.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmple_4.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_2.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmplt_4.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpne_1.c: Likewise.
* gcc.target/aarch64/sve/acle/general/cmpne_2.c: Likewise.
This patch continues the work of making ACLE intrinsics use VNx16BI
for svbool_t results. It deals with the svunpk* intrinsics.
gcc/
* config/aarch64/aarch64-sve.md (@aarch64_sve_punpk<perm_hilo>_acle)
(*aarch64_sve_punpk<perm_hilo>_acle): New patterns.
* config/aarch64/aarch64-sve-builtins-base.cc
(svunpk_impl::expand): Use them for boolean svunpk*.
gcc/testsuite/
* gcc.target/aarch64/sve/acle/general/unpkhi_1.c: New test.
* gcc.target/aarch64/sve/acle/general/unpklo_1.c: Likewise.
The previous patch for PR121294 handled svtrn1/2, svuzp1/2, and svzip1/2.
This one extends it to handle svrev intrinsics, where the same kind of
wrong code can be generated.
gcc/
PR target/121294
* config/aarch64/aarch64.md (UNSPEC_REV_PRED): New unspec.
* config/aarch64/aarch64-sve.md (@aarch64_sve_rev<mode>_acle)
(*aarch64_sve_rev<mode>_acle): New patterns.
* config/aarch64/aarch64-sve-builtins-base.cc
(svrev_impl::expand): Use the new patterns for boolean svrev.
gcc/testsuite/
PR target/121294
* gcc.target/aarch64/sve/acle/general/rev_2.c: New test.
The patterns for the predicate forms of svtrn1/2, svuzp1/2,
and svzip1/2 are shared with aarch64_vectorize_vec_perm_const.
The .H, .S, and .D forms operate on VNx8BI, VNx4BI, and VNx2BI
respectively. Thus, for all four element widths, there is one
significant bit per element, for both the inputs and the output.
That's appropriate for aarch64_vectorize_vec_perm_const but not
for the ACLE intrinsics, where every bit of the output is
significant, and where every bit of the selected input elements
is therefore also significant. The current expansion can lead
the optimisers to simplify inputs by changing the upper bits
of the input elements (since the current patterns claim that
those bits don't matter), which in turn leads to wrong code.
The ACLE expansion should operate on VNx16BI instead, for all
element widths.
There was already a pattern for a VNx16BI-only form of TRN1, for
constructing certain predicate constants. The patch generalises it to
handle the other five permutations as well. For the reasons given in
the comments, this is done by making the permutation unspec an operand
to a new UNSPEC_PERMUTE_PRED, rather than overloading the existing
unspecs, and rather than adding a new unspec for each permutation.
gcc/
PR target/121294
* config/aarch64/iterators.md (UNSPEC_TRN1_CONV): Delete.
(UNSPEC_PERMUTE_PRED): New unspec.
* config/aarch64/aarch64-sve.md (@aarch64_sve_trn1_conv<mode>):
Replace with...
(@aarch64_sve_<perm_insn><mode>_acle)
(*aarch64_sve_<perm_insn><mode>_acle): ...these new patterns.
* config/aarch64/aarch64.cc (aarch64_expand_sve_const_pred_trn):
Update accordingly.
* config/aarch64/aarch64-sve-builtins-functions.h
(binary_permute::expand): Use the new _acle patterns for
predicate operations.
gcc/testsuite/
PR target/121294
* gcc.target/aarch64/sve/acle/general/perm_2.c: New test.
* gcc.target/aarch64/sve/acle/general/perm_3.c: Likewise.
* gcc.target/aarch64/sve/acle/general/perm_4.c: Likewise.
* gcc.target/aarch64/sve/acle/general/perm_5.c: Likewise.
* gcc.target/aarch64/sve/acle/general/perm_6.c: Likewise.
* gcc.target/aarch64/sve/acle/general/perm_7.c: Likewise.
PR121118 is about a case where we try to construct a predicate
constant using a permutation of a PFALSE and a WHILELO. The WHILELO
is a .H operation and its result has mode VNx8BI. However, the
permute instruction expects both inputs to be VNx16BI, leading to
an unrecognisable insn ICE.
VNx8BI is effectively a form of VNx16BI in which every odd-indexed
bit is insignificant. In the PR's testcase that's OK, since those
bits will be dropped by the permutation. But if the WHILELO had been a
VNx4BI, so that only every fourth bit is significant, the input to the
permutation would have had undefined bits. The testcase in the patch
has an example of this.
This feeds into a related ACLE problem that I'd been meaning to
fix for a long time: every bit of an svbool_t result is significant,
and so every ACLE intrinsic that returns an svbool_t should return a
VNx16BI. That doesn't currently happen for ACLE svwhile* intrinsics.
This patch fixes both issues together.
We still need to keep the current WHILE* patterns for autovectorisation,
where the result mode should match the element width. The patch
therefore adds a new set of patterns that are defined to return
VNx16BI instead. For want of a better scheme, it uses an "_acle"
suffix to distinguish these new patterns from the "normal" ones.
The formulation used is:
(and:VNx16BI (subreg:VNx16BI normal-pattern 0) C)
where C has mode VNx16BI and is a canonical ptrue for normal-pattern's
element width (so that the low bit of each element is set and the upper
bits are clear).
This is a bit clunky, and leads to some repetition. But it has two
advantages:
* After g:965564eafb721f8000013a3112f1bba8d8fae32b, converting the
above expression back to normal-pattern's mode will reduce to
normal-pattern, so that the pattern for testing the result using a
PTEST doesn't change.
* It gives RTL optimisers a bit more information, as the new tests
demonstrate.
In the expression above, C is matched using a new "special" predicate
aarch64_ptrue_all_operand, where "special" means that the mode on the
predicate is not necessarily the mode of the expression. In this case,
C always has mode VNx16BI, but the mode on the predicate indicates which
kind of canonical PTRUE is needed.
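A rough model of the effect of the (and ...) formulation (an illustrative C sketch under the same one-bit-per-byte model as above, not GCC code): view a .H WHILELO result as VNx16BI bits, in which the odd-numbered bits may be junk, and AND it with the canonical PTRUE.H, which sets the low bit of each two-bit element:

```c
#include <stdint.h>

/* Model: VNx16BI bits for one 128-bit chunk as a 16-bit mask.  The
   canonical PTRUE.H sets the low bit of each two-bit element.  */
#define PTRUE_H_BITS 0x5555u

/* Sketch of the _acle formulation (and:VNx16BI (subreg:VNx16BI x 0) C):
   whatever the VNx8BI producer left in the odd-numbered bits is
   cleared, so every bit of the VNx16BI result is well defined.  */
static uint16_t to_vnx16bi_h (uint16_t raw_bits)
{
  return raw_bits & PTRUE_H_BITS;
}
```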
gcc/
PR testsuite/121118
* config/aarch64/iterators.md (VNx16BI_ONLY): New mode iterator.
* config/aarch64/predicates.md (aarch64_ptrue_all_operand): New
predicate.
* config/aarch64/aarch64-sve.md
(@aarch64_sve_while_<while_optab_cmp><GPI:mode><VNx16BI_ONLY:mode>_acle)
(@aarch64_sve_while_<while_optab_cmp><GPI:mode><PRED_HSD:mode>_acle)
(*aarch64_sve_while_<while_optab_cmp><GPI:mode><PRED_HSD:mode>_acle)
(*while_<while_optab_cmp><GPI:mode><PRED_HSD:mode>_acle_cc): New
patterns.
* config/aarch64/aarch64-sve-builtins-functions.h
(while_comparison::expand): Use the new _acle patterns that
always return a VNx16BI.
* config/aarch64/aarch64-sve-builtins-sve2.cc
(svwhilerw_svwhilewr_impl::expand): Likewise.
* config/aarch64/aarch64.cc
(aarch64_sve_move_pred_via_while): Likewise.
gcc/testsuite/
PR testsuite/121118
* gcc.target/aarch64/sve/acle/general/pr121118_1.c: New test.
* gcc.target/aarch64/sve/acle/general/whilele_13.c: Likewise.
* gcc.target/aarch64/sve/acle/general/whilelt_6.c: Likewise.
* gcc.target/aarch64/sve2/acle/general/whilege_1.c: Likewise.
* gcc.target/aarch64/sve2/acle/general/whilegt_1.c: Likewise.
* gcc.target/aarch64/sve2/acle/general/whilerw_5.c: Likewise.
* gcc.target/aarch64/sve2/acle/general/whilewr_5.c: Likewise.
If the index to svdupq_lane is variable, or is outside the range of
the .Q form of DUP, the fallback expansion is to convert to VNx2DI and
use TBL. The problem in this PR was that the conversion used subregs,
and on big-endian targets, a bitcast from VNx2DI to another element size
requires a REV[BHW] in the best case or a spill and reload in the worst
case. (See the comment at the head of aarch64-sve.md for details.)
Here we want the conversion to act like svreinterpret, so it should
use aarch64_sve_reinterpret instead of subregs.
gcc/
PR target/121293
* config/aarch64/aarch64-sve-builtins-base.cc (svdupq_lane::expand):
Use aarch64_sve_reinterpret instead of subregs. Explicitly
reinterpret the result back to the required mode, rather than
leaving the caller to take a subreg.
gcc/testsuite/
PR target/121293
* gcc.target/aarch64/sve/acle/general/dupq_lane_9.c: New test.
The following streamlines and generalizes how we find the common
base of the lookup ref and a kill ref when looking through
aggregate copies. In particular this tries to deal with all
variants of punning that happen on the inner MEM_REF after
forwarding of address-taken components of the common base.
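A minimal sketch of the kind of lookup this enables (a hypothetical example in the spirit of the new testcases, not the actual ssa-fre-105.c): the store through p can be forwarded through the aggregate copy, so FRE can fold the final load to 42.

```c
struct S { int a[4]; };

struct S g;

/* FRE can see through the aggregate copy g = *p, match the final load
   g.a[1] against the earlier store p->a[1] = 42 via their common base,
   and fold the return value to 42.  */
int f (struct S *p)
{
  p->a[1] = 42;
  g = *p;
  return g.a[1];
}
```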
gcc/
PR tree-optimization/121362
* tree-ssa-sccvn.cc (vn_reference_lookup_3): Generalize
aggregate copy handling.
gcc/testsuite/
PR tree-optimization/121362
* gcc.dg/tree-ssa/ssa-fre-105.c: New testcase.
* gcc.dg/tree-ssa/ssa-fre-106.c: Likewise.
This patch changes two things. Firstly, we document
-fdump-rtl-<whatever>-graph and other such options under -fdump-tree,
and at least add a remark about this under -fdump-rtl. Secondly, the
documentation incorrectly says that -fdump-tree-<whatever>-graph is not
implemented; fix that.
gcc/ChangeLog:
* doc/invoke.texi: Add remark about -options being documented
under -fdump-tree. Remove remark about -graph working only for
RTL.
Signed-off-by: Filip Kastl <fkastl@suse.cz>
Don't hoist a vector set that is not all 0s or all 1s outside of the
loop, to avoid extra spills.
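For illustration (a hypothetical example, not the PR's testcase): vectorizing a loop like the one below materializes a broadcast of the constant 3, which is neither all 0s nor all 1s; placing that single vector set before the loop rather than inside it keeps the value live across the whole loop, which can raise register pressure and force spills.

```c
/* Hypothetical example: the vectorizer broadcasts the constant 3 into
   a vector register for the loop body.  The broadcast is neither
   all-0s nor all-1s, so the pass now leaves such a vector set inside
   the loop rather than hoisting it and risking extra spills.  */
void add3 (int *restrict dst, const int *restrict src, int n)
{
  for (int i = 0; i < n; i++)
    dst[i] = src[i] + 3;
}
```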
gcc/
PR target/120941
* config/i386/i386-features.cc (x86_cse_kind): Moved before
ix86_place_single_vector_set.
(redundant_load): Likewise.
(ix86_place_single_vector_set): Replace the last argument with a
pointer to redundant_load. For X86_CSE_VEC_DUP, don't place the
vector set outside of the loop to avoid extra spills.
(remove_redundant_vector_load): Pass load to
ix86_place_single_vector_set.
gcc/testsuite/
PR target/120941
* gcc.target/i386/pr120941-1.c: New test.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
CWG1709 just codifies existing GCC (and clang) behavior, so this
patch merely adds a testcase for it.
2025-08-03 Jakub Jelinek <jakub@redhat.com>
PR preprocessor/120778
* g++.dg/DRs/dr1709.C: New test.
My changes for the "Module Declarations Shouldn’t be Macros" paper broke
the following testcase. The backup handling intentionally tries to
drop CPP_PRAGMA_EOL token if things go wrong, which is desirable for the
case where we haven't committed to the module preprocessing directive
(i.e. changed the first token to the magic one). In that case there is
no preprocessing directive start and so CPP_PRAGMA_EOL would be wrong.
If there is a premature new-line after we've changed the first token though,
we shouldn't drop CPP_PRAGMA_EOL, because otherwise we ICE in the FE.
While clang++ and MSVC accept the testcase, in my reading it is invalid
at least under the C++23 and newer wording, and I think the changes have
been a DR: https://eel.is/c++draft/cpp.module has no exception for
new-lines, and https://eel.is/c++draft/cpp.pre#1.sentence-2 says that a
new-line (unless deleted during phase 2 when after a backslash) ends the
preprocessing directive.
The patch arranges for eol to be set only in the not_module case.
2025-08-03 Jakub Jelinek <jakub@redhat.com>
PR c++/120845
libcpp/
* lex.cc (cpp_maybe_module_directive): Move eol variable declaration
to the start of the function, initialize to false and only set it to
peek->type == CPP_PRAGMA_EOL in the not_module case. Formatting fix.
gcc/testsuite/
* g++.dg/modules/cpp-21.C: New test.
I've tried compiling
#include <bits/stdc++.h>
with -std=c++26 -fdump-lang-all
and
for i in `grep '^Class std::[^_]' *.C.001l.class | sed 's/^Class //;s/[< ].*$//' | sort -u | grep -v '::.*::'`; do grep -q $i /usr/src/gcc/libstdc++-v3/src/c++23/std.cc.in || echo $i;
done
This printed
std::auto_ptr
std::binary_function
std::owner_equal
std::owner_hash
std::unary_function
where auto_ptr, binary_function and unary_function have been removed in
earlier revisions of C++ (so are rightly absent), while owner_equal and
owner_hash are genuinely missing.
The following patch adds the latter two.
Wonder how to automatically discover other missing exports (like
std::byteswap in PR121373); maybe one could dig that stuff somehow out
of the raw dump (look for identifiers in the std namespace (and perhaps
at least in inlined namespaces thereof) which don't start with an
underscore).
2025-08-03 Jakub Jelinek <jakub@redhat.com>
* src/c++23/std.cc.in (std::owner_equal, std::owner_hash): Export.
There are many post-reload define_insn_and_split's that just append
a (clobber (reg:CC REG_CC)) to the pattern. Instead of repeating
the original patterns, avr_add_ccclobber (curr_insn) is used to do
that job.
This avoids repeating patterns all over the place, and splits that do
something different (like using a canonical form) stand out clearly.
gcc/
* config/avr/avr.md (define_insn_and_split) [reload_completed]:
For splits that just append a (clobber (reg:CC REG_CC)) to
the pattern, use avr_add_ccclobber (curr_insn) instead of
repeating the original pattern.
* config/avr/avr-dimode.md: Same.
* config/avr/avr-fixed.md: Same.