Options

Stylizer

Runs compared:
oneapi_2025.1_native_fast.13904.all.one.maqao
oneapi_2025.1_native.13904.all.one.maqao
oneapi_2025.1_avx2.13904.all.one.maqao
oneapi_2025.1_novect.13904.all.one.maqao
llvm_20.1.1_native_fast.13904.all.one.maqao
llvm_20.1.1_native.13904.all.one.maqao
llvm_20.1.1_avx2.13904.all.one.maqao
llvm_20.1.1_novect.13904.all.one.maqao
gcc_14.2.0_native_fast.13904.all.one.maqao
gcc_14.2.0_avx2.13904.all.one.maqao
gcc_14.2.0_novect.13904.all.one.maqao

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

[ 3 / 3 ] Host configuration allows retrieval of all necessary metrics.

Not available for this run

Not available for this run

Not available for this run

Not available for this run

Not available for this run

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.

Not available for this run

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions.
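
The fast-math hint above applies to every run where the check appears. As a minimal sketch (the file name and compile lines are illustrative, not taken from this report), the reduction below is the kind of loop whose vectorization -ffast-math / -Ofast can unlock, together with the accuracy trade-off mentioned in the warning:

```c
/* fastmath_demo.c -- illustrative only; adjust compile lines to your compiler:
 *   gcc   -O3 -march=native    fastmath_demo.c -lm -o strict
 *   gcc   -Ofast -march=native fastmath_demo.c -lm -o fast   (-Ofast implies -ffast-math)
 *   clang -O3 -ffast-math ...    or    icx -O3 -fp-model=fast ...
 */
#include <math.h>
#include <stdio.h>

/* A reduction: without -ffast-math the compiler must keep the strict
 * left-to-right FP evaluation order, which blocks vectorization of the sum.
 * With -ffast-math it may reassociate and vectorize, at the cost of bit-exact
 * reproducibility; errno after math calls such as sqrt() is also no longer
 * guaranteed to be set. */
double norm(const double *x, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i] * x[i];
    return sqrt(s);
}

int main(void) {
    double v[1000];
    for (int i = 0; i < 1000; i++) v[i] = 1.0 / (i + 1);
    printf("norm = %.17g\n", norm(v, 1000));
    return 0;
}
```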

[ 2.87 / 3 ] Architecture-specific option -x Host is used

[ 2.87 / 3 ] Architecture-specific option -x Host is used

[ 2.88 / 3 ] Architecture-specific option -x CORE is used

[ 0 / 3 ] Compilation of some functions is not optimized for the target processor

Architecture-specific options are needed to produce efficient code for a specific processor (-x(target) or -ax(target)).

[ 3.00 / 3 ] Architecture-specific option -march=native is used

[ 3.00 / 3 ] Architecture-specific option -march=native is used

[ 3.00 / 3 ] Architecture-specific option -march=core-avx2 is used

[ 0 / 3 ] Compilation of some functions is not optimized for the target processor

Architecture-specific options are needed to produce efficient code for a specific processor (-x(target) or -ax(target)).

[ 3.00 / 3 ] Architecture-specific option -march=sapphirerapids is used

[ 3.00 / 3 ] Architecture-specific option -march=core-avx2 is used

[ 0 / 3 ] Compilation of some functions is not optimized for the target processor

The -march=x86-64 option is used, but it is not specific enough to produce efficient code. Architecture-specific options are needed to produce efficient code for a specific processor (-x(target) or -ax(target)).
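
To make the architecture-flag recommendation concrete, here is a short hedged sketch (the file name and compile lines are examples, not the actual build commands of these runs) showing why a generic target leaves vector width on the table:

```c
/* march_demo.c -- illustrative only.
 *   gcc -O3 -march=x86-64 march_demo.c   # generic baseline (SSE2 code generation)
 *   gcc -O3 -march=native march_demo.c   # uses AVX2/AVX-512 etc. of the build host
 *   icx -O3 -xHost        march_demo.c   # Intel-compiler way of targeting the host
 * With only the generic target, the compiler may not emit wider vector
 * instructions even though the machine running the code supports them.
 */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    /* Simple, vectorization-friendly loop: the width of the generated
     * vector instructions depends on the architecture flags above. */
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];
}
```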

[ 2.87 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 2.87 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 2.88 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 2.80 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 3.00 / 3 ] Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer

The -g option gives access to debugging information, such as source locations. -fno-omit-frame-pointer improves the accuracy of the callchains found during application profiling.

[ 4 / 4 ] Application profile is long enough (198.56 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (198.71 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (204.15 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (234.58 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (209.39 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (215.25 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (216.26 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (231.95 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (220.14 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (218.58 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (240.03 s)

To have good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 5.65 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 5.27 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 4.49 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 4.51 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 7.41 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 10.82 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 8.91 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 9.76 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 9.03 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 6.47 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 6.39 % of the execution time)

To have representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible can be analyzed.

[ 2.87 / 3 ] Optimization level option is correctly used

[ 2.87 / 3 ] Optimization level option is correctly used

[ 2.88 / 3 ] Optimization level option is correctly used

[ 2.80 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.

Strategizer

Runs compared:
oneapi_2025.1_native_fast.13904.all.one.maqao
oneapi_2025.1_native.13904.all.one.maqao
oneapi_2025.1_avx2.13904.all.one.maqao
oneapi_2025.1_novect.13904.all.one.maqao
llvm_20.1.1_native_fast.13904.all.one.maqao
llvm_20.1.1_native.13904.all.one.maqao
llvm_20.1.1_avx2.13904.all.one.maqao
llvm_20.1.1_novect.13904.all.one.maqao
gcc_14.2.0_native_fast.13904.all.one.maqao
gcc_14.2.0_avx2.13904.all.one.maqao
gcc_14.2.0_novect.13904.all.one.maqao

[ 3 / 4 ] CPU activity is below 90% (80.78%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (80.98%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (81.45%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (83.58%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (81.57%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (82.26%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (82.11%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (83.34%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (82.45%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (82.07%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] CPU activity is below 90% (83.55%)

CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
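
One of the hints above is to improve parallel load balancing and/or scheduling. The sketch below (file name, sizes, and workload are hypothetical, assuming an OpenMP application) shows the kind of dynamic schedule that lets idle threads pick up work from slower ones:

```c
/* schedule_demo.c -- a sketch, not taken from the profiled application.
 * Compile (illustrative): gcc -O3 -fopenmp schedule_demo.c -lm
 */
#include <math.h>
#include <omp.h>
#include <stdio.h>

/* Dummy workload whose cost grows with i, to create imbalance (hypothetical). */
static double expensive_work(int i) {
    double s = 0.0;
    for (int k = 0; k < (i % 1000) * 100; k++) s += sin(k * 1e-3);
    return s;
}

int main(void) {
    double total = 0.0;
    /* schedule(dynamic, 16): an idle thread grabs the next chunk of 16
     * iterations instead of being stuck with a fixed static share, which
     * reduces the idle time reported by the CPU-activity check above. */
    #pragma omp parallel for schedule(dynamic, 16) reduction(+:total)
    for (int i = 0; i < 10000; i++)
        total += expensive_work(i);
    printf("total = %f\n", total);
    return 0;
}
```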

[ 4 / 4 ] Affinity is good (97.94%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.12%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.13%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.26%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.12%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.28%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.17%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.39%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.08%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.24%)

Threads are not migrating between CPU cores: they are probably successfully pinned.

[ 4 / 4 ] Affinity is good (98.44%)

Threads are not migrating between CPU cores: they are probably successfully pinned.
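
To make the affinity statement concrete, here is a small hedged sketch (not part of the profiled application) that prints where each OpenMP thread runs; with placement controlled through OMP_PLACES/OMP_PROC_BIND, the assignments should stay stable across the run, which is what the metric above reflects:

```c
/* pinning_demo.c -- a sketch for checking thread placement.
 * Compile (illustrative): gcc -O3 -fopenmp pinning_demo.c
 * Run pinned, e.g.: OMP_PLACES=cores OMP_PROC_BIND=close ./a.out
 */
#define _GNU_SOURCE
#include <omp.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    /* Each thread reports the core it is currently running on. */
    #pragma omp parallel
    {
        printf("thread %d on CPU %d\n", omp_get_thread_num(), sched_getcpu());
    }
    return 0;
}
```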

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (64.76%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (64.82%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (65.87%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (65.67%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (63.24%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (61.93%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (62.45%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (62.99%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (64.75%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (67.92%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (70.84%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (1.59%) lower than cumulative innermost loop coverage (63.17%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (1.59%) lower than cumulative innermost loop coverage (63.23%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (1.89%) lower than cumulative innermost loop coverage (63.99%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (1.64%) lower than cumulative innermost loop coverage (64.04%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (9.93%) lower than cumulative innermost loop coverage (53.30%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (9.73%) lower than cumulative innermost loop coverage (52.20%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (9.89%) lower than cumulative innermost loop coverage (52.56%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (7.80%) lower than cumulative innermost loop coverage (55.20%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (8.62%) lower than cumulative innermost loop coverage (56.14%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (8.45%) lower than cumulative innermost loop coverage (59.47%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative Outermost/In between loops coverage (9.48%) lower than cumulative innermost loop coverage (61.36%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 4 ] A significant number of threads are idle (19.15%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (18.96%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (18.49%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (16.36%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (18.34%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (17.65%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (17.82%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (16.57%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (17.54%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (17.87%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 3 / 4 ] A significant number of threads are idle (16.37%)

On average, more than 10% of the observed threads are idle. Such threads are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.
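
As a minimal sketch of what the BLAS2 inlining hint can look like (the function name, row-major layout, and the assumption that the original code called something like cblas_dgemv() are illustrative, not taken from the application):

```c
/* blas2_inline_demo.c -- a drop-in sketch, assuming a small matrix-vector
 * product that would otherwise be an external BLAS2 call.
 */
#include <stddef.h>

/* y = A*x + y for a small row-major m x n matrix. Hand-inlining a tiny GEMV
 * like this lets the compiler keep x and y hot in registers/cache and fuse
 * the loop with the surrounding code, which a call into an external BLAS
 * library would prevent. */
void small_dgemv(size_t m, size_t n,
                 const double *restrict A,
                 const double *restrict x,
                 double *restrict y) {
    for (size_t i = 0; i < m; i++) {
        double acc = y[i];
        for (size_t j = 0; j < n; j++)
            acc += A[i * n + j] * x[j];
        y[i] = acc;
    }
}
```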

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (63.17%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (63.23%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (63.99%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (64.04%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (53.30%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (52.20%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (52.56%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (55.20%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (56.14%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (59.47%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (61.36%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
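
As an illustration of the innermost-loop optimizations referred to above, the sketch below (array names and sizes are hypothetical) shows a loop interchange that gives the innermost loop unit-stride, vectorizable accesses:

```c
/* loop_interchange_demo.c -- a sketch; not code from the profiled application.
 * Making the innermost loop run over contiguous memory (unit stride) is a
 * typical innermost-loop optimization.
 */
#include <stddef.h>

#define N 1024

/* Column-wise traversal: the innermost loop strides by N doubles, which
 * defeats vectorization and wastes most of each cache line.
 * col_sum must be zeroed by the caller. */
void sum_cols_slow(double A[N][N], double col_sum[N]) {
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            col_sum[j] += A[i][j];
}

/* Interchanged loops: the innermost loop now has unit stride over A[i][...],
 * so it can be vectorized and uses every byte of each cache line. */
void sum_cols_fast(double A[N][N], double col_sum[N]) {
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            col_sum[j] += A[i][j];
}
```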

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.
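
A short sketch of the BLAS1 hand-inlining hint (illustrative, assuming the application performed consecutive axpy and dot operations on the same vectors; not code taken from it):

```c
/* blas1_inline_demo.c -- a drop-in sketch.
 * Instead of calling cblas_daxpy() followed by cblas_ddot() (two passes over
 * the vectors plus two call overheads), the hand-inlined version fuses both
 * BLAS1 operations into a single pass.
 */
#include <stddef.h>

double axpy_then_dot(size_t n, double a,
                     const double *restrict x, double *restrict y) {
    double dot = 0.0;
    for (size_t i = 0; i < n; i++) {
        y[i] += a * x[i];     /* axpy: y = a*x + y          */
        dot  += y[i] * x[i];  /* dot:  <y, x> on updated y  */
    }
    return dot;
}
```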

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.21%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.12%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.21%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.60%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.12%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.15%) is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.16%) is spent in Libm/SVML (special functions)

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (46.28%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (46.34%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (46.52%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (46.59%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (44.31%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (43.42%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (43.73%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (44.65%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (42.30%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (45.69%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (48.69%), representing a hotspot for the application.

Optimizer

| Issue category | Analysis | r_1 | r_2 | r_3 | r_4 | r_5 | r_6 | r_7 | r_8 | r_9 | r_10 | r_11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Loop Computation Issues | Presence of expensive FP instructions | 3 | 3 | 3 | 3 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Loop Computation Issues | Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA | 0 | 0 | 0 | 6 | 1 | 1 | 1 | 7 | 1 | 1 | 7 |
| Loop Computation Issues | Presence of a large number of scalar integer instructions | 5 | 5 | 4 | 4 | 2 | 2 | 2 | 4 | 3 | 2 | 2 |
| Control Flow Issues | Presence of calls | 3 | 3 | 3 | 5 | 2 | 2 | 2 | 5 | 3 | 3 | 3 |
| Control Flow Issues | Presence of 2 to 4 paths | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 0 |
| Control Flow Issues | Presence of more than 4 paths | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 3 | 3 | 6 |
| Control Flow Issues | Non-innermost loop | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Data Access Issues | Presence of constant non-unit stride data access | 3 | 3 | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 2 |
| Data Access Issues | Presence of indirect access | 4 | 4 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| Data Access Issues | More than 10% of the vector loads instructions are unaligned | 2 | 2 | 2 | 5 | 1 | 1 | 1 | 6 | 2 | 2 | 0 |
| Data Access Issues | Presence of expensive instructions: scatter/gather | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Data Access Issues | Presence of special instructions executing on a single port | 6 | 6 | 6 | 5 | 4 | 4 | 4 | 4 | 5 | 5 | 0 |
| Data Access Issues | More than 20% of the loads are accessing the stack | 3 | 3 | 4 | 4 | 2 | 2 | 2 | 5 | 4 | 5 | 5 |
| Vectorization Roadblocks | Presence of calls | 3 | 3 | 3 | 5 | 2 | 2 | 2 | 5 | 3 | 3 | 3 |
| Vectorization Roadblocks | Presence of 2 to 4 paths | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1 | 0 |
| Vectorization Roadblocks | Presence of more than 4 paths | 2 | 2 | 2 | 2 | 1 | 1 | 1 | 2 | 3 | 3 | 6 |
| Vectorization Roadblocks | Non-innermost loop | 1 | 1 | 1 | 1 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Vectorization Roadblocks | Presence of constant non-unit stride data access | 3 | 3 | 5 | 5 | 4 | 4 | 4 | 4 | 4 | 4 | 2 |
| Vectorization Roadblocks | Presence of indirect access | 4 | 4 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| Vectorization Roadblocks | Out of user code | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Inefficient Vectorization | Presence of expensive instructions: scatter/gather | 2 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Inefficient Vectorization | Presence of special instructions executing on a single port | 6 | 6 | 6 | 5 | 4 | 4 | 4 | 4 | 5 | 5 | 0 |
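
Several rows of the table above concern FMA usage. As a hedged illustration of that row (file name and compile lines are examples only, not this project's build commands), the loop below can be contracted into fused multiply-add instructions when the target flags allow it:

```c
/* fma_demo.c -- illustrative only.
 *   gcc -O3 -march=x86-64 -S fma_demo.c   # separate multiply and add (baseline ISA)
 *   gcc -O3 -march=native -S fma_demo.c   # vfmadd... instructions on FMA-capable hosts
 */
#include <stddef.h>

/* a*b + c: with FMA-capable target flags, and FP contraction enabled, the
 * compiler can emit a single fused multiply-add per element, which is what
 * the FMA metric in the table counts. */
void triad(double *restrict y, const double *restrict a,
           const double *restrict b, const double *restrict c, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a[i] * b[i] + c[i];
}
```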