options

Stylizer

Runs: gcc_o3_ov1_o52, gcc_o3-ffastmath_ov1_o52, gcc_ofast_ov1_o52, icx_o3_ov1_o52, icx_o3-ffastmath_ov1_o52, icx_fast_ov1_o52

[ 3.00 / 3 ] Architecture-specific option -march=skylake-avx512 is used

[ 3.00 / 3 ] Architecture-specific option -march=skylake-avx512 is used

[ 3.00 / 3 ] Architecture-specific option -march=skylake-avx512 is used

[ 3.00 / 3 ] Architecture-specific option -march=native is used

[ 3.00 / 3 ] Architecture-specific option -march=native is used

[ 3.00 / 3 ] Architecture-specific option -x HOST is used
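
These architecture-specific flags decide which SIMD instruction sets the compilers may target. As a minimal, hypothetical sketch (not part of the report), the snippet below uses standard predefined feature macros to confirm at build time what -march=skylake-avx512, -march=native, or -x HOST actually enabled.

```c
/* Hypothetical sketch (not from the analyzed application): print which SIMD
 * target the architecture-specific flags enabled. GCC and ICX both predefine
 * these feature macros when the corresponding instruction sets are available
 * to the code generator. */
#include <stdio.h>

int main(void)
{
#if defined(__AVX512F__)
    puts("AVX-512F code generation enabled (e.g. -march=skylake-avx512)");
#elif defined(__AVX2__)
    puts("AVX2 code generation enabled");
#else
    puts("No AVX2/AVX-512 target detected; check the -march / -x flags");
#endif
    return 0;
}
```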

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information

Functions without compilation information (typically not compiled with -g) account for 0.01% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to compiler built-in functions (typically math) or statically linked libraries; in that case, this warning can be ignored.

[ 0 / 0 ] Fastmath not used

Consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point computation consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions will be relaxed; typically, 'errno' will no longer be set after calling some math functions. A small code illustration follows the per-run results below.

Not available for this run

Not available for this run

Not available for this run

Not available for this run

Not available for this run
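
As a small illustration of what -ffast-math changes (a hypothetical sketch, not code from the analyzed application): a floating-point reduction like the one below carries a serial dependence on the accumulator under strict IEEE semantics; with -ffast-math or -Ofast, GCC and ICX are allowed to re-associate the additions and vectorize the loop, at the cost of the relaxed accuracy and errno behaviour described above.

```c
/* Hypothetical sketch: a reduction that benefits from -ffast-math/-Ofast.
 * Without relaxed FP semantics the additions must stay in program order,
 * which blocks SIMD vectorization of the accumulation. */
double sum(const double *a, long n)
{
    double s = 0.0;
    for (long i = 0; i < n; i++)
        s += a[i];          /* serial dependence on s under strict IEEE rules */
    return s;
}
```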

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.16% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.17% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.15% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.19% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.20% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.17% of the execution time)

To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 3 / 3 ] Optimization level option is correctly used

[ 4 / 4 ] Application profile is long enough (38.60 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (37.34 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (42.63 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (36.17 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (33.67 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

[ 4 / 4 ] Application profile is long enough (38.03 s)

To obtain good-quality measurements, it is advised that the application profiling time be greater than 10 seconds.

Strategizer

Runs: gcc_o3_ov1_o52, gcc_o3-ffastmath_ov1_o52, gcc_ofast_ov1_o52, icx_o3_ov1_o52, icx_o3-ffastmath_ov1_o52, icx_fast_ov1_o52

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (86.64%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (52.73%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (55.63%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (50.33%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (47.95%), representing a hotspot for the application.

[ 4 / 4 ] Loop profile is not flat

At least one loop has a coverage greater than 4% (51.82%), representing a hotspot for the application.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in BLAS2 operations

BLAS2 calls usually make poor use of the cache and could benefit from inlining.
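
To make the BLAS2 remark concrete, here is a minimal, hypothetical sketch (not taken from the analyzed application) of a dgemv-style matrix-vector product written directly in user code; inlined this way, the compiler can fuse it with the surrounding computation, keep the row traversal unit-stride, and vectorize the inner loop.

```c
/* Hypothetical sketch: a BLAS2-like matrix-vector product (y = A*x) written
 * inline instead of calling a library routine such as dgemv. Row-major,
 * unit-stride access on A lets the compiler vectorize the inner loop and
 * avoids call overhead on small problem sizes. */
static inline void matvec(long n, const double *A, const double *x, double *y)
{
    for (long i = 0; i < n; i++) {
        double acc = 0.0;
        for (long j = 0; j < n; j++)
            acc += A[i * n + j] * x[j];
        y[i] = acc;
    }
}
```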

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (86.65%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (52.73%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (55.64%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (66.46%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (63.53%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (68.56%)

If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.

[ 3 / 3 ] Less than 10% (0.00%) of the time is spent in BLAS1 operations

It could be more efficient to inline BLAS1 operations by hand.
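
Similarly for BLAS1, the hand-inlined form is usually a single loop. The sketch below (hypothetical, not from the analyzed application) shows a daxpy-style update written in place of a library call, which removes call overhead and lets the compiler vectorize it together with the surrounding code.

```c
/* Hypothetical sketch: a BLAS1 daxpy-style update (y += alpha * x) written
 * directly in user code. The restrict qualifiers tell the compiler that x
 * and y do not alias, which helps vectorization. */
static inline void axpy(long n, double alpha,
                        const double *restrict x, double *restrict y)
{
    for (long i = 0; i < n; i++)
        y[i] += alpha * x[i];
}
```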

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 2 / 2 ] Less than 10% (0.00%) of the time is spent in Libm/SVML (special functions)

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (90.07%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (88.10%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (88.86%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (87.83%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (86.40%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (88.78%)

If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (3.42%) is lower than cumulative innermost loop coverage (86.65%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (35.36%) is lower than cumulative innermost loop coverage (52.73%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (33.22%) is lower than cumulative innermost loop coverage (55.64%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (21.37%) is lower than cumulative innermost loop coverage (66.46%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (22.87%) is lower than cumulative innermost loop coverage (63.53%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

[ 3 / 3 ] Cumulative outermost/in-between loop coverage (20.21%) is lower than cumulative innermost loop coverage (68.56%)

Having a cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex.

Optimizer

(Columns r_1 to r_6 correspond to the six runs, in the order listed above.)

Analysis | r_1 | r_2 | r_3 | r_4 | r_5 | r_6

Loop Computation Issues
Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA | 0 | 0 | 0 | 1 | 1 | 1
Presence of a large number of scalar integer instructions | 2 | 2 | 2 | 2 | 2 | 2
Low iteration count | 0 | 0 | 0 | 1 | 1 | 1

Control Flow Issues
Presence of calls | 1 | 1 | 1 | 1 | 1 | 1
Presence of 2 to 4 paths | 3 | 0 | 0 | 2 | 2 | 2
Presence of more than 4 paths | 0 | 3 | 3 | 1 | 1 | 1
Non-innermost loop | 3 | 3 | 3 | 3 | 3 | 3
Low iteration count | 0 | 0 | 0 | 1 | 1 | 1

Data Access Issues
Presence of constant non-unit stride data access | 2 | 0 | 0 | 0 | 0 | 0
Presence of indirect access | 3 | 1 | 1 | 2 | 2 | 2
Presence of expensive instructions: scatter/gather | 0 | 0 | 0 | 1 | 1 | 1
Presence of special instructions executing on a single port | 0 | 2 | 2 | 1 | 1 | 1
More than 20% of the loads are accessing the stack | 1 | 2 | 2 | 3 | 3 | 3

Vectorization Roadblocks
Presence of calls | 1 | 1 | 1 | 1 | 1 | 1
Presence of 2 to 4 paths | 3 | 0 | 0 | 2 | 2 | 2
Presence of more than 4 paths | 0 | 3 | 3 | 1 | 1 | 1
Non-innermost loop | 3 | 3 | 3 | 3 | 3 | 3
Presence of constant non-unit stride data access | 2 | 0 | 0 | 0 | 0 | 0
Presence of indirect access | 3 | 1 | 1 | 2 | 2 | 2

Inefficient Vectorization
Presence of expensive instructions: scatter/gather | 0 | 0 | 0 | 1 | 1 | 1
Presence of special instructions executing on a single port | 0 | 2 | 2 | 1 | 1 | 1
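
As an illustration of the first row above (FMA usage), the sketch below (hypothetical, not from the analyzed application) shows a multiply-add pattern that GCC and ICX can contract into FMA instructions when the target architecture provides them and floating-point contraction is allowed (e.g. via -ffp-contract=fast).

```c
/* Hypothetical sketch: a fused multiply-add candidate. With an FMA-capable
 * target (e.g. -march=skylake-avx512 or -x HOST) and contraction allowed,
 * a * x[i] + y[i] can be emitted as a single vfmadd instruction. */
void axpy_out(long n, float a, const float *x, const float *y, float *out)
{
    for (long i = 0; i < n; i++)
        out[i] = a * x[i] + y[i];
}
```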