armclang_o3_ov1_o96/ | gcc_o3_ov1_o96/ |
---|---|
[ 0 / 3 ] Compilation of some functions is not optimized for the target processor. Architecture-specific options are needed to produce efficient code for a specific processor (-mcpu=native). | [ 3.00 / 3 ] Architecture-specific option -mcpu is used |
[ 0 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information. Functions without compilation information (typically not compiled with -g) account for 100.00% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to some compiler built-in functions (typically math) or statically linked libraries. This warning can be ignored in that case. | [ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions without compilation information. Functions without compilation information (typically not compiled with -g) account for 0.00% of the time spent in analyzed modules. Check that -g is present. Remark: if -g is indeed used, this can also be due to some compiler built-in functions (typically math) or statically linked libraries. This warning can be ignored in that case. |
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.18 % of the execution time). To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed. | [ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.18 % of the execution time). To obtain representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed. |
[ 0 / 3 ] Some functions are compiled with a low optimization level (O0 or O1). For better performance, it is advised to help the compiler by using a proper optimization level (-O2 or higher). Warning: depending on the compiler, more aggressive optimization levels can decrease numeric accuracy. | [ 3 / 3 ] Optimization level option is correctly used |
[ 0 / 4 ] Application profile is too short (9.30 s). If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun to increase the runtime duration: for example, use a larger dataset or include a repetition loop. | [ 0 / 4 ] Application profile is too short (8.91 s). If the overall application profiling time is less than 10 seconds, many of the measurements at function or loop level will very likely be under the measurement quality threshold (0.1 seconds). Rerun to increase the runtime duration: for example, use a larger dataset or include a repetition loop. |
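
The checks above come down to three compiler flags (-mcpu=native, -g, -O2 or higher) plus one methodology point (profiled runtime above 10 seconds). A minimal sketch of both, assuming a C application built from placeholder sources `main.c` and `kernels.c` and a hypothetical `kernel` hotspot (names are not taken from the report):

```c
/*
 * Illustrative build lines only; source and output names are placeholders:
 *
 *   armclang -O3 -g -mcpu=native -o app main.c kernels.c
 *   gcc      -O3 -g -mcpu=native -o app main.c kernels.c
 *
 * -mcpu=native : tune generated code for the host processor (architecture check)
 * -g           : keep debug info so the profiler can attribute time to source
 * -O2 / -O3    : the optimization levels the checks above expect
 */
#include <stddef.h>

/* Hypothetical kernel standing in for the application's hotspot. */
static void kernel(double *a, const double *b, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        a[i] += 2.0 * b[i];
}

/* Repetition loop: repeat the same work so the profiled wall time exceeds
 * the 10 s / 0.1 s measurement-quality thresholds quoted in the report. */
void run_repeated(double *a, const double *b, size_t n, int reps)
{
    for (int r = 0; r < reps; ++r)
        kernel(a, b, n);
}
```
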
armclang_o3_ov1_o96/ | gcc_o3_ov1_o96/ |
---|---|
[ 4 / 4 ] Loop profile is not flat. At least one loop coverage is greater than 4% (58.68%), representing a hotspot for the application | [ 4 / 4 ] Loop profile is not flat. At least one loop coverage is greater than 4% (81.69%), representing a hotspot for the application |
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations. BLAS2 calls usually make poor use of the cache and could benefit from inlining. | [ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations. BLAS2 calls usually make poor use of the cache and could benefit from inlining. |
[ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (77.74%). If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance. | [ 4 / 4 ] Enough of the experiment time is spent in analyzed innermost loops (81.69%). If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance. |
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations. It could be more efficient to inline BLAS1 operations by hand | [ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations. It could be more efficient to inline BLAS1 operations by hand |
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions) | [ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions) |
[ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (84.23%). If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance. | [ 4 / 4 ] Enough of the experiment time is spent in analyzed loops (85.34%). If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance. |
[ 3 / 3 ] Cumulative Outermost/In-between loops coverage (6.49%) is lower than cumulative innermost loop coverage (77.74%). Having cumulative Outermost/In-between loops coverage greater than cumulative innermost loop coverage would make loop optimization more complex | [ 3 / 3 ] Cumulative Outermost/In-between loops coverage (3.65%) is lower than cumulative innermost loop coverage (81.69%). Having cumulative Outermost/In-between loops coverage greater than cumulative innermost loop coverage would make loop optimization more complex |
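
The BLAS1/BLAS2 checks above report 0.00% coverage for both runs, but the advice they carry is general: very short BLAS1/BLAS2 calls often cost more in call overhead and cache traffic than the arithmetic itself. A minimal sketch of the "inline BLAS1 by hand" suggestion, using a hypothetical `fused_update` kernel that is not taken from the analyzed application:

```c
#include <stddef.h>

/* Hand-inlined BLAS1-style update: instead of calling a library daxpy on a
 * short vector, the update is written directly in the surrounding loop so
 * the compiler can fuse and vectorize it with neighbouring work.
 * Illustrative only; not code from the analyzed application. */
void fused_update(double *y, const double *x, const double *w,
                  double alpha, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        y[i] += alpha * x[i];   /* daxpy-style update written in place   */
        y[i] *= w[i];           /* neighbouring work fused into the loop */
    }
}
```
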
Analysis | Issue | r_1 | r_2 |
---|---|---|---|
Loop Computation Issues | Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA | 0 | 1 |
Loop Computation Issues | Presence of a large number of scalar integer instructions | 1 | 1 |
Control Flow Issues | Presence of 2 to 4 paths | 0 | 2 |
Control Flow Issues | Presence of more than 4 paths | 2 | 0 |
Control Flow Issues | Non-innermost loop | 2 | 2 |
Data Access Issues | Presence of constant non-unit stride data access | 1 | 1 |
Data Access Issues | Presence of indirect access | 2 | 3 |
Vectorization Roadblocks | Presence of 2 to 4 paths | 0 | 2 |
Vectorization Roadblocks | Presence of more than 4 paths | 2 | 0 |
Vectorization Roadblocks | Non-innermost loop | 2 | 2 |
Vectorization Roadblocks | Presence of constant non-unit stride data access | 1 | 1 |
Vectorization Roadblocks | Presence of indirect access | 2 | 3 |
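
To map the counts in the table above back to source constructs, the loops below show what each counted pattern typically looks like. A minimal sketch over plain double arrays; the function and variable names are hypothetical and not taken from the analyzed application:

```c
#include <stddef.h>

/* Illustrative-only loop patterns matching the roadblock categories counted
 * in the table above; not code from the analyzed application. */
void roadblock_examples(double *a, const double *b, const int *idx,
                        double s, size_t n)
{
    /* Constant non-unit stride: consecutive iterations read b[2*i], which
     * usually prevents contiguous vector loads (b must hold 2*n elements). */
    for (size_t i = 0; i < n; ++i)
        a[i] = b[2 * i];

    /* Indirect access: the gather through idx[i] is data dependent, so the
     * compiler cannot prove contiguity or the absence of conflicts. */
    for (size_t i = 0; i < n; ++i)
        a[i] += b[idx[i]];

    /* Several control-flow paths in the loop body: each branch is a path the
     * vectorizer must mask or split; the true branch has the a*b + c shape
     * that can map to an FMA instruction. */
    for (size_t i = 0; i < n; ++i) {
        if (b[i] > 0.0)
            a[i] = s * b[i] + a[i];
        else
            a[i] = 0.0;
    }
}
```
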