Help is available by moving the cursor over any symbol or by checking the MAQAO website.
[ 4 / 4 ] Application profile is long enough (4712.19 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.39 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.33% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
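As a quick way to verify this setting before a run, the following minimal C sketch (an illustration only, not part of MAQAO) reads the current value of kernel.perf_event_paranoid and warns when it is restrictive:

```c
/* Minimal sketch: check kernel.perf_event_paranoid before profiling.
 * A value of 2 or higher restricts access to some hardware counters. */
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/sys/kernel/perf_event_paranoid", "r");
    int level;

    if (f == NULL || fscanf(f, "%d", &level) != 1) {
        fprintf(stderr, "could not read perf_event_paranoid\n");
        if (f) fclose(f);
        return 1;
    }
    fclose(f);

    printf("kernel.perf_event_paranoid = %d\n", level);
    if (level > 1)
        printf("warning: some profiling metrics may be missing or incomplete\n");
    return 0;
}
```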
[ 2.99 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.42%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] CPU activity is good
CPU cores are active 99.60% of time
[ 4 / 4 ] Threads activity is good
On average, more than 99.60% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.97%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.13%), and the cumulated coverage of the twenty hottest loops (0.42%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.41%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.41%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand (see the sketch below).
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
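As a purely illustrative sketch of what "inlining a BLAS1 operation by hand" means (the function below is hypothetical and not taken from xhpl), the idea is to replace a library call such as cblas_daxpy(n, a, x, 1, y, 1) with the equivalent loop written at the call site, so the compiler can vectorize it together with the surrounding code:

```c
/* Hypothetical illustration of hand-inlining a BLAS1 operation.
 * Instead of calling cblas_daxpy(n, a, x, 1, y, 1), the equivalent
 * loop is written in place so it can be fused with surrounding code. */
#include <stddef.h>

void axpy_inlined(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += a * x[i];   /* y = a*x + y, the BLAS1 daxpy kernel */
}
```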
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
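The "constant non-unit stride data access" penalties reported above for loops 301, 338, and 315 typically arise when a 2-D array is traversed along its non-contiguous dimension. The following C sketch is only an illustration of the suggested remedy and is not taken from the xhpl sources: the first loop nest accesses memory with a constant stride lda, while the interchanged version restores unit-stride access that the compiler can vectorize with full-width loads.

```c
/* Illustrative sketch only (not code from xhpl). Shows the kind of
 * access pattern that triggers the "constant non-unit stride" penalty
 * and how loop interchange restores unit-stride, vectorizable access. */
#include <stddef.h>

/* Strided version: the inner loop walks A with stride lda,
 * so consecutive iterations touch non-contiguous memory. */
void scale_rows_strided(double *A, size_t n, size_t lda, const double *s)
{
    for (size_t i = 0; i < n; i++)          /* row index            */
        for (size_t j = 0; j < n; j++)      /* walks across columns */
            A[j * lda + i] *= s[i];         /* stride-lda access    */
}

/* Interchanged version: the inner loop now walks contiguous memory
 * (unit stride), which the compiler can vectorize efficiently. */
void scale_rows_unit_stride(double *A, size_t n, size_t lda, const double *s)
{
    for (size_t j = 0; j < n; j++)
        for (size_t i = 0; i < n; i++)
            A[j * lda + i] *= s[i];
}
```

Whether the interchange is legal and profitable depends on the actual loop body; when it is not, the report's other suggestions (array restructuring or gather instructions) apply.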
[ 4 / 4 ] Application profile is long enough (2664.69 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.39 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.25% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
[ 2.99 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.36%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] CPU activity is good
CPU cores are active 97.38% of time
[ 4 / 4 ] Threads activity is good
On average, more than 194.76% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.96%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.11%), and the cumulated coverage of the twenty hottest loops (0.36%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.36%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.36%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand.
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
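The "too many paths" roadblock reported above for loop 161 comes from control flow inside the loop body: every independent branch roughly doubles the number of execution paths the analyzer and the vectorizer must consider. The sketch below is a generic illustration (not xhpl code) of the suggested remedy, hoisting loop-invariant conditions out of the loop so that the hot loop body becomes a single straight-line path:

```c
/* Generic illustration (not xhpl code) of simplifying control structure.
 * Branches on loop-invariant conditions multiply the number of paths
 * through the loop; hoisting them leaves a straight-line, vectorizable body. */
#include <stddef.h>

/* Many paths: two independent branches inside the loop. */
void update_branchy(double *y, const double *x, size_t n, int use_scale,
                    double scale, int add_bias, double bias)
{
    for (size_t i = 0; i < n; i++) {
        double v = x[i];
        if (use_scale) v *= scale;
        if (add_bias)  v += bias;
        y[i] = v;
    }
}

/* Fewer paths: the invariant decisions are made once, outside the loop. */
void update_hoisted(double *y, const double *x, size_t n, int use_scale,
                    double scale, int add_bias, double bias)
{
    const double s = use_scale ? scale : 1.0;
    const double b = add_bias ? bias : 0.0;
    for (size_t i = 0; i < n; i++)
        y[i] = x[i] * s + b;   /* single path through the loop body */
}
```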
[ 4 / 4 ] Application profile is long enough (1619.36 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.17% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
[ 3.00 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.30%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 4 / 4 ] CPU activity is good
CPU cores are active 94.48% of time
[ 4 / 4 ] Threads activity is good
On average, more than 377.88% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.93%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.09%), and the cumulated coverage of the twenty hottest loops (0.30%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.30%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.30%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand.
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
[ 4 / 4 ] Application profile is long enough (964.37 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.17% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
[ 3.00 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.27%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 3 / 4 ] CPU activity is below 90% (89.84%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Threads activity is good
On average, more than 718.59% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.90%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.08%), and the cumulated coverage of the twenty hottest loops (0.27%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.27%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.27%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand.
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
[ 4 / 4 ] Application profile is long enough (613.34 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.15% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
[ 3.00 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.23%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 3 / 4 ] CPU activity is below 90% (83.51%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Threads activity is good
On average, more than 1335.93% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.83%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.07%), and the cumulated coverage of the twenty hottest loops (0.23%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.22%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.22%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand.
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
[ 4 / 4 ] Application profile is long enough (492.24 s)
To have good quality measurements, it is advised that the application profiling time is greater than 10 seconds.
[ 2.40 / 3 ] Most of the time spent in analyzed modules comes from functions with compilation information
Functions without compilation information (typically not compiled with -g) account for 0.17% of the time spent in analyzed modules. Check that -g is present. Note that even when -g is used, this can also be caused by compiler built-in functions (typically math) or by statically linked libraries; in that case this warning can be ignored.
[ 3 / 3 ] Optimization level option is correctly used
[ 2 / 3 ] Security settings from the host restrict profiling. Some metrics will be missing or incomplete.
Current value for kernel.perf_event_paranoid is 2. If possible, set it to 1 or check with your system administrator which flag can be used to achieve this.
[ 2.99 / 3 ] Architecture specific option -mcpu is used
[ 2 / 2 ] Application is correctly profiled ("Others" category represents 0.00 % of the execution time)
For representative profiling, it is advised that the "Others" category represent less than 20% of the execution time, so that as much of the user code as possible is analyzed.
[ 1 / 1 ] Lstopo present. The Topology lstopo report will be generated.
[ 0 / 4 ] Too little of the experiment time is spent in analyzed loops (0.20%)
If the time spent in analyzed loops is less than 30%, standard loop optimizations will have a limited impact on application performance.
[ 3 / 4 ] CPU activity is below 90% (79.10%)
CPU cores are idle more than 10% of the time. Threads that are supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, and improve parallel load balancing and/or scheduling.
[ 4 / 4 ] Threads activity is good
On average, more than 1897.86% of observed threads are actually active
[ 4 / 4 ] Affinity is good (99.80%)
Threads are not migrating between CPU cores: they are probably successfully pinned
[ 0 / 4 ] Loop profile is flat
No hotspot was found in the application (the greatest loop coverage is 0.06%), and the cumulated coverage of the twenty hottest loops (0.20%) is lower than 20% of the application profiled time
[ 0 / 4 ] Too little of the experiment time is spent in analyzed innermost loops (0.20%)
If the time spent in analyzed innermost loops is less than 15%, standard innermost loop optimizations such as vectorization will have a limited impact on application performance.
[ 3 / 3 ] Cumulative outermost/in-between loop coverage (0.00%) is lower than cumulative innermost loop coverage (0.20%)
A cumulative outermost/in-between loop coverage greater than the cumulative innermost loop coverage would make loop optimization more complex
[ 3 / 3 ] Less than 10% (0.00%) is spent in BLAS1 operations
It could be more efficient to inline BLAS1 operations by hand.
[ 2 / 2 ] Less than 10% (0.00%) is spent in BLAS2 operations
BLAS2 calls usually make poor use of the cache and could benefit from inlining.
[ 2 / 2 ] Less than 10% (0.00%) is spent in Libm/SVML (special functions)
| Loop ID | Analysis | Penalty Score |
|---|---|---|
| ►Loop 301 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 49.62 % | |
| ►Data Access Issues | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Vectorization Roadblocks | | 124 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 62 issues (= data accesses) costing 2 points each. | 124 |
| ►Loop 338 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Vectorization Roadblocks | | 126 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 63 issues (= data accesses) costing 2 points each. | 126 |
| ►Loop 315 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Data Access Issues | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Vectorization Roadblocks | | 64 |
| ○ | [SA] Presence of constant non-unit stride data access - Use array restructuring, perform loop interchange, or use gather instructions to slightly lower the cost. There are 32 issues (= data accesses) costing 2 points each. | 64 |
| ►Loop 161 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 46.74 % | |
| ○ Control Flow Issues | | 0 |
| ►Vectorization Roadblocks | | 1000 |
| ○ | [SA] Too many paths (at least 1000 paths) - Simplify the control structure. There are at least 1000 issues (= paths) costing 1 point each. | 1000 |
| ►Loop 132 - xhpl | Execution Time: 0 % - Vectorization Ratio: 0.00 % - Vector Length Use: 50.00 % | |
| ►Control Flow Issues | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |
| ►Vectorization Roadblocks | | 1 |
| ○ | [SA] Presence of calls - Inline either by the compiler or by hand, and use SVML for libm calls. There is 1 issue (= call) costing 1 point. | 1 |