Check | Neoverse V1 GCC O2 | Neoverse V1 GCC O3 | Neoverse V1 GCC Ofast | Neoverse V1 ACFL O2 | Neoverse V1 ACFL O3 | Neoverse V1 ACFL Ofast |
---|---|---|---|---|---|---|
Host configuration allows retrieval of all necessary metrics | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 |
Fastmath not used: consider adding -ffast-math to the compilation flags (or replacing -O3 with -Ofast) to unlock potential extra speedup by relaxing floating-point consistency. Warning: floating-point accuracy may be reduced and compliance with IEEE/ISO rules/specifications for math functions is relaxed; typically, errno is no longer set after calling some math functions | 0 / 0 | 0 / 0 | Not available for this run | Not available for this run | Not available for this run | Not available for this run |
Architecture-specific compilation for the target processor: architecture-specific options are needed to produce efficient code for a specific processor (-mcpu=native). A score of 3 / 3 means the -mcpu option is used; 0 / 3 means compilation of some functions is not optimized for the target processor | 0 / 3 | 0 / 3 | 0 / 3 | 3 / 3 | 3 / 3 | 0 / 3 |
Most of the time spent in analyzed modules comes from functions compiled with -g and -fno-omit-frame-pointer: -g gives access to debugging information such as source locations, and -fno-omit-frame-pointer improves the accuracy of the call chains found during application profiling | 3 / 3 | 2.70 / 3 | 2.82 / 3 | 3 / 3 | 3 / 3 | 3 / 3 |
Application profile is too short (duration in parentheses): if the overall profiling time is less than 10 seconds, many function- or loop-level measurements will very likely fall below the measurement quality threshold (0.1 seconds). Rerun with a longer duration, for example a larger dataset, a repetition loop, or different profile_start settings | 0 / 4 (2.23 s) | 0 / 4 (2.04 s) | 0 / 4 (2.94 s) | 0 / 4 (2.20 s) | 0 / 4 (2.02 s) | 0 / 4 (2.10 s) |
Application is correctly profiled (the "Others" category represents 0.00% of the execution time): for a representative profile, "Others" should represent less than 20% of the execution time so that as much user code as possible is analyzed | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 |
Optimization level option is correctly used | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 |
Lstopo present: the topology (lstopo) report will be generated | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 1 | 1 / 1 |
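
The recurring recommendations in the checks above are architecture-specific code generation (-mcpu=native), debug and frame-pointer information for accurate profiles (-g -fno-omit-frame-pointer), and optionally relaxed floating-point math (-Ofast or -ffast-math). The sketch below shows how these flags might be combined; the saxpy kernel and file names are hypothetical, and only the flags come from the checks above.

```c
/*
 * Illustrative kernel (hypothetical, not taken from the profiled application),
 * used only to show the flag combinations the checks above recommend.
 * Possible build lines, to be adapted to your project:
 *
 *   GCC  : gcc      -O3 -mcpu=native -g -fno-omit-frame-pointer saxpy.c -o saxpy
 *   ACFL : armclang -O3 -mcpu=native -g -fno-omit-frame-pointer saxpy.c -o saxpy
 *
 * Optionally, -Ofast (or -O3 -ffast-math) relaxes IEEE floating-point semantics
 * for extra speed, as the Fastmath check suggests; validate numerical results
 * before adopting it.
 */
#include <stddef.h>

void saxpy(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        y[i] = a * x[i] + y[i];   /* unit-stride, FMA-friendly update */
    }
}
```
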
Check | Neoverse V1 GCC O2 | Neoverse V1 GCC O3 | Neoverse V1 GCC Ofast | Neoverse V1 ACFL O2 | Neoverse V1 ACFL O3 | Neoverse V1 ACFL Ofast |
---|---|---|---|---|---|---|
CPU activity is below 90% (value in parentheses): CPU cores are idle more than 10% of the time, and threads supposed to run on these cores are probably waiting on I/O or synchronization. Some hints: use faster filesystems to read/write data, improve parallel load balancing and/or scheduling | 0 / 4 (3.69%) | 0 / 4 (3.39%) | 2 / 4 (60.89%) | 0 / 4 (3.39%) | 0 / 4 (3.23%) | 1 / 4 (47.99%) |
Affinity stability is lower than 90% (value in parentheses): threads often migrate to other CPU cores/threads. For OpenMP, typically set OMP_PLACES=cores OMP_PROC_BIND=close or OMP_PLACES=threads OMP_PROC_BIND=spread. With OpenMPI + OpenMP, use --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings. With IntelMPI + OpenMP, set I_MPI_PIN_DOMAIN=omp:compact or I_MPI_PIN_DOMAIN=omp:scatter and use -print-rank-map | 0 / 4 (4.75%) | 0 / 4 (4.49%) | 3 / 4 (77.98%) | 0 / 4 (4.74%) | 0 / 4 (4.55%) | 3 / 4 (72.12%) |
Functions mostly use all threads: functions running on a reduced number of threads (typically sequential code) cover less than 10% of application walltime (value in parentheses) | 3 / 3 (1.96%) | 3 / 3 (0.04%) | 3 / 3 (0.55%) | 3 / 3 (0.27%) | 3 / 3 (0.55%) | 3 / 3 (0.19%) |
Cumulative outermost/in-between loop coverage is lower than cumulative innermost loop coverage (shown as outermost vs innermost): outermost/in-between coverage greater than innermost coverage would make loop optimization more complex | 3 / 3 (5.66% vs 64.92%) | 3 / 3 (6.16% vs 70.09%) | 3 / 3 (0.94% vs 74.11%) | 3 / 3 (0.57% vs 77.09%) | 3 / 3 (0.66% vs 80.60%) | 3 / 3 (0.64% vs 78.10%) |
Thread activity is good: on average, more than the indicated percentage of observed threads is actually active | 4 / 4 (236.01%) | 4 / 4 (216.65%) | 4 / 4 (304.28%) | 4 / 4 (216.62%) | 4 / 4 (206.97%) | 4 / 4 (192.78%) |
Less than 10% of the time (0.00%) is spent in BLAS2 operations: BLAS2 calls usually make poor use of the cache and could benefit from inlining | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 |
Enough of the experiment time is spent in analyzed innermost loops (value in parentheses): if this is less than 15%, standard innermost-loop optimizations such as vectorization will have a limited impact on application performance | 4 / 4 (64.92%) | 4 / 4 (70.09%) | 4 / 4 (74.11%) | 4 / 4 (77.09%) | 4 / 4 (80.60%) | 4 / 4 (78.10%) |
Less than 10% of the time (0.00%) is spent in BLAS1 operations: it could be more efficient to inline BLAS1 operations by hand | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 | 3 / 3 |
Less than 10% of the time (0.00%) is spent in Libm/SVML (special functions) | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 | 2 / 2 |
Loop profile is not flat: at least one loop covers more than 4% of the execution time (largest coverage in parentheses), representing a hotspot for the application | 4 / 4 (58.09%) | 4 / 4 (62.50%) | 4 / 4 (69.64%) | 4 / 4 (70.29%) | 4 / 4 (74.52%) | 4 / 4 (70.23%) |
Enough of the experiment time is spent in analyzed loops (value in parentheses): if this is less than 30%, standard loop optimizations will have a limited impact on application performance | 4 / 4 (70.58%) | 4 / 4 (76.25%) | 4 / 4 (75.05%) | 4 / 4 (77.66%) | 4 / 4 (81.26%) | 4 / 4 (78.75%) |
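
The CPU-activity and affinity-stability checks above recommend pinning threads, for example OMP_PLACES=cores OMP_PROC_BIND=close (or OMP_PLACES=threads OMP_PROC_BIND=spread) for OpenMP, and --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings under OpenMPI. One way to confirm that a chosen binding actually holds is a small OpenMP probe like the hypothetical sketch below; it is not part of the analyzed application.

```c
/*
 * Hypothetical affinity probe; not part of the analyzed application.
 * Possible build: gcc -O2 -fopenmp check_affinity.c -o check_affinity
 * Run with the settings the check above recommends, for example:
 *   OMP_PLACES=cores OMP_PROC_BIND=close ./check_affinity
 * or, under OpenMPI + OpenMP:
 *   mpirun --bind-to core --map-by node:PE=$OMP_NUM_THREADS --report-bindings ./check_affinity
 */
#define _GNU_SOURCE
#include <sched.h>   /* sched_getcpu() (glibc) */
#include <stdio.h>
#include <omp.h>

int main(void) {
    #pragma omp parallel
    {
        /* Each thread reports the CPU it is currently executing on;
           repeated runs should show stable placements when binding works. */
        printf("thread %d of %d running on cpu %d\n",
               omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
    }
    return 0;
}
```
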
Analysis | Issue | r0 | r1 | r2 | r3 | r4 | r5 |
---|---|---|---|---|---|---|---|
Loop Computation Issues | Less than 10% of the FP ADD/SUB/MUL arithmetic operations are performed using FMA | 2 | 1 | 1 | 2 | 1 | 2 |
Loop Computation Issues | Presence of a large number of scalar integer instructions | 2 | 1 | 2 | 2 | 2 | 2 |
Control Flow Issues | Presence of 2 to 4 paths | 0 | 0 | 2 | 2 | 2 | 2 |
Control Flow Issues | Non-innermost loop | 1 | 1 | 1 | 1 | 1 | 1 |
Data Access Issues | Presence of constant non-unit stride data access | 0 | 0 | 1 | 1 | 0 | 0 |
Data Access Issues | Presence of indirect access | 1 | 0 | 0 | 1 | 1 | 1 |
Vectorization Roadblocks | Presence of 2 to 4 paths | 0 | 0 | 2 | 2 | 2 | 2 |
Vectorization Roadblocks | Presence of more than 4 paths | 2 | 2 | 0 | 0 | 0 | 0 |
Vectorization Roadblocks | Non-innermost loop | 1 | 1 | 1 | 1 | 1 | 1 |
Vectorization Roadblocks | Presence of constant non-unit stride data access | 0 | 0 | 1 | 1 | 0 | 0 |
Vectorization Roadblocks | Presence of indirect access | 1 | 0 | 0 | 1 | 1 | 1 |
Vectorization Roadblocks | Out of user code | 0 | 0 | 0 | 1 | 0 | 0 |
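
The issue categories in this table describe generic code patterns rather than specific source lines. The hypothetical C loops below are not taken from the profiled application; they only illustrate what three of the recurring categories typically look like: FMA-friendly unit-stride code, constant non-unit stride access, and indirect access.

```c
#include <stddef.h>

/* Illustrative patterns only (hypothetical code, not from the analyzed
   application), matching issue categories reported in the table above. */

/* FMA-friendly, unit-stride: vectorizes well and maps a*x[i] + y[i] onto FMA. */
void fma_friendly(float *restrict y, const float *restrict x, float a, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] += a * x[i];
}

/* Constant non-unit stride: only every 8th element is touched, so the
   compiler must use strided or gathered loads, limiting vectorization. */
float strided_sum(const float *x, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i += 8)
        s += x[i];
    return s;
}

/* Indirect access through an index array: addresses are unknown at compile
   time, a classic vectorization roadblock. */
void gather_add(float *restrict y, const float *restrict x,
                const int *restrict idx, size_t n) {
    for (size_t i = 0; i < n; ++i)
        y[i] += x[idx[i]];
}
```
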