
AVBP_V7.16.0.KRAKEN - 2025-06-19 10:58:44 - MAQAO 2025.1.0


Global Metrics

Total Time (s): 136.27
Max (Thread Active Time) (s): 131.35
Average Active Time (s): 131.00
Activity Ratio (%): 96.2
Average number of active threads: 123.051
Affinity Stability (%): 97.7
Time in analyzed loops (%): 51.9
Time in analyzed innermost loops (%): 36.1
Time in user code (%): 51.0
Compilation Options Score (%): 100
Array Access Efficiency (%): 81.1
Potential Speedups
  Perfect Flow Complexity: 1.00
  Perfect OpenMP + MPI + Pthread: 1.17
  Perfect OpenMP + MPI + Pthread + Perfect Load Distribution: 1.20
  No Scalar Integer - Potential Speedup: 1.11, Nb Loops to get 80%: 18
  FP Vectorised - Potential Speedup: 1.07, Nb Loops to get 80%: 23
  Fully Vectorised - Potential Speedup: 1.29, Nb Loops to get 80%: 41
  FP Arithmetic Only - Potential Speedup: 1.30, Nb Loops to get 80%: 41
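
The Potential Speedups above aggregate per-loop CQA predictions to the whole application: the speedup expected if every analyzed loop reached its per-loop prediction, and the number of loops needed to capture 80% of that gain. The sketch below shows one way such figures can be derived, assuming an Amdahl-style, time-weighted combination of per-loop speedups; the per-loop coverages and speedups in the example are made-up illustration values, and MAQAO's exact definitions may differ in detail.

# Minimal sketch: aggregate per-loop CQA speedups into an application-level
# "Potential Speedup" and a "Nb Loops to get 80%" count. Assumption: an
# Amdahl-style, time-weighted formula; not MAQAO's actual implementation.

def aggregate_speedup(loops):
    """loops: list of (coverage, loop_speedup), coverage = fraction of total time.
    Whole-application speedup if every listed loop reaches its prediction."""
    covered = sum(c for c, _ in loops)
    remaining = (1.0 - covered) + sum(c / s for c, s in loops)
    return 1.0 / remaining

def loops_to_reach(loops, target_fraction=0.8):
    """Smallest number of loops (taken in order of decreasing time saved)
    whose optimization already delivers target_fraction of the full gain."""
    full = aggregate_speedup(loops)
    target = 1.0 + target_fraction * (full - 1.0)
    ranked = sorted(loops, key=lambda cs: cs[0] - cs[0] / cs[1], reverse=True)
    for k in range(1, len(ranked) + 1):
        if aggregate_speedup(ranked[:k]) >= target:
            return k
    return len(ranked)

# Made-up example: 40% of the time spent in three vectorisable loops.
example = [(0.20, 2.0), (0.15, 1.5), (0.05, 4.0)]
print(aggregate_speedup(example))   # ~1.23
print(loops_to_reach(example))      # 3

Ranking loops by the time they would actually save is what the "Nb Loops to get 80%" figures (18 to 41 here, depending on the scenario) reflect: a relatively small set of hot loops carries most of the predicted gain.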

CQA Potential Speedups Summary

Average Active Threads Count
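
As a cross-check of the thread-activity figures in Global Metrics, the average number of active threads should equal the total thread-active time divided by the total time, and the activity ratio is that count expressed as a fraction of the 128 observed threads. Below is a minimal sketch using only the reported values; the exact time base MAQAO uses is an assumption, which explains the small rounding gap.

# Cross-check of the thread-activity metrics from Global Metrics.
# Assumptions: "Average number of active threads" = total active time / total time,
# and "Activity Ratio" = average active threads / threads observed.

n_threads = 128              # Number of threads observed
total_time = 136.27          # Total Time (s)
avg_active_time = 131.00     # Average Active Time (s)

avg_active_threads = n_threads * avg_active_time / total_time
activity_ratio = 100.0 * avg_active_threads / n_threads

print(round(avg_active_threads, 3))  # 123.05 (report: 123.051)
print(round(activity_ratio, 1))      # 96.1   (report: 96.2)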

Loop Based Profile

Innermost Loop Based Profile

Application Categorization

Compilation Options

Source Object | Issue
AVBP_V7.16.0.KRAKEN
cons_tens.f90
msource_cell.f90
scatter_o_add.f90
gradqen.f90
mass_product.f90
euler_timestep.f90
mod_adj_graph.f90
specsource_cell.f90
specflux_invc.f90
compute_FE_implicit_residual.f90
heatflux_nv2.f90
mod_pmesh_transfer.f90
temperature.f90
grad_4obj.f90
div.f90
FE_add_dw.f90
wtowp.f90
get_Y.f90
calc_visc_eff.f90
central.f90
efcy_dyn.f90
update_rho.f90
update.f90
rot_2delta.f90
specflux_visc_c_nv.f90
rrate_cell.f90
ave.f90
nsflux_les.f90
wale_cell.f90
scatter_o_sub.f90
gather_o_cpy.f90
avis_lp.f90
scatter_grad.f90
thermo_variables.f90
prebound.f90
get_uvwT.f90
central_nv.f90
laxwe.f90
savis_Colin_NS.f90
scale.f90
boxe_2delta.f90
calc_diffus.f90
avis_lp_rre.f90
stress_nv2.f90
eflux.f90
scatter_add.f90
mod_pmesh_scatter_add.f90
mod_copy.f90
savis_spec.f90
velocity_group.f90
cons_tens_cell.f90
scheme.f90
compute_diffus_max.f90
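
The Compilation Options Score of 100% in Global Metrics is consistent with MAQAO flagging no issue for the objects listed above; the command shown in the Experiment Summary below indeed carries debug information (-g), a high optimisation level (-O3) and AVX2 target flags (-march=core-avx2, -axCORE-AVX2). Purely as an illustration of this kind of check, and not MAQAO's actual rule set, a sketch could look like the following; the RECOMMENDED flag list is an assumption.

# Illustrative sketch of a compilation-option check in the spirit of this section.
# The RECOMMENDED flag list is an assumption, not MAQAO's actual rules.

RECOMMENDED = {
    "debug info": ["-g"],
    "optimisation level": ["-O2", "-O3", "-Ofast"],
    "target architecture": ["-march=", "-x", "-ax"],
}

def missing_recommendations(compile_command):
    """Return the recommendation categories not satisfied by a compile command."""
    return [category
            for category, flags in RECOMMENDED.items()
            if not any(flag in compile_command for flag in flags)]

# Abridged version of the command reported in the Experiment Summary
# (the 'ifort' driver name is an assumption; the report only names the compiler).
cmd = "ifort -g -O3 -fpp -march=core-avx2 -fma -axCORE-AVX2 -c gather_o_cpy.f90"
print(missing_recommendations(cmd))  # [] -> nothing missing, consistent with the 100% score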

Loop Path Count Profile

Cumulated Speedup If No Scalar Integer

Cumulated Speedup If FP Vectorized

Cumulated Speedup If Fully Vectorized

Cumulated Speedup If FP Arithmetic Only

Experiment Summary

Experiment Name:
Application: /scratch/exter/camus/avbp/avbp-7.16.0/HOST/KRAKEN/BIN/AVBP_V7.16.0.KRAKEN
Timestamp: 2025-06-19 10:58:44
Universal Timestamp: 1750323524
Number of processes observed: 128
Number of threads observed: 128
Experiment Type: MPI
Machine: krakenepyc1.cluster
Model Name: AMD EPYC 7702 64-Core Processor
Architecture: x86_64
Micro Architecture: ZEN_V2
Cache Size: 512 KB
Number of Cores: 64
OS Version: Linux 4.18.0-553.el8_10.x86_64 #1 SMP Fri May 24 13:05:10 UTC 2024
Architecture used during static analysis: x86_64
Micro Architecture used during static analysis: ZEN_V2
Frequency Driver: acpi-cpufreq
Frequency Governor: performance
Huge Pages: always
Hyperthreading: off
Number of sockets: 2
Number of cores per socket: 64
Compilation Options: AVBP_V7.16.0.KRAKEN: Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.10.0 Build 20230609_000000 -I/softs/local_intel/phdf5/1.8.20/include -I/softs/local_intel/parmetis/403_64/include -I/softs/local_intel/ptscotch/6.0.5a/include -I. -I../SOURCES/GENERIC/ -IAMR_INTERFACE/ -IBNDY/ -ICFD/ -ICHEM/ -ICHEM/ANALYTIC/ -ICHEM/ANALYTIC/LIB/ -ICHEM/HYB/ -ICHEM/NOX/ -ICHEM/SOOT_ANALYTIC/ -ICOMMON/ -ICONDUCTION/ -ICOUPLING/ -IGENERIC/ -IIO/ -ILAGRANGE/ -ILAGRANGE/SOOT_EL/ -ILES/ -IMAIN/ -IMAIN/COMPUTE/ -IMAIN/SLAVE/ -INUMERICS/ -IPARSER/ -IPLASMA/ -IPLASMA/CHEMISTRY/ -IPLASMA/CHEMISTRY/CUSTOM_KINETICS_LIB/ -IPLASMA/DRIFTDIFFUSION/ -IPLASMA/DRIFTDIFFUSION/SCHEMES/ -IPLASMA/ELECTROMAG/ -IPLASMA/EULER/ -IPLASMA/FREEZE/ -IPLASMA/PHOTO/ -IPLASMA/THERMO/ -IPMESH/generic/ -IPMESH/interf_avbp/ -IPMESH/interp_tree_search/ -IPMESH/pmeshlib/ -IPMESH/pproc/ -ISMOOTH/ -ITTC/ -ITTC/LES/ -I/softs/intel/oneapi/mpi/2021.10.0//include -I/softs/intel/oneapi/mpi/2021.10.0/include -g -O3 -fpp -traceback -fno-alias -ip -assume byterecl -convert big_endian -align -march=core-avx2 -fma -axCORE-AVX2 -DHAS_PMETIS -DPARMETIS4 -DMETIS5 -DHAS_PTSCOTCH -c -o GENERIC/gather_o_cpy.o
Comments:

Configuration Summary

Dataset:
Run Command: <executable>
MPI Command: mpirun -np <number_processes>
Number Processes: 128
Number Nodes: 1
Number Processes per Node: 36
Filter: Not Used
Profile Start: Not Used
Maximal Path Number: 4