* Info: Selecting the 'perf-low-ppn' engine for node inti6224
* Info: "ref-cycles" not supported on inti6224: fallback to "cpu-clock"
* Warning: Found no event able to derive walltime: prepending cpu-clock
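These two messages mean the ref-cycles hardware event is not exposed on this AMD node, so lprof falls back to the software cpu-clock event to derive walltime. As an optional check that is not part of the original run, and assuming Linux perf is installed on the node, the events actually exposed can be inspected directly:

    perf list                              # lists the hardware and software events visible on this node
    perf stat -e ref-cycles -- sleep 1     # reports ref-cycles as "<not supported>" when the PMU lacks it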
* Info: Process launched (host inti6224, process 1426795)
                  :-) GROMACS - gmx mdrun, 2023.1 (-:
Executable: /ccc/work/cont001/ocre/oserete/gromacs-2023.1-install-icx/bin/gmx
Data prefix: /ccc/work/cont001/ocre/oserete/gromacs-2023.1-install-icx
Working dir: /ccc/work/cont001/ocre/oserete/GROMACS_DATA
Command line:
gmx mdrun -s ion_channel.tpr -ntmpi 1 -nsteps 10000 -pin on -deffnm icx
Back Off! I just backed up icx.log to ./#icx.log.26#
Reading file ion_channel.tpr, VERSION 2020.3 (single precision)
Note: file tpx version 119, software tpx version 129
Overriding nsteps with value passed on the command line: 10000 steps, 25 ps
Changing nstlist from 10 to 50, rlist from 1 to 1.095
Update groups can not be used for this system because there are three or more consecutively coupled constraints
Using 1 MPI thread
Using 128 OpenMP threads
Overriding thread affinity set outside gmx mdrun
Back Off! I just backed up icx.edr to ./#icx.edr.26#
starting mdrun 'Protein'
10000 steps, 25.0 ps.
Writing final coordinates.
Back Off! I just backed up icx.gro to ./#icx.gro.24#
                 Core t (s)   Wall t (s)        (%)
       Time:       7247.476       56.621    12799.9
                   (ns/day)    (hour/ns)
Performance:         38.152        0.629
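As a quick sanity check, not part of the log itself, the reported figures are mutually consistent: 7247.476 s of core time over 56.621 s of wall time is a ratio of about 128, which matches the 128 OpenMP threads and the 12799.9 % CPU usage, and 25 ps simulated in 56.621 s of wall time is roughly 38.15 ns/day. The same arithmetic, assuming bc is available:

    echo "scale=1; 7247.476 / 56.621" | bc        # ~128.0 cores' worth of work -> 12799.9 %
    echo "scale=3; 0.025 * 86400 / 56.621" | bc   # ~38.148 ns/day; the log's 38.152 uses the unrounded wall time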
GROMACS reminds you: "Water is just water" (Berk Hess)
* Info: Process finished (host inti6224, process 1426795)
* Warning: Collected empty callchains for 63.9% of 1st-event samples
* Info: Callchains info will be incomplete
* Info: Try to recompile your application with -fno-omit-frame-pointer or to rerun with btm=stack
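This warning only affects callchain reconstruction; the flat function and loop profiles remain valid. One hedged way to follow lprof's advice is sketched below, where the experiment directory name is illustrative and the collect syntax is assumed to be maqao lprof xp=<dir> [options] -- <command>:

    # Rebuild GROMACS keeping frame pointers (assuming the usual CMake-based build):
    cmake .. -DCMAKE_C_FLAGS="-fno-omit-frame-pointer" -DCMAKE_CXX_FLAGS="-fno-omit-frame-pointer"
    # ...or simply rerun the collection with stack-based backtraces:
    maqao lprof xp=lprof_btm_stack_run btm=stack -- gmx mdrun -s ion_channel.tpr -ntmpi 1 -nsteps 10000 -pin on -deffnm icx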
* Info: Dumping samples (host inti6224, process 1426795)
* Info: Dumping source info for callchain nodes (host inti6224, process 1426795)
* Info: Building/writing metadata (host inti6224)
* Info: Finished collect step (host inti6224, process 1426795)
Your experiment path is /ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0
To display your profiling results:
###########################################################################################################################################################
# LEVEL | REPORT | COMMAND #
###########################################################################################################################################################
# Functions | Cluster-wide | maqao lprof -df xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Functions | Per-node | maqao lprof -df -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Functions | Per-process | maqao lprof -df -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Functions | Per-thread | maqao lprof -df -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Loops | Cluster-wide | maqao lprof -dl xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Loops | Per-node | maqao lprof -dl -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Loops | Per-process | maqao lprof -dl -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
# Loops | Per-thread | maqao lprof -dl -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_WS_GROMACS_icx_AMD_FLOPS/tools/lprof_npsu_run_0 #
###########################################################################################################################################################