* Info: Selecting the 'perf-low-ppn' engine for node inti6206
* Info: "ref-cycles" not supported on inti6206: fallback to "cpu-clock"
* Warning: Found no event able to derive walltime: prepending cpu-clock
                  :-) GROMACS - gmx mdrun, 2022.4 (-:
Executable: /ccc/work/cont001/ocre/oserete/gromacs-2022.4-install-gcc-ompi/bin/gmx_mpi
Data prefix: /ccc/work/cont001/ocre/oserete/gromacs-2022.4-install-gcc-ompi
Working dir: /ccc/work/cont001/ocre/oserete/GROMACS_DATA
Command line:
gmx_mpi mdrun -s ion_channel.tpr -nsteps 10000 -pin on -deffnm gcc
Back Off! I just backed up gcc.log to ./#gcc.log.1#
Reading file ion_channel.tpr, VERSION 2020.3 (single precision)
Note: file tpx version 119, software tpx version 127
Overriding nsteps with value passed on the command line: 10000 steps, 25 ps
Changing nstlist from 10 to 80, rlist from 1 to 1.129
Using 1 MPI process
Using 52 OpenMP threads
Overriding thread affinity set outside gmx mdrun
Back Off! I just backed up gcc.edr to ./#gcc.edr.1#
starting mdrun 'Protein'
10000 steps, 25.0 ps.
Writing final coordinates.
Back Off! I just backed up gcc.gro to ./#gcc.gro.1#
               Core t (s)   Wall t (s)        (%)
       Time:     2760.626       53.089     5200.0
                 (ns/day)    (hour/ns)
Performance:       40.690        0.590
GROMACS reminds you: "Problems worthy of attack prove their worth by hitting back." (Piet Hein)
* Info: Process launched (host inti6206, process 1393281)
* Info: Process finished (host inti6206, process 1393281)
Your experiment path is /ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0
To display your profiling results:
#########################################################################################################################################################
# LEVEL | REPORT | COMMAND #
#########################################################################################################################################################
# Functions | Cluster-wide | maqao lprof -df xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Functions | Per-node | maqao lprof -df -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Functions | Per-process | maqao lprof -df -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Functions | Per-thread | maqao lprof -df -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Loops | Cluster-wide | maqao lprof -dl xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Loops | Per-node | maqao lprof -dl -dn xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Loops | Per-process | maqao lprof -dl -dp xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
# Loops | Per-thread | maqao lprof -dl -dt xp=/ccc/work/cont001/ocre/oserete/GROMACS_DATA/OV1_GROMACS_ZEN3_gcc_OMP52/tools/lprof_npsu_run_0 #
#########################################################################################################################################################