Executable Output


* [MAQAO] Info: Detected 8 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1075596  gmz17.benchmarkcenter.megware.com  {0-23,192-215}
[0] MPI startup(): 1       1075594  gmz17.benchmarkcenter.megware.com  {24-47,216-239}
[0] MPI startup(): 2       1075593  gmz17.benchmarkcenter.megware.com  {48-71,240-263}
[0] MPI startup(): 3       1075597  gmz17.benchmarkcenter.megware.com  {72-95,264-287}
[0] MPI startup(): 4       1075592  gmz17.benchmarkcenter.megware.com  {96-119,288-311}
[0] MPI startup(): 5       1075591  gmz17.benchmarkcenter.megware.com  {120-143,312-335}
[0] MPI startup(): 6       1075590  gmz17.benchmarkcenter.megware.com  {144-167,336-359}
[0] MPI startup(): 7       1075595  gmz17.benchmarkcenter.megware.com  {168-191,360-383}
OMP: pid 1075592 tid 1075592 thread 0 bound to OS proc set {96}
OMP: pid 1075594 tid 1075594 thread 0 bound to OS proc set {24}
OMP: pid 1075591 tid 1075591 thread 0 bound to OS proc set {120}
OMP: pid 1075597 tid 1075597 thread 0 bound to OS proc set {72}
OMP: pid 1075596 tid 1075596 thread 0 bound to OS proc set {0}
OMP: pid 1075590 tid 1075590 thread 0 bound to OS proc set {144}
OMP: pid 1075595 tid 1075595 thread 0 bound to OS proc set {168}
OMP: pid 1075593 tid 1075593 thread 0 bound to OS proc set {48}
LAMMPS (22 Jul 2025)
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  2 by 2 by 2 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.180 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 3.55e+04 | 3.55e+04 | 3.55e+04 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 11.8806 on 8 procs for 10 steps with 32768000 atoms

Performance: 0.364 ns/day, 66.004 hours/ns, 0.842 timesteps/s, 27.581 Matom-step/s
99.5% CPU use with 8 MPI tasks x 1 OpenMP threads
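
The derived rates in the "Performance:" line follow directly from the reported loop time, step count, atom count and timestep. A minimal sketch of that arithmetic, assuming the metal-units timestep of 0.005 is in picoseconds (per the "Unit style: metal" setup above):

```python
# Minimal sketch: reproduce the derived rates in the "Performance:" line above
# from the quantities reported in this log (loop time, steps, atoms, timestep).
# Assumes the metal-units timestep of 0.005 is in picoseconds.

loop_time_s = 11.8806      # "Loop time of 11.8806 on 8 procs ..."
steps       = 10
atoms       = 32_768_000
dt_ps       = 0.005        # "Time step : 0.005" (metal units -> ps)

simulated_ns  = steps * dt_ps / 1000.0               # 5e-5 ns simulated
ns_per_day    = simulated_ns * 86400 / loop_time_s   # ~0.364
hours_per_ns  = (loop_time_s / 3600) / simulated_ns  # ~66.0
steps_per_s   = steps / loop_time_s                  # ~0.842
matom_steps_s = steps_per_s * atoms / 1e6            # ~27.6

print(f"{ns_per_day:.3f} ns/day, {hours_per_ns:.3f} hours/ns, "
      f"{steps_per_s:.3f} timesteps/s, {matom_steps_s:.3f} Matom-step/s")
```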

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 10.334     | 10.365     | 10.388     |   0.6 | 87.25
Neigh   | 0.6649     | 0.66842    | 0.6741     |   0.3 |  5.63
Comm    | 0.55084    | 0.5743     | 0.60463    |   2.5 |  4.83
Output  | 0.013369   | 0.013463   | 0.013592   |   0.1 |  0.11
Modify  | 0.21237    | 0.21469    | 0.21573    |   0.2 |  1.81
Other   |            | 0.04441    |            |       |  0.37

Nlocal:      4.096e+06 ave 4.09634e+06 max 4.09569e+06 min
Histogram: 1 0 1 0 3 2 0 0 0 1
Nghost:         463851 ave      464163 max      463508 min
Histogram: 1 0 0 0 2 3 0 1 0 1
Neighs:    1.54181e+08 ave 1.54189e+08 max 1.54171e+08 min
Histogram: 1 0 1 0 1 2 0 2 0 1

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 3.55e+04 | 3.55e+04 | 3.55e+04 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.072    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.824    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.469    
       400   797.74784     -1.1257875e+08  0             -1.0919981e+08  51540.4      
       450   797.84599     -1.1257917e+08  0             -1.0919981e+08  51536.933    
       500   797.72447     -1.1257864e+08  0             -1.091998e+08   51543.025    
       510   797.68252     -1.1257846e+08  0             -1.091998e+08   51545.151    
Loop time of 643.465 on 8 procs for 500 steps with 32768000 atoms

Performance: 0.336 ns/day, 71.496 hours/ns, 0.777 timesteps/s, 25.462 Matom-step/s
99.5% CPU use with 8 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 528.9      | 530.86     | 532.22     |   4.5 | 82.50
Neigh   | 57.277     | 57.443     | 57.669     |   1.9 |  8.93
Comm    | 40.851     | 42.407     | 44.265     |  16.5 |  6.59
Output  | 0.148      | 0.14816    | 0.14832    |   0.0 |  0.02
Modify  | 10.641     | 10.728     | 10.773     |   1.2 |  1.67
Other   |            | 1.874      |            |       |  0.29

Nlocal:      4.096e+06 ave 4.09732e+06 max 4.09526e+06 min
Histogram: 1 1 3 1 0 0 1 0 0 1
Nghost:         463820 ave      464564 max      462507 min
Histogram: 1 0 0 1 0 0 1 3 1 1
Neighs:    1.54612e+08 ave 1.54672e+08 max 1.54579e+08 min
Histogram: 2 0 2 2 0 0 1 0 0 1

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747091
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:10:59


[MAQAO] Info: 7/8 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0  #
#######################################################################################################################################################################################################################
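
The table above already lists the exact commands; the following is a small sketch, assuming the `maqao` binary is on PATH, that composes those flag combinations (`-df` functions / `-dl` loops, with `-dn`/`-dp`/`-dt` for per-node/per-process/per-thread scope) and launches one report for this experiment path:

```python
# Minimal sketch: compose and launch one of the MAQAO Lprof report commands
# from the table above. Assumes the `maqao` binary is on PATH; the flag
# meanings (-df functions, -dl loops, -dn/-dp/-dt node/process/thread scope)
# are taken directly from the table.
import subprocess

XP = ("/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/"
      "run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_0")

LEVEL = {"functions": "-df", "loops": "-dl"}
SCOPE = {"cluster": None, "node": "-dn", "process": "-dp", "thread": "-dt"}

def lprof_cmd(level, scope, xp=XP):
    cmd = ["maqao", "lprof", LEVEL[level]]
    if SCOPE[scope]:
        cmd.append(SCOPE[scope])
    cmd.append(f"xp={xp}")
    return cmd

# e.g. the cluster-wide function-level report (first row of the table)
subprocess.run(lprof_cmd("functions", "cluster"), check=True)
```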


* [MAQAO] Info: Detected 64 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1076012  gmz17.benchmarkcenter.megware.com  {0-2,192-194}
[0] MPI startup(): 1       1076025  gmz17.benchmarkcenter.megware.com  {3-5,195-197}
[0] MPI startup(): 2       1076065  gmz17.benchmarkcenter.megware.com  {6-8,198-200}
[0] MPI startup(): 3       1076029  gmz17.benchmarkcenter.megware.com  {9-11,201-203}
[0] MPI startup(): 4       1076046  gmz17.benchmarkcenter.megware.com  {12-14,204-206}
[0] MPI startup(): 5       1076026  gmz17.benchmarkcenter.megware.com  {15-17,207-209}
[0] MPI startup(): 6       1076050  gmz17.benchmarkcenter.megware.com  {18-20,210-212}
[0] MPI startup(): 7       1076031  gmz17.benchmarkcenter.megware.com  {21-23,213-215}
[0] MPI startup(): 8       1076058  gmz17.benchmarkcenter.megware.com  {24-26,216-218}
[0] MPI startup(): 9       1076071  gmz17.benchmarkcenter.megware.com  {27-29,219-221}
[0] MPI startup(): 10      1076020  gmz17.benchmarkcenter.megware.com  {30-32,222-224}
[0] MPI startup(): 11      1076041  gmz17.benchmarkcenter.megware.com  {33-35,225-227}
[0] MPI startup(): 12      1076055  gmz17.benchmarkcenter.megware.com  {36-38,228-230}
[0] MPI startup(): 13      1076008  gmz17.benchmarkcenter.megware.com  {39-41,231-233}
[0] MPI startup(): 14      1076064  gmz17.benchmarkcenter.megware.com  {42-44,234-236}
[0] MPI startup(): 15      1076030  gmz17.benchmarkcenter.megware.com  {45-47,237-239}
[0] MPI startup(): 16      1076054  gmz17.benchmarkcenter.megware.com  {48-50,240-242}
[0] MPI startup(): 17      1076034  gmz17.benchmarkcenter.megware.com  {51-53,243-245}
[0] MPI startup(): 18      1076042  gmz17.benchmarkcenter.megware.com  {54-56,246-248}
[0] MPI startup(): 19      1076027  gmz17.benchmarkcenter.megware.com  {57-59,249-251}
[0] MPI startup(): 20      1076043  gmz17.benchmarkcenter.megware.com  {60-62,252-254}
[0] MPI startup(): 21      1076032  gmz17.benchmarkcenter.megware.com  {63-65,255-257}
[0] MPI startup(): 22      1076028  gmz17.benchmarkcenter.megware.com  {66-68,258-260}
[0] MPI startup(): 23      1076053  gmz17.benchmarkcenter.megware.com  {69-71,261-263}
[0] MPI startup(): 24      1076017  gmz17.benchmarkcenter.megware.com  {72-74,264-266}
[0] MPI startup(): 25      1076063  gmz17.benchmarkcenter.megware.com  {75-77,267-269}
[0] MPI startup(): 26      1076021  gmz17.benchmarkcenter.megware.com  {78-80,270-272}
[0] MPI startup(): 27      1076035  gmz17.benchmarkcenter.megware.com  {81-83,273-275}
[0] MPI startup(): 28      1076067  gmz17.benchmarkcenter.megware.com  {84-86,276-278}
[0] MPI startup(): 29      1076011  gmz17.benchmarkcenter.megware.com  {87-89,279-281}
[0] MPI startup(): 30      1076009  gmz17.benchmarkcenter.megware.com  {90-92,282-284}
[0] MPI startup(): 31      1076060  gmz17.benchmarkcenter.megware.com  {93-95,285-287}
[0] MPI startup(): 32      1076059  gmz17.benchmarkcenter.megware.com  {96-98,288-290}
[0] MPI startup(): 33      1076049  gmz17.benchmarkcenter.megware.com  {99-101,291-293}
[0] MPI startup(): 34      1076066  gmz17.benchmarkcenter.megware.com  {102-104,294-296}
[0] MPI startup(): 35      1076024  gmz17.benchmarkcenter.megware.com  {105-107,297-299}
[0] MPI startup(): 36      1076069  gmz17.benchmarkcenter.megware.com  {108-110,300-302}
[0] MPI startup(): 37      1076045  gmz17.benchmarkcenter.megware.com  {111-113,303-305}
[0] MPI startup(): 38      1076033  gmz17.benchmarkcenter.megware.com  {114-116,306-308}
[0] MPI startup(): 39      1076068  gmz17.benchmarkcenter.megware.com  {117-119,309-311}
[0] MPI startup(): 40      1076015  gmz17.benchmarkcenter.megware.com  {120-122,312-314}
[0] MPI startup(): 41      1076022  gmz17.benchmarkcenter.megware.com  {123-125,315-317}
[0] MPI startup(): 42      1076016  gmz17.benchmarkcenter.megware.com  {126-128,318-320}
[0] MPI startup(): 43      1076018  gmz17.benchmarkcenter.megware.com  {129-131,321-323}
[0] MPI startup(): 44      1076062  gmz17.benchmarkcenter.megware.com  {132-134,324-326}
[0] MPI startup(): 45      1076037  gmz17.benchmarkcenter.megware.com  {135-137,327-329}
[0] MPI startup(): 46      1076036  gmz17.benchmarkcenter.megware.com  {138-140,330-332}
[0] MPI startup(): 47      1076070  gmz17.benchmarkcenter.megware.com  {141-143,333-335}
[0] MPI startup(): 48      1076038  gmz17.benchmarkcenter.megware.com  {144-146,336-338}
[0] MPI startup(): 49      1076048  gmz17.benchmarkcenter.megware.com  {147-149,339-341}
[0] MPI startup(): 50      1076047  gmz17.benchmarkcenter.megware.com  {150-152,342-344}
[0] MPI startup(): 51      1076010  gmz17.benchmarkcenter.megware.com  {153-155,345-347}
[0] MPI startup(): 52      1076044  gmz17.benchmarkcenter.megware.com  {156-158,348-350}
[0] MPI startup(): 53      1076052  gmz17.benchmarkcenter.megware.com  {159-161,351-353}
[0] MPI startup(): 54      1076014  gmz17.benchmarkcenter.megware.com  {162-164,354-356}
[0] MPI startup(): 55      1076061  gmz17.benchmarkcenter.megware.com  {165-167,357-359}
[0] MPI startup(): 56      1076056  gmz17.benchmarkcenter.megware.com  {168-170,360-362}
[0] MPI startup(): 57      1076019  gmz17.benchmarkcenter.megware.com  {171-173,363-365}
[0] MPI startup(): 58      1076040  gmz17.benchmarkcenter.megware.com  {174-176,366-368}
[0] MPI startup(): 59      1076039  gmz17.benchmarkcenter.megware.com  {177-179,369-371}
[0] MPI startup(): 60      1076023  gmz17.benchmarkcenter.megware.com  {180-182,372-374}
[0] MPI startup(): 61      1076051  gmz17.benchmarkcenter.megware.com  {183-185,375-377}
[0] MPI startup(): 62      1076057  gmz17.benchmarkcenter.megware.com  {186-188,378-380}
[0] MPI startup(): 63      1076013  gmz17.benchmarkcenter.megware.com  {189-191,381-383}
OMP: pid 1076065 tid 1076065 thread 0 bound to OS proc set {6}
OMP: pid 1076046 tid 1076046 thread 0 bound to OS proc set {12}
OMP: pid 1076050 tid 1076050 thread 0 bound to OS proc set {18}
OMP: pid 1076055 tid 1076055 thread 0 bound to OS proc set {36}
OMP: pid 1076030 tid 1076030 thread 0 bound to OS proc set {45}
OMP: pid 1076031 tid 1076031 thread 0 bound to OS proc set {21}
OMP: pid 1076029 tid 1076029 thread 0 bound to OS proc set {9}
OMP: pid 1076064 tid 1076064 thread 0 bound to OS proc set {42}
OMP: pid 1076020 tid 1076020 thread 0 bound to OS proc set {30}
OMP: pid 1076043 tid 1076043 thread 0 bound to OS proc set {60}
OMP: pid 1076008 tid 1076008 thread 0 bound to OS proc set {39}
OMP: pid 1076025 tid 1076025 thread 0 bound to OS proc set {3}
OMP: pid 1076069 tid 1076069 thread 0 bound to OS proc set {108}
OMP: pid 1076027 tid 1076027 thread 0 bound to OS proc set {57}
OMP: pid 1076053 tid 1076053 thread 0 bound to OS proc set {69}
OMP: pid 1076015 tid 1076015 thread 0 bound to OS proc set {120}
OMP: pid 1076042 tid 1076042 thread 0 bound to OS proc set {54}
OMP: pid 1076035 tid 1076035 thread 0 bound to OS proc set {81}
OMP: pid 1076060 tid 1076060 thread 0 bound to OS proc set {93}
OMP: pid 1076061 tid 1076061 thread 0 bound to OS proc set {165}
OMP: pid 1076017 tid 1076017 thread 0 bound to OS proc set {72}
OMP: pid 1076049 tid 1076049 thread 0 bound to OS proc set {99}
OMP: pid 1076034 tid 1076034 thread 0 bound to OS proc set {51}
OMP: pid 1076014 tid 1076014 thread 0 bound to OS proc set {162}
OMP: pid 1076058 tid 1076058 thread 0 bound to OS proc set {24}
OMP: pid 1076063 tid 1076063 thread 0 bound to OS proc set {75}
OMP: pid 1076044 tid 1076044 thread 0 bound to OS proc set {156}
OMP: pid 1076033 tid 1076033 thread 0 bound to OS proc set {114}
OMP: pid 1076024 tid 1076024 thread 0 bound to OS proc set {105}
OMP: pid 1076067 tid 1076067 thread 0 bound to OS proc set {84}
OMP: pid 1076041 tid 1076041 thread 0 bound to OS proc set {33}
OMP: pid 1076045 tid 1076045 thread 0 bound to OS proc set {111}
OMP: pid 1076054 tid 1076054 thread 0 bound to OS proc set {48}
OMP: pid 1076019 tid 1076019 thread 0 bound to OS proc set {171}
OMP: pid 1076028 tid 1076028 thread 0 bound to OS proc set {66}
OMP: pid 1076021 tid 1076021 thread 0 bound to OS proc set {78}
OMP: pid 1076012 tid 1076012 thread 0 bound to OS proc set {0}
OMP: pid 1076048 tid 1076048 thread 0 bound to OS proc set {147}
OMP: pid 1076047 tid 1076047 thread 0 bound to OS proc set {150}
OMP: pid 1076022 tid 1076022 thread 0 bound to OS proc set {123}
OMP: pid 1076009 tid 1076009 thread 0 bound to OS proc set {90}
OMP: pid 1076018 tid 1076018 thread 0 bound to OS proc set {129}
OMP: pid 1076023 tid 1076023 thread 0 bound to OS proc set {180}
OMP: pid 1076068 tid 1076068 thread 0 bound to OS proc set {117}
OMP: pid 1076032 tid 1076032 thread 0 bound to OS proc set {63}
LAMMPS (22 Jul 2025)
OMP: pid 1076039 tid 1076039 thread 0 bound to OS proc set {177}
OMP: pid 1076071 tid 1076071 thread 0 bound to OS proc set {27}
OMP: pid 1076011 tid 1076011 thread 0 bound to OS proc set {87}
OMP: pid 1076026 tid 1076026 thread 0 bound to OS proc set {15}
OMP: pid 1076052 tid 1076052 thread 0 bound to OS proc set {159}
OMP: pid 1076062 tid 1076062 thread 0 bound to OS proc set {132}
OMP: pid 1076056 tid 1076056 thread 0 bound to OS proc set {168}
OMP: pid 1076036 tid 1076036 thread 0 bound to OS proc set {138}
OMP: pid 1076038 tid 1076038 thread 0 bound to OS proc set {144}
OMP: pid 1076010 tid 1076010 thread 0 bound to OS proc set {153}
OMP: pid 1076057 tid 1076057 thread 0 bound to OS proc set {186}
OMP: pid 1076013 tid 1076013 thread 0 bound to OS proc set {189}
OMP: pid 1076040 tid 1076040 thread 0 bound to OS proc set {174}
OMP: pid 1076016 tid 1076016 thread 0 bound to OS proc set {126}
OMP: pid 1076070 tid 1076070 thread 0 bound to OS proc set {141}
OMP: pid 1076066 tid 1076066 thread 0 bound to OS proc set {102}
OMP: pid 1076037 tid 1076037 thread 0 bound to OS proc set {135}
OMP: pid 1076059 tid 1076059 thread 0 bound to OS proc set {96}
OMP: pid 1076051 tid 1076051 thread 0 bound to OS proc set {183}
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  4 by 4 by 4 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.028 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 4322 | 4455 | 4591 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 1.7064 on 64 procs for 10 steps with 32768000 atoms

Performance: 2.532 ns/day, 9.480 hours/ns, 5.860 timesteps/s, 192.030 Matom-step/s
99.2% CPU use with 64 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 1.3693     | 1.3842     | 1.4043     |   0.7 | 81.12
Neigh   | 0.088037   | 0.08963    | 0.092193   |   0.3 |  5.25
Comm    | 0.10461    | 0.13018    | 0.15871    |   3.7 |  7.63
Output  | 0.0029897  | 0.0036883  | 0.0054849  |   1.1 |  0.22
Modify  | 0.046914   | 0.066914   | 0.08643    |   3.5 |  3.92
Other   |            | 0.03176    |            |       |  1.86

Nlocal:         512000 ave      512183 max      511838 min
Histogram: 3 6 5 12 10 6 10 7 3 2
Nghost:         120011 ave      120173 max      119828 min
Histogram: 2 2 8 10 6 9 13 5 6 3
Neighs:    1.92726e+07 ave 1.92962e+07 max 1.92481e+07 min
Histogram: 5 10 9 7 1 1 6 8 11 6

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 4322 | 4455 | 4591 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.071    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.824    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.47     
       400   797.74785     -1.1257875e+08  0             -1.0919981e+08  51540.399    
       450   797.84599     -1.1257917e+08  0             -1.0919981e+08  51536.933    
       500   797.72449     -1.1257864e+08  0             -1.091998e+08   51543.024    
       510   797.68252     -1.1257846e+08  0             -1.091998e+08   51545.152    
Loop time of 91.3992 on 64 procs for 500 steps with 32768000 atoms

Performance: 2.363 ns/day, 10.155 hours/ns, 5.471 timesteps/s, 179.258 Matom-step/s
99.2% CPU use with 64 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 70.274     | 70.967     | 71.495     |   3.4 | 77.65
Neigh   | 7.6056     | 7.7278     | 7.9537     |   2.5 |  8.46
Comm    | 6.6444     | 7.4861     | 8.4114     |  16.0 |  8.19
Output  | 0.033934   | 0.04123    | 0.051551   |   2.4 |  0.05
Modify  | 3.08       | 3.8381     | 4.4121     |  20.6 |  4.20
Other   |            | 1.339      |            |       |  1.46

Nlocal:         512000 ave      512615 max      511468 min
Histogram: 4 6 7 6 8 19 8 0 3 3
Nghost:         120003 ave      120534 max      119389 min
Histogram: 3 1 2 7 20 8 6 7 6 4
Neighs:    1.93265e+07 ave 1.93692e+07 max 1.92929e+07 min
Histogram: 7 6 8 7 9 3 14 7 2 1

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747091
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:01:33


[MAQAO] Info: 63/64 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_1  #
#######################################################################################################################################################################################################################


* [MAQAO] Info: Detected 96 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1078191  gmz17.benchmarkcenter.megware.com  {0-1,192-193}
[0] MPI startup(): 1       1078236  gmz17.benchmarkcenter.megware.com  {2-3,194-195}
[0] MPI startup(): 2       1078198  gmz17.benchmarkcenter.megware.com  {4-5,196-197}
[0] MPI startup(): 3       1078164  gmz17.benchmarkcenter.megware.com  {6-7,198-199}
[0] MPI startup(): 4       1078163  gmz17.benchmarkcenter.megware.com  {8-9,200-201}
[0] MPI startup(): 5       1078144  gmz17.benchmarkcenter.megware.com  {10-11,202-203}
[0] MPI startup(): 6       1078181  gmz17.benchmarkcenter.megware.com  {12-13,204-205}
[0] MPI startup(): 7       1078197  gmz17.benchmarkcenter.megware.com  {14-15,206-207}
[0] MPI startup(): 8       1078200  gmz17.benchmarkcenter.megware.com  {16-17,208-209}
[0] MPI startup(): 9       1078162  gmz17.benchmarkcenter.megware.com  {18-19,210-211}
[0] MPI startup(): 10      1078169  gmz17.benchmarkcenter.megware.com  {20-21,212-213}
[0] MPI startup(): 11      1078218  gmz17.benchmarkcenter.megware.com  {22-23,214-215}
[0] MPI startup(): 12      1078204  gmz17.benchmarkcenter.megware.com  {24-25,216-217}
[0] MPI startup(): 13      1078233  gmz17.benchmarkcenter.megware.com  {26-27,218-219}
[0] MPI startup(): 14      1078155  gmz17.benchmarkcenter.megware.com  {28-29,220-221}
[0] MPI startup(): 15      1078153  gmz17.benchmarkcenter.megware.com  {30-31,222-223}
[0] MPI startup(): 16      1078202  gmz17.benchmarkcenter.megware.com  {32-33,224-225}
[0] MPI startup(): 17      1078215  gmz17.benchmarkcenter.megware.com  {34-35,226-227}
[0] MPI startup(): 18      1078226  gmz17.benchmarkcenter.megware.com  {36-37,228-229}
[0] MPI startup(): 19      1078213  gmz17.benchmarkcenter.megware.com  {38-39,230-231}
[0] MPI startup(): 20      1078190  gmz17.benchmarkcenter.megware.com  {40-41,232-233}
[0] MPI startup(): 21      1078223  gmz17.benchmarkcenter.megware.com  {42-43,234-235}
[0] MPI startup(): 22      1078159  gmz17.benchmarkcenter.megware.com  {44-45,236-237}
[0] MPI startup(): 23      1078172  gmz17.benchmarkcenter.megware.com  {46-47,238-239}
[0] MPI startup(): 24      1078168  gmz17.benchmarkcenter.megware.com  {48-49,240-241}
[0] MPI startup(): 25      1078214  gmz17.benchmarkcenter.megware.com  {50-51,242-243}
[0] MPI startup(): 26      1078206  gmz17.benchmarkcenter.megware.com  {52-53,244-245}
[0] MPI startup(): 27      1078152  gmz17.benchmarkcenter.megware.com  {54-55,246-247}
[0] MPI startup(): 28      1078165  gmz17.benchmarkcenter.megware.com  {56-57,248-249}
[0] MPI startup(): 29      1078234  gmz17.benchmarkcenter.megware.com  {58-59,250-251}
[0] MPI startup(): 30      1078201  gmz17.benchmarkcenter.megware.com  {60-61,252-253}
[0] MPI startup(): 31      1078188  gmz17.benchmarkcenter.megware.com  {62-63,254-255}
[0] MPI startup(): 32      1078166  gmz17.benchmarkcenter.megware.com  {64-65,256-257}
[0] MPI startup(): 33      1078178  gmz17.benchmarkcenter.megware.com  {66-67,258-259}
[0] MPI startup(): 34      1078187  gmz17.benchmarkcenter.megware.com  {68-69,260-261}
[0] MPI startup(): 35      1078176  gmz17.benchmarkcenter.megware.com  {70-71,262-263}
[0] MPI startup(): 36      1078157  gmz17.benchmarkcenter.megware.com  {72-73,264-265}
[0] MPI startup(): 37      1078231  gmz17.benchmarkcenter.megware.com  {74-75,266-267}
[0] MPI startup(): 38      1078146  gmz17.benchmarkcenter.megware.com  {76-77,268-269}
[0] MPI startup(): 39      1078156  gmz17.benchmarkcenter.megware.com  {78-79,270-271}
[0] MPI startup(): 40      1078154  gmz17.benchmarkcenter.megware.com  {80-81,272-273}
[0] MPI startup(): 41      1078211  gmz17.benchmarkcenter.megware.com  {82-83,274-275}
[0] MPI startup(): 42      1078227  gmz17.benchmarkcenter.megware.com  {84-85,276-277}
[0] MPI startup(): 43      1078167  gmz17.benchmarkcenter.megware.com  {86-87,278-279}
[0] MPI startup(): 44      1078225  gmz17.benchmarkcenter.megware.com  {88-89,280-281}
[0] MPI startup(): 45      1078148  gmz17.benchmarkcenter.megware.com  {90-91,282-283}
[0] MPI startup(): 46      1078232  gmz17.benchmarkcenter.megware.com  {92-93,284-285}
[0] MPI startup(): 47      1078147  gmz17.benchmarkcenter.megware.com  {94-95,286-287}
[0] MPI startup(): 48      1078208  gmz17.benchmarkcenter.megware.com  {96-97,288-289}
[0] MPI startup(): 49      1078143  gmz17.benchmarkcenter.megware.com  {98-99,290-291}
[0] MPI startup(): 50      1078207  gmz17.benchmarkcenter.megware.com  {100-101,292-293}
[0] MPI startup(): 51      1078199  gmz17.benchmarkcenter.megware.com  {102-103,294-295}
[0] MPI startup(): 52      1078228  gmz17.benchmarkcenter.megware.com  {104-105,296-297}
[0] MPI startup(): 53      1078186  gmz17.benchmarkcenter.megware.com  {106-107,298-299}
[0] MPI startup(): 54      1078220  gmz17.benchmarkcenter.megware.com  {108-109,300-301}
[0] MPI startup(): 55      1078221  gmz17.benchmarkcenter.megware.com  {110-111,302-303}
[0] MPI startup(): 56      1078179  gmz17.benchmarkcenter.megware.com  {112-113,304-305}
[0] MPI startup(): 57      1078219  gmz17.benchmarkcenter.megware.com  {114-115,306-307}
[0] MPI startup(): 58      1078203  gmz17.benchmarkcenter.megware.com  {116-117,308-309}
[0] MPI startup(): 59      1078161  gmz17.benchmarkcenter.megware.com  {118-119,310-311}
[0] MPI startup(): 60      1078196  gmz17.benchmarkcenter.megware.com  {120-121,312-313}
[0] MPI startup(): 61      1078141  gmz17.benchmarkcenter.megware.com  {122-123,314-315}
[0] MPI startup(): 62      1078173  gmz17.benchmarkcenter.megware.com  {124-125,316-317}
[0] MPI startup(): 63      1078150  gmz17.benchmarkcenter.megware.com  {126-127,318-319}
[0] MPI startup(): 64      1078183  gmz17.benchmarkcenter.megware.com  {128-129,320-321}
[0] MPI startup(): 65      1078170  gmz17.benchmarkcenter.megware.com  {130-131,322-323}
[0] MPI startup(): 66      1078174  gmz17.benchmarkcenter.megware.com  {132-133,324-325}
[0] MPI startup(): 67      1078193  gmz17.benchmarkcenter.megware.com  {134-135,326-327}
[0] MPI startup(): 68      1078149  gmz17.benchmarkcenter.megware.com  {136-137,328-329}
[0] MPI startup(): 69      1078217  gmz17.benchmarkcenter.megware.com  {138-139,330-331}
[0] MPI startup(): 70      1078145  gmz17.benchmarkcenter.megware.com  {140-141,332-333}
[0] MPI startup(): 71      1078235  gmz17.benchmarkcenter.megware.com  {142-143,334-335}
[0] MPI startup(): 72      1078158  gmz17.benchmarkcenter.megware.com  {144-145,336-337}
[0] MPI startup(): 73      1078142  gmz17.benchmarkcenter.megware.com  {146-147,338-339}
[0] MPI startup(): 74      1078205  gmz17.benchmarkcenter.megware.com  {148-149,340-341}
[0] MPI startup(): 75      1078182  gmz17.benchmarkcenter.megware.com  {150-151,342-343}
[0] MPI startup(): 76      1078216  gmz17.benchmarkcenter.megware.com  {152-153,344-345}
[0] MPI startup(): 77      1078212  gmz17.benchmarkcenter.megware.com  {154-155,346-347}
[0] MPI startup(): 78      1078222  gmz17.benchmarkcenter.megware.com  {156-157,348-349}
[0] MPI startup(): 79      1078171  gmz17.benchmarkcenter.megware.com  {158-159,350-351}
[0] MPI startup(): 80      1078229  gmz17.benchmarkcenter.megware.com  {160-161,352-353}
[0] MPI startup(): 81      1078224  gmz17.benchmarkcenter.megware.com  {162-163,354-355}
[0] MPI startup(): 82      1078194  gmz17.benchmarkcenter.megware.com  {164-165,356-357}
[0] MPI startup(): 83      1078195  gmz17.benchmarkcenter.megware.com  {166-167,358-359}
[0] MPI startup(): 84      1078180  gmz17.benchmarkcenter.megware.com  {168-169,360-361}
[0] MPI startup(): 85      1078151  gmz17.benchmarkcenter.megware.com  {170-171,362-363}
[0] MPI startup(): 86      1078192  gmz17.benchmarkcenter.megware.com  {172-173,364-365}
[0] MPI startup(): 87      1078185  gmz17.benchmarkcenter.megware.com  {174-175,366-367}
[0] MPI startup(): 88      1078189  gmz17.benchmarkcenter.megware.com  {176-177,368-369}
[0] MPI startup(): 89      1078160  gmz17.benchmarkcenter.megware.com  {178-179,370-371}
[0] MPI startup(): 90      1078230  gmz17.benchmarkcenter.megware.com  {180-181,372-373}
[0] MPI startup(): 91      1078175  gmz17.benchmarkcenter.megware.com  {182-183,374-375}
[0] MPI startup(): 92      1078184  gmz17.benchmarkcenter.megware.com  {184-185,376-377}
[0] MPI startup(): 93      1078210  gmz17.benchmarkcenter.megware.com  {186-187,378-379}
[0] MPI startup(): 94      1078177  gmz17.benchmarkcenter.megware.com  {188-189,380-381}
[0] MPI startup(): 95      1078209  gmz17.benchmarkcenter.megware.com  {190-191,382-383}
OMP: pid 1078144 tid 1078144 thread 0 bound to OS proc set {10}
OMP: pid 1078162 tid 1078162 thread 0 bound to OS proc set {18}
OMP: pid 1078198 tid 1078198 thread 0 bound to OS proc set {4}
OMP: pid 1078202 tid 1078202 thread 0 bound to OS proc set {32}
OMP: pid 1078215 tid 1078215 thread 0 bound to OS proc set {34}
OMP: pid 1078226 tid 1078226 thread 0 bound to OS proc set {36}
OMP: pid 1078218 tid 1078218 thread 0 bound to OS proc set {22}
OMP: pid 1078200 tid 1078200 thread 0 bound to OS proc set {16}
OMP: pid 1078159 tid 1078159 thread 0 bound to OS proc set {44}
OMP: pid 1078181 tid 1078181 thread 0 bound to OS proc set {12}
OMP: pid 1078178 tid 1078178 thread 0 bound to OS proc set {66}
OMP: pid 1078170 tid 1078170 thread 0 bound to OS proc set {130}
OMP: pid 1078167 tid 1078167 thread 0 bound to OS proc set {86}
OMP: pid 1078213 tid 1078213 thread 0 bound to OS proc set {38}
OMP: pid 1078229 tid 1078229 thread 0 bound to OS proc set {160}
OMP: pid 1078187 tid 1078187 thread 0 bound to OS proc set {68}
OMP: pid 1078184 tid 1078184 thread 0 bound to OS proc set {184}
OMP: pid 1078214 tid 1078214 thread 0 bound to OS proc set {50}
OMP: pid 1078231 tid 1078231 thread 0 bound to OS proc set {74}
OMP: pid 1078147 tid 1078147 thread 0 bound to OS proc set {94}
OMP: pid 1078157 tid 1078157 thread 0 bound to OS proc set {72}
OMP: pid 1078166 tid 1078166 thread 0 bound to OS proc set {64}
OMP: pid 1078230 tid 1078230 thread 0 bound to OS proc set {180}
OMP: pid 1078149 tid 1078149 thread 0 bound to OS proc set {136}
OMP: pid 1078227 tid 1078227 thread 0 bound to OS proc set {84}
OMP: pid 1078190 tid 1078190 thread 0 bound to OS proc set {40}
OMP: pid 1078232 tid 1078232 thread 0 bound to OS proc set {92}
OMP: pid 1078207 tid 1078207 thread 0 bound to OS proc set {100}
OMP: pid 1078146 tid 1078146 thread 0 bound to OS proc set {76}
OMP: pid 1078161 tid 1078161 thread 0 bound to OS proc set {118}
OMP: pid 1078191 tid 1078191 thread 0 bound to OS proc set {0}
OMP: pid 1078203 tid 1078203 thread 0 bound to OS proc set {116}
OMP: pid 1078210 tid 1078210 thread 0 bound to OS proc set {186}
LAMMPS (22 Jul 2025)
OMP: pid 1078197 tid 1078197 thread 0 bound to OS proc set {14}
OMP: pid 1078201 tid 1078201 thread 0 bound to OS proc set {60}
OMP: pid 1078225 tid 1078225 thread 0 bound to OS proc set {88}
OMP: pid 1078151 tid 1078151 thread 0 bound to OS proc set {170}
OMP: pid 1078145 tid 1078145 thread 0 bound to OS proc set {140}
OMP: pid 1078206 tid 1078206 thread 0 bound to OS proc set {52}
OMP: pid 1078216 tid 1078216 thread 0 bound to OS proc set {152}
OMP: pid 1078174 tid 1078174 thread 0 bound to OS proc set {132}
OMP: pid 1078169 tid 1078169 thread 0 bound to OS proc set {20}
OMP: pid 1078192 tid 1078192 thread 0 bound to OS proc set {172}
OMP: pid 1078209 tid 1078209 thread 0 bound to OS proc set {190}
OMP: pid 1078175 tid 1078175 thread 0 bound to OS proc set {182}
OMP: pid 1078163 tid 1078163 thread 0 bound to OS proc set {8}
OMP: pid 1078143 tid 1078143 thread 0 bound to OS proc set {98}
OMP: pid 1078164 tid 1078164 thread 0 bound to OS proc set {6}
OMP: pid 1078221 tid 1078221 thread 0 bound to OS proc set {110}
OMP: pid 1078219 tid 1078219 thread 0 bound to OS proc set {114}
OMP: pid 1078179 tid 1078179 thread 0 bound to OS proc set {112}
OMP: pid 1078188 tid 1078188 thread 0 bound to OS proc set {62}
OMP: pid 1078211 tid 1078211 thread 0 bound to OS proc set {82}
OMP: pid 1078160 tid 1078160 thread 0 bound to OS proc set {178}
OMP: pid 1078194 tid 1078194 thread 0 bound to OS proc set {164}
OMP: pid 1078236 tid 1078236 thread 0 bound to OS proc set {2}
OMP: pid 1078223 tid 1078223 thread 0 bound to OS proc set {42}
OMP: pid 1078186 tid 1078186 thread 0 bound to OS proc set {106}
OMP: pid 1078193 tid 1078193 thread 0 bound to OS proc set {134}
OMP: pid 1078235 tid 1078235 thread 0 bound to OS proc set {142}
OMP: pid 1078233 tid 1078233 thread 0 bound to OS proc set {26}
OMP: pid 1078148 tid 1078148 thread 0 bound to OS proc set {90}
OMP: pid 1078142 tid 1078142 thread 0 bound to OS proc set {146}
OMP: pid 1078220 tid 1078220 thread 0 bound to OS proc set {108}
OMP: pid 1078205 tid 1078205 thread 0 bound to OS proc set {148}
OMP: pid 1078196 tid 1078196 thread 0 bound to OS proc set {120}
OMP: pid 1078173 tid 1078173 thread 0 bound to OS proc set {124}
OMP: pid 1078141 tid 1078141 thread 0 bound to OS proc set {122}
OMP: pid 1078171 tid 1078171 thread 0 bound to OS proc set {158}
OMP: pid 1078234 tid 1078234 thread 0 bound to OS proc set {58}
OMP: pid 1078228 tid 1078228 thread 0 bound to OS proc set {104}
OMP: pid 1078153 tid 1078153 thread 0 bound to OS proc set {30}
OMP: pid 1078212 tid 1078212 thread 0 bound to OS proc set {154}
OMP: pid 1078195 tid 1078195 thread 0 bound to OS proc set {166}
OMP: pid 1078150 tid 1078150 thread 0 bound to OS proc set {126}
OMP: pid 1078165 tid 1078165 thread 0 bound to OS proc set {56}
OMP: pid 1078156 tid 1078156 thread 0 bound to OS proc set {78}
OMP: pid 1078176 tid 1078176 thread 0 bound to OS proc set {70}
OMP: pid 1078208 tid 1078208 thread 0 bound to OS proc set {96}
OMP: pid 1078183 tid 1078183 thread 0 bound to OS proc set {128}
OMP: pid 1078224 tid 1078224 thread 0 bound to OS proc set {162}
OMP: pid 1078168 tid 1078168 thread 0 bound to OS proc set {48}
OMP: pid 1078217 tid 1078217 thread 0 bound to OS proc set {138}
OMP: pid 1078182 tid 1078182 thread 0 bound to OS proc set {150}
OMP: pid 1078185 tid 1078185 thread 0 bound to OS proc set {174}
OMP: pid 1078177 tid 1078177 thread 0 bound to OS proc set {188}
OMP: pid 1078172 tid 1078172 thread 0 bound to OS proc set {46}
OMP: pid 1078189 tid 1078189 thread 0 bound to OS proc set {176}
OMP: pid 1078155 tid 1078155 thread 0 bound to OS proc set {28}
OMP: pid 1078158 tid 1078158 thread 0 bound to OS proc set {144}
OMP: pid 1078222 tid 1078222 thread 0 bound to OS proc set {156}
OMP: pid 1078180 tid 1078180 thread 0 bound to OS proc set {168}
OMP: pid 1078204 tid 1078204 thread 0 bound to OS proc set {24}
OMP: pid 1078154 tid 1078154 thread 0 bound to OS proc set {80}
OMP: pid 1078152 tid 1078152 thread 0 bound to OS proc set {54}
OMP: pid 1078199 tid 1078199 thread 0 bound to OS proc set {102}
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  6 by 4 by 4 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.021 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2885 | 2973 | 3054 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 1.32124 on 96 procs for 10 steps with 32768000 atoms

Performance: 3.270 ns/day, 7.340 hours/ns, 7.569 timesteps/s, 248.009 Matom-step/s
99.0% CPU use with 96 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.97552    | 1.0034     | 1.0285     |   0.9 | 75.94
Neigh   | 0.064363   | 0.066173   | 0.069192   |   0.4 |  5.01
Comm    | 0.10767    | 0.13504    | 0.17496    |   3.3 | 10.22
Output  | 0.0027758  | 0.0038293  | 0.0065941  |   1.4 |  0.29
Modify  | 0.039595   | 0.076495   | 0.09315    |   3.6 |  5.79
Other   |            | 0.0363     |            |       |  2.75

Nlocal:         341333 ave      342574 max      340648 min
Histogram: 45 19 0 0 0 0 0 0 13 19
Nghost:        87167.4 ave       88307 max       85075 min
Histogram: 32 0 0 0 0 0 0 0 1 63
Neighs:    1.28484e+07 ave 1.29035e+07 max  1.2809e+07 min
Histogram: 16 16 17 13 2 0 4 11 2 15

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2885 | 2973 | 3055 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.071    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.824    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.47     
       400   797.74785     -1.1257875e+08  0             -1.0919981e+08  51540.399    
       450   797.84599     -1.1257917e+08  0             -1.0919981e+08  51536.933    
       500   797.72449     -1.1257864e+08  0             -1.091998e+08   51543.024    
       510   797.68252     -1.1257846e+08  0             -1.091998e+08   51545.152    
Loop time of 70.2843 on 96 procs for 500 steps with 32768000 atoms

Performance: 3.073 ns/day, 7.809 hours/ns, 7.114 timesteps/s, 233.110 Matom-step/s
99.1% CPU use with 96 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 50.943     | 51.577     | 52.604     |   4.4 | 73.38
Neigh   | 5.5821     | 5.7055     | 6.0507     |   3.2 |  8.12
Comm    | 6.2066     | 7.3833     | 8.7036     |  14.7 | 10.50
Output  | 0.031429   | 0.04107    | 0.060113   |   3.1 |  0.06
Modify  | 2.4896     | 3.9916     | 4.6182     |  19.2 |  5.68
Other   |            | 1.586      |            |       |  2.26

Nlocal:         341333 ave      342794 max      340344 min
Histogram: 13 20 22 9 0 0 3 6 15 8
Nghost:        87081.4 ave       88642 max       84850 min
Histogram: 16 13 3 0 0 0 4 18 29 13
Neighs:    1.28843e+07 ave 1.29513e+07 max 1.28331e+07 min
Histogram: 8 12 18 18 8 2 7 11 6 6

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747091
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:01:12
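
Comparing the 500-step "Loop time" values reported above for the 8-, 64- and 96-rank runs gives a quick strong-scaling picture; a minimal sketch of that arithmetic, with the times copied from the respective summaries:

```python
# Minimal sketch: strong-scaling efficiency of the 500-step production loop,
# using the "Loop time" values reported above for 8, 64 and 96 MPI ranks.
loop_times = {8: 643.465, 64: 91.3992, 96: 70.2843}  # seconds

base_ranks = 8
base_time = loop_times[base_ranks]
for ranks, t in loop_times.items():
    speedup = base_time / t
    efficiency = speedup / (ranks / base_ranks)
    print(f"{ranks:3d} ranks: speedup {speedup:5.2f}x, efficiency {efficiency:5.1%}")
# Roughly: 64 ranks ~7.0x (~88%), 96 ranks ~9.2x (~76%) relative to 8 ranks.
```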


[MAQAO] Info: 95/96 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_2  #
#######################################################################################################################################################################################################################


* [MAQAO] Info: Detected 128 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1081060  gmz17.benchmarkcenter.megware.com  {0-1,192-193}
[0] MPI startup(): 1       1081065  gmz17.benchmarkcenter.megware.com  {2-3,194-195}
[0] MPI startup(): 2       1081152  gmz17.benchmarkcenter.megware.com  {4-5,196-197}
[0] MPI startup(): 3       1081139  gmz17.benchmarkcenter.megware.com  {6-7,198-199}
[0] MPI startup(): 4       1081041  gmz17.benchmarkcenter.megware.com  {8-9,200-201}
[0] MPI startup(): 5       1081153  gmz17.benchmarkcenter.megware.com  {10-11,202-203}
[0] MPI startup(): 6       1081061  gmz17.benchmarkcenter.megware.com  {12-13,204-205}
[0] MPI startup(): 7       1081075  gmz17.benchmarkcenter.megware.com  {14-15,206-207}
[0] MPI startup(): 8       1081080  gmz17.benchmarkcenter.megware.com  {16,208}
[0] MPI startup(): 9       1081067  gmz17.benchmarkcenter.megware.com  {17,209}
[0] MPI startup(): 10      1081146  gmz17.benchmarkcenter.megware.com  {18,210}
[0] MPI startup(): 11      1081143  gmz17.benchmarkcenter.megware.com  {19,211}
[0] MPI startup(): 12      1081100  gmz17.benchmarkcenter.megware.com  {20,212}
[0] MPI startup(): 13      1081096  gmz17.benchmarkcenter.megware.com  {21,213}
[0] MPI startup(): 14      1081124  gmz17.benchmarkcenter.megware.com  {22,214}
[0] MPI startup(): 15      1081091  gmz17.benchmarkcenter.megware.com  {23,215}
[0] MPI startup(): 16      1081166  gmz17.benchmarkcenter.megware.com  {24-25,216-217}
[0] MPI startup(): 17      1081131  gmz17.benchmarkcenter.megware.com  {26-27,218-219}
[0] MPI startup(): 18      1081130  gmz17.benchmarkcenter.megware.com  {28-29,220-221}
[0] MPI startup(): 19      1081159  gmz17.benchmarkcenter.megware.com  {30-31,222-223}
[0] MPI startup(): 20      1081089  gmz17.benchmarkcenter.megware.com  {32-33,224-225}
[0] MPI startup(): 21      1081054  gmz17.benchmarkcenter.megware.com  {34-35,226-227}
[0] MPI startup(): 22      1081106  gmz17.benchmarkcenter.megware.com  {36-37,228-229}
[0] MPI startup(): 23      1081051  gmz17.benchmarkcenter.megware.com  {38-39,230-231}
[0] MPI startup(): 24      1081107  gmz17.benchmarkcenter.megware.com  {40,232}
[0] MPI startup(): 25      1081162  gmz17.benchmarkcenter.megware.com  {41,233}
[0] MPI startup(): 26      1081076  gmz17.benchmarkcenter.megware.com  {42,234}
[0] MPI startup(): 27      1081138  gmz17.benchmarkcenter.megware.com  {43,235}
[0] MPI startup(): 28      1081086  gmz17.benchmarkcenter.megware.com  {44,236}
[0] MPI startup(): 29      1081092  gmz17.benchmarkcenter.megware.com  {45,237}
[0] MPI startup(): 30      1081156  gmz17.benchmarkcenter.megware.com  {46,238}
[0] MPI startup(): 31      1081157  gmz17.benchmarkcenter.megware.com  {47,239}
[0] MPI startup(): 32      1081165  gmz17.benchmarkcenter.megware.com  {48-49,240-241}
[0] MPI startup(): 33      1081052  gmz17.benchmarkcenter.megware.com  {50-51,242-243}
[0] MPI startup(): 34      1081148  gmz17.benchmarkcenter.megware.com  {52-53,244-245}
[0] MPI startup(): 35      1081125  gmz17.benchmarkcenter.megware.com  {54-55,246-247}
[0] MPI startup(): 36      1081114  gmz17.benchmarkcenter.megware.com  {56-57,248-249}
[0] MPI startup(): 37      1081047  gmz17.benchmarkcenter.megware.com  {58-59,250-251}
[0] MPI startup(): 38      1081071  gmz17.benchmarkcenter.megware.com  {60-61,252-253}
[0] MPI startup(): 39      1081158  gmz17.benchmarkcenter.megware.com  {62-63,254-255}
[0] MPI startup(): 40      1081155  gmz17.benchmarkcenter.megware.com  {64,256}
[0] MPI startup(): 41      1081132  gmz17.benchmarkcenter.megware.com  {65,257}
[0] MPI startup(): 42      1081049  gmz17.benchmarkcenter.megware.com  {66,258}
[0] MPI startup(): 43      1081064  gmz17.benchmarkcenter.megware.com  {67,259}
[0] MPI startup(): 44      1081154  gmz17.benchmarkcenter.megware.com  {68,260}
[0] MPI startup(): 45      1081093  gmz17.benchmarkcenter.megware.com  {69,261}
[0] MPI startup(): 46      1081108  gmz17.benchmarkcenter.megware.com  {70,262}
[0] MPI startup(): 47      1081145  gmz17.benchmarkcenter.megware.com  {71,263}
[0] MPI startup(): 48      1081042  gmz17.benchmarkcenter.megware.com  {72-73,264-265}
[0] MPI startup(): 49      1081078  gmz17.benchmarkcenter.megware.com  {74-75,266-267}
[0] MPI startup(): 50      1081077  gmz17.benchmarkcenter.megware.com  {76-77,268-269}
[0] MPI startup(): 51      1081150  gmz17.benchmarkcenter.megware.com  {78-79,270-271}
[0] MPI startup(): 52      1081126  gmz17.benchmarkcenter.megware.com  {80-81,272-273}
[0] MPI startup(): 53      1081102  gmz17.benchmarkcenter.megware.com  {82-83,274-275}
[0] MPI startup(): 54      1081151  gmz17.benchmarkcenter.megware.com  {84-85,276-277}
[0] MPI startup(): 55      1081101  gmz17.benchmarkcenter.megware.com  {86-87,278-279}
[0] MPI startup(): 56      1081068  gmz17.benchmarkcenter.megware.com  {88,280}
[0] MPI startup(): 57      1081147  gmz17.benchmarkcenter.megware.com  {89,281}
[0] MPI startup(): 58      1081056  gmz17.benchmarkcenter.megware.com  {90,282}
[0] MPI startup(): 59      1081046  gmz17.benchmarkcenter.megware.com  {91,283}
[0] MPI startup(): 60      1081072  gmz17.benchmarkcenter.megware.com  {92,284}
[0] MPI startup(): 61      1081111  gmz17.benchmarkcenter.megware.com  {93,285}
[0] MPI startup(): 62      1081097  gmz17.benchmarkcenter.megware.com  {94,286}
[0] MPI startup(): 63      1081104  gmz17.benchmarkcenter.megware.com  {95,287}
[0] MPI startup(): 64      1081050  gmz17.benchmarkcenter.megware.com  {96-97,288-289}
[0] MPI startup(): 65      1081112  gmz17.benchmarkcenter.megware.com  {98-99,290-291}
[0] MPI startup(): 66      1081110  gmz17.benchmarkcenter.megware.com  {100-101,292-293}
[0] MPI startup(): 67      1081123  gmz17.benchmarkcenter.megware.com  {102-103,294-295}
[0] MPI startup(): 68      1081098  gmz17.benchmarkcenter.megware.com  {104-105,296-297}
[0] MPI startup(): 69      1081149  gmz17.benchmarkcenter.megware.com  {106-107,298-299}
[0] MPI startup(): 70      1081113  gmz17.benchmarkcenter.megware.com  {108-109,300-301}
[0] MPI startup(): 71      1081121  gmz17.benchmarkcenter.megware.com  {110-111,302-303}
[0] MPI startup(): 72      1081095  gmz17.benchmarkcenter.megware.com  {112,304}
[0] MPI startup(): 73      1081079  gmz17.benchmarkcenter.megware.com  {113,305}
[0] MPI startup(): 74      1081048  gmz17.benchmarkcenter.megware.com  {114,306}
[0] MPI startup(): 75      1081053  gmz17.benchmarkcenter.megware.com  {115,307}
[0] MPI startup(): 76      1081163  gmz17.benchmarkcenter.megware.com  {116,308}
[0] MPI startup(): 77      1081136  gmz17.benchmarkcenter.megware.com  {117,309}
[0] MPI startup(): 78      1081119  gmz17.benchmarkcenter.megware.com  {118,310}
[0] MPI startup(): 79      1081105  gmz17.benchmarkcenter.megware.com  {119,311}
[0] MPI startup(): 80      1081070  gmz17.benchmarkcenter.megware.com  {120-121,312-313}
[0] MPI startup(): 81      1081099  gmz17.benchmarkcenter.megware.com  {122-123,314-315}
[0] MPI startup(): 82      1081134  gmz17.benchmarkcenter.megware.com  {124-125,316-317}
[0] MPI startup(): 83      1081074  gmz17.benchmarkcenter.megware.com  {126-127,318-319}
[0] MPI startup(): 84      1081129  gmz17.benchmarkcenter.megware.com  {128-129,320-321}
[0] MPI startup(): 85      1081122  gmz17.benchmarkcenter.megware.com  {130-131,322-323}
[0] MPI startup(): 86      1081083  gmz17.benchmarkcenter.megware.com  {132-133,324-325}
[0] MPI startup(): 87      1081109  gmz17.benchmarkcenter.megware.com  {134-135,326-327}
[0] MPI startup(): 88      1081066  gmz17.benchmarkcenter.megware.com  {136,328}
[0] MPI startup(): 89      1081045  gmz17.benchmarkcenter.megware.com  {137,329}
[0] MPI startup(): 90      1081043  gmz17.benchmarkcenter.megware.com  {138,330}
[0] MPI startup(): 91      1081088  gmz17.benchmarkcenter.megware.com  {139,331}
[0] MPI startup(): 92      1081135  gmz17.benchmarkcenter.megware.com  {140,332}
[0] MPI startup(): 93      1081090  gmz17.benchmarkcenter.megware.com  {141,333}
[0] MPI startup(): 94      1081094  gmz17.benchmarkcenter.megware.com  {142,334}
[0] MPI startup(): 95      1081084  gmz17.benchmarkcenter.megware.com  {143,335}
[0] MPI startup(): 96      1081115  gmz17.benchmarkcenter.megware.com  {144-145,336-337}
[0] MPI startup(): 97      1081103  gmz17.benchmarkcenter.megware.com  {146-147,338-339}
[0] MPI startup(): 98      1081133  gmz17.benchmarkcenter.megware.com  {148-149,340-341}
[0] MPI startup(): 99      1081069  gmz17.benchmarkcenter.megware.com  {150-151,342-343}
[0] MPI startup(): 100     1081073  gmz17.benchmarkcenter.megware.com  {152-153,344-345}
[0] MPI startup(): 101     1081044  gmz17.benchmarkcenter.megware.com  {154-155,346-347}
[0] MPI startup(): 102     1081057  gmz17.benchmarkcenter.megware.com  {156-157,348-349}
[0] MPI startup(): 103     1081144  gmz17.benchmarkcenter.megware.com  {158-159,350-351}
[0] MPI startup(): 104     1081137  gmz17.benchmarkcenter.megware.com  {160,352}
[0] MPI startup(): 105     1081164  gmz17.benchmarkcenter.megware.com  {161,353}
[0] MPI startup(): 106     1081063  gmz17.benchmarkcenter.megware.com  {162,354}
[0] MPI startup(): 107     1081082  gmz17.benchmarkcenter.megware.com  {163,355}
[0] MPI startup(): 108     1081117  gmz17.benchmarkcenter.megware.com  {164,356}
[0] MPI startup(): 109     1081062  gmz17.benchmarkcenter.megware.com  {165,357}
[0] MPI startup(): 110     1081142  gmz17.benchmarkcenter.megware.com  {166,358}
[0] MPI startup(): 111     1081160  gmz17.benchmarkcenter.megware.com  {167,359}
[0] MPI startup(): 112     1081116  gmz17.benchmarkcenter.megware.com  {168-169,360-361}
[0] MPI startup(): 113     1081127  gmz17.benchmarkcenter.megware.com  {170-171,362-363}
[0] MPI startup(): 114     1081055  gmz17.benchmarkcenter.megware.com  {172-173,364-365}
[0] MPI startup(): 115     1081161  gmz17.benchmarkcenter.megware.com  {174-175,366-367}
[0] MPI startup(): 116     1081058  gmz17.benchmarkcenter.megware.com  {176-177,368-369}
[0] MPI startup(): 117     1081059  gmz17.benchmarkcenter.megware.com  {178-179,370-371}
[0] MPI startup(): 118     1081087  gmz17.benchmarkcenter.megware.com  {180-181,372-373}
[0] MPI startup(): 119     1081128  gmz17.benchmarkcenter.megware.com  {182-183,374-375}
[0] MPI startup(): 120     1081081  gmz17.benchmarkcenter.megware.com  {184,376}
[0] MPI startup(): 121     1081085  gmz17.benchmarkcenter.megware.com  {185,377}
[0] MPI startup(): 122     1081120  gmz17.benchmarkcenter.megware.com  {186,378}
[0] MPI startup(): 123     1081118  gmz17.benchmarkcenter.megware.com  {187,379}
[0] MPI startup(): 124     1081141  gmz17.benchmarkcenter.megware.com  {188,380}
[0] MPI startup(): 125     1081140  gmz17.benchmarkcenter.megware.com  {189,381}
[0] MPI startup(): 126     1081167  gmz17.benchmarkcenter.megware.com  {190,382}
[0] MPI startup(): 127     1081168  gmz17.benchmarkcenter.megware.com  {191,383}
OMP: pid 1081067 tid 1081067 thread 0 bound to OS proc set {17}
OMP: pid 1081125 tid 1081125 thread 0 bound to OS proc set {54}
OMP: pid 1081041 tid 1081041 thread 0 bound to OS proc set {8}
OMP: pid 1081165 tid 1081165 thread 0 bound to OS proc set {48}
OMP: pid 1081107 tid 1081107 thread 0 bound to OS proc set {40}
OMP: pid 1081117 tid 1081117 thread 0 bound to OS proc set {164}
OMP: pid 1081078 tid 1081078 thread 0 bound to OS proc set {74}
OMP: pid 1081052 tid 1081052 thread 0 bound to OS proc set {50}
OMP: pid 1081092 tid 1081092 thread 0 bound to OS proc set {45}
OMP: pid 1081106 tid 1081106 thread 0 bound to OS proc set {36}
OMP: pid 1081123 tid 1081123 thread 0 bound to OS proc set {102}
OMP: pid 1081136 tid 1081136 thread 0 bound to OS proc set {117}
OMP: pid 1081051 tid 1081051 thread 0 bound to OS proc set {38}
OMP: pid 1081115 tid 1081115 thread 0 bound to OS proc set {144}
OMP: pid 1081094 tid 1081094 thread 0 bound to OS proc set {142}
OMP: pid 1081088 tid 1081088 thread 0 bound to OS proc set {139}
OMP: pid 1081126 tid 1081126 thread 0 bound to OS proc set {80}
OMP: pid 1081062 tid 1081062 thread 0 bound to OS proc set {165}
OMP: pid 1081044 tid 1081044 thread 0 bound to OS proc set {154}
OMP: pid 1081093 tid 1081093 thread 0 bound to OS proc set {69}
OMP: pid 1081054 tid 1081054 thread 0 bound to OS proc set {34}
OMP: pid 1081110 tid 1081110 thread 0 bound to OS proc set {100}
OMP: pid 1081166 tid 1081166 thread 0 bound to OS proc set {24}
OMP: pid 1081120 tid 1081120 thread 0 bound to OS proc set {186}
OMP: pid 1081101 tid 1081101 thread 0 bound to OS proc set {86}
OMP: pid 1081140 tid 1081140 thread 0 bound to OS proc set {189}
OMP: pid 1081100 tid 1081100 thread 0 bound to OS proc set {20}
OMP: pid 1081168 tid 1081168 thread 0 bound to OS proc set {191}
OMP: pid 1081063 tid 1081063 thread 0 bound to OS proc set {162}
OMP: pid 1081058 tid 1081058 thread 0 bound to OS proc set {176}
OMP: pid 1081099 tid 1081099 thread 0 bound to OS proc set {122}
OMP: pid 1081082 tid 1081082 thread 0 bound to OS proc set {163}
OMP: pid 1081164 tid 1081164 thread 0 bound to OS proc set {161}
OMP: pid 1081114 tid 1081114 thread 0 bound to OS proc set {56}
OMP: pid 1081095 tid 1081095 thread 0 bound to OS proc set {112}
OMP: pid 1081068 tid 1081068 thread 0 bound to OS proc set {88}
OMP: pid 1081139 tid 1081139 thread 0 bound to OS proc set {6}
OMP: pid 1081074 tid 1081074 thread 0 bound to OS proc set {126}
OMP: pid 1081128 tid 1081128 thread 0 bound to OS proc set {182}
OMP: pid 1081119 tid 1081119 thread 0 bound to OS proc set {118}
OMP: pid 1081104 tid 1081104 thread 0 bound to OS proc set {95}
OMP: pid 1081070 tid 1081070 thread 0 bound to OS proc set {120}
OMP: pid 1081147 tid 1081147 thread 0 bound to OS proc set {89}
OMP: pid 1081148 tid 1081148 thread 0 bound to OS proc set {52}
OMP: pid 1081085 tid 1081085 thread 0 bound to OS proc set {185}
OMP: pid 1081150 tid 1081150 thread 0 bound to OS proc set {78}
OMP: pid 1081064 tid 1081064 thread 0 bound to OS proc set {67}
OMP: pid 1081103 tid 1081103 thread 0 bound to OS proc set {146}
OMP: pid 1081118 tid 1081118 thread 0 bound to OS proc set {187}
OMP: pid 1081083 tid 1081083 thread 0 bound to OS proc set {132}
OMP: pid 1081090 tid 1081090 thread 0 bound to OS proc set {141}
OMP: pid 1081057 tid 1081057 thread 0 bound to OS proc set {156}
OMP: pid 1081162 tid 1081162 thread 0 bound to OS proc set {41}
OMP: pid 1081096 tid 1081096 thread 0 bound to OS proc set {21}
OMP: pid 1081045 tid 1081045 thread 0 bound to OS proc set {137}
OMP: pid 1081105 tid 1081105 thread 0 bound to OS proc set {119}
OMP: pid 1081091 tid 1081091 thread 0 bound to OS proc set {23}
OMP: pid 1081077 tid 1081077 thread 0 bound to OS proc set {76}
OMP: pid 1081069 tid 1081069 thread 0 bound to OS proc set {150}
OMP: pid 1081113 tid 1081113 thread 0 bound to OS proc set {108}
OMP: pid 1081087 tid 1081087 thread 0 bound to OS proc set {180}
OMP: pid 1081112 tid 1081112 thread 0 bound to OS proc set {98}
OMP: pid 1081072 tid 1081072 thread 0 bound to OS proc set {92}
OMP: pid 1081111 tid 1081111 thread 0 bound to OS proc set {93}
OMP: pid 1081131 tid 1081131 thread 0 bound to OS proc set {26}
OMP: pid 1081143 tid 1081143 thread 0 bound to OS proc set {19}
OMP: pid 1081102 tid 1081102 thread 0 bound to OS proc set {82}
OMP: pid 1081129 tid 1081129 thread 0 bound to OS proc set {128}
OMP: pid 1081060 tid 1081060 thread 0 bound to OS proc set {0}
OMP: pid 1081144 tid 1081144 thread 0 bound to OS proc set {158}
OMP: pid 1081134 tid 1081134 thread 0 bound to OS proc set {124}
OMP: pid 1081073 tid 1081073 thread 0 bound to OS proc set {152}
OMP: pid 1081135 tid 1081135 thread 0 bound to OS proc set {140}
LAMMPS (22 Jul 2025)
OMP: pid 1081080 tid 1081080 thread 0 bound to OS proc set {16}
OMP: pid 1081047 tid 1081047 thread 0 bound to OS proc set {58}
OMP: pid 1081121 tid 1081121 thread 0 bound to OS proc set {110}
OMP: pid 1081159 tid 1081159 thread 0 bound to OS proc set {30}
OMP: pid 1081089 tid 1081089 thread 0 bound to OS proc set {32}
OMP: pid 1081048 tid 1081048 thread 0 bound to OS proc set {114}
OMP: pid 1081059 tid 1081059 thread 0 bound to OS proc set {178}
OMP: pid 1081116 tid 1081116 thread 0 bound to OS proc set {168}
OMP: pid 1081042 tid 1081042 thread 0 bound to OS proc set {72}
OMP: pid 1081151 tid 1081151 thread 0 bound to OS proc set {84}
OMP: pid 1081109 tid 1081109 thread 0 bound to OS proc set {134}
OMP: pid 1081130 tid 1081130 thread 0 bound to OS proc set {28}
OMP: pid 1081075 tid 1081075 thread 0 bound to OS proc set {14}
OMP: pid 1081161 tid 1081161 thread 0 bound to OS proc set {174}
OMP: pid 1081141 tid 1081141 thread 0 bound to OS proc set {188}
OMP: pid 1081056 tid 1081056 thread 0 bound to OS proc set {90}
OMP: pid 1081086 tid 1081086 thread 0 bound to OS proc set {44}
OMP: pid 1081053 tid 1081053 thread 0 bound to OS proc set {115}
OMP: pid 1081061 tid 1081061 thread 0 bound to OS proc set {12}
OMP: pid 1081137 tid 1081137 thread 0 bound to OS proc set {160}
OMP: pid 1081154 tid 1081154 thread 0 bound to OS proc set {68}
OMP: pid 1081158 tid 1081158 thread 0 bound to OS proc set {62}
OMP: pid 1081079 tid 1081079 thread 0 bound to OS proc set {113}
OMP: pid 1081152 tid 1081152 thread 0 bound to OS proc set {4}
OMP: pid 1081157 tid 1081157 thread 0 bound to OS proc set {47}
OMP: pid 1081124 tid 1081124 thread 0 bound to OS proc set {22}
OMP: pid 1081097 tid 1081097 thread 0 bound to OS proc set {94}
OMP: pid 1081133 tid 1081133 thread 0 bound to OS proc set {148}
OMP: pid 1081084 tid 1081084 thread 0 bound to OS proc set {143}
OMP: pid 1081142 tid 1081142 thread 0 bound to OS proc set {166}
OMP: pid 1081149 tid 1081149 thread 0 bound to OS proc set {106}
OMP: pid 1081046 tid 1081046 thread 0 bound to OS proc set {91}
OMP: pid 1081055 tid 1081055 thread 0 bound to OS proc set {172}
OMP: pid 1081108 tid 1081108 thread 0 bound to OS proc set {70}
OMP: pid 1081163 tid 1081163 thread 0 bound to OS proc set {116}
OMP: pid 1081160 tid 1081160 thread 0 bound to OS proc set {167}
OMP: pid 1081146 tid 1081146 thread 0 bound to OS proc set {18}
OMP: pid 1081043 tid 1081043 thread 0 bound to OS proc set {138}
OMP: pid 1081132 tid 1081132 thread 0 bound to OS proc set {65}
OMP: pid 1081098 tid 1081098 thread 0 bound to OS proc set {104}
OMP: pid 1081065 tid 1081065 thread 0 bound to OS proc set {2}
OMP: pid 1081066 tid 1081066 thread 0 bound to OS proc set {136}
OMP: pid 1081071 tid 1081071 thread 0 bound to OS proc set {60}
OMP: pid 1081081 tid 1081081 thread 0 bound to OS proc set {184}
OMP: pid 1081153 tid 1081153 thread 0 bound to OS proc set {10}
OMP: pid 1081167 tid 1081167 thread 0 bound to OS proc set {190}
OMP: pid 1081049 tid 1081049 thread 0 bound to OS proc set {66}
OMP: pid 1081076 tid 1081076 thread 0 bound to OS proc set {42}
OMP: pid 1081138 tid 1081138 thread 0 bound to OS proc set {43}
OMP: pid 1081122 tid 1081122 thread 0 bound to OS proc set {130}
OMP: pid 1081145 tid 1081145 thread 0 bound to OS proc set {71}
OMP: pid 1081050 tid 1081050 thread 0 bound to OS proc set {96}
OMP: pid 1081156 tid 1081156 thread 0 bound to OS proc set {46}
OMP: pid 1081155 tid 1081155 thread 0 bound to OS proc set {64}
OMP: pid 1081127 tid 1081127 thread 0 bound to OS proc set {170}
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  8 by 4 by 4 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.019 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2152 | 2233 | 2314 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 1.10935 on 128 procs for 10 steps with 32768000 atoms

Performance: 3.894 ns/day, 6.163 hours/ns, 9.014 timesteps/s, 295.382 Matom-step/s
99.1% CPU use with 128 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.79999    | 0.82414    | 0.84654    |   1.1 | 74.29
Neigh   | 0.050769   | 0.053957   | 0.057821   |   0.7 |  4.86
Comm    | 0.079792   | 0.11118    | 0.14755    |   4.9 | 10.02
Output  | 0.0022209  | 0.0036789  | 0.0063783  |   1.6 |  0.33
Modify  | 0.034534   | 0.076275   | 0.108      |   8.8 |  6.88
Other   |            | 0.04011    |            |       |  3.62

Nlocal:         256000 ave      256158 max      255841 min
Histogram: 2 9 8 21 20 30 17 12 5 4
Nghost:          73251 ave       73410 max       73093 min
Histogram: 4 5 12 17 30 20 21 8 9 2
Neighs:    9.63629e+06 ave 9.65078e+06 max 9.62237e+06 min
Histogram: 8 15 15 24 4 6 17 19 15 5

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 2153 | 2233 | 2314 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.071    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.823    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.469    
       400   797.74786     -1.1257875e+08  0             -1.0919981e+08  51540.399    
       450   797.84596     -1.1257917e+08  0             -1.0919981e+08  51536.934    
       500   797.7245      -1.1257864e+08  0             -1.091998e+08   51543.024    
       510   797.68253     -1.1257846e+08  0             -1.091998e+08   51545.153    
Loop time of 59.4412 on 128 procs for 500 steps with 32768000 atoms

Performance: 3.634 ns/day, 6.605 hours/ns, 8.412 timesteps/s, 275.634 Matom-step/s
99.2% CPU use with 128 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 41.443     | 42.265     | 43.402     |   7.3 | 71.10
Neigh   | 4.4558     | 4.6644     | 4.9594     |   5.6 |  7.85
Comm    | 5.3167     | 6.6786     | 8.0267     |  35.5 | 11.24
Output  | 0.025563   | 0.041904   | 0.065206   |   5.5 |  0.07
Modify  | 2.054      | 4.1354     | 5.5644     |  61.4 |  6.96
Other   |            | 1.656      |            |       |  2.79

Nlocal:         256000 ave      256397 max      255567 min
Histogram: 6 4 13 17 15 25 20 13 11 4
Nghost:        73246.1 ave       73679 max       72849 min
Histogram: 4 10 14 17 26 17 18 12 4 6
Neighs:    9.66326e+06 ave 9.68792e+06 max 9.63769e+06 min
Histogram: 6 6 10 13 30 19 14 18 8 4

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747093
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:01:01
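
For reference, the "Performance:" line and the %total column above follow directly from numbers already printed in this run: timesteps/s is the step count over the loop time, Matom-step/s multiplies that by the atom count, ns/day uses the 0.005 ps timestep (metal units), and each section's %total is its average time over the loop time. A minimal sketch of that arithmetic in Python, with the values copied from the 500-step, 128-rank summary above (the variable names are illustrative, not anything emitted by LAMMPS or MAQAO):

# Reproduce the LAMMPS performance figures from the 500-step, 128-rank run above.
loop_time_s = 59.4412        # "Loop time of 59.4412 on 128 procs for 500 steps ..."
steps       = 500
atoms       = 32_768_000
dt_ps       = 0.005          # "Time step : 0.005" (picoseconds in metal units)
pair_avg_s  = 42.265         # avg Pair time from the MPI task timing breakdown

timesteps_per_s   = steps / loop_time_s                    # ~8.412 timesteps/s
matom_steps_per_s = timesteps_per_s * atoms / 1e6          # ~275.6 Matom-step/s
ns_per_day        = timesteps_per_s * dt_ps * 86400 / 1e3  # ~3.634 ns/day
hours_per_ns      = 24 / ns_per_day                        # ~6.605 hours/ns
pair_pct_total    = 100 * pair_avg_s / loop_time_s         # ~71.1 % of loop time

print(f"{ns_per_day:.3f} ns/day, {hours_per_ns:.3f} hours/ns, "
      f"{timesteps_per_s:.3f} timesteps/s, {matom_steps_per_s:.3f} Matom-step/s, "
      f"Pair {pair_pct_total:.2f}% of loop time")

The same relations hold for the 160-rank run later in this output (loop time 51.4152 s giving ~9.725 timesteps/s and ~318.7 Matom-step/s).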


[MAQAO] Info: 127/128 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_3  #
#######################################################################################################################################################################################################################


* [MAQAO] Info: Detected 160 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1084630  gmz17.benchmarkcenter.megware.com  {0-1,192-193}
[0] MPI startup(): 1       1084704  gmz17.benchmarkcenter.megware.com  {2-3,194-195}
[0] MPI startup(): 2       1084756  gmz17.benchmarkcenter.megware.com  {4-5,196-197}
[0] MPI startup(): 3       1084709  gmz17.benchmarkcenter.megware.com  {6-7,198-199}
[0] MPI startup(): 4       1084714  gmz17.benchmarkcenter.megware.com  {8,200}
[0] MPI startup(): 5       1084745  gmz17.benchmarkcenter.megware.com  {9,201}
[0] MPI startup(): 6       1084730  gmz17.benchmarkcenter.megware.com  {10,202}
[0] MPI startup(): 7       1084672  gmz17.benchmarkcenter.megware.com  {11,203}
[0] MPI startup(): 8       1084681  gmz17.benchmarkcenter.megware.com  {12,204}
[0] MPI startup(): 9       1084664  gmz17.benchmarkcenter.megware.com  {13,205}
[0] MPI startup(): 10      1084683  gmz17.benchmarkcenter.megware.com  {14,206}
[0] MPI startup(): 11      1084658  gmz17.benchmarkcenter.megware.com  {15,207}
[0] MPI startup(): 12      1084668  gmz17.benchmarkcenter.megware.com  {16,208}
[0] MPI startup(): 13      1084635  gmz17.benchmarkcenter.megware.com  {17,209}
[0] MPI startup(): 14      1084773  gmz17.benchmarkcenter.megware.com  {18,210}
[0] MPI startup(): 15      1084724  gmz17.benchmarkcenter.megware.com  {19,211}
[0] MPI startup(): 16      1084696  gmz17.benchmarkcenter.megware.com  {20,212}
[0] MPI startup(): 17      1084665  gmz17.benchmarkcenter.megware.com  {21,213}
[0] MPI startup(): 18      1084740  gmz17.benchmarkcenter.megware.com  {22,214}
[0] MPI startup(): 19      1084741  gmz17.benchmarkcenter.megware.com  {23,215}
[0] MPI startup(): 20      1084670  gmz17.benchmarkcenter.megware.com  {24-25,216-217}
[0] MPI startup(): 21      1084677  gmz17.benchmarkcenter.megware.com  {26-27,218-219}
[0] MPI startup(): 22      1084732  gmz17.benchmarkcenter.megware.com  {28-29,220-221}
[0] MPI startup(): 23      1084720  gmz17.benchmarkcenter.megware.com  {30-31,222-223}
[0] MPI startup(): 24      1084700  gmz17.benchmarkcenter.megware.com  {32,224}
[0] MPI startup(): 25      1084699  gmz17.benchmarkcenter.megware.com  {33,225}
[0] MPI startup(): 26      1084625  gmz17.benchmarkcenter.megware.com  {34,226}
[0] MPI startup(): 27      1084671  gmz17.benchmarkcenter.megware.com  {35,227}
[0] MPI startup(): 28      1084734  gmz17.benchmarkcenter.megware.com  {36,228}
[0] MPI startup(): 29      1084711  gmz17.benchmarkcenter.megware.com  {37,229}
[0] MPI startup(): 30      1084742  gmz17.benchmarkcenter.megware.com  {38,230}
[0] MPI startup(): 31      1084669  gmz17.benchmarkcenter.megware.com  {39,231}
[0] MPI startup(): 32      1084646  gmz17.benchmarkcenter.megware.com  {40,232}
[0] MPI startup(): 33      1084659  gmz17.benchmarkcenter.megware.com  {41,233}
[0] MPI startup(): 34      1084747  gmz17.benchmarkcenter.megware.com  {42,234}
[0] MPI startup(): 35      1084707  gmz17.benchmarkcenter.megware.com  {43,235}
[0] MPI startup(): 36      1084636  gmz17.benchmarkcenter.megware.com  {44,236}
[0] MPI startup(): 37      1084645  gmz17.benchmarkcenter.megware.com  {45,237}
[0] MPI startup(): 38      1084661  gmz17.benchmarkcenter.megware.com  {46,238}
[0] MPI startup(): 39      1084660  gmz17.benchmarkcenter.megware.com  {47,239}
[0] MPI startup(): 40      1084642  gmz17.benchmarkcenter.megware.com  {48-49,240-241}
[0] MPI startup(): 41      1084631  gmz17.benchmarkcenter.megware.com  {50-51,242-243}
[0] MPI startup(): 42      1084708  gmz17.benchmarkcenter.megware.com  {52-53,244-245}
[0] MPI startup(): 43      1084721  gmz17.benchmarkcenter.megware.com  {54-55,246-247}
[0] MPI startup(): 44      1084663  gmz17.benchmarkcenter.megware.com  {56,248}
[0] MPI startup(): 45      1084762  gmz17.benchmarkcenter.megware.com  {57,249}
[0] MPI startup(): 46      1084688  gmz17.benchmarkcenter.megware.com  {58,250}
[0] MPI startup(): 47      1084687  gmz17.benchmarkcenter.megware.com  {59,251}
[0] MPI startup(): 48      1084713  gmz17.benchmarkcenter.megware.com  {60,252}
[0] MPI startup(): 49      1084710  gmz17.benchmarkcenter.megware.com  {61,253}
[0] MPI startup(): 50      1084717  gmz17.benchmarkcenter.megware.com  {62,254}
[0] MPI startup(): 51      1084674  gmz17.benchmarkcenter.megware.com  {63,255}
[0] MPI startup(): 52      1084685  gmz17.benchmarkcenter.megware.com  {64,256}
[0] MPI startup(): 53      1084643  gmz17.benchmarkcenter.megware.com  {65,257}
[0] MPI startup(): 54      1084753  gmz17.benchmarkcenter.megware.com  {66,258}
[0] MPI startup(): 55      1084767  gmz17.benchmarkcenter.megware.com  {67,259}
[0] MPI startup(): 56      1084701  gmz17.benchmarkcenter.megware.com  {68,260}
[0] MPI startup(): 57      1084639  gmz17.benchmarkcenter.megware.com  {69,261}
[0] MPI startup(): 58      1084706  gmz17.benchmarkcenter.megware.com  {70,262}
[0] MPI startup(): 59      1084722  gmz17.benchmarkcenter.megware.com  {71,263}
[0] MPI startup(): 60      1084702  gmz17.benchmarkcenter.megware.com  {72-73,264-265}
[0] MPI startup(): 61      1084676  gmz17.benchmarkcenter.megware.com  {74-75,266-267}
[0] MPI startup(): 62      1084705  gmz17.benchmarkcenter.megware.com  {76-77,268-269}
[0] MPI startup(): 63      1084715  gmz17.benchmarkcenter.megware.com  {78-79,270-271}
[0] MPI startup(): 64      1084739  gmz17.benchmarkcenter.megware.com  {80,272}
[0] MPI startup(): 65      1084766  gmz17.benchmarkcenter.megware.com  {81,273}
[0] MPI startup(): 66      1084675  gmz17.benchmarkcenter.megware.com  {82,274}
[0] MPI startup(): 67      1084726  gmz17.benchmarkcenter.megware.com  {83,275}
[0] MPI startup(): 68      1084769  gmz17.benchmarkcenter.megware.com  {84,276}
[0] MPI startup(): 69      1084655  gmz17.benchmarkcenter.megware.com  {85,277}
[0] MPI startup(): 70      1084757  gmz17.benchmarkcenter.megware.com  {86,278}
[0] MPI startup(): 71      1084759  gmz17.benchmarkcenter.megware.com  {87,279}
[0] MPI startup(): 72      1084760  gmz17.benchmarkcenter.megware.com  {88,280}
[0] MPI startup(): 73      1084728  gmz17.benchmarkcenter.megware.com  {89,281}
[0] MPI startup(): 74      1084694  gmz17.benchmarkcenter.megware.com  {90,282}
[0] MPI startup(): 75      1084697  gmz17.benchmarkcenter.megware.com  {91,283}
[0] MPI startup(): 76      1084621  gmz17.benchmarkcenter.megware.com  {92,284}
[0] MPI startup(): 77      1084774  gmz17.benchmarkcenter.megware.com  {93,285}
[0] MPI startup(): 78      1084617  gmz17.benchmarkcenter.megware.com  {94,286}
[0] MPI startup(): 79      1084618  gmz17.benchmarkcenter.megware.com  {95,287}
[0] MPI startup(): 80      1084703  gmz17.benchmarkcenter.megware.com  {96-97,288-289}
[0] MPI startup(): 81      1084673  gmz17.benchmarkcenter.megware.com  {98-99,290-291}
[0] MPI startup(): 82      1084620  gmz17.benchmarkcenter.megware.com  {100-101,292-293}
[0] MPI startup(): 83      1084772  gmz17.benchmarkcenter.megware.com  {102-103,294-295}
[0] MPI startup(): 84      1084761  gmz17.benchmarkcenter.megware.com  {104,296}
[0] MPI startup(): 85      1084641  gmz17.benchmarkcenter.megware.com  {105,297}
[0] MPI startup(): 86      1084667  gmz17.benchmarkcenter.megware.com  {106,298}
[0] MPI startup(): 87      1084768  gmz17.benchmarkcenter.megware.com  {107,299}
[0] MPI startup(): 88      1084657  gmz17.benchmarkcenter.megware.com  {108,300}
[0] MPI startup(): 89      1084666  gmz17.benchmarkcenter.megware.com  {109,301}
[0] MPI startup(): 90      1084770  gmz17.benchmarkcenter.megware.com  {110,302}
[0] MPI startup(): 91      1084691  gmz17.benchmarkcenter.megware.com  {111,303}
[0] MPI startup(): 92      1084624  gmz17.benchmarkcenter.megware.com  {112,304}
[0] MPI startup(): 93      1084765  gmz17.benchmarkcenter.megware.com  {113,305}
[0] MPI startup(): 94      1084744  gmz17.benchmarkcenter.megware.com  {114,306}
[0] MPI startup(): 95      1084723  gmz17.benchmarkcenter.megware.com  {115,307}
[0] MPI startup(): 96      1084743  gmz17.benchmarkcenter.megware.com  {116,308}
[0] MPI startup(): 97      1084637  gmz17.benchmarkcenter.megware.com  {117,309}
[0] MPI startup(): 98      1084648  gmz17.benchmarkcenter.megware.com  {118,310}
[0] MPI startup(): 99      1084649  gmz17.benchmarkcenter.megware.com  {119,311}
[0] MPI startup(): 100     1084684  gmz17.benchmarkcenter.megware.com  {120-121,312-313}
[0] MPI startup(): 101     1084718  gmz17.benchmarkcenter.megware.com  {122-123,314-315}
[0] MPI startup(): 102     1084619  gmz17.benchmarkcenter.megware.com  {124-125,316-317}
[0] MPI startup(): 103     1084764  gmz17.benchmarkcenter.megware.com  {126-127,318-319}
[0] MPI startup(): 104     1084640  gmz17.benchmarkcenter.megware.com  {128,320}
[0] MPI startup(): 105     1084729  gmz17.benchmarkcenter.megware.com  {129,321}
[0] MPI startup(): 106     1084662  gmz17.benchmarkcenter.megware.com  {130,322}
[0] MPI startup(): 107     1084633  gmz17.benchmarkcenter.megware.com  {131,323}
[0] MPI startup(): 108     1084771  gmz17.benchmarkcenter.megware.com  {132,324}
[0] MPI startup(): 109     1084695  gmz17.benchmarkcenter.megware.com  {133,325}
[0] MPI startup(): 110     1084725  gmz17.benchmarkcenter.megware.com  {134,326}
[0] MPI startup(): 111     1084634  gmz17.benchmarkcenter.megware.com  {135,327}
[0] MPI startup(): 112     1084750  gmz17.benchmarkcenter.megware.com  {136,328}
[0] MPI startup(): 113     1084616  gmz17.benchmarkcenter.megware.com  {137,329}
[0] MPI startup(): 114     1084638  gmz17.benchmarkcenter.megware.com  {138,330}
[0] MPI startup(): 115     1084738  gmz17.benchmarkcenter.megware.com  {139,331}
[0] MPI startup(): 116     1084731  gmz17.benchmarkcenter.megware.com  {140,332}
[0] MPI startup(): 117     1084758  gmz17.benchmarkcenter.megware.com  {141,333}
[0] MPI startup(): 118     1084622  gmz17.benchmarkcenter.megware.com  {142,334}
[0] MPI startup(): 119     1084615  gmz17.benchmarkcenter.megware.com  {143,335}
[0] MPI startup(): 120     1084737  gmz17.benchmarkcenter.megware.com  {144-145,336-337}
[0] MPI startup(): 121     1084682  gmz17.benchmarkcenter.megware.com  {146-147,338-339}
[0] MPI startup(): 122     1084679  gmz17.benchmarkcenter.megware.com  {148-149,340-341}
[0] MPI startup(): 123     1084746  gmz17.benchmarkcenter.megware.com  {150-151,342-343}
[0] MPI startup(): 124     1084763  gmz17.benchmarkcenter.megware.com  {152,344}
[0] MPI startup(): 125     1084653  gmz17.benchmarkcenter.megware.com  {153,345}
[0] MPI startup(): 126     1084632  gmz17.benchmarkcenter.megware.com  {154,346}
[0] MPI startup(): 127     1084647  gmz17.benchmarkcenter.megware.com  {155,347}
[0] MPI startup(): 128     1084626  gmz17.benchmarkcenter.megware.com  {156,348}
[0] MPI startup(): 129     1084693  gmz17.benchmarkcenter.megware.com  {157,349}
[0] MPI startup(): 130     1084735  gmz17.benchmarkcenter.megware.com  {158,350}
[0] MPI startup(): 131     1084751  gmz17.benchmarkcenter.megware.com  {159,351}
[0] MPI startup(): 132     1084656  gmz17.benchmarkcenter.megware.com  {160,352}
[0] MPI startup(): 133     1084716  gmz17.benchmarkcenter.megware.com  {161,353}
[0] MPI startup(): 134     1084719  gmz17.benchmarkcenter.megware.com  {162,354}
[0] MPI startup(): 135     1084686  gmz17.benchmarkcenter.megware.com  {163,355}
[0] MPI startup(): 136     1084752  gmz17.benchmarkcenter.megware.com  {164,356}
[0] MPI startup(): 137     1084727  gmz17.benchmarkcenter.megware.com  {165,357}
[0] MPI startup(): 138     1084680  gmz17.benchmarkcenter.megware.com  {166,358}
[0] MPI startup(): 139     1084654  gmz17.benchmarkcenter.megware.com  {167,359}
[0] MPI startup(): 140     1084650  gmz17.benchmarkcenter.megware.com  {168-169,360-361}
[0] MPI startup(): 141     1084748  gmz17.benchmarkcenter.megware.com  {170-171,362-363}
[0] MPI startup(): 142     1084627  gmz17.benchmarkcenter.megware.com  {172-173,364-365}
[0] MPI startup(): 143     1084628  gmz17.benchmarkcenter.megware.com  {174-175,366-367}
[0] MPI startup(): 144     1084652  gmz17.benchmarkcenter.megware.com  {176,368}
[0] MPI startup(): 145     1084692  gmz17.benchmarkcenter.megware.com  {177,369}
[0] MPI startup(): 146     1084690  gmz17.benchmarkcenter.megware.com  {178,370}
[0] MPI startup(): 147     1084644  gmz17.benchmarkcenter.megware.com  {179,371}
[0] MPI startup(): 148     1084678  gmz17.benchmarkcenter.megware.com  {180,372}
[0] MPI startup(): 149     1084623  gmz17.benchmarkcenter.megware.com  {181,373}
[0] MPI startup(): 150     1084698  gmz17.benchmarkcenter.megware.com  {182,374}
[0] MPI startup(): 151     1084733  gmz17.benchmarkcenter.megware.com  {183,375}
[0] MPI startup(): 152     1084629  gmz17.benchmarkcenter.megware.com  {184,376}
[0] MPI startup(): 153     1084754  gmz17.benchmarkcenter.megware.com  {185,377}
[0] MPI startup(): 154     1084651  gmz17.benchmarkcenter.megware.com  {186,378}
[0] MPI startup(): 155     1084755  gmz17.benchmarkcenter.megware.com  {187,379}
[0] MPI startup(): 156     1084712  gmz17.benchmarkcenter.megware.com  {188,380}
[0] MPI startup(): 157     1084749  gmz17.benchmarkcenter.megware.com  {189,381}
[0] MPI startup(): 158     1084689  gmz17.benchmarkcenter.megware.com  {190,382}
[0] MPI startup(): 159     1084736  gmz17.benchmarkcenter.megware.com  {191,383}
OMP: pid 1084664 tid 1084664 thread 0 bound to OS proc set {13}
OMP: pid 1084773 tid 1084773 thread 0 bound to OS proc set {18}
OMP: pid 1084714 tid 1084714 thread 0 bound to OS proc set {8}
OMP: pid 1084672 tid 1084672 thread 0 bound to OS proc set {11}
OMP: pid 1084715 tid 1084715 thread 0 bound to OS proc set {78}
OMP: pid 1084696 tid 1084696 thread 0 bound to OS proc set {20}
OMP: pid 1084685 tid 1084685 thread 0 bound to OS proc set {64}
OMP: pid 1084717 tid 1084717 thread 0 bound to OS proc set {62}
OMP: pid 1084661 tid 1084661 thread 0 bound to OS proc set {46}
OMP: pid 1084621 tid 1084621 thread 0 bound to OS proc set {92}
OMP: pid 1084767 tid 1084767 thread 0 bound to OS proc set {67}
OMP: pid 1084742 tid 1084742 thread 0 bound to OS proc set {38}
OMP: pid 1084756 tid 1084756 thread 0 bound to OS proc set {4}
OMP: pid 1084704 tid 1084704 thread 0 bound to OS proc set {2}
OMP: pid 1084722 tid 1084722 thread 0 bound to OS proc set {71}
OMP: pid 1084737 tid 1084737 thread 0 bound to OS proc set {144}
OMP: pid 1084652 tid 1084652 thread 0 bound to OS proc set {176}
OMP: pid 1084733 tid 1084733 thread 0 bound to OS proc set {183}
OMP: pid 1084741 tid 1084741 thread 0 bound to OS proc set {23}
OMP: pid 1084712 tid 1084712 thread 0 bound to OS proc set {188}
OMP: pid 1084764 tid 1084764 thread 0 bound to OS proc set {126}
OMP: pid 1084625 tid 1084625 thread 0 bound to OS proc set {34}
OMP: pid 1084736 tid 1084736 thread 0 bound to OS proc set {191}
OMP: pid 1084700 tid 1084700 thread 0 bound to OS proc set {32}
OMP: pid 1084770 tid 1084770 thread 0 bound to OS proc set {110}
OMP: pid 1084705 tid 1084705 thread 0 bound to OS proc set {76}
OMP: pid 1084748 tid 1084748 thread 0 bound to OS proc set {170}
OMP: pid 1084653 tid 1084653 thread 0 bound to OS proc set {153}
OMP: pid 1084676 tid 1084676 thread 0 bound to OS proc set {74}
OMP: pid 1084670 tid 1084670 thread 0 bound to OS proc set {24}
OMP: pid 1084707 tid 1084707 thread 0 bound to OS proc set {43}
OMP: pid 1084659 tid 1084659 thread 0 bound to OS proc set {41}
OMP: pid 1084760 tid 1084760 thread 0 bound to OS proc set {88}
OMP: pid 1084616 tid 1084616 thread 0 bound to OS proc set {137}
OMP: pid 1084735 tid 1084735 thread 0 bound to OS proc set {158}
OMP: pid 1084631 tid 1084631 thread 0 bound to OS proc set {50}
OMP: pid 1084740 tid 1084740 thread 0 bound to OS proc set {22}
OMP: pid 1084734 tid 1084734 thread 0 bound to OS proc set {36}
OMP: pid 1084658 tid 1084658 thread 0 bound to OS proc set {15}
OMP: pid 1084745 tid 1084745 thread 0 bound to OS proc set {9}
OMP: pid 1084710 tid 1084710 thread 0 bound to OS proc set {61}
OMP: pid 1084671 tid 1084671 thread 0 bound to OS proc set {35}
OMP: pid 1084690 tid 1084690 thread 0 bound to OS proc set {178}
OMP: pid 1084623 tid 1084623 thread 0 bound to OS proc set {181}
OMP: pid 1084730 tid 1084730 thread 0 bound to OS proc set {10}
OMP: pid 1084709 tid 1084709 thread 0 bound to OS proc set {6}
OMP: pid 1084637 tid 1084637 thread 0 bound to OS proc set {117}
OMP: pid 1084633 tid 1084633 thread 0 bound to OS proc set {131}
OMP: pid 1084645 tid 1084645 thread 0 bound to OS proc set {45}
OMP: pid 1084758 tid 1084758 thread 0 bound to OS proc set {141}
OMP: pid 1084673 tid 1084673 thread 0 bound to OS proc set {98}
OMP: pid 1084648 tid 1084648 thread 0 bound to OS proc set {118}
OMP: pid 1084680 tid 1084680 thread 0 bound to OS proc set {166}
OMP: pid 1084636 tid 1084636 thread 0 bound to OS proc set {44}
OMP: pid 1084669 tid 1084669 thread 0 bound to OS proc set {39}
OMP: pid 1084642 tid 1084642 thread 0 bound to OS proc set {48}
OMP: pid 1084718 tid 1084718 thread 0 bound to OS proc set {122}
OMP: pid 1084617 tid 1084617 thread 0 bound to OS proc set {94}
OMP: pid 1084759 tid 1084759 thread 0 bound to OS proc set {87}
OMP: pid 1084683 tid 1084683 thread 0 bound to OS proc set {14}
OMP: pid 1084622 tid 1084622 thread 0 bound to OS proc set {142}
OMP: pid 1084650 tid 1084650 thread 0 bound to OS proc set {168}
OMP: pid 1084662 tid 1084662 thread 0 bound to OS proc set {130}
OMP: pid 1084635 tid 1084635 thread 0 bound to OS proc set {17}
OMP: pid 1084646 tid 1084646 thread 0 bound to OS proc set {40}
OMP: pid 1084765 tid 1084765 thread 0 bound to OS proc set {113}
OMP: pid 1084743 tid 1084743 thread 0 bound to OS proc set {116}
OMP: pid 1084634 tid 1084634 thread 0 bound to OS proc set {135}
OMP: pid 1084694 tid 1084694 thread 0 bound to OS proc set {90}
OMP: pid 1084615 tid 1084615 thread 0 bound to OS proc set {143}
OMP: pid 1084761 tid 1084761 thread 0 bound to OS proc set {104}
OMP: pid 1084697 tid 1084697 thread 0 bound to OS proc set {91}
OMP: pid 1084699 tid 1084699 thread 0 bound to OS proc set {33}
OMP: pid 1084675 tid 1084675 thread 0 bound to OS proc set {82}
OMP: pid 1084677 tid 1084677 thread 0 bound to OS proc set {26}
OMP: pid 1084739 tid 1084739 thread 0 bound to OS proc set {80}
OMP: pid 1084663 tid 1084663 thread 0 bound to OS proc set {56}
OMP: pid 1084713 tid 1084713 thread 0 bound to OS proc set {60}
OMP: pid 1084627 tid 1084627 thread 0 bound to OS proc set {172}
OMP: pid 1084681 tid 1084681 thread 0 bound to OS proc set {12}
OMP: pid 1084772 tid 1084772 thread 0 bound to OS proc set {102}
OMP: pid 1084708 tid 1084708 thread 0 bound to OS proc set {52}
OMP: pid 1084763 tid 1084763 thread 0 bound to OS proc set {152}
OMP: pid 1084698 tid 1084698 thread 0 bound to OS proc set {182}
OMP: pid 1084639 tid 1084639 thread 0 bound to OS proc set {69}
OMP: pid 1084693 tid 1084693 thread 0 bound to OS proc set {157}
OMP: pid 1084688 tid 1084688 thread 0 bound to OS proc set {58}
OMP: pid 1084721 tid 1084721 thread 0 bound to OS proc set {54}
OMP: pid 1084667 tid 1084667 thread 0 bound to OS proc set {106}
OMP: pid 1084665 tid 1084665 thread 0 bound to OS proc set {21}
OMP: pid 1084731 tid 1084731 thread 0 bound to OS proc set {140}
OMP: pid 1084711 tid 1084711 thread 0 bound to OS proc set {37}
OMP: pid 1084692 tid 1084692 thread 0 bound to OS proc set {177}
OMP: pid 1084632 tid 1084632 thread 0 bound to OS proc set {154}
OMP: pid 1084720 tid 1084720 thread 0 bound to OS proc set {30}
OMP: pid 1084679 tid 1084679 thread 0 bound to OS proc set {148}
OMP: pid 1084630 tid 1084630 thread 0 bound to OS proc set {0}
OMP: pid 1084619 tid 1084619 thread 0 bound to OS proc set {124}
OMP: pid 1084728 tid 1084728 thread 0 bound to OS proc set {89}
OMP: pid 1084620 tid 1084620 thread 0 bound to OS proc set {100}
OMP: pid 1084684 tid 1084684 thread 0 bound to OS proc set {120}
LAMMPS (22 Jul 2025)
OMP: pid 1084706 tid 1084706 thread 0 bound to OS proc set {70}
OMP: pid 1084647 tid 1084647 thread 0 bound to OS proc set {155}
OMP: pid 1084678 tid 1084678 thread 0 bound to OS proc set {180}
OMP: pid 1084686 tid 1084686 thread 0 bound to OS proc set {163}
OMP: pid 1084651 tid 1084651 thread 0 bound to OS proc set {186}
OMP: pid 1084618 tid 1084618 thread 0 bound to OS proc set {95}
OMP: pid 1084755 tid 1084755 thread 0 bound to OS proc set {187}
OMP: pid 1084666 tid 1084666 thread 0 bound to OS proc set {109}
OMP: pid 1084774 tid 1084774 thread 0 bound to OS proc set {93}
OMP: pid 1084762 tid 1084762 thread 0 bound to OS proc set {57}
OMP: pid 1084649 tid 1084649 thread 0 bound to OS proc set {119}
OMP: pid 1084629 tid 1084629 thread 0 bound to OS proc set {184}
OMP: pid 1084749 tid 1084749 thread 0 bound to OS proc set {189}
OMP: pid 1084750 tid 1084750 thread 0 bound to OS proc set {136}
OMP: pid 1084682 tid 1084682 thread 0 bound to OS proc set {146}
OMP: pid 1084702 tid 1084702 thread 0 bound to OS proc set {72}
OMP: pid 1084640 tid 1084640 thread 0 bound to OS proc set {128}
OMP: pid 1084655 tid 1084655 thread 0 bound to OS proc set {85}
OMP: pid 1084769 tid 1084769 thread 0 bound to OS proc set {84}
OMP: pid 1084751 tid 1084751 thread 0 bound to OS proc set {159}
OMP: pid 1084771 tid 1084771 thread 0 bound to OS proc set {132}
OMP: pid 1084643 tid 1084643 thread 0 bound to OS proc set {65}
OMP: pid 1084757 tid 1084757 thread 0 bound to OS proc set {86}
OMP: pid 1084732 tid 1084732 thread 0 bound to OS proc set {28}
OMP: pid 1084674 tid 1084674 thread 0 bound to OS proc set {63}
OMP: pid 1084724 tid 1084724 thread 0 bound to OS proc set {19}
OMP: pid 1084766 tid 1084766 thread 0 bound to OS proc set {81}
OMP: pid 1084644 tid 1084644 thread 0 bound to OS proc set {179}
OMP: pid 1084754 tid 1084754 thread 0 bound to OS proc set {185}
OMP: pid 1084729 tid 1084729 thread 0 bound to OS proc set {129}
OMP: pid 1084747 tid 1084747 thread 0 bound to OS proc set {42}
OMP: pid 1084719 tid 1084719 thread 0 bound to OS proc set {162}
OMP: pid 1084695 tid 1084695 thread 0 bound to OS proc set {133}
OMP: pid 1084768 tid 1084768 thread 0 bound to OS proc set {107}
OMP: pid 1084753 tid 1084753 thread 0 bound to OS proc set {66}
OMP: pid 1084725 tid 1084725 thread 0 bound to OS proc set {134}
OMP: pid 1084687 tid 1084687 thread 0 bound to OS proc set {59}
OMP: pid 1084701 tid 1084701 thread 0 bound to OS proc set {68}
OMP: pid 1084624 tid 1084624 thread 0 bound to OS proc set {112}
OMP: pid 1084703 tid 1084703 thread 0 bound to OS proc set {96}
OMP: pid 1084657 tid 1084657 thread 0 bound to OS proc set {108}
OMP: pid 1084689 tid 1084689 thread 0 bound to OS proc set {190}
OMP: pid 1084726 tid 1084726 thread 0 bound to OS proc set {83}
OMP: pid 1084716 tid 1084716 thread 0 bound to OS proc set {161}
OMP: pid 1084628 tid 1084628 thread 0 bound to OS proc set {174}
OMP: pid 1084626 tid 1084626 thread 0 bound to OS proc set {156}
OMP: pid 1084668 tid 1084668 thread 0 bound to OS proc set {16}
OMP: pid 1084691 tid 1084691 thread 0 bound to OS proc set {111}
OMP: pid 1084723 tid 1084723 thread 0 bound to OS proc set {115}
OMP: pid 1084738 tid 1084738 thread 0 bound to OS proc set {139}
OMP: pid 1084638 tid 1084638 thread 0 bound to OS proc set {138}
OMP: pid 1084660 tid 1084660 thread 0 bound to OS proc set {47}
OMP: pid 1084752 tid 1084752 thread 0 bound to OS proc set {164}
OMP: pid 1084654 tid 1084654 thread 0 bound to OS proc set {167}
OMP: pid 1084641 tid 1084641 thread 0 bound to OS proc set {105}
OMP: pid 1084744 tid 1084744 thread 0 bound to OS proc set {114}
OMP: pid 1084746 tid 1084746 thread 0 bound to OS proc set {150}
OMP: pid 1084656 tid 1084656 thread 0 bound to OS proc set {160}
OMP: pid 1084727 tid 1084727 thread 0 bound to OS proc set {165}
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  8 by 4 by 5 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.018 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 1718 | 1787 | 1858 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 0.970099 on 160 procs for 10 steps with 32768000 atoms

Performance: 4.453 ns/day, 5.389 hours/ns, 10.308 timesteps/s, 337.780 Matom-step/s
99.0% CPU use with 160 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.69481    | 0.71512    | 0.73886    |   1.1 | 73.72
Neigh   | 0.043643   | 0.047499   | 0.050247   |   0.7 |  4.90
Comm    | 0.072357   | 0.088999   | 0.12041    |   3.9 |  9.17
Output  | 0.0019785  | 0.0028482  | 0.0050377  |   1.3 |  0.29
Modify  | 0.037621   | 0.079591   | 0.095267   |   5.9 |  8.20
Other   |            | 0.03604    |            |       |  3.72

Nlocal:         204800 ave      204960 max      204607 min
Histogram: 1 4 8 20 31 28 36 23 6 3
Nghost:          63899 ave       64092 max       63739 min
Histogram: 3 6 23 36 28 31 20 8 4 1
Neighs:    7.70903e+06 ave 7.73347e+06 max  7.6857e+06 min
Histogram: 17 13 8 18 30 25 12 7 22 8

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 1719 | 1787 | 1858 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.071    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.823    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.469    
       400   797.74786     -1.1257875e+08  0             -1.0919981e+08  51540.399    
       450   797.84596     -1.1257917e+08  0             -1.0919981e+08  51536.934    
       500   797.7245      -1.1257864e+08  0             -1.091998e+08   51543.024    
       510   797.68253     -1.1257846e+08  0             -1.091998e+08   51545.153    
Loop time of 51.4152 on 160 procs for 500 steps with 32768000 atoms

Performance: 4.201 ns/day, 5.713 hours/ns, 9.725 timesteps/s, 318.661 Matom-step/s
99.1% CPU use with 160 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 35.742     | 36.362     | 37.501     |   7.2 | 70.72
Neigh   | 3.805      | 4.1072     | 4.2861     |   5.5 |  7.99
Comm    | 4.4816     | 5.2379     | 7.2534     |  29.8 | 10.19
Output  | 0.024153   | 0.034655   | 0.055713   |   3.6 |  0.07
Modify  | 1.9986     | 4.1931     | 4.8494     |  42.4 |  8.16
Other   |            | 1.48       |            |       |  2.88

Nlocal:         204800 ave      205150 max      204389 min
Histogram: 2 8 15 21 20 27 29 19 9 10
Nghost:        63894.8 ave       64306 max       63541 min
Histogram: 9 10 18 30 26 22 19 17 7 2
Neighs:     7.7306e+06 ave 7.76043e+06 max 7.69528e+06 min
Histogram: 4 3 9 26 29 24 28 16 14 7

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747093
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:00:52


[MAQAO] Info: 159/160 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_4  #
#######################################################################################################################################################################################################################


* [MAQAO] Info: Detected 192 Lprof instances in gmz17.benchmarkcenter.megware.com. 
If this is incorrect, rerun with number-processes-per-node=X
[0] MPI startup(): Intel(R) MPI Library, Version 2021.15  Build 20250213 (id: d233448)
[0] MPI startup(): Copyright (C) 2003-2025 Intel Corporation.  All rights reserved.
[0] MPI startup(): library kind: release
[0] MPI startup(): Load tuning file: "/cluster/intel/oneapi/2025.1.0/mpi/2021.15/opt/mpi/etc/tuning_generic_shm.dat"
[0] MPI startup(): ===== CPU pinning =====
[0] MPI startup(): Rank    Pid      Node name                          Pin cpu
[0] MPI startup(): 0       1088386  gmz17.benchmarkcenter.megware.com  {0,192}
[0] MPI startup(): 1       1088340  gmz17.benchmarkcenter.megware.com  {1,193}
[0] MPI startup(): 2       1088321  gmz17.benchmarkcenter.megware.com  {2,194}
[0] MPI startup(): 3       1088346  gmz17.benchmarkcenter.megware.com  {3,195}
[0] MPI startup(): 4       1088405  gmz17.benchmarkcenter.megware.com  {4,196}
[0] MPI startup(): 5       1088241  gmz17.benchmarkcenter.megware.com  {5,197}
[0] MPI startup(): 6       1088376  gmz17.benchmarkcenter.megware.com  {6,198}
[0] MPI startup(): 7       1088308  gmz17.benchmarkcenter.megware.com  {7,199}
[0] MPI startup(): 8       1088270  gmz17.benchmarkcenter.megware.com  {8,200}
[0] MPI startup(): 9       1088291  gmz17.benchmarkcenter.megware.com  {9,201}
[0] MPI startup(): 10      1088264  gmz17.benchmarkcenter.megware.com  {10,202}
[0] MPI startup(): 11      1088409  gmz17.benchmarkcenter.megware.com  {11,203}
[0] MPI startup(): 12      1088394  gmz17.benchmarkcenter.megware.com  {12,204}
[0] MPI startup(): 13      1088384  gmz17.benchmarkcenter.megware.com  {13,205}
[0] MPI startup(): 14      1088349  gmz17.benchmarkcenter.megware.com  {14,206}
[0] MPI startup(): 15      1088268  gmz17.benchmarkcenter.megware.com  {15,207}
[0] MPI startup(): 16      1088407  gmz17.benchmarkcenter.megware.com  {16,208}
[0] MPI startup(): 17      1088305  gmz17.benchmarkcenter.megware.com  {17,209}
[0] MPI startup(): 18      1088393  gmz17.benchmarkcenter.megware.com  {18,210}
[0] MPI startup(): 19      1088350  gmz17.benchmarkcenter.megware.com  {19,211}
[0] MPI startup(): 20      1088339  gmz17.benchmarkcenter.megware.com  {20,212}
[0] MPI startup(): 21      1088257  gmz17.benchmarkcenter.megware.com  {21,213}
[0] MPI startup(): 22      1088233  gmz17.benchmarkcenter.megware.com  {22,214}
[0] MPI startup(): 23      1088273  gmz17.benchmarkcenter.megware.com  {23,215}
[0] MPI startup(): 24      1088272  gmz17.benchmarkcenter.megware.com  {24,216}
[0] MPI startup(): 25      1088304  gmz17.benchmarkcenter.megware.com  {25,217}
[0] MPI startup(): 26      1088387  gmz17.benchmarkcenter.megware.com  {26,218}
[0] MPI startup(): 27      1088414  gmz17.benchmarkcenter.megware.com  {27,219}
[0] MPI startup(): 28      1088234  gmz17.benchmarkcenter.megware.com  {28,220}
[0] MPI startup(): 29      1088412  gmz17.benchmarkcenter.megware.com  {29,221}
[0] MPI startup(): 30      1088392  gmz17.benchmarkcenter.megware.com  {30,222}
[0] MPI startup(): 31      1088373  gmz17.benchmarkcenter.megware.com  {31,223}
[0] MPI startup(): 32      1088288  gmz17.benchmarkcenter.megware.com  {32,224}
[0] MPI startup(): 33      1088254  gmz17.benchmarkcenter.megware.com  {33,225}
[0] MPI startup(): 34      1088256  gmz17.benchmarkcenter.megware.com  {34,226}
[0] MPI startup(): 35      1088320  gmz17.benchmarkcenter.megware.com  {35,227}
[0] MPI startup(): 36      1088251  gmz17.benchmarkcenter.megware.com  {36,228}
[0] MPI startup(): 37      1088287  gmz17.benchmarkcenter.megware.com  {37,229}
[0] MPI startup(): 38      1088372  gmz17.benchmarkcenter.megware.com  {38,230}
[0] MPI startup(): 39      1088313  gmz17.benchmarkcenter.megware.com  {39,231}
[0] MPI startup(): 40      1088374  gmz17.benchmarkcenter.megware.com  {40,232}
[0] MPI startup(): 41      1088336  gmz17.benchmarkcenter.megware.com  {41,233}
[0] MPI startup(): 42      1088267  gmz17.benchmarkcenter.megware.com  {42,234}
[0] MPI startup(): 43      1088263  gmz17.benchmarkcenter.megware.com  {43,235}
[0] MPI startup(): 44      1088318  gmz17.benchmarkcenter.megware.com  {44,236}
[0] MPI startup(): 45      1088261  gmz17.benchmarkcenter.megware.com  {45,237}
[0] MPI startup(): 46      1088277  gmz17.benchmarkcenter.megware.com  {46,238}
[0] MPI startup(): 47      1088284  gmz17.benchmarkcenter.megware.com  {47,239}
[0] MPI startup(): 48      1088266  gmz17.benchmarkcenter.megware.com  {48,240}
[0] MPI startup(): 49      1088328  gmz17.benchmarkcenter.megware.com  {49,241}
[0] MPI startup(): 50      1088369  gmz17.benchmarkcenter.megware.com  {50,242}
[0] MPI startup(): 51      1088226  gmz17.benchmarkcenter.megware.com  {51,243}
[0] MPI startup(): 52      1088280  gmz17.benchmarkcenter.megware.com  {52,244}
[0] MPI startup(): 53      1088337  gmz17.benchmarkcenter.megware.com  {53,245}
[0] MPI startup(): 54      1088330  gmz17.benchmarkcenter.megware.com  {54,246}
[0] MPI startup(): 55      1088383  gmz17.benchmarkcenter.megware.com  {55,247}
[0] MPI startup(): 56      1088265  gmz17.benchmarkcenter.megware.com  {56,248}
[0] MPI startup(): 57      1088329  gmz17.benchmarkcenter.megware.com  {57,249}
[0] MPI startup(): 58      1088400  gmz17.benchmarkcenter.megware.com  {58,250}
[0] MPI startup(): 59      1088275  gmz17.benchmarkcenter.megware.com  {59,251}
[0] MPI startup(): 60      1088396  gmz17.benchmarkcenter.megware.com  {60,252}
[0] MPI startup(): 61      1088262  gmz17.benchmarkcenter.megware.com  {61,253}
[0] MPI startup(): 62      1088248  gmz17.benchmarkcenter.megware.com  {62,254}
[0] MPI startup(): 63      1088338  gmz17.benchmarkcenter.megware.com  {63,255}
[0] MPI startup(): 64      1088300  gmz17.benchmarkcenter.megware.com  {64,256}
[0] MPI startup(): 65      1088269  gmz17.benchmarkcenter.megware.com  {65,257}
[0] MPI startup(): 66      1088271  gmz17.benchmarkcenter.megware.com  {66,258}
[0] MPI startup(): 67      1088327  gmz17.benchmarkcenter.megware.com  {67,259}
[0] MPI startup(): 68      1088382  gmz17.benchmarkcenter.megware.com  {68,260}
[0] MPI startup(): 69      1088290  gmz17.benchmarkcenter.megware.com  {69,261}
[0] MPI startup(): 70      1088341  gmz17.benchmarkcenter.megware.com  {70,262}
[0] MPI startup(): 71      1088342  gmz17.benchmarkcenter.megware.com  {71,263}
[0] MPI startup(): 72      1088347  gmz17.benchmarkcenter.megware.com  {72,264}
[0] MPI startup(): 73      1088403  gmz17.benchmarkcenter.megware.com  {73,265}
[0] MPI startup(): 74      1088348  gmz17.benchmarkcenter.megware.com  {74,266}
[0] MPI startup(): 75      1088357  gmz17.benchmarkcenter.megware.com  {75,267}
[0] MPI startup(): 76      1088370  gmz17.benchmarkcenter.megware.com  {76,268}
[0] MPI startup(): 77      1088367  gmz17.benchmarkcenter.megware.com  {77,269}
[0] MPI startup(): 78      1088391  gmz17.benchmarkcenter.megware.com  {78,270}
[0] MPI startup(): 79      1088397  gmz17.benchmarkcenter.megware.com  {79,271}
[0] MPI startup(): 80      1088289  gmz17.benchmarkcenter.megware.com  {80,272}
[0] MPI startup(): 81      1088230  gmz17.benchmarkcenter.megware.com  {81,273}
[0] MPI startup(): 82      1088309  gmz17.benchmarkcenter.megware.com  {82,274}
[0] MPI startup(): 83      1088353  gmz17.benchmarkcenter.megware.com  {83,275}
[0] MPI startup(): 84      1088311  gmz17.benchmarkcenter.megware.com  {84,276}
[0] MPI startup(): 85      1088345  gmz17.benchmarkcenter.megware.com  {85,277}
[0] MPI startup(): 86      1088310  gmz17.benchmarkcenter.megware.com  {86,278}
[0] MPI startup(): 87      1088232  gmz17.benchmarkcenter.megware.com  {87,279}
[0] MPI startup(): 88      1088274  gmz17.benchmarkcenter.megware.com  {88,280}
[0] MPI startup(): 89      1088235  gmz17.benchmarkcenter.megware.com  {89,281}
[0] MPI startup(): 90      1088385  gmz17.benchmarkcenter.megware.com  {90,282}
[0] MPI startup(): 91      1088398  gmz17.benchmarkcenter.megware.com  {91,283}
[0] MPI startup(): 92      1088334  gmz17.benchmarkcenter.megware.com  {92,284}
[0] MPI startup(): 93      1088352  gmz17.benchmarkcenter.megware.com  {93,285}
[0] MPI startup(): 94      1088395  gmz17.benchmarkcenter.megware.com  {94,286}
[0] MPI startup(): 95      1088307  gmz17.benchmarkcenter.megware.com  {95,287}
[0] MPI startup(): 96      1088360  gmz17.benchmarkcenter.megware.com  {96,288}
[0] MPI startup(): 97      1088343  gmz17.benchmarkcenter.megware.com  {97,289}
[0] MPI startup(): 98      1088306  gmz17.benchmarkcenter.megware.com  {98,290}
[0] MPI startup(): 99      1088282  gmz17.benchmarkcenter.megware.com  {99,291}
[0] MPI startup(): 100     1088390  gmz17.benchmarkcenter.megware.com  {100,292}
[0] MPI startup(): 101     1088399  gmz17.benchmarkcenter.megware.com  {101,293}
[0] MPI startup(): 102     1088316  gmz17.benchmarkcenter.megware.com  {102,294}
[0] MPI startup(): 103     1088344  gmz17.benchmarkcenter.megware.com  {103,295}
[0] MPI startup(): 104     1088315  gmz17.benchmarkcenter.megware.com  {104,296}
[0] MPI startup(): 105     1088237  gmz17.benchmarkcenter.megware.com  {105,297}
[0] MPI startup(): 106     1088286  gmz17.benchmarkcenter.megware.com  {106,298}
[0] MPI startup(): 107     1088354  gmz17.benchmarkcenter.megware.com  {107,299}
[0] MPI startup(): 108     1088279  gmz17.benchmarkcenter.megware.com  {108,300}
[0] MPI startup(): 109     1088361  gmz17.benchmarkcenter.megware.com  {109,301}
[0] MPI startup(): 110     1088236  gmz17.benchmarkcenter.megware.com  {110,302}
[0] MPI startup(): 111     1088238  gmz17.benchmarkcenter.megware.com  {111,303}
[0] MPI startup(): 112     1088239  gmz17.benchmarkcenter.megware.com  {112,304}
[0] MPI startup(): 113     1088244  gmz17.benchmarkcenter.megware.com  {113,305}
[0] MPI startup(): 114     1088292  gmz17.benchmarkcenter.megware.com  {114,306}
[0] MPI startup(): 115     1088285  gmz17.benchmarkcenter.megware.com  {115,307}
[0] MPI startup(): 116     1088358  gmz17.benchmarkcenter.megware.com  {116,308}
[0] MPI startup(): 117     1088410  gmz17.benchmarkcenter.megware.com  {117,309}
[0] MPI startup(): 118     1088404  gmz17.benchmarkcenter.megware.com  {118,310}
[0] MPI startup(): 119     1088368  gmz17.benchmarkcenter.megware.com  {119,311}
[0] MPI startup(): 120     1088363  gmz17.benchmarkcenter.megware.com  {120,312}
[0] MPI startup(): 121     1088249  gmz17.benchmarkcenter.megware.com  {121,313}
[0] MPI startup(): 122     1088356  gmz17.benchmarkcenter.megware.com  {122,314}
[0] MPI startup(): 123     1088351  gmz17.benchmarkcenter.megware.com  {123,315}
[0] MPI startup(): 124     1088317  gmz17.benchmarkcenter.megware.com  {124,316}
[0] MPI startup(): 125     1088388  gmz17.benchmarkcenter.megware.com  {125,317}
[0] MPI startup(): 126     1088294  gmz17.benchmarkcenter.megware.com  {126,318}
[0] MPI startup(): 127     1088366  gmz17.benchmarkcenter.megware.com  {127,319}
[0] MPI startup(): 128     1088229  gmz17.benchmarkcenter.megware.com  {128,320}
[0] MPI startup(): 129     1088365  gmz17.benchmarkcenter.megware.com  {129,321}
[0] MPI startup(): 130     1088225  gmz17.benchmarkcenter.megware.com  {130,322}
[0] MPI startup(): 131     1088332  gmz17.benchmarkcenter.megware.com  {131,323}
[0] MPI startup(): 132     1088253  gmz17.benchmarkcenter.megware.com  {132,324}
[0] MPI startup(): 133     1088380  gmz17.benchmarkcenter.megware.com  {133,325}
[0] MPI startup(): 134     1088293  gmz17.benchmarkcenter.megware.com  {134,326}
[0] MPI startup(): 135     1088247  gmz17.benchmarkcenter.megware.com  {135,327}
[0] MPI startup(): 136     1088359  gmz17.benchmarkcenter.megware.com  {136,328}
[0] MPI startup(): 137     1088312  gmz17.benchmarkcenter.megware.com  {137,329}
[0] MPI startup(): 138     1088246  gmz17.benchmarkcenter.megware.com  {138,330}
[0] MPI startup(): 139     1088302  gmz17.benchmarkcenter.megware.com  {139,331}
[0] MPI startup(): 140     1088243  gmz17.benchmarkcenter.megware.com  {140,332}
[0] MPI startup(): 141     1088362  gmz17.benchmarkcenter.megware.com  {141,333}
[0] MPI startup(): 142     1088281  gmz17.benchmarkcenter.megware.com  {142,334}
[0] MPI startup(): 143     1088240  gmz17.benchmarkcenter.megware.com  {143,335}
[0] MPI startup(): 144     1088299  gmz17.benchmarkcenter.megware.com  {144,336}
[0] MPI startup(): 145     1088413  gmz17.benchmarkcenter.megware.com  {145,337}
[0] MPI startup(): 146     1088259  gmz17.benchmarkcenter.megware.com  {146,338}
[0] MPI startup(): 147     1088258  gmz17.benchmarkcenter.megware.com  {147,339}
[0] MPI startup(): 148     1088377  gmz17.benchmarkcenter.megware.com  {148,340}
[0] MPI startup(): 149     1088283  gmz17.benchmarkcenter.megware.com  {149,341}
[0] MPI startup(): 150     1088301  gmz17.benchmarkcenter.megware.com  {150,342}
[0] MPI startup(): 151     1088375  gmz17.benchmarkcenter.megware.com  {151,343}
[0] MPI startup(): 152     1088326  gmz17.benchmarkcenter.megware.com  {152,344}
[0] MPI startup(): 153     1088295  gmz17.benchmarkcenter.megware.com  {153,345}
[0] MPI startup(): 154     1088333  gmz17.benchmarkcenter.megware.com  {154,346}
[0] MPI startup(): 155     1088381  gmz17.benchmarkcenter.megware.com  {155,347}
[0] MPI startup(): 156     1088325  gmz17.benchmarkcenter.megware.com  {156,348}
[0] MPI startup(): 157     1088322  gmz17.benchmarkcenter.megware.com  {157,349}
[0] MPI startup(): 158     1088296  gmz17.benchmarkcenter.megware.com  {158,350}
[0] MPI startup(): 159     1088331  gmz17.benchmarkcenter.megware.com  {159,351}
[0] MPI startup(): 160     1088408  gmz17.benchmarkcenter.megware.com  {160,352}
[0] MPI startup(): 161     1088250  gmz17.benchmarkcenter.megware.com  {161,353}
[0] MPI startup(): 162     1088227  gmz17.benchmarkcenter.megware.com  {162,354}
[0] MPI startup(): 163     1088319  gmz17.benchmarkcenter.megware.com  {163,355}
[0] MPI startup(): 164     1088355  gmz17.benchmarkcenter.megware.com  {164,356}
[0] MPI startup(): 165     1088245  gmz17.benchmarkcenter.megware.com  {165,357}
[0] MPI startup(): 166     1088389  gmz17.benchmarkcenter.megware.com  {166,358}
[0] MPI startup(): 167     1088297  gmz17.benchmarkcenter.megware.com  {167,359}
[0] MPI startup(): 168     1088401  gmz17.benchmarkcenter.megware.com  {168,360}
[0] MPI startup(): 169     1088298  gmz17.benchmarkcenter.megware.com  {169,361}
[0] MPI startup(): 170     1088231  gmz17.benchmarkcenter.megware.com  {170,362}
[0] MPI startup(): 171     1088323  gmz17.benchmarkcenter.megware.com  {171,363}
[0] MPI startup(): 172     1088406  gmz17.benchmarkcenter.megware.com  {172,364}
[0] MPI startup(): 173     1088371  gmz17.benchmarkcenter.megware.com  {173,365}
[0] MPI startup(): 174     1088252  gmz17.benchmarkcenter.megware.com  {174,366}
[0] MPI startup(): 175     1088228  gmz17.benchmarkcenter.megware.com  {175,367}
[0] MPI startup(): 176     1088303  gmz17.benchmarkcenter.megware.com  {176,368}
[0] MPI startup(): 177     1088255  gmz17.benchmarkcenter.megware.com  {177,369}
[0] MPI startup(): 178     1088324  gmz17.benchmarkcenter.megware.com  {178,370}
[0] MPI startup(): 179     1088224  gmz17.benchmarkcenter.megware.com  {179,371}
[0] MPI startup(): 180     1088402  gmz17.benchmarkcenter.megware.com  {180,372}
[0] MPI startup(): 181     1088411  gmz17.benchmarkcenter.megware.com  {181,373}
[0] MPI startup(): 182     1088242  gmz17.benchmarkcenter.megware.com  {182,374}
[0] MPI startup(): 183     1088260  gmz17.benchmarkcenter.megware.com  {183,375}
[0] MPI startup(): 184     1088223  gmz17.benchmarkcenter.megware.com  {184,376}
[0] MPI startup(): 185     1088276  gmz17.benchmarkcenter.megware.com  {185,377}
[0] MPI startup(): 186     1088314  gmz17.benchmarkcenter.megware.com  {186,378}
[0] MPI startup(): 187     1088278  gmz17.benchmarkcenter.megware.com  {187,379}
[0] MPI startup(): 188     1088335  gmz17.benchmarkcenter.megware.com  {188,380}
[0] MPI startup(): 189     1088378  gmz17.benchmarkcenter.megware.com  {189,381}
[0] MPI startup(): 190     1088379  gmz17.benchmarkcenter.megware.com  {190,382}
[0] MPI startup(): 191     1088364  gmz17.benchmarkcenter.megware.com  {191,383}
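
In the pinning table above, every rank's cpu set pairs OS proc N with N + 192, which on this node reads as one physical core plus what appears to be its SMT sibling per rank. A minimal sanity-check sketch, assuming the pinning lines are saved to a file named pinning.log (hypothetical filename, not produced by MAQAO or Intel MPI):

import re

# Hypothetical helper: verify each rank owns one distinct physical core
# (0-191) plus its assumed SMT sibling (+192), based on the lines above.
pin_re = re.compile(r"MPI startup\(\):\s+(\d+)\s+\d+\s+\S+\s+\{(\d+),(\d+)\}")

cores_seen = set()
with open("pinning.log") as log:
    for line in log:
        match = pin_re.search(line)
        if match is None:
            continue
        rank, core, sibling = (int(g) for g in match.groups())
        assert sibling == core + 192, f"rank {rank}: unexpected sibling {sibling}"
        assert core not in cores_seen, f"rank {rank}: core {core} reused"
        cores_seen.add(core)

print(f"{len(cores_seen)} ranks, one distinct physical core each")
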
OMP: pid 1088346 tid 1088346 thread 0 bound to OS proc set {3}
OMP: pid 1088268 tid 1088268 thread 0 bound to OS proc set {15}
OMP: pid 1088305 tid 1088305 thread 0 bound to OS proc set {17}
OMP: pid 1088321 tid 1088321 thread 0 bound to OS proc set {2}
OMP: pid 1088232 tid 1088232 thread 0 bound to OS proc set {87}
OMP: pid 1088255 tid 1088255 thread 0 bound to OS proc set {177}
OMP: pid 1088309 tid 1088309 thread 0 bound to OS proc set {82}
OMP: pid 1088295 tid 1088295 thread 0 bound to OS proc set {153}
OMP: pid 1088407 tid 1088407 thread 0 bound to OS proc set {16}
OMP: pid 1088376 tid 1088376 thread 0 bound to OS proc set {6}
OMP: pid 1088283 tid 1088283 thread 0 bound to OS proc set {149}
OMP: pid 1088365 tid 1088365 thread 0 bound to OS proc set {129}
OMP: pid 1088380 tid 1088380 thread 0 bound to OS proc set {133}
OMP: pid 1088370 tid 1088370 thread 0 bound to OS proc set {76}
OMP: pid 1088281 tid 1088281 thread 0 bound to OS proc set {142}
OMP: pid 1088342 tid 1088342 thread 0 bound to OS proc set {71}
OMP: pid 1088352 tid 1088352 thread 0 bound to OS proc set {93}
OMP: pid 1088269 tid 1088269 thread 0 bound to OS proc set {65}
OMP: pid 1088374 tid 1088374 thread 0 bound to OS proc set {40}
OMP: pid 1088337 tid 1088337 thread 0 bound to OS proc set {53}
OMP: pid 1088298 tid 1088298 thread 0 bound to OS proc set {169}
OMP: pid 1088336 tid 1088336 thread 0 bound to OS proc set {41}
OMP: pid 1088282 tid 1088282 thread 0 bound to OS proc set {99}
OMP: pid 1088265 tid 1088265 thread 0 bound to OS proc set {56}
OMP: pid 1088307 tid 1088307 thread 0 bound to OS proc set {95}
OMP: pid 1088405 tid 1088405 thread 0 bound to OS proc set {4}
OMP: pid 1088313 tid 1088313 thread 0 bound to OS proc set {39}
OMP: pid 1088326 tid 1088326 thread 0 bound to OS proc set {152}
OMP: pid 1088351 tid 1088351 thread 0 bound to OS proc set {123}
OMP: pid 1088262 tid 1088262 thread 0 bound to OS proc set {61}
OMP: pid 1088324 tid 1088324 thread 0 bound to OS proc set {178}
OMP: pid 1088253 tid 1088253 thread 0 bound to OS proc set {132}
OMP: pid 1088382 tid 1088382 thread 0 bound to OS proc set {68}
OMP: pid 1088267 tid 1088267 thread 0 bound to OS proc set {42}
OMP: pid 1088350 tid 1088350 thread 0 bound to OS proc set {19}
OMP: pid 1088318 tid 1088318 thread 0 bound to OS proc set {44}
OMP: pid 1088341 tid 1088341 thread 0 bound to OS proc set {70}
OMP: pid 1088353 tid 1088353 thread 0 bound to OS proc set {83}
OMP: pid 1088278 tid 1088278 thread 0 bound to OS proc set {187}
OMP: pid 1088243 tid 1088243 thread 0 bound to OS proc set {140}
OMP: pid 1088344 tid 1088344 thread 0 bound to OS proc set {103}
OMP: pid 1088300 tid 1088300 thread 0 bound to OS proc set {64}
OMP: pid 1088384 tid 1088384 thread 0 bound to OS proc set {13}
OMP: pid 1088302 tid 1088302 thread 0 bound to OS proc set {139}
OMP: pid 1088373 tid 1088373 thread 0 bound to OS proc set {31}
OMP: pid 1088252 tid 1088252 thread 0 bound to OS proc set {174}
OMP: pid 1088413 tid 1088413 thread 0 bound to OS proc set {145}
OMP: pid 1088332 tid 1088332 thread 0 bound to OS proc set {131}
OMP: pid 1088390 tid 1088390 thread 0 bound to OS proc set {100}
OMP: pid 1088223 tid 1088223 thread 0 bound to OS proc set {184}
OMP: pid 1088323 tid 1088323 thread 0 bound to OS proc set {171}
OMP: pid 1088348 tid 1088348 thread 0 bound to OS proc set {74}
OMP: pid 1088271 tid 1088271 thread 0 bound to OS proc set {66}
OMP: pid 1088360 tid 1088360 thread 0 bound to OS proc set {96}
OMP: pid 1088372 tid 1088372 thread 0 bound to OS proc set {38}
OMP: pid 1088392 tid 1088392 thread 0 bound to OS proc set {30}
OMP: pid 1088250 tid 1088250 thread 0 bound to OS proc set {161}
OMP: pid 1088294 tid 1088294 thread 0 bound to OS proc set {126}
OMP: pid 1088366 tid 1088366 thread 0 bound to OS proc set {127}
OMP: pid 1088340 tid 1088340 thread 0 bound to OS proc set {1}
OMP: pid 1088230 tid 1088230 thread 0 bound to OS proc set {81}
OMP: pid 1088235 tid 1088235 thread 0 bound to OS proc set {89}
OMP: pid 1088401 tid 1088401 thread 0 bound to OS proc set {168}
OMP: pid 1088349 tid 1088349 thread 0 bound to OS proc set {14}
OMP: pid 1088409 tid 1088409 thread 0 bound to OS proc set {11}
OMP: pid 1088236 tid 1088236 thread 0 bound to OS proc set {110}
OMP: pid 1088412 tid 1088412 thread 0 bound to OS proc set {29}
OMP: pid 1088273 tid 1088273 thread 0 bound to OS proc set {23}
OMP: pid 1088414 tid 1088414 thread 0 bound to OS proc set {27}
OMP: pid 1088379 tid 1088379 thread 0 bound to OS proc set {190}
OMP: pid 1088272 tid 1088272 thread 0 bound to OS proc set {24}
OMP: pid 1088322 tid 1088322 thread 0 bound to OS proc set {157}
OMP: pid 1088339 tid 1088339 thread 0 bound to OS proc set {20}
OMP: pid 1088296 tid 1088296 thread 0 bound to OS proc set {158}
OMP: pid 1088297 tid 1088297 thread 0 bound to OS proc set {167}
OMP: pid 1088404 tid 1088404 thread 0 bound to OS proc set {118}
OMP: pid 1088241 tid 1088241 thread 0 bound to OS proc set {5}
OMP: pid 1088403 tid 1088403 thread 0 bound to OS proc set {73}
OMP: pid 1088240 tid 1088240 thread 0 bound to OS proc set {143}
OMP: pid 1088233 tid 1088233 thread 0 bound to OS proc set {22}
OMP: pid 1088411 tid 1088411 thread 0 bound to OS proc set {181}
OMP: pid 1088244 tid 1088244 thread 0 bound to OS proc set {113}
OMP: pid 1088385 tid 1088385 thread 0 bound to OS proc set {90}
OMP: pid 1088397 tid 1088397 thread 0 bound to OS proc set {79}
OMP: pid 1088276 tid 1088276 thread 0 bound to OS proc set {185}
OMP: pid 1088275 tid 1088275 thread 0 bound to OS proc set {59}
OMP: pid 1088386 tid 1088386 thread 0 bound to OS proc set {0}
OMP: pid 1088333 tid 1088333 thread 0 bound to OS proc set {154}
OMP: pid 1088289 tid 1088289 thread 0 bound to OS proc set {80}
LAMMPS (22 Jul 2025)
OMP: pid 1088355 tid 1088355 thread 0 bound to OS proc set {164}
OMP: pid 1088301 tid 1088301 thread 0 bound to OS proc set {150}
OMP: pid 1088343 tid 1088343 thread 0 bound to OS proc set {97}
OMP: pid 1088388 tid 1088388 thread 0 bound to OS proc set {125}
OMP: pid 1088303 tid 1088303 thread 0 bound to OS proc set {176}
OMP: pid 1088383 tid 1088383 thread 0 bound to OS proc set {55}
OMP: pid 1088308 tid 1088308 thread 0 bound to OS proc set {7}
OMP: pid 1088284 tid 1088284 thread 0 bound to OS proc set {47}
OMP: pid 1088358 tid 1088358 thread 0 bound to OS proc set {116}
OMP: pid 1088257 tid 1088257 thread 0 bound to OS proc set {21}
OMP: pid 1088304 tid 1088304 thread 0 bound to OS proc set {25}
OMP: pid 1088387 tid 1088387 thread 0 bound to OS proc set {26}
OMP: pid 1088338 tid 1088338 thread 0 bound to OS proc set {63}
OMP: pid 1088234 tid 1088234 thread 0 bound to OS proc set {28}
OMP: pid 1088270 tid 1088270 thread 0 bound to OS proc set {8}
OMP: pid 1088314 tid 1088314 thread 0 bound to OS proc set {186}
OMP: pid 1088224 tid 1088224 thread 0 bound to OS proc set {179}
OMP: pid 1088356 tid 1088356 thread 0 bound to OS proc set {122}
OMP: pid 1088310 tid 1088310 thread 0 bound to OS proc set {86}
OMP: pid 1088316 tid 1088316 thread 0 bound to OS proc set {102}
OMP: pid 1088377 tid 1088377 thread 0 bound to OS proc set {148}
OMP: pid 1088361 tid 1088361 thread 0 bound to OS proc set {109}
OMP: pid 1088245 tid 1088245 thread 0 bound to OS proc set {165}
OMP: pid 1088367 tid 1088367 thread 0 bound to OS proc set {77}
OMP: pid 1088406 tid 1088406 thread 0 bound to OS proc set {172}
OMP: pid 1088238 tid 1088238 thread 0 bound to OS proc set {111}
OMP: pid 1088363 tid 1088363 thread 0 bound to OS proc set {120}
OMP: pid 1088317 tid 1088317 thread 0 bound to OS proc set {124}
OMP: pid 1088228 tid 1088228 thread 0 bound to OS proc set {175}
OMP: pid 1088260 tid 1088260 thread 0 bound to OS proc set {183}
OMP: pid 1088396 tid 1088396 thread 0 bound to OS proc set {60}
OMP: pid 1088375 tid 1088375 thread 0 bound to OS proc set {151}
OMP: pid 1088239 tid 1088239 thread 0 bound to OS proc set {112}
OMP: pid 1088291 tid 1088291 thread 0 bound to OS proc set {9}
OMP: pid 1088329 tid 1088329 thread 0 bound to OS proc set {57}
OMP: pid 1088347 tid 1088347 thread 0 bound to OS proc set {72}
OMP: pid 1088251 tid 1088251 thread 0 bound to OS proc set {36}
OMP: pid 1088335 tid 1088335 thread 0 bound to OS proc set {188}
OMP: pid 1088247 tid 1088247 thread 0 bound to OS proc set {135}
OMP: pid 1088286 tid 1088286 thread 0 bound to OS proc set {106}
OMP: pid 1088400 tid 1088400 thread 0 bound to OS proc set {58}
OMP: pid 1088280 tid 1088280 thread 0 bound to OS proc set {52}
OMP: pid 1088378 tid 1088378 thread 0 bound to OS proc set {189}
OMP: pid 1088290 tid 1088290 thread 0 bound to OS proc set {69}
OMP: pid 1088256 tid 1088256 thread 0 bound to OS proc set {34}
OMP: pid 1088285 tid 1088285 thread 0 bound to OS proc set {115}
OMP: pid 1088320 tid 1088320 thread 0 bound to OS proc set {35}
OMP: pid 1088266 tid 1088266 thread 0 bound to OS proc set {48}
OMP: pid 1088368 tid 1088368 thread 0 bound to OS proc set {119}
OMP: pid 1088327 tid 1088327 thread 0 bound to OS proc set {67}
OMP: pid 1088364 tid 1088364 thread 0 bound to OS proc set {191}
OMP: pid 1088325 tid 1088325 thread 0 bound to OS proc set {156}
OMP: pid 1088357 tid 1088357 thread 0 bound to OS proc set {75}
OMP: pid 1088391 tid 1088391 thread 0 bound to OS proc set {78}
OMP: pid 1088258 tid 1088258 thread 0 bound to OS proc set {147}
OMP: pid 1088248 tid 1088248 thread 0 bound to OS proc set {62}
OMP: pid 1088393 tid 1088393 thread 0 bound to OS proc set {18}
OMP: pid 1088399 tid 1088399 thread 0 bound to OS proc set {101}
OMP: pid 1088249 tid 1088249 thread 0 bound to OS proc set {121}
OMP: pid 1088226 tid 1088226 thread 0 bound to OS proc set {51}
OMP: pid 1088231 tid 1088231 thread 0 bound to OS proc set {170}
OMP: pid 1088277 tid 1088277 thread 0 bound to OS proc set {46}
OMP: pid 1088398 tid 1088398 thread 0 bound to OS proc set {91}
OMP: pid 1088334 tid 1088334 thread 0 bound to OS proc set {92}
OMP: pid 1088246 tid 1088246 thread 0 bound to OS proc set {138}
OMP: pid 1088242 tid 1088242 thread 0 bound to OS proc set {182}
OMP: pid 1088315 tid 1088315 thread 0 bound to OS proc set {104}
OMP: pid 1088306 tid 1088306 thread 0 bound to OS proc set {98}
OMP: pid 1088237 tid 1088237 thread 0 bound to OS proc set {105}
OMP: pid 1088402 tid 1088402 thread 0 bound to OS proc set {180}
OMP: pid 1088371 tid 1088371 thread 0 bound to OS proc set {173}
OMP: pid 1088312 tid 1088312 thread 0 bound to OS proc set {137}
OMP: pid 1088362 tid 1088362 thread 0 bound to OS proc set {141}
OMP: pid 1088229 tid 1088229 thread 0 bound to OS proc set {128}
OMP: pid 1088263 tid 1088263 thread 0 bound to OS proc set {43}
OMP: pid 1088259 tid 1088259 thread 0 bound to OS proc set {146}
OMP: pid 1088319 tid 1088319 thread 0 bound to OS proc set {163}
OMP: pid 1088279 tid 1088279 thread 0 bound to OS proc set {108}
OMP: pid 1088311 tid 1088311 thread 0 bound to OS proc set {84}
OMP: pid 1088261 tid 1088261 thread 0 bound to OS proc set {45}
OMP: pid 1088345 tid 1088345 thread 0 bound to OS proc set {85}
OMP: pid 1088395 tid 1088395 thread 0 bound to OS proc set {94}
OMP: pid 1088354 tid 1088354 thread 0 bound to OS proc set {107}
OMP: pid 1088299 tid 1088299 thread 0 bound to OS proc set {144}
OMP: pid 1088389 tid 1088389 thread 0 bound to OS proc set {166}
OMP: pid 1088330 tid 1088330 thread 0 bound to OS proc set {54}
OMP: pid 1088381 tid 1088381 thread 0 bound to OS proc set {155}
OMP: pid 1088394 tid 1088394 thread 0 bound to OS proc set {12}
OMP: pid 1088254 tid 1088254 thread 0 bound to OS proc set {33}
OMP: pid 1088293 tid 1088293 thread 0 bound to OS proc set {134}
OMP: pid 1088264 tid 1088264 thread 0 bound to OS proc set {10}
OMP: pid 1088274 tid 1088274 thread 0 bound to OS proc set {88}
OMP: pid 1088410 tid 1088410 thread 0 bound to OS proc set {117}
OMP: pid 1088408 tid 1088408 thread 0 bound to OS proc set {160}
OMP: pid 1088287 tid 1088287 thread 0 bound to OS proc set {37}
OMP: pid 1088288 tid 1088288 thread 0 bound to OS proc set {32}
OMP: pid 1088359 tid 1088359 thread 0 bound to OS proc set {136}
OMP: pid 1088292 tid 1088292 thread 0 bound to OS proc set {114}
OMP: pid 1088225 tid 1088225 thread 0 bound to OS proc set {130}
OMP: pid 1088369 tid 1088369 thread 0 bound to OS proc set {50}
OMP: pid 1088331 tid 1088331 thread 0 bound to OS proc set {159}
OMP: pid 1088227 tid 1088227 thread 0 bound to OS proc set {162}
OMP: pid 1088328 tid 1088328 thread 0 bound to OS proc set {49}
  using 1 OpenMP thread(s) per MPI task
Lattice spacing in x,y,z = 3.615 3.615 3.615
Created orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  8 by 4 by 6 MPI processor grid
Created 32768000 atoms
  using lattice units in orthogonal box = (0 0 0) to (1156.8 578.4 578.4)
  create_atoms CPU = 0.018 seconds
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Neighbor list info ...
  update: every = 1 steps, delay = 5 steps, check = yes
  max neighbors/atom: 2000, page size: 100000
  master list distance cutoff = 5.95
  ghost atom cutoff = 5.95
  binsize = 2.975, bins = 389 195 195
  1 neighbor lists, perpetual/occasional/extra = 1 0 0
  (1) pair eam/intel, perpetual
      attributes: half, newton on, intel
      pair build: half/bin/newton/intel
      stencil: half/bin/3d/intel
      bin: intel
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 0
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 1445 | 1490 | 1544 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
         0   1600          -1.1599871e+08  0             -1.0922176e+08  18703.984    
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
Loop time of 0.884652 on 192 procs for 10 steps with 32768000 atoms

Performance: 4.883 ns/day, 4.915 hours/ns, 11.304 timesteps/s, 370.406 Matom-step/s
98.4% CPU use with 192 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 0.62681    | 0.64565    | 0.66422    |   0.9 | 72.98
Neigh   | 0.041287   | 0.043996   | 0.046424   |   0.4 |  4.97
Comm    | 0.063086   | 0.076801   | 0.10391    |   2.9 |  8.68
Output  | 0.001998   | 0.0028982  | 0.0046601  |   1.2 |  0.33
Modify  | 0.056036   | 0.08207    | 0.091621   |   1.7 |  9.28
Other   |            | 0.03324    |            |       |  3.76

Nlocal:         170667 ave      171308 max      169512 min
Histogram: 64 0 0 0 0 0 0 0 13 115
Nghost:        55997.1 ave       56418 max       55762 min
Histogram: 27 61 32 8 0 4 15 27 13 5
Neighs:     6.4242e+06 ave 6.47799e+06 max 6.37962e+06 min
Histogram: 64 0 1 56 7 0 0 0 2 62

Total # of neighbors = 1.2334455e+09
Ave neighs/atom = 37.641769
Neighbor list builds = 1
Dangerous builds = 0
----------------------------------------------------------
Using INTEL Package without Coprocessor.
Compiler: Intel LLVM C++ 202501.0 / Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317)
SIMD compiler directives: Enabled
Precision: mixed
----------------------------------------------------------
Setting up Verlet run ...
  Unit style    : metal
  Current step  : 10
  Time step     : 0.005
Per MPI rank memory allocation (min/avg/max) = 1445 | 1490 | 1544 Mbytes
   Step          Temp          E_pair         E_mol          TotEng         Press     
        10   475.7934      -1.1121047e+08  0             -1.091952e+08   64942.604    
        50   780.89876     -1.1250693e+08  0             -1.0919936e+08  52278.945    
       100   798.34169     -1.1258126e+08  0             -1.0919981e+08  51472.984    
       150   797.64371     -1.1257832e+08  0             -1.0919982e+08  51525.287    
       200   797.61738     -1.1257821e+08  0             -1.0919982e+08  51537.352    
       250   797.85512     -1.1257922e+08  0             -1.0919983e+08  51529.071    
       300   797.68645     -1.125785e+08   0             -1.0919982e+08  51538.823    
       350   797.80492     -1.12579e+08    0             -1.0919982e+08  51537.47     
       400   797.74786     -1.1257875e+08  0             -1.0919981e+08  51540.399    
       450   797.84597     -1.1257917e+08  0             -1.0919981e+08  51536.934    
       500   797.7245      -1.1257864e+08  0             -1.091998e+08   51543.024    
       510   797.68253     -1.1257846e+08  0             -1.091998e+08   51545.153    
Loop time of 47.1362 on 192 procs for 500 steps with 32768000 atoms

Performance: 4.582 ns/day, 5.237 hours/ns, 10.608 timesteps/s, 347.589 Matom-step/s
98.9% CPU use with 192 MPI tasks x 1 OpenMP threads

MPI task timing breakdown:
Section |  min time  |  avg time  |  max time  |%varavg| %total
---------------------------------------------------------------
Pair    | 32.39      | 32.955     | 33.512     |   4.1 | 69.92
Neigh   | 3.6267     | 3.7352     | 3.8969     |   2.9 |  7.92
Comm    | 3.9766     | 4.6514     | 5.8048     |  15.8 |  9.87
Output  | 0.023161   | 0.03206    | 0.051065   |   3.3 |  0.07
Modify  | 2.878      | 4.2753     | 4.595      |  11.1 |  9.07
Other   |            | 1.487      |            |       |  3.15

Nlocal:         170667 ave      171684 max      169135 min
Histogram: 5 32 24 3 0 0 10 44 58 16
Nghost:        55896.6 ave       56661 max       55274 min
Histogram: 5 9 23 43 38 38 24 6 4 2
Neighs:    6.44217e+06 ave  6.5059e+06 max 6.38057e+06 min
Histogram: 8 35 22 20 34 9 0 10 26 28

Total # of neighbors = 1.2368967e+09
Ave neighs/atom = 37.747092
Neighbor list builds = 85
Dangerous builds = 15
Total wall time: 0:00:48
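
As a cross-check, the throughput figures LAMMPS reports for the 500-step loop above follow directly from the loop time, atom count, step count, and the 0.005 (metal units, i.e. picosecond) timestep. A minimal sketch recomputing them, using only numbers taken from this log:

# Recompute the reported performance line from the 500-step run above.
atoms     = 32_768_000   # "with 32768000 atoms"
steps     = 500          # "for 500 steps"
loop_time = 47.1362      # seconds, "Loop time of 47.1362"
dt_ps     = 0.005        # "Time step : 0.005" (metal units -> ps)

matom_steps_per_s = atoms * steps / loop_time / 1e6              # ~347.6 Matom-step/s
timesteps_per_s   = steps / loop_time                            # ~10.61 timesteps/s
ns_per_day        = steps * dt_ps * 1e-3 * 86_400 / loop_time    # ~4.58 ns/day
hours_per_ns      = loop_time / (steps * dt_ps * 1e-3) / 3_600   # ~5.24 hours/ns

print(f"{ns_per_day:.3f} ns/day, {hours_per_ns:.3f} hours/ns, "
      f"{timesteps_per_s:.3f} timesteps/s, {matom_steps_per_s:.3f} Matom-step/s")
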


[MAQAO] Info: 191/192 lprof instances finished


Your experiment path is /beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5

To display your profiling results:
#######################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                       COMMAND                                                                                        #
#######################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/beegfs/hackathon/users/eoseret/qaas_runs_ZEN5/175-813-3649/intel/LAMMPS/run/oneview_runs/multicore/icx_2/oneview_results_1758148104/tools/lprof_npsu_run_5  #
#######################################################################################################################################################################################################################
