
Software Topology

Scalability - 1x8 (1 MPI process, 8 OpenMP threads)

Number of processes: 1
Number of nodes: 1
Run command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t <OMP_NUM_THREADS> -n 512 -p "what is a LLM?" --seed 0
MPI command: mpirun -n <number_processes>
Run directory: /beegfs/hackathon/users/eoseret/qaas_runs_test/175-950-2189/intel/llama.cpp/run/oneview_runs/multicore/icx_3/oneview_run_1759511881

Environment:
OMP_NUM_THREADS: 8
OMP_PLACES: threads
OMP_PROC_BIND: spread
OMP_DISPLAY_AFFINITY: TRUE
OMP_DISPLAY_ENV: TRUE
OMP_AFFINITY_FORMAT: 'OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
I_MPI_PIN_DOMAIN: auto
I_MPI_PIN_ORDER: bunch
I_MPI_DEBUG: 4
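The settings above can be collected into a launch script. This is a hedged sketch, not the harness the report was generated with: `<executable>` is the report's own placeholder for the llama.cpp binary and is kept as a placeholder here, to be supplied by the user. The environment variables and command-line flags are taken verbatim from the report.

```shell
#!/bin/sh
# Sketch: reproducing the 1x8 run from the configuration listed above.

# OpenMP / Intel MPI environment, verbatim from the report:
export OMP_NUM_THREADS=8
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
export OMP_DISPLAY_AFFINITY=TRUE
export OMP_DISPLAY_ENV=TRUE
export OMP_AFFINITY_FORMAT='OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
export I_MPI_PIN_DOMAIN=auto
export I_MPI_PIN_ORDER=bunch
export I_MPI_DEBUG=4

# Path to the llama.cpp binary; kept as the report's placeholder.
EXECUTABLE="${EXECUTABLE:-<executable>}"

# Build the launch line (1 MPI rank, as in the report) and print it
# rather than running it, so the sketch is side-effect free.
CMD="mpirun -n 1 $EXECUTABLE -m meta-llama-3.1-8b-instruct-Q8_0.gguf \
-no-cnv -t $OMP_NUM_THREADS -n 512 -p \"what is a LLM?\" --seed 0"
echo "$CMD"
```

Replacing the `echo` with `eval` (or `sh -c "$CMD"`) would actually launch the run once `EXECUTABLE` points at a real binary.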
ID                                       Procs  Threads  Time (s)  Elapsed Time (s)  Active Time (%)  Start after process (s)  End before process (s)  Max Time on Same CPU (s)
Node gmz12.benchmarkcenter.megware.com   1      8        42.25
  Process 10034                                 8        42.25
    Thread 10034                                         42.25     50.51             83.64            0.00                     0.00                    42.22
    Thread 10043                                         42.22     42.29             99.83            8.21                     0.01                    42.14
    Thread 10045                                         42.21     42.28             99.82            8.21                     0.01                    42.26
    Thread 10047                                         42.18     42.28             99.75            8.21                     0.01                    42.14
    Thread 10049                                         42.23     42.28             99.87            8.21                     0.01                    42.07
    Thread 10051                                         42.17     42.28             99.74            8.22                     0.01                    42.14
    Thread 10053                                         42.17     42.28             99.73            8.22                     0.01                    42.24
    Thread 10055                                         42.20     42.28             99.81            8.22                     0.01                    42.01
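The seven worker threads start about 8.2 s after process launch and then stay above 99.7 % active, while the main thread (10034), alive for the full 50.51 s elapsed, is only ~84 % active. Active Time (%) appears to be Time divided by Elapsed Time; that derivation is an assumption about the metric, checked here against thread 10043's row:

```shell
# Assumption: Active Time (%) = Time / Elapsed Time * 100.
# Thread 10043: Time = 42.22 s, Elapsed Time = 42.29 s.
active=$(awk 'BEGIN { printf "%.2f", 42.22 / 42.29 * 100 }')
echo "$active"   # matches the 99.83 reported in the table
```

The main-thread row fits the same formula to within rounding (42.25 / 50.51 * 100 ≈ 83.65 vs. the reported 83.64).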