
Software Topology

Scalability - 1x6 (1 MPI process x 6 OpenMP threads)

Number of processes: 1
Number of nodes: 1
Run command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t <OMP_NUM_THREADS> -n 512 -p "what is a LLM?" --seed 0
MPI command: mpirun -n <number_processes>
Run directory: /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/run/oneview_runs/multicore/aocc_4/oneview_run_1759256241

Environment:
- OMP_NUM_THREADS: 6
- OMP_PLACES: threads
- OMP_PROC_BIND: spread
- OMP_DISPLAY_AFFINITY: TRUE
- OMP_DISPLAY_ENV: TRUE
- OMP_AFFINITY_FORMAT: 'OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
- I_MPI_PIN_DOMAIN: auto
- I_MPI_PIN_ORDER: bunch
- I_MPI_DEBUG: 4
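The settings above can be reproduced as a shell fragment. This is a sketch assuming a bash-like shell; `<executable>` is the report's placeholder for the llama.cpp binary and is not filled in here, and `-n 1` mirrors the single-process run recorded above.

```shell
# Environment reported for this run (values copied from the report).
export OMP_NUM_THREADS=6
export OMP_PLACES=threads
export OMP_PROC_BIND=spread
export OMP_DISPLAY_AFFINITY=TRUE
export OMP_DISPLAY_ENV=TRUE
export OMP_AFFINITY_FORMAT='OMP: pid %P tid %i thread %n bound to OS proc set {%A}'
export I_MPI_PIN_DOMAIN=auto
export I_MPI_PIN_ORDER=bunch
export I_MPI_DEBUG=4

# <executable> stays a placeholder, as in the report.
mpirun -n 1 <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv \
    -t "$OMP_NUM_THREADS" -n 512 -p "what is a LLM?" --seed 0
```

With OMP_PLACES=threads and OMP_PROC_BIND=spread, the OpenMP runtime pins each thread to its own hardware thread and spreads them across the available places, which matches the stable per-CPU times seen in the thread table.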
| ID | Observed Processes | Observed Threads | Time (s) | Elapsed Time (s) | Active Time (%) | Start (after process) (s) | End (before process) (s) | Maximum Time on the Same CPU (s) |
|---|---|---|---|---|---|---|---|---|
| Node isix06.benchmarkcenter.megware.com | 1 | 6 | 57.61 | | | | | |
| Process 7409 | | 6 | 57.61 | | | | | |
| Thread 7409 | | | 57.61 | 74.35 | 77.48 | 0.00 | 0.00 | 58.05 |
| Thread 7422 | | | 57.31 | 58.08 | 98.66 | 16.24 | 0.02 | 57.94 |
| Thread 7424 | | | 57.42 | 58.08 | 98.86 | 16.25 | 0.02 | 57.93 |
| Thread 7426 | | | 57.25 | 58.08 | 98.57 | 16.25 | 0.02 | 57.93 |
| Thread 7428 | | | 57.29 | 58.08 | 98.63 | 16.25 | 0.02 | 57.94 |
| Thread 7430 | | | 57.14 | 58.05 | 98.43 | 16.28 | 0.02 | 57.92 |
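The Active Time column in the table above appears consistent with Time / Elapsed Time x 100 for every thread; this relation is an inference from the numbers, not a documented formula of the tool. A quick check, with the values copied from the table:

```python
# Inferred relation (not documented): Active Time (%) ~= Time / Elapsed * 100.
threads = {
    # tid: (time_s, elapsed_s, active_pct) -- copied from the table above
    7409: (57.61, 74.35, 77.48),
    7422: (57.31, 58.08, 98.66),
    7424: (57.42, 58.08, 98.86),
    7426: (57.25, 58.08, 98.57),
    7428: (57.29, 58.08, 98.63),
    7430: (57.14, 58.05, 98.43),
}

for tid, (time_s, elapsed_s, active_pct) in threads.items():
    derived = time_s / elapsed_s * 100
    # Allow slack for the report rounding each column independently.
    assert abs(derived - active_pct) < 0.05, (tid, derived, active_pct)
```

Note that thread 7409 (the initial thread) has a longer elapsed time (74.35 s) than the workers, which only start about 16.25 s after the process begins, hence its lower active-time percentage.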