Help is available by hovering the cursor over any symbol or by checking the MAQAO website.
| Metric | Value |
|---|---|
| Total Time (s) | 36.55 |
| Max (Thread Active Time) (s) | 19.00 |
| Average Active Time (s) | 18.64 |
| Activity Ratio (%) | 97.0 |
| Average number of active threads | 97.893 |
| Affinity Stability (%) | 98.5 |
| GFLOPS | 24.523 |
| Time in analyzed loops (%) | 24.2 |
| Time in analyzed innermost loops (%) | 24.0 |
| Time in user code (%) | 24.4 |
| Compilation Options Score (%) | 99.9 |
| Array Access Efficiency (%) | 78.5 |
| Potential Speedups | Potential Speedup | Nb Loops to get 80% |
|---|---|---|
| Perfect Flow Complexity | 1.00 | |
| Perfect OpenMP/MPI/Pthread/TBB | 1.30 | |
| Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution | 4.12 | |
| No Scalar Integer | 1.00 | 4 |
| FP Vectorised | 1.00 | 3 |
| Fully Vectorised | 1.00 | 6 |
| FP Arithmetic Only | 1.00 | 2 |
| OpenMP perfectly balanced | 3.39 | 1 |
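For scale, the speedup projections can be translated into wall-clock time. A rough illustration only, assuming each projection applies to the 36.55 s total time reported above:

```shell
# Project total wall-clock time under each reported potential speedup.
# Values are taken from the metrics table; the assumption that the
# speedups apply to the whole 36.55 s run is an illustration, not a
# claim from the report.
awk 'BEGIN {
  total = 36.55                                        # Total Time (s)
  printf "Perfect OpenMP/MPI/Pthread/TBB:  %.2f s\n", total / 1.30
  printf "  + Perfect Load Distribution:   %.2f s\n", total / 4.12
  printf "OpenMP perfectly balanced:       %.2f s\n", total / 3.39
}'
```

The load-distribution projection dominates: most of the headroom comes from thread imbalance rather than vectorization, which is consistent with the 1.00 speedups for the vectorization scenarios.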
| Source Object | Issue |
|---|---|
| ▼libllama.so | |
| ○hashtable.h | |
| ○llama-sampling.cpp | |
| ▼libggml-cpu.so | |
| ○binary-ops.cpp | |
| ○ops.cpp | |
| ○vec.cpp | |
| ○mmq.cpp | |
| ○ggml-cpu.c | |
| ○common.h | |
| ○amx.cpp | |
| ○quants.c | |
| ▼libggml-base.so | |
| ▼ | |
| ○ | -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags: -O2/-O3, -march=(target) |
| ○ | -O2, -O3 or -Ofast is missing. |
| ○ | -march=(target) is missing. |
| ▼exec | |
| ○sampling.cpp |
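The missing-flag issues above are typically addressed at configure time. A hypothetical sketch for a CMake-based build such as llama.cpp; the build directory and `-march=native` value are illustrative, not taken from this run (pick the `-march` target matching the machine the binary will run on):

```shell
# Re-configure so all targets get the flags the report recommends:
#   -g        debug info for accurate source-level attribution
#   -O3       optimization level (-O2 also satisfies the check)
#   -march=   code generation for the target micro-architecture
cmake -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-O3 -g -march=native" \
  -DCMAKE_CXX_FLAGS="-O3 -g -march=native"
cmake --build build -j
```

Note that libggml-base.so reported no compilation options at all (N/A), so it is worth verifying that every shared library in the build actually receives these flags.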
| Experiment Name | |
|---|---|
| Application | /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/run/binaries/icx_2/exec |
| Timestamp | 2025-09-30 20:05:44 |
| Universal Timestamp | 1759255544 |
| Number of processes observed | 1 |
| Number of threads observed | 192 |
| Experiment Type | MPI; OpenMP |
| Machine | isix06.benchmarkcenter.megware.com |
| Model Name | Intel(R) Xeon(R) 6972P |
| Architecture | x86_64 |
| Micro Architecture | GRANITE_RAPIDS |
| Cache Size | 491520 KB |
| Number of Cores | 96 |
| OS Version | Linux 5.14.0-570.39.1.el9_6.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Sep 4 05:08:52 EDT 2025 |
| Architecture used during static analysis | x86_64 |
| Micro Architecture used during static analysis | GRANITE_RAPIDS |
| Frequency Driver | intel_pstate |
| Frequency Governor | performance |
| Huge Pages | always |
| Hyperthreading | on |
| Number of sockets | 2 |
| Number of cores per socket | 96 |
| Compilation Options | exec: clang based Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317) /cluster/intel/oneapi/2025.1.0/compiler/2025.1/bin/compiler/clang --driver-mode=g++ --intel -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_SHARED -D LLAMA_USE_CURL -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/common/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/common/../vendor -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/src/../include -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -x GRANITERAPIDS -mprefer-vector-width=512 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -MD -MT common/CMakeFiles/common.dir/sampling.cpp.o -MF common/CMakeFiles/common.dir/sampling.cpp.o.d -o common/CMakeFiles/common.dir/sampling.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/common/sampling.cpp -fveclib=SVML libggml-base.so: N/A libggml-cpu.so: clang based Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317) /cluster/intel/oneapi/2025.1.0/compiler/2025.1/bin/compiler/clang --driver-mode=g++ --intel -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/.. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -x GRANITERAPIDS -mprefer-vector-width=512 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -fno-associative-math -fiopenmp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp -fveclib=SVML libllama.so: clang based Intel(R) oneAPI DPC++/C++ Compiler 2025.1.0 (2025.1.0.20250317) /cluster/intel/oneapi/2025.1.0/compiler/2025.1/bin/compiler/clang --driver-mode=g++ --intel -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/src/. -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/src/../include -I /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -x GRANITERAPIDS -mprefer-vector-width=512 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -MD -MT src/CMakeFiles/llama.dir/llama-sampling.cpp.o -MF src/CMakeFiles/llama.dir/llama-sampling.cpp.o.d -o src/CMakeFiles/llama.dir/llama-sampling.cpp.o -c /beegfs/hackathon/users/eoseret/qaas_runs_test/175-924-9259/intel/llama.cpp/build/llama.cpp/src/llama-sampling.cpp -fveclib=SVML |
| Comments | |
| Dataset | |
|---|---|
| Run Command | <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 192 -n 512 -p "what is a LLM?" --seed 0 |
| MPI Command | mpirun -n <number_processes> |
| Number Processes | 1 |
| Number Nodes | 1 |
| Number Processes per Node | 1 |
| Filter | Not Used |
| Profile Start | Not Used |
| Profile Stop | Not Used |
| Maximal Path Number | 4 |