
exec - 2025-09-15 06:07:57 - MAQAO 2025.1.1


Global Metrics

Total Time (s): 29.12
Max (Thread Active Time) (s): 28.06
Average Active Time (s): 27.72
Activity Ratio (%): 98.0
Average number of active threads: 182.767
Affinity Stability (%): 99.5
GFLOPS: 94.294
Time in analyzed loops (%): 86.9
Time in analyzed innermost loops (%): 86.4
Time in user code (%): 87.2
Compilation Options Score (%): 100.0
Array Access Efficiency (%): Not Available

Potential Speedups
  Perfect Flow Complexity: 1.00
  Perfect OpenMP + MPI + Pthread: 1.07
  Perfect OpenMP + MPI + Pthread + Perfect Load Distribution: 1.16
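To put the potential speedup factors in wall-clock terms, the sketch below divides the measured Total Time (29.12 s) by each reported factor. This assumes the MAQAO factors apply uniformly to total time, which the report does not state explicitly:

```shell
# Sketch: projected total time for each reported potential speedup,
# assuming each factor divides the measured Total Time of 29.12 s.
total=29.12
for entry in "Perfect Flow Complexity:1.00" \
             "Perfect OpenMP + MPI + Pthread:1.07" \
             "Perfect OpenMP + MPI + Pthread + Perfect Load Distribution:1.16"; do
  name=${entry%:*}      # metric label (before the last colon)
  factor=${entry##*:}   # speedup factor (after the last colon)
  awk -v t="$total" -v f="$factor" -v n="$name" \
      'BEGIN { printf "%s: %.2f s\n", n, t / f }'
done
```

Under that assumption, even the most aggressive scenario (perfect parallel runtime plus perfect load distribution) saves roughly 4 s of the 29.12 s run.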

Average Active Threads Count

Loop Based Profile

Innermost Loop Based Profile

Application Categorization

Compilation Options

Source Object: Issues

libllama.so, exec, libggml-base.so, [vdso] — each reports the same three issues:
  - -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
  - -O2, -O3 or -Ofast is missing.
  - -march=(target) is missing.

libggml-cpu.so (vec.cpp, mmq.cpp): no issues listed.
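The three recurring issues can typically be fixed in one place at configure time. A hedged sketch, assuming a CMake build like llama.cpp's (the invocation and build directory are assumptions, not taken from the report; -march=graniterapids matches the micro-architecture the report identifies):

```shell
# Sketch only: apply the flags MAQAO reports as missing (-g, -O2/-O3/-Ofast,
# -march=<target>) to all objects, not just libggml-cpu.so.
# The cmake layout below is an assumption, not taken from the report.
FLAGS="-g -O3 -march=graniterapids"
cmake -B build \
  -DCMAKE_C_FLAGS="$FLAGS" \
  -DCMAKE_CXX_FLAGS="$FLAGS"
cmake --build build
```

Note that a 100.0% Compilation Options Score can coexist with these per-object warnings when the hot code (here libggml-cpu.so) is already compiled with the recommended flags.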

Loop Path Count Profile

Experiment Summary

Application: /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/run/binaries/aocc_6/exec
Timestamp: 2025-09-15 06:07:57
Universal Timestamp: 1757941677
Number of processes observed: 6
Number of threads observed: 192
Experiment Type: MPI; OpenMP
Machine: sdp772511
Model Name: Intel(R) Xeon(R) 6972P
Architecture: x86_64
Micro Architecture: GRANITE_RAPIDS
Cache Size: 491520 KB
Number of Cores: 96
OS Version: Linux 6.8.0-78-generic #78-Ubuntu SMP PREEMPT_DYNAMIC Tue Aug 12 11:34:18 UTC 2025
Architecture used during static analysis: x86_64
Micro Architecture used during static analysis: GRANITE_RAPIDS
Frequency Driver: intel_pstate
Frequency Governor: performance
Huge Pages: always
Hyperthreading: on
Number of sockets: 2
Number of cores per socket: 96
Compilation Options:
  [vdso]: N/A
  exec: N/A
  libggml-base.so: N/A
  libggml-cpu.so: AMD clang version 17.0.6 (CLANG: AOCC_5.0.0-Build#1377 2024_09_24) /scratch/users/amazouz/Tools/x86_64/compilers/AOCC/aocc-compiler-5.0.0/bin/clang-17 --driver-mode=g++ -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/build/llama.cpp/ggml/src/.. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/build/llama.cpp/ggml/src/. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/build/llama.cpp/ggml/src/../include -O2 -march=graniterapids -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -nopie -grecord-command-line -fno-finite-math-only -stdlib=libc++ -D NDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -fopenmp=libomp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/amx/mmq.cpp.o -c /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/amx/mmq.cpp
  libllama.so: N/A

Configuration Summary

Dataset:
Run Command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 32 -n 512 -p "what is a LLM?" --seed 0
MPI Command: mpirun -n <number_processes>
Number Processes: 6
Number Nodes: 1
Filter: Not Used
Profile Start: Not Used
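Filling the two templates with values reported elsewhere in this document (the executable path from the Experiment Summary, 6 processes), the launch line would look roughly as follows. Treat it as a reconstruction, not a verbatim command; the working directory holding the .gguf model file is an assumption:

```shell
# Reconstruction from the Run Command and MPI Command templates above;
# environment and current directory are assumptions.
mpirun -n 6 \
  /scratch/users/amazouz/QAAS/service/Llama.cpp/sdp772511/175-793-6543/llama.cpp/run/binaries/aocc_6/exec \
  -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 32 -n 512 \
  -p "what is a LLM?" --seed 0
```

With 6 MPI ranks and -t 32 threads each, the 192 observed threads match the machine's 2 sockets x 96 cores.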