
exec - 2025-10-01 03:14:00 - MAQAO 2025.1.1


Global Metrics

Total Time (s): 21.75
Max (Thread Active Time) (s): 19.33
Average Active Time (s): 19.02
Activity Ratio (%): 89.3
Average number of active threads: 62.968
Affinity Stability (%): 0
Time in analyzed loops (%): 1.24
Time in analyzed innermost loops (%): 1.24
Time in user code (%): 1.25
Compilation Options Score (%): 69.8
Array Access Efficiency (%): Not Available

Potential Speedups
Perfect Flow Complexity: 1.00
Perfect OpenMP + MPI + Pthread: 1.34
Perfect OpenMP + MPI + Pthread + Perfect Load Distribution: 1.77
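The speedup entries above can be read as multiplicative factors on Total Time; a minimal sketch under that assumption (the factor names and values are taken from the table, the interpretation as a divisor of wall time is an assumption):

```python
# Sketch: projecting wall time from MAQAO "Potential Speedups", assuming
# each entry is a multiplicative speedup factor applied to Total Time.

TOTAL_TIME_S = 21.75  # "Total Time (s)" from Global Metrics above

def projected_time(total_s: float, speedup: float) -> float:
    """Projected wall time if the given potential speedup were realized."""
    return total_s / speedup

# Factors from the "Potential Speedups" table.
for label, factor in [
    ("Perfect Flow Complexity", 1.00),
    ("Perfect OpenMP + MPI + Pthread", 1.34),
    ("Perfect OpenMP + MPI + Pthread + Perfect Load Distribution", 1.77),
]:
    print(f"{label}: {projected_time(TOTAL_TIME_S, factor):.2f} s")
```

Under this reading, perfect load distribution on top of perfect threading would bring the run from about 21.75 s down to roughly 12.3 s.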

Loop Based Profile

Innermost Loop Based Profile

Application Categorization

Compilation Options

Source Object: Issues

libllama.so:
- -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
- -O2, -O3 or -Ofast is missing.
- -mcpu=native is missing.

libggml-cpu.so (quants.c): no issues listed.

libggml-base.so:
- -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
- -O2, -O3 or -Ofast is missing.
- -mcpu=native is missing.

[vdso]:
- -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
- -O2, -O3 or -Ofast is missing.
- -mcpu=native is missing.

exec:
- -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3, -march=(target)
- -O2, -O3 or -Ofast is missing.
- -mcpu=native is missing.
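A minimal sketch of how the flags reported missing (-g, -O3, -mcpu=native) might be supplied at configure time, assuming the CMake-based llama.cpp build; the cache variables below are standard CMake flag variables, not values taken from this report:

```shell
# Sketch only: reconfigure so that debug info, optimization, and the
# native-CPU target are passed to every translation unit.
cmake -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_FLAGS="-O3 -g -mcpu=native" \
  -DCMAKE_CXX_FLAGS="-O3 -g -mcpu=native"
cmake --build build -j
```

Note that libggml-cpu.so already receives -O3 -g -mcpu=native (see the Experiment Summary); the findings concern the other objects, whose build lines were not recorded.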

Loop Path Count Profile

Experiment Summary

Application: /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/run/oneview_runs/defaults/orig/exec
Timestamp: 2025-10-01 03:14:00
Universal Timestamp: 1759313640
Number of processes observed: 1
Number of threads observed: 72
Experiment Type: MPI; OpenMP
Machine: ortce-gh
Architecture: aarch64
Micro Architecture: ARM_NEOVERSE_V2
OS Version: Linux 6.8.0-84-generic-64k #84-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 5 15:19:10 UTC 2025
Architecture used during static analysis: aarch64
Micro Architecture used during static analysis: ARM_NEOVERSE_V2
Frequency Driver: cppc_cpufreq
Frequency Governor: ondemand
Huge Pages: always
Hyperthreading: off
Number of sockets: 1
Number of cores per socket: 72
Compilation Options:
[vdso]: N/A
exec: N/A
libggml-base.so: N/A
libggml-cpu.so:
Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0)
/scratch/users/amazouz/Tools/aarch64/compilers/install/arm-linux-compiler-24.10.1_Ubuntu-22.04/llvm-bin/clang-19 -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/.. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -std=gnu11 -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -mcpu=native+dotprod+i8mm+sve+nosme -fopenmp=libomp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -c /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c
Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0)
/scratch/users/amazouz/Tools/aarch64/compilers/install/arm-linux-compiler-24.10.1_Ubuntu-22.04/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/.. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/. -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -O3 -D NDEBUG -std=gnu++17 -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -mcpu=native+dotprod+i8mm+sve+nosme -fopenmp=libomp -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/repack.cpp.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/repack.cpp.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/repack.cpp.o -c /scratch/users/amazouz/QAAS/service/Llama.cpp/ortce-gh/175-931-3387/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/repack.cpp
libllama.so: N/A
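The Frequency Governor ("ondemand") and Huge Pages ("always") values recorded above come from standard Linux interfaces; a hedged sketch of how they might be inspected and, optionally, changed (sysfs paths and the cpupower tool are standard, but availability varies by system):

```shell
# Inspect the runtime settings the Experiment Summary records.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # "ondemand" on this run
cat /sys/kernel/mm/transparent_hugepage/enabled             # "always" on this run
# For more stable performance measurements, the "performance" governor
# is often preferred over "ondemand":
sudo cpupower frequency-set -g performance
```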

Configuration Summary

Dataset
Run Command: <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 72 -n 512 -p "what is a LLM?" --seed 0
MPI Command: mpirun -n <number_processes> --bind-to none --report-bindings
Number Processes: 1
Number Nodes: 1
Filter: Not Used
Profile Start: Not Used