Help is available by hovering the cursor over any symbol or by consulting the MAQAO website.
| Metric | Value |
|---|---|
| Total Time (s) | 11.17 |
| Max (Thread Active Time) (s) | 10.55 |
| Average Active Time (s) | 10.27 |
| Activity Ratio (%) | 96.9 |
| Average number of active threads | 88.277 |
| Affinity Stability (%) | 99.3 |
| Time in analyzed loops (%) | 73.2 |
| Time in analyzed innermost loops (%) | 72.4 |
| Time in user code (%) | 74.0 |
| Compilation Options Score (%) | 99.3 |
| Array Access Efficiency (%) | 92.1 |
| Potential Speedups | Speedup | Nb Loops to get 80% |
|---|---|---|
| Perfect Flow Complexity | 1.00 | |
| Perfect OpenMP/MPI/Pthread/TBB | 1.23 | |
| Perfect OpenMP/MPI/Pthread/TBB + Perfect Load Distribution | 1.38 | |
| No Scalar Integer | 1.14 | 1 |
| FP Vectorised | 1.06 | 1 |
| Fully Vectorised | 1.29 | 1 |
| FP Arithmetic Only | 1.58 | 1 |
| OpenMP perfectly balanced | 1.27 | 1 |
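As a rough sanity check, a loop-level speedup factor can be projected to the whole run with an Amdahl-style formula. This sketch assumes the factor applies only to the analyzed-loop fraction of total time (the report does not state whether its factors are already whole-application figures), using the 73.2% "Time in analyzed loops" value and the 1.29 "Fully Vectorised" factor from the tables above:

```shell
# Amdahl-style projection (assumption: the speedup factor s applies only
# to the analyzed-loop fraction f of total run time).
awk 'BEGIN { f = 0.732; s = 1.29; printf "%.3f\n", 1 / ((1 - f) + f / s) }'
```

This prints 1.197, i.e. the 1.29x loop-level gain would translate into roughly a 1.2x whole-application gain under that assumption.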
| Source Object | Issue |
|---|---|
| ▼libllama.so | |
| ○hashtable.h | |
| ○llama-sampling.cpp | |
| ○llama-arch.cpp | |
| ○stl_pair.h | |
| ○llama-vocab.cpp | |
| ○unique_ptr.h | |
| ○llama-batch.cpp | |
| ○hashtable_policy.h | |
| ▼libggml-cpu.so | |
| ○binary-ops.cpp | |
| ○traits.cpp | |
| ○repack.cpp | |
| ○ggml-cpu.cpp | |
| ○ops.cpp | |
| ○vec.cpp | |
| ○ggml-cpu.c | |
| ○quants.c | |
| ▼exec | |
| ○common.cpp | |
| ○sampling.cpp | |
| ○vector.tcc | |
| ○regex_executor.tcc | |
| ○stl_uninitialized.h | |
| ▼[vdso] | |
| ▼ | |
| ○ | -g is missing for some functions (possibly ones added by the compiler); it is needed for more accurate reports. Other recommended flags are: -O2/-O3 and -march=(target) |
| ○ | -O2, -O3 or -Ofast is missing. |
| ○ | -mcpu=native is missing. |
| ▼libggml-base.so | |
| ○stl_construct.h | |
| ○ggml.c |
| Field | Value |
|---|---|
| Experiment Name | |
| Application | /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/run/binaries/armclang_3/exec |
| Timestamp | 2025-09-16 13:49:12 |
| Universal Timestamp | 1758030552 |
| Number of processes observed | 1 |
| Number of threads observed | 96 |
| Experiment Type | MPI; OpenMP |
| Machine | ip-172-31-47-249.ec2.internal |
| Architecture | aarch64 |
| Micro Architecture | ARM_NEOVERSE_V2 |
| OS Version | Linux 6.1.109-118.189.amzn2023.aarch64 #1 SMP Tue Sep 10 08:58:40 UTC 2024 |
| Architecture used during static analysis | aarch64 |
| Micro Architecture used during static analysis | ARM_NEOVERSE_V2 |
| Frequency Driver | NA |
| Frequency Governor | NA |
| Huge Pages | madvise |
| Hyperthreading | off |
| Number of sockets | 1 |
| Number of cores per socket | 96 |
| Comments | |

Compilation Options:

- [vdso]: N/A
- exec: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0): /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_SHARED -D LLAMA_USE_CURL -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/../vendor -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT common/CMakeFiles/common.dir/regex-partial.cpp.o -MF common/CMakeFiles/common.dir/regex-partial.cpp.o.d -o common/CMakeFiles/common.dir/regex-partial.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/common/regex-partial.cpp
- GNU C17 14.2.0 -mlittle-endian -mabi=lp64 -g -g -g -O2 -O2 -O2 -fbuilding-libgcc -fno-stack-protector -fPIC
- libggml-base.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0): /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BUILD -D GGML_COMMIT=\"unknown\" -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_VERSION=\"0.0.0\" -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_base_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -MF ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o.d -o ggml/src/CMakeFiles/ggml-base.dir/ggml.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml.c
- libggml-cpu.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0): /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 -D GGML_BACKEND_BUILD -D GGML_BACKEND_SHARED -D GGML_SCHED_MAX_COPIES=4 -D GGML_SHARED -D GGML_USE_CPU_REPACK -D GGML_USE_LLAMAFILE -D GGML_USE_OPENMP -D _GNU_SOURCE -D _XOPEN_SOURCE=600 -D ggml_cpu_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/.. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wshadow -Wstrict-prototypes -Wpointer-arith -Wmissing-prototypes -Werror=implicit-int -Werror=implicit-function-declaration -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wdouble-promotion -fopenmp=libomp -std=gnu11 -MD -MT ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -MF ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o.d -o ggml/src/CMakeFiles/ggml-cpu.dir/ggml-cpu/arch/arm/quants.c.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/ggml-cpu/arch/arm/quants.c
- libllama.so: Arm C/C++/Fortran Compiler version 24.10.1 (build number 4) (based on LLVM 19.1.0): /opt/arm/arm-linux-compiler-24.10.1_AmazonLinux-2023/llvm-bin/clang-19 --driver-mode=g++ -D GGML_BACKEND_SHARED -D GGML_SHARED -D GGML_USE_BLAS -D GGML_USE_CPU -D LLAMA_BUILD -D LLAMA_SHARED -D llama_EXPORTS -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/. -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/../include -I /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/ggml/src/../include -O3 -O3 -mcpu=neoverse-v2+nosve+nosve2 -armpl -ffast-math -g -fno-omit-frame-pointer -fcf-protection=none -no-pie -grecord-command-line -fno-finite-math-only -O3 -D NDEBUG -fPIC -Wmissing-declarations -Wmissing-noreturn -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wunreachable-code-break -Wunreachable-code-return -Wmissing-prototypes -Wextra-semi -MD -MT src/CMakeFiles/llama.dir/llama-vocab.cpp.o -MF src/CMakeFiles/llama.dir/llama-vocab.cpp.o.d -o src/CMakeFiles/llama.dir/llama-vocab.cpp.o -c /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-47-249.ec2.internal/175-802-9624/llama.cpp/build/llama.cpp/src/llama-vocab.cpp
| Run Configuration | |
|---|---|
| Dataset | |
| Run Command | <executable> -m meta-llama-3.1-8b-instruct-Q8_0.gguf -no-cnv -t 96 -n 512 -p "what is a LLM?" --seed 0 |
| MPI Command | mpirun -n <number_processes> --bind-to none --report-bindings |
| Number Processes | 1 |
| Number Nodes | 1 |
| Number Processes per Node | 1 |
| Filter | Not Used |
| Profile Start | Not Used |
| Profile Stop | Not Used |
| Maximal Path Number | 4 |