
Executable Output


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
what is a LLM? and why should i care?
A Large Language Model (LLM) is a type of artificial intelligence (AI) that can process and generate human-like text based on the input it receives. LLMs are trained on vast amounts of text data, which allows them to learn patterns, relationships, and context in language. This enables them to generate coherent and often informative responses to user queries.

Here are some reasons why you should care about LLMs:

1.  **Improved search and content generation:** LLMs can help improve search results by providing more accurate and relevant information. They can also generate content such as articles, blog posts, and even entire books.
2.  **Personalized experiences:** LLMs can be used to create personalized experiences for users. For example, they can generate customized news feeds, product recommendations, or even entire stories based on a user's interests and preferences.
3.  **Customer support:** LLMs can be used to provide 24/7 customer support by answering frequently asked questions, helping with simple transactions, and even handling complex issues.
4.  **Language learning:** LLMs can help language learners by providing personalized feedback, practicing conversations, and even generating language learning materials.
5.  **Content creation:** LLMs can be used to create content such as dialogue, scripts, and even entire stories. This can help writers, filmmakers, and other creators to generate ideas and develop their projects.

Some popular examples of LLMs include:

1.  **Chatbots:** Many companies use LLMs to power their chatbots, which can help customers with simple transactions, answer frequently asked questions, and even provide customer support.
2.  **Virtual assistants:** LLMs are used in virtual assistants like Siri, Google Assistant, and Alexa to provide information, set reminders, and even control smart home devices.
3.  **Language translation:** LLMs are used in language translation tools like Google Translate to provide accurate and context-specific translations.

Overall, LLMs have the potential to revolutionize the way we interact with technology, from simple tasks like search and customer support to more complex tasks like content creation and language learning. As LLMs continue to evolve, we can expect to see even more innovative applications and uses for these powerful tools. [end of text]




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_0  #
################################################################################################################################################################################################################################
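The same table is reprinted for every run below, with only the `xp=` path changing. The command structure can be captured once in a small helper — a sketch, with the flag meanings read directly off the table (`-df` functions, `-dl` loops; `-dn`/`-dp`/`-dt` select per-node/per-process/per-thread, cluster-wide needs no extra flag); `maqao` itself is assumed to be on `PATH`.

```python
# Assemble a `maqao lprof` display command from the LEVEL/REPORT table above.
# Flag mapping is taken directly from the table rows, nothing else is assumed.
LEVEL_FLAGS = {"Functions": "-df", "Loops": "-dl"}
REPORT_FLAGS = {"Cluster-wide": None, "Per-node": "-dn",
                "Per-process": "-dp", "Per-thread": "-dt"}

def lprof_command(level: str, report: str, xp_path: str) -> str:
    parts = ["maqao", "lprof", LEVEL_FLAGS[level]]
    extra = REPORT_FLAGS[report]
    if extra is not None:
        parts.append(extra)
    parts.append(f"xp={xp_path}")
    return " ".join(parts)
```

For example, `lprof_command("Loops", "Per-thread", xp)` reproduces the last row of the table for any of the `lprof_npsu_run_*` experiment paths.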


OMP: pid 2243 tid 2243 thread 0 bound to OS proc set {0}
OMP: pid 2243 tid 2310 thread 1 bound to OS proc set {32}
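The `OMP:` lines record each OpenMP thread's CPU affinity (here, thread 0 pinned to proc 0 and thread 1 to proc 32). A small parser — a sketch assuming exactly the line format shown in this log — turns them into a thread-to-proc map for checking placement across runs:

```python
import re

# Matches lines of the form emitted above, e.g.:
#   OMP: pid 2243 tid 2310 thread 1 bound to OS proc set {32}
BIND_RE = re.compile(
    r"OMP: pid \d+ tid \d+ thread (\d+) bound to OS proc set \{([\d,]+)\}")

def parse_bindings(log_text: str) -> dict[int, list[int]]:
    """Return {openmp_thread_id: [os_proc_ids]} from a captured log."""
    bindings = {}
    for m in BIND_RE.finditer(log_text):
        bindings[int(m.group(1))] = [int(p) for p in m.group(2).split(",")]
    return bindings
```

Feeding it the two lines above yields `{0: [0], 1: [32]}`; the same parser works for the larger runs further down.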




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_1



OMP: pid 2340 tid 2340 thread 0 bound to OS proc set {0}
OMP: pid 2340 tid 2407 thread 1 bound to OS proc set {16}
OMP: pid 2340 tid 2408 thread 2 bound to OS proc set {32}
OMP: pid 2340 tid 2409 thread 3 bound to OS proc set {48}




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_2



OMP: pid 2437 tid 2437 thread 0 bound to OS proc set {0}
OMP: pid 2437 tid 2505 thread 2 bound to OS proc set {16}
OMP: pid 2437 tid 2506 thread 3 bound to OS proc set {24}
OMP: pid 2437 tid 2507 thread 4 bound to OS proc set {32}
OMP: pid 2437 tid 2504 thread 1 bound to OS proc set {8}
OMP: pid 2437 tid 2508 thread 5 bound to OS proc set {40}
OMP: pid 2437 tid 2509 thread 6 bound to OS proc set {48}
OMP: pid 2437 tid 2510 thread 7 bound to OS proc set {56}




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_3



OMP: pid 2543 tid 2543 thread 0 bound to OS proc set {0}
OMP: pid 2543 tid 2610 thread 1 bound to OS proc set {4}
OMP: pid 2543 tid 2611 thread 2 bound to OS proc set {8}
OMP: pid 2543 tid 2612 thread 3 bound to OS proc set {12}
OMP: pid 2543 tid 2613 thread 4 bound to OS proc set {16}
OMP: pid 2543 tid 2617 thread 8 bound to OS proc set {32}
OMP: pid 2543 tid 2614 thread 5 bound to OS proc set {20}
OMP: pid 2543 tid 2615 thread 6 bound to OS proc set {24}
OMP: pid 2543 tid 2618 thread 9 bound to OS proc set {36}
OMP: pid 2543 tid 2619 thread 10 bound to OS proc set {40}
OMP: pid 2543 tid 2621 thread 12 bound to OS proc set {48}
OMP: pid 2543 tid 2622 thread 13 bound to OS proc set {52}
OMP: pid 2543 tid 2623 thread 14 bound to OS proc set {56}
OMP: pid 2543 tid 2616 thread 7 bound to OS proc set {28}
OMP: pid 2543 tid 2620 thread 11 bound to OS proc set {44}
OMP: pid 2543 tid 2624 thread 15 bound to OS proc set {60}




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_4



OMP: pid 2651 tid 2651 thread 0 bound to OS proc set {0}
OMP: pid 2651 tid 2720 thread 3 bound to OS proc set {8}
OMP: pid 2651 tid 2718 thread 1 bound to OS proc set {2}
OMP: pid 2651 tid 2719 thread 2 bound to OS proc set {5}
OMP: pid 2651 tid 2726 thread 9 bound to OS proc set {24}
OMP: pid 2651 tid 2721 thread 4 bound to OS proc set {10}
OMP: pid 2651 tid 2722 thread 5 bound to OS proc set {13}
OMP: pid 2651 tid 2729 thread 12 bound to OS proc set {32}
OMP: pid 2651 tid 2730 thread 13 bound to OS proc set {35}
OMP: pid 2651 tid 2724 thread 7 bound to OS proc set {18}
OMP: pid 2651 tid 2734 thread 17 bound to OS proc set {46}
OMP: pid 2651 tid 2731 thread 14 bound to OS proc set {37}
OMP: pid 2651 tid 2725 thread 8 bound to OS proc set {21}
OMP: pid 2651 tid 2728 thread 11 bound to OS proc set {29}
OMP: pid 2651 tid 2723 thread 6 bound to OS proc set {16}
OMP: pid 2651 tid 2735 thread 18 bound to OS proc set {48}
OMP: pid 2651 tid 2727 thread 10 bound to OS proc set {27}
OMP: pid 2651 tid 2733 thread 16 bound to OS proc set {43}
OMP: pid 2651 tid 2732 thread 15 bound to OS proc set {40}
OMP: pid 2651 tid 2736 thread 19 bound to OS proc set {51}
OMP: pid 2651 tid 2738 thread 21 bound to OS proc set {56}
OMP: pid 2651 tid 2737 thread 20 bound to OS proc set {54}
OMP: pid 2651 tid 2739 thread 22 bound to OS proc set {59}
OMP: pid 2651 tid 2740 thread 23 bound to OS proc set {62}
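Across the runs the bindings follow a spread placement: 2 threads at stride 32, 4 at stride 16, 8 at stride 8, 16 at stride 4, and here 24 threads with gaps of 2 or 3 procs. The sketch below checks that property for this run; the total of 64 logical cores is an inference from those stride patterns and the highest proc id (62) seen here, not something the log states.

```python
# Proc ids of threads 0..23 as reported in the binding lines above.
procs = [0, 2, 5, 8, 10, 13, 16, 18, 21, 24, 27, 29,
         32, 35, 37, 40, 43, 46, 48, 51, 54, 56, 59, 62]

def is_evenly_spread(procs: list[int], ncores: int) -> bool:
    """True if consecutive bindings differ by floor(ncores/n) or ceil(ncores/n),
    i.e. threads are spread as evenly as possible over ncores procs."""
    n = len(procs)
    gaps = [b - a for a, b in zip(procs, procs[1:])]
    return min(gaps) >= ncores // n and max(gaps) <= -(-ncores // n)
```

With 24 threads over an assumed 64 cores, every gap should be 2 or 3, which holds for the list above; a clustered placement such as `[0, 1, 2, 60]` fails the check.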




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_5  #
################################################################################################################################################################################################################################


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
OMP: pid 2767 tid 2767 thread 0 bound to OS proc set {0}
OMP: pid 2767 tid 2835 thread 2 bound to OS proc set {4}
OMP: pid 2767 tid 2834 thread 1 bound to OS proc set {2}
OMP: pid 2767 tid 2836 thread 3 bound to OS proc set {6}
OMP: pid 2767 tid 2838 thread 5 bound to OS proc set {10}
OMP: pid 2767 tid 2845 thread 12 bound to OS proc set {24}
OMP: pid 2767 tid 2837 thread 4 bound to OS proc set {8}
OMP: pid 2767 tid 2850 thread 17 bound to OS proc set {34}
OMP: pid 2767 tid 2839 thread 6 bound to OS proc set {12}
OMP: pid 2767 tid 2840 thread 7 bound to OS proc set {14}
OMP: pid 2767 tid 2846 thread 13 bound to OS proc set {26}
OMP: pid 2767 tid 2842 thread 9 bound to OS proc set {18}
OMP: pid 2767 tid 2841 thread 8 bound to OS proc set {16}
OMP: pid 2767 tid 2849 thread 16 bound to OS proc set {32}
OMP: pid 2767 tid 2843 thread 10 bound to OS proc set {20}
OMP: pid 2767 tid 2848 thread 15 bound to OS proc set {30}
OMP: pid 2767 tid 2852 thread 19 bound to OS proc set {38}
OMP: pid 2767 tid 2851 thread 18 bound to OS proc set {36}
OMP: pid 2767 tid 2844 thread 11 bound to OS proc set {22}
OMP: pid 2767 tid 2854 thread 21 bound to OS proc set {42}
OMP: pid 2767 tid 2853 thread 20 bound to OS proc set {40}
OMP: pid 2767 tid 2847 thread 14 bound to OS proc set {28}
OMP: pid 2767 tid 2857 thread 24 bound to OS proc set {48}
OMP: pid 2767 tid 2861 thread 28 bound to OS proc set {56}
OMP: pid 2767 tid 2858 thread 25 bound to OS proc set {50}
OMP: pid 2767 tid 2855 thread 22 bound to OS proc set {44}
OMP: pid 2767 tid 2863 thread 30 bound to OS proc set {60}
OMP: pid 2767 tid 2859 thread 26 bound to OS proc set {52}
OMP: pid 2767 tid 2862 thread 29 bound to OS proc set {58}
OMP: pid 2767 tid 2856 thread 23 bound to OS proc set {46}
OMP: pid 2767 tid 2860 thread 27 bound to OS proc set {54}
OMP: pid 2767 tid 2864 thread 31 bound to OS proc set {62}

Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_6  #
################################################################################################################################################################################################################################


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
OMP: pid 2892 tid 2959 thread 1 bound to OS proc set {1}
OMP: pid 2892 tid 2961 thread 3 bound to OS proc set {4}
OMP: pid 2892 tid 2892 thread 0 bound to OS proc set {0}
OMP: pid 2892 tid 2960 thread 2 bound to OS proc set {3}
OMP: pid 2892 tid 2962 thread 4 bound to OS proc set {6}
OMP: pid 2892 tid 2967 thread 9 bound to OS proc set {14}
OMP: pid 2892 tid 2971 thread 13 bound to OS proc set {21}
OMP: pid 2892 tid 2964 thread 6 bound to OS proc set {9}
OMP: pid 2892 tid 2966 thread 8 bound to OS proc set {13}
OMP: pid 2892 tid 2975 thread 17 bound to OS proc set {27}
OMP: pid 2892 tid 2973 thread 15 bound to OS proc set {24}
OMP: pid 2892 tid 2968 thread 10 bound to OS proc set {16}
OMP: pid 2892 tid 2963 thread 5 bound to OS proc set {8}
OMP: pid 2892 tid 2965 thread 7 bound to OS proc set {11}
OMP: pid 2892 tid 2969 thread 11 bound to OS proc set {17}
OMP: pid 2892 tid 2970 thread 12 bound to OS proc set {19}
OMP: pid 2892 tid 2991 thread 33 bound to OS proc set {53}
OMP: pid 2892 tid 2977 thread 19 bound to OS proc set {30}
OMP: pid 2892 tid 2990 thread 32 bound to OS proc set {52}
OMP: pid 2892 tid 2976 thread 18 bound to OS proc set {29}
OMP: pid 2892 tid 2993 thread 35 bound to OS proc set {56}
OMP: pid 2892 tid 2992 thread 34 bound to OS proc set {55}
OMP: pid 2892 tid 2972 thread 14 bound to OS proc set {22}
OMP: pid 2892 tid 2974 thread 16 bound to OS proc set {26}
OMP: pid 2892 tid 2982 thread 24 bound to OS proc set {39}
OMP: pid 2892 tid 2978 thread 20 bound to OS proc set {32}
OMP: pid 2892 tid 2994 thread 36 bound to OS proc set {58}
OMP: pid 2892 tid 2987 thread 29 bound to OS proc set {47}
OMP: pid 2892 tid 2980 thread 22 bound to OS proc set {35}
OMP: pid 2892 tid 2984 thread 26 bound to OS proc set {42}
OMP: pid 2892 tid 2995 thread 37 bound to OS proc set {60}
OMP: pid 2892 tid 2979 thread 21 bound to OS proc set {34}
OMP: pid 2892 tid 2996 thread 38 bound to OS proc set {61}
OMP: pid 2892 tid 2983 thread 25 bound to OS proc set {40}
OMP: pid 2892 tid 2986 thread 28 bound to OS proc set {45}
OMP: pid 2892 tid 2981 thread 23 bound to OS proc set {37}
OMP: pid 2892 tid 2988 thread 30 bound to OS proc set {48}
OMP: pid 2892 tid 2997 thread 39 bound to OS proc set {63}
OMP: pid 2892 tid 2985 thread 27 bound to OS proc set {43}
OMP: pid 2892 tid 2989 thread 31 bound to OS proc set {50}

Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_7  #
################################################################################################################################################################################################################################


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
OMP: pid 3024 tid 3091 thread 1 bound to OS proc set {1}
OMP: pid 3024 tid 3024 thread 0 bound to OS proc set {0}
OMP: pid 3024 tid 3093 thread 3 bound to OS proc set {4}
OMP: pid 3024 tid 3092 thread 2 bound to OS proc set {2}
OMP: pid 3024 tid 3095 thread 5 bound to OS proc set {6}
OMP: pid 3024 tid 3094 thread 4 bound to OS proc set {5}
OMP: pid 3024 tid 3098 thread 8 bound to OS proc set {10}
OMP: pid 3024 tid 3100 thread 10 bound to OS proc set {13}
OMP: pid 3024 tid 3097 thread 7 bound to OS proc set {9}
OMP: pid 3024 tid 3101 thread 11 bound to OS proc set {14}
OMP: pid 3024 tid 3099 thread 9 bound to OS proc set {12}
OMP: pid 3024 tid 3096 thread 6 bound to OS proc set {8}
OMP: pid 3024 tid 3108 thread 18 bound to OS proc set {24}
OMP: pid 3024 tid 3104 thread 14 bound to OS proc set {18}
OMP: pid 3024 tid 3102 thread 12 bound to OS proc set {16}
OMP: pid 3024 tid 3105 thread 15 bound to OS proc set {20}
OMP: pid 3024 tid 3123 thread 33 bound to OS proc set {44}
OMP: pid 3024 tid 3103 thread 13 bound to OS proc set {17}
OMP: pid 3024 tid 3106 thread 16 bound to OS proc set {21}
OMP: pid 3024 tid 3124 thread 34 bound to OS proc set {46}
OMP: pid 3024 tid 3109 thread 19 bound to OS proc set {25}
OMP: pid 3024 tid 3125 thread 35 bound to OS proc set {47}
OMP: pid 3024 tid 3107 thread 17 bound to OS proc set {23}
OMP: pid 3024 tid 3110 thread 20 bound to OS proc set {27}
OMP: pid 3024 tid 3111 thread 21 bound to OS proc set {28}
OMP: pid 3024 tid 3114 thread 24 bound to OS proc set {32}
OMP: pid 3024 tid 3115 thread 25 bound to OS proc set {33}
OMP: pid 3024 tid 3118 thread 28 bound to OS proc set {37}
OMP: pid 3024 tid 3113 thread 23 bound to OS proc set {31}
OMP: pid 3024 tid 3117 thread 27 bound to OS proc set {36}
OMP: pid 3024 tid 3119 thread 29 bound to OS proc set {39}
OMP: pid 3024 tid 3122 thread 32 bound to OS proc set {43}
OMP: pid 3024 tid 3128 thread 38 bound to OS proc set {51}
OMP: pid 3024 tid 3116 thread 26 bound to OS proc set {35}
OMP: pid 3024 tid 3121 thread 31 bound to OS proc set {41}
OMP: pid 3024 tid 3120 thread 30 bound to OS proc set {40}
OMP: pid 3024 tid 3126 thread 36 bound to OS proc set {48}
OMP: pid 3024 tid 3127 thread 37 bound to OS proc set {50}
OMP: pid 3024 tid 3129 thread 39 bound to OS proc set {52}
OMP: pid 3024 tid 3131 thread 41 bound to OS proc set {55}
OMP: pid 3024 tid 3135 thread 45 bound to OS proc set {60}
OMP: pid 3024 tid 3132 thread 42 bound to OS proc set {56}
OMP: pid 3024 tid 3134 thread 44 bound to OS proc set {59}
OMP: pid 3024 tid 3137 thread 47 bound to OS proc set {63}
OMP: pid 3024 tid 3130 thread 40 bound to OS proc set {54}
OMP: pid 3024 tid 3136 thread 46 bound to OS proc set {62}
OMP: pid 3024 tid 3133 thread 43 bound to OS proc set {58}
OMP: pid 3024 tid 3112 thread 22 bound to OS proc set {29}

Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_8  #
################################################################################################################################################################################################################################


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
OMP: pid 3164 tid 3231 thread 1 bound to OS proc set {1}
OMP: pid 3164 tid 3232 thread 2 bound to OS proc set {2}
OMP: pid 3164 tid 3164 thread 0 bound to OS proc set {0}
OMP: pid 3164 tid 3239 thread 9 bound to OS proc set {10}
OMP: pid 3164 tid 3238 thread 8 bound to OS proc set {9}
OMP: pid 3164 tid 3237 thread 7 bound to OS proc set {8}
OMP: pid 3164 tid 3240 thread 10 bound to OS proc set {11}
OMP: pid 3164 tid 3242 thread 12 bound to OS proc set {13}
OMP: pid 3164 tid 3241 thread 11 bound to OS proc set {12}
OMP: pid 3164 tid 3247 thread 17 bound to OS proc set {19}
OMP: pid 3164 tid 3233 thread 3 bound to OS proc set {3}
OMP: pid 3164 tid 3248 thread 18 bound to OS proc set {20}
OMP: pid 3164 tid 3234 thread 4 bound to OS proc set {4}
OMP: pid 3164 tid 3235 thread 5 bound to OS proc set {5}
OMP: pid 3164 tid 3236 thread 6 bound to OS proc set {6}
OMP: pid 3164 tid 3243 thread 13 bound to OS proc set {15}
OMP: pid 3164 tid 3244 thread 14 bound to OS proc set {16}
OMP: pid 3164 tid 3245 thread 15 bound to OS proc set {17}
OMP: pid 3164 tid 3280 thread 50 bound to OS proc set {58}
OMP: pid 3164 tid 3263 thread 33 bound to OS proc set {38}
OMP: pid 3164 tid 3246 thread 16 bound to OS proc set {18}
OMP: pid 3164 tid 3278 thread 48 bound to OS proc set {55}
OMP: pid 3164 tid 3262 thread 32 bound to OS proc set {37}
OMP: pid 3164 tid 3281 thread 51 bound to OS proc set {59}
OMP: pid 3164 tid 3249 thread 19 bound to OS proc set {22}
OMP: pid 3164 tid 3251 thread 21 bound to OS proc set {24}
OMP: pid 3164 tid 3254 thread 24 bound to OS proc set {27}
OMP: pid 3164 tid 3260 thread 30 bound to OS proc set {34}
OMP: pid 3164 tid 3252 thread 22 bound to OS proc set {25}
OMP: pid 3164 tid 3265 thread 35 bound to OS proc set {40}
OMP: pid 3164 tid 3255 thread 25 bound to OS proc set {29}
OMP: pid 3164 tid 3253 thread 23 bound to OS proc set {26}
OMP: pid 3164 tid 3279 thread 49 bound to OS proc set {56}
OMP: pid 3164 tid 3250 thread 20 bound to OS proc set {23}
OMP: pid 3164 tid 3258 thread 28 bound to OS proc set {32}
OMP: pid 3164 tid 3275 thread 45 bound to OS proc set {52}
OMP: pid 3164 tid 3266 thread 36 bound to OS proc set {41}
OMP: pid 3164 tid 3264 thread 34 bound to OS proc set {39}
OMP: pid 3164 tid 3256 thread 26 bound to OS proc set {30}
OMP: pid 3164 tid 3276 thread 46 bound to OS proc set {53}
OMP: pid 3164 tid 3270 thread 40 bound to OS proc set {46}
OMP: pid 3164 tid 3272 thread 42 bound to OS proc set {48}
OMP: pid 3164 tid 3283 thread 53 bound to OS proc set {61}
OMP: pid 3164 tid 3259 thread 29 bound to OS proc set {33}
OMP: pid 3164 tid 3271 thread 41 bound to OS proc set {47}
OMP: pid 3164 tid 3277 thread 47 bound to OS proc set {54}
OMP: pid 3164 tid 3268 thread 38 bound to OS proc set {44}
OMP: pid 3164 tid 3274 thread 44 bound to OS proc set {51}
OMP: pid 3164 tid 3267 thread 37 bound to OS proc set {42}
OMP: pid 3164 tid 3261 thread 31 bound to OS proc set {35}
OMP: pid 3164 tid 3282 thread 52 bound to OS proc set {60}
OMP: pid 3164 tid 3269 thread 39 bound to OS proc set {45}
OMP: pid 3164 tid 3257 thread 27 bound to OS proc set {31}
OMP: pid 3164 tid 3285 thread 55 bound to OS proc set {63}
OMP: pid 3164 tid 3273 thread 43 bound to OS proc set {49}
OMP: pid 3164 tid 3284 thread 54 bound to OS proc set {62}

Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9

To display your profiling results:
################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                            #
################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_9  #
################################################################################################################################################################################################################################
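The report matrix above is the cross product of two levels (`-df` for functions, `-dl` for loops) and four scopes (cluster-wide, `-dn`, `-dp`, `-dt`). A small illustrative helper that reconstructs any cell of the table — the flag mapping is taken directly from the table itself; `xp_path` stands for whatever experiment path your own run printed:

```python
# Flags as listed in the table above: -df/-dl select the report level,
# -dn/-dp/-dt narrow the scope (cluster-wide needs no extra flag).
LEVEL_FLAGS = {"Functions": "-df", "Loops": "-dl"}
SCOPE_FLAGS = {"Cluster-wide": "", "Per-node": "-dn",
               "Per-process": "-dp", "Per-thread": "-dt"}

def lprof_command(level, scope, xp_path):
    """Build the 'maqao lprof' command for one cell of the table."""
    parts = ["maqao", "lprof", LEVEL_FLAGS[level]]
    if SCOPE_FLAGS[scope]:
        parts.append(SCOPE_FLAGS[scope])
    parts.append(f"xp={xp_path}")
    return " ".join(parts)

print(lprof_command("Loops", "Per-thread", "/path/to/lprof_npsu_run_9"))
# maqao lprof -dl -dt xp=/path/to/lprof_npsu_run_9
```

This only strings together the commands already shown above; run them from a shell on the machine that holds the experiment directory.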


* [MAQAO] Info: Detected 1 Lprof instances in ip-172-31-18-66. 
If this is incorrect, rerun with number-processes-per-node=X
OMP: pid 3312 tid 3379 thread 1 bound to OS proc set {1}
OMP: pid 3312 tid 3380 thread 2 bound to OS proc set {2}
OMP: pid 3312 tid 3381 thread 3 bound to OS proc set {3}
OMP: pid 3312 tid 3312 thread 0 bound to OS proc set {0}
OMP: pid 3312 tid 3382 thread 4 bound to OS proc set {4}
OMP: pid 3312 tid 3385 thread 7 bound to OS proc set {7}
OMP: pid 3312 tid 3383 thread 5 bound to OS proc set {5}
OMP: pid 3312 tid 3387 thread 9 bound to OS proc set {9}
OMP: pid 3312 tid 3384 thread 6 bound to OS proc set {6}
OMP: pid 3312 tid 3386 thread 8 bound to OS proc set {8}
OMP: pid 3312 tid 3390 thread 12 bound to OS proc set {12}
OMP: pid 3312 tid 3391 thread 13 bound to OS proc set {13}
OMP: pid 3312 tid 3388 thread 10 bound to OS proc set {10}
OMP: pid 3312 tid 3389 thread 11 bound to OS proc set {11}
OMP: pid 3312 tid 3395 thread 17 bound to OS proc set {17}
OMP: pid 3312 tid 3392 thread 14 bound to OS proc set {14}
OMP: pid 3312 tid 3396 thread 18 bound to OS proc set {18}
OMP: pid 3312 tid 3393 thread 15 bound to OS proc set {15}
OMP: pid 3312 tid 3397 thread 19 bound to OS proc set {19}
OMP: pid 3312 tid 3394 thread 16 bound to OS proc set {16}
OMP: pid 3312 tid 3398 thread 20 bound to OS proc set {20}
OMP: pid 3312 tid 3402 thread 24 bound to OS proc set {24}
OMP: pid 3312 tid 3412 thread 34 bound to OS proc set {34}
OMP: pid 3312 tid 3403 thread 25 bound to OS proc set {25}
OMP: pid 3312 tid 3427 thread 49 bound to OS proc set {49}
OMP: pid 3312 tid 3413 thread 35 bound to OS proc set {35}
OMP: pid 3312 tid 3404 thread 26 bound to OS proc set {26}
OMP: pid 3312 tid 3411 thread 33 bound to OS proc set {33}
OMP: pid 3312 tid 3428 thread 50 bound to OS proc set {50}
OMP: pid 3312 tid 3410 thread 32 bound to OS proc set {32}
OMP: pid 3312 tid 3415 thread 37 bound to OS proc set {37}
OMP: pid 3312 tid 3400 thread 22 bound to OS proc set {22}
OMP: pid 3312 tid 3399 thread 21 bound to OS proc set {21}
OMP: pid 3312 tid 3429 thread 51 bound to OS proc set {51}
OMP: pid 3312 tid 3414 thread 36 bound to OS proc set {36}
OMP: pid 3312 tid 3401 thread 23 bound to OS proc set {23}
OMP: pid 3312 tid 3407 thread 29 bound to OS proc set {29}
OMP: pid 3312 tid 3405 thread 27 bound to OS proc set {27}
OMP: pid 3312 tid 3426 thread 48 bound to OS proc set {48}
OMP: pid 3312 tid 3408 thread 30 bound to OS proc set {30}
OMP: pid 3312 tid 3406 thread 28 bound to OS proc set {28}
OMP: pid 3312 tid 3417 thread 39 bound to OS proc set {39}
OMP: pid 3312 tid 3419 thread 41 bound to OS proc set {41}
OMP: pid 3312 tid 3431 thread 53 bound to OS proc set {53}
OMP: pid 3312 tid 3416 thread 38 bound to OS proc set {38}
OMP: pid 3312 tid 3420 thread 42 bound to OS proc set {42}
OMP: pid 3312 tid 3422 thread 44 bound to OS proc set {44}
OMP: pid 3312 tid 3418 thread 40 bound to OS proc set {40}
OMP: pid 3312 tid 3430 thread 52 bound to OS proc set {52}
OMP: pid 3312 tid 3432 thread 54 bound to OS proc set {54}
OMP: pid 3312 tid 3435 thread 57 bound to OS proc set {57}
OMP: pid 3312 tid 3425 thread 47 bound to OS proc set {47}
OMP: pid 3312 tid 3423 thread 45 bound to OS proc set {45}
OMP: pid 3312 tid 3434 thread 56 bound to OS proc set {56}
OMP: pid 3312 tid 3433 thread 55 bound to OS proc set {55}
OMP: pid 3312 tid 3409 thread 31 bound to OS proc set {31}
OMP: pid 3312 tid 3421 thread 43 bound to OS proc set {43}
OMP: pid 3312 tid 3436 thread 58 bound to OS proc set {58}
OMP: pid 3312 tid 3439 thread 61 bound to OS proc set {61}
OMP: pid 3312 tid 3441 thread 63 bound to OS proc set {63}
OMP: pid 3312 tid 3424 thread 46 bound to OS proc set {46}
OMP: pid 3312 tid 3437 thread 59 bound to OS proc set {59}
OMP: pid 3312 tid 3440 thread 62 bound to OS proc set {62}
OMP: pid 3312 tid 3438 thread 60 bound to OS proc set {60}
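The binding lines emitted by the OpenMP runtime above can be checked programmatically, e.g. to confirm that every thread is pinned to its own core. A minimal sketch, assuming the exact `OMP: pid … tid … thread N bound to OS proc set {M}` format shown in this log (single-core proc sets only):

```python
import re

# Matches the single-core binding lines emitted by the OpenMP runtime above,
# e.g. "OMP: pid 3312 tid 3379 thread 1 bound to OS proc set {1}"
BIND_RE = re.compile(
    r"OMP: pid (\d+) tid (\d+) thread (\d+) bound to OS proc set \{(\d+)\}"
)

def parse_bindings(log_text):
    """Return {omp_thread_id: os_proc} for every binding line in the log."""
    bindings = {}
    for m in BIND_RE.finditer(log_text):
        pid, tid, thread, proc = (int(g) for g in m.groups())
        bindings[thread] = proc
    return bindings

sample = """\
OMP: pid 3312 tid 3379 thread 1 bound to OS proc set {1}
OMP: pid 3312 tid 3312 thread 0 bound to OS proc set {0}
OMP: pid 3312 tid 3380 thread 2 bound to OS proc set {2}
"""

b = parse_bindings(sample)
# Each thread should be pinned to a distinct core (no oversubscription).
assert len(set(b.values())) == len(b)
```

Feeding the full 64-line block above through `parse_bindings` would show a one-to-one thread-to-core mapping across procs 0-63.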
what is a LLM? and why should i care?
A Large Language Model (LLM) is a type of artificial intelligence (AI) that can process and generate human-like text based on the input it receives. LLMs are trained on vast amounts of text data, which allows them to learn patterns, relationships, and context in language. This enables them to generate coherent and often informative responses to user queries.

Here are some reasons why you should care about LLMs:

1.  **Improved search and content generation:** LLMs can help improve search results by providing more accurate and relevant information. They can also generate content such as articles, blog posts, and even entire books.
2.  **Personalized experiences:** LLMs can be used to create personalized experiences for users. For example, they can generate customized news feeds, product recommendations, or even entire stories based on a user's interests and preferences.
3.  **Customer support:** LLMs can be used to provide 24/7 customer support by answering frequently asked questions, helping with simple transactions, and even handling complex issues.
4.  **Language learning:** LLMs can help language learners by providing personalized feedback, practicing conversations, and even generating language learning materials.
5.  **Content creation:** LLMs can be used to create content such as dialogue, scripts, and even entire stories. This can help writers, filmmakers, and other creators to generate ideas and develop their projects.

Some popular examples of LLMs include:

1.  **Chatbots:** Many companies use LLMs to power their chatbots, which can help customers with simple transactions, answer frequently asked questions, and even provide customer support.
2.  **Virtual assistants:** LLMs are used in virtual assistants like Siri, Google Assistant, and Alexa to provide information, set reminders, and even control smart home devices.
3.  **Language translation:** LLMs are used in language translation tools like Google Translate to provide accurate and context-specific translations.

Overall, LLMs have the potential to revolutionize the way we interact with technology, from simple tasks like search and customer support to more complex tasks like content creation and language learning. As LLMs continue to evolve, we can expect to see even more innovative applications and uses for these powerful tools. [end of text]




Your experiment path is /home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10

To display your profiling results:
#################################################################################################################################################################################################################################
#    LEVEL    |     REPORT     |                                                                                            COMMAND                                                                                             #
#################################################################################################################################################################################################################################
#  Functions  |  Cluster-wide  |  maqao lprof -df xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10      #
#  Functions  |  Per-node      |  maqao lprof -df -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#  Functions  |  Per-process   |  maqao lprof -df -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#  Functions  |  Per-thread    |  maqao lprof -df -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#  Loops      |  Cluster-wide  |  maqao lprof -dl xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10      #
#  Loops      |  Per-node      |  maqao lprof -dl -dn xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#  Loops      |  Per-process   |  maqao lprof -dl -dp xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#  Loops      |  Per-thread    |  maqao lprof -dl -dt xp=/home/eoseret/Tools/QaaS/qaas_runs/ip-172-31-18-66/175-768-6804/llama.cpp/run/oneview_runs/multicore/armclang_4/oneview_results_1757689479_v2/tools/lprof_npsu_run_10  #
#################################################################################################################################################################################################################################