| Loop Id: 92 | Module: exec | Source: advec_cell_kernel.f90:90-91 | Coverage: 1.23% |
|---|---|---|---|
0x41f96e VMOVUPD (%R8,%RAX,1),%YMM5 [3]
0x41f974 VMOVUPD (%RCX,%RAX,1),%YMM6 [7]
0x41f979 VADDPD (%RSI,%RAX,1),%YMM5,%YMM8 [4]
0x41f97e VADDPD (%R12,%RAX,1),%YMM6,%YMM7 [5]
0x41f984 VSUBPD %YMM7,%YMM8,%YMM9
0x41f988 VADDPD (%RBX,%RAX,1),%YMM9,%YMM3 [1]
0x41f98d VMOVUPD %YMM3,(%R11,%RAX,1) [2]
0x41f993 VMOVUPD (%RCX,%RAX,1),%YMM11 [7]
0x41f998 VSUBPD (%RSI,%RAX,1),%YMM11,%YMM10 [4]
0x41f99d VADDPD %YMM3,%YMM10,%YMM15
0x41f9a1 VMOVUPD %YMM15,(%R10,%RAX,1) [9]
0x41f9a7 VMOVUPD 0x20(%R8,%RAX,1),%YMM12 [3]
0x41f9ae VMOVUPD 0x20(%RCX,%RAX,1),%YMM0 [7]
0x41f9b4 VADDPD 0x20(%RAX,%RSI,1),%YMM12,%YMM14 [6]
0x41f9ba VADDPD 0x20(%R12,%RAX,1),%YMM0,%YMM2 [5]
0x41f9c1 VSUBPD %YMM2,%YMM14,%YMM13
0x41f9c5 VADDPD 0x20(%RBX,%RAX,1),%YMM13,%YMM4 [1]
0x41f9cb VMOVUPD %YMM4,0x20(%R11,%RAX,1) [2]
0x41f9d2 VMOVUPD 0x20(%RCX,%RAX,1),%YMM1 [7]
0x41f9d8 VSUBPD 0x20(%RAX,%RSI,1),%YMM1,%YMM5 [6]
0x41f9de VADDPD %YMM4,%YMM5,%YMM8
0x41f9e2 VMOVUPD %YMM8,0x20(%R10,%RAX,1) [9]
0x41f9e9 VMOVUPD 0x40(%R8,%RAX,1),%YMM6 [3]
0x41f9f0 VMOVUPD 0x40(%RCX,%RAX,1),%YMM9 [7]
0x41f9f6 VADDPD 0x40(%RAX,%RSI,1),%YMM6,%YMM7 [6]
0x41f9fc VADDPD 0x40(%R12,%RAX,1),%YMM9,%YMM3 [5]
0x41fa03 VSUBPD %YMM3,%YMM7,%YMM11
0x41fa07 VADDPD 0x40(%RBX,%RAX,1),%YMM11,%YMM10 [1]
0x41fa0d VMOVUPD %YMM10,0x40(%R11,%RAX,1) [2]
0x41fa14 VMOVUPD 0x40(%RCX,%RAX,1),%YMM15 [7]
0x41fa1a VSUBPD 0x40(%RAX,%RSI,1),%YMM15,%YMM12 [6]
0x41fa20 VADDPD %YMM10,%YMM12,%YMM14
0x41fa25 VMOVUPD %YMM14,0x40(%R10,%RAX,1) [9]
0x41fa2c VMOVUPD 0x60(%R8,%RAX,1),%YMM0 [3]
0x41fa33 VMOVUPD 0x60(%RCX,%RAX,1),%YMM13 [7]
0x41fa39 VADDPD 0x60(%RAX,%RSI,1),%YMM0,%YMM2 [6]
0x41fa3f VADDPD 0x60(%R12,%RAX,1),%YMM13,%YMM4 [5]
0x41fa46 VSUBPD %YMM4,%YMM2,%YMM1
0x41fa4a VADDPD 0x60(%RBX,%RAX,1),%YMM1,%YMM5 [1]
0x41fa50 VMOVUPD %YMM5,0x60(%R11,%RAX,1) [2]
0x41fa57 VMOVUPD 0x60(%RCX,%RAX,1),%YMM8 [7]
0x41fa5d MOV 0xa0(%RSP),%RDX [8]
0x41fa65 VSUBPD 0x60(%RAX,%RSI,1),%YMM8,%YMM6 [6]
0x41fa6b VADDPD %YMM5,%YMM6,%YMM7
0x41fa6f VMOVUPD %YMM7,0x60(%R10,%RAX,1) [9]
0x41fa76 SUB $-0x80,%RAX
0x41fa7a CMP %RDX,%RAX
0x41fa7d JNE 41f96e
/scratch_na/users/xoserete/qaas_runs/171-214-9740/intel/CloverLeafFC/build/CloverLeafFC/CloverLeaf_ref/kernels/advec_cell_kernel.f90: 90 - 91 |
-------------------------------------------------------------------------------- |
90: pre_vol(j,k)=volume(j,k)+(vol_flux_x(j+1,k )-vol_flux_x(j,k)+vol_flux_y(j ,k+1)-vol_flux_y(j,k)) |
91: post_vol(j,k)=pre_vol(j,k)-(vol_flux_x(j+1,k )-vol_flux_x(j,k)) |
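Lines 90-91 both evaluate the x-flux difference, and the generated code mirrors that: in every unrolled block the second statement reloads its two vol_flux_x operands and redoes the subtraction instead of reusing a value already computed for line 90 (the extra VMOVUPD/VSUBPD pair in front of each post_vol store). Below is a minimal sketch of hoisting that difference into a scalar temporary; dx is a hypothetical name, the evaluation grouping of line 90 is preserved, and whether the compiler actually emits fewer loads for it (it may be held back by possible aliasing between the arrays) is an assumption to test, not a guaranteed win.

```fortran
! Sketch only: dx is a hypothetical REAL(KIND=8) local; it would also need to be
! added to the enclosing OpenMP PRIVATE list. The grouping of line 90 is preserved.
dx            = vol_flux_x(j+1,k) - vol_flux_x(j,k)
pre_vol(j,k)  = volume(j,k) + (dx + vol_flux_y(j,k+1) - vol_flux_y(j,k))
post_vol(j,k) = pre_vol(j,k) - dx
```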
| Path / |
| Metric | Value |
|---|---|
| CQA speedup if no scalar integer | 1.00 |
| CQA speedup if FP arith vectorized | 1.26 |
| CQA speedup if fully vectorized | 2.00 |
| CQA speedup if no inter-iteration dependency | NA |
| CQA speedup if next bottleneck killed | 1.14 |
| Bottlenecks | P1, P5 |
| Function | __advec_cell_kernel_module_MOD_advec_cell_kernel._omp_fn.0 |
| Source | advec_cell_kernel.f90:90-91 |
| Source loop unroll info | not unrolled or unrolled with no peel/tail loop |
| Source loop unroll confidence level | max |
| Unroll/vectorization loop type | NA |
| Unroll factor | NA |
| CQA cycles | 12.00 |
| CQA cycles if no scalar integer | 12.00 |
| CQA cycles if FP arith vectorized | 9.50 |
| CQA cycles if fully vectorized | 6.00 |
| Front-end cycles | 10.50 |
| DIV/SQRT cycles | 0.50 |
| P0 cycles | 12.00 |
| P1 cycles | 9.67 |
| P2 cycles | 9.67 |
| P3 cycles | 4.00 |
| P4 cycles | 12.00 |
| P5 cycles | 0.50 |
| P6 cycles | 4.00 |
| P7 cycles | 4.00 |
| P8 cycles | 4.00 |
| P9 cycles | 0.00 |
| P10 cycles | 9.67 |
| P11 cycles | 0.00 |
| Inter-iter dependencies cycles | 1 |
| FE+BE cycles (UFS) | 12.25 |
| Stall cycles (UFS) | 1.07 |
| Nb insns | 48.00 |
| Nb uops | 47.00 |
| Nb loads | 29.00 |
| Nb stores | 8.00 |
| Nb stack references | 1.00 |
| FLOP/cycle | 8.00 |
| Nb FLOP add-sub | 96.00 |
| Nb FLOP mul | 0.00 |
| Nb FLOP fma | 0.00 |
| Nb FLOP div | 0.00 |
| Nb FLOP rcp | 0.00 |
| Nb FLOP sqrt | 0.00 |
| Nb FLOP rsqrt | 0.00 |
| Bytes/cycle | 96.67 |
| Bytes prefetched | 0.00 |
| Bytes loaded | 904.00 |
| Bytes stored | 256.00 |
| Stride 0 | 1.00 |
| Stride 1 | 5.00 |
| Stride n | 3.00 |
| Stride unknown | 0.00 |
| Stride indirect | 0.00 |
| Vectorization ratio all | 100.00 |
| Vectorization ratio load | 100.00 |
| Vectorization ratio store | 100.00 |
| Vectorization ratio mul | NA |
| Vectorization ratio add_sub | 100.00 |
| Vectorization ratio fma | NA |
| Vectorization ratio div_sqrt | NA |
| Vectorization ratio other | NA |
| Vector-efficiency ratio all | 50.00 |
| Vector-efficiency ratio load | 50.00 |
| Vector-efficiency ratio store | 50.00 |
| Vector-efficiency ratio mul | NA |
| Vector-efficiency ratio add_sub | 50.00 |
| Vector-efficiency ratio fma | NA |
| Vector-efficiency ratio div_sqrt | NA |
| Vector-efficiency ratio other | NA |
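As a sanity check, the headline figures in this table can be reproduced from the raw counts, and the projected speedups follow from the port pressure detailed further down. The 2.00x case assumes an AVX-512 capable target, so that the packed-operation count halves for the same work per iteration; this is an interpretation of the 50% vector-efficiency ratio, not something stated explicitly in the report.

```latex
% Throughput, per assembly iteration (CQA cycles = 12):
\mathrm{FLOP/cycle}  = \frac{96}{12} = 8, \qquad
\mathrm{bytes/cycle} = \frac{904 + 256}{12} \approx 96.67
% Projected speedups (512-bit vectors assumed available):
\mathrm{speedup_{fully\ vectorized}} = \frac{12\ \mathrm{cycles}}{6\ \mathrm{cycles}} = 2.00, \qquad
\mathrm{speedup_{FP\ arith\ only}}   = \frac{12}{9.5} \approx 1.26
```

Widening only the FP arithmetic leaves the 29 loads spread over three load ports (about 9.7 cycles) as the limiter, which is roughly where the 9.50-cycle estimate comes from.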
| Path / |
| Function | __advec_cell_kernel_module_MOD_advec_cell_kernel._omp_fn.0 |
| Source file and lines | advec_cell_kernel.f90:90-91 |
| Module | exec |
| nb instructions | 48 |
| nb uops | 47 |
| loop length | 277 |
| used x86 registers | 10 |
| used mmx registers | 0 |
| used xmm registers | 0 |
| used ymm registers | 16 |
| used zmm registers | 0 |
| nb stack references | 1 |
| micro-operation queue | 10.50 cycles |
| front end | 10.50 cycles |
| | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| uops | 0.50 | 12.00 | 9.67 | 9.67 | 4.00 | 12.00 | 0.50 | 4.00 | 4.00 | 4.00 | 0.00 | 9.67 |
| cycles | 0.50 | 12.00 | 9.67 | 9.67 | 4.00 | 12.00 | 0.50 | 4.00 | 4.00 | 4.00 | 0.00 | 9.67 |
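These per-port figures follow directly from the instruction mix in the listing: 24 packed double-precision add/sub operations, 29 loads (28 vector plus one scalar MOV) and 8 vector stores, each store contributing half a µop to each of the four store-related ports (port roles are read from the per-instruction table below).

```latex
P1 = P5 = \frac{24\ \text{packed add/sub}}{2\ \text{FP ports}} = 12.00, \qquad
P2 = P3 = P11 = \frac{29\ \text{loads}}{3\ \text{load ports}} \approx 9.67, \qquad
P4 = P7 = P8 = P9 = 8 \times 0.5 = 4.00
```

The maximum over the ports, 12 cycles on P1/P5, is the dispatch bound reported below and matches the 12-cycle CQA estimate.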
| Cycles executing div or sqrt instructions | NA |
| Longest recurrence chain latency (RecMII) | 1.00 |
| FE+BE cycles | 12.25 |
| Stall cycles | 1.07 |
| LB full (events) | 1.43 |
| Front-end | 10.50 |
| Dispatch | 12.00 |
| Data deps. | 1.00 |
| Overall L1 | 12.00 |
| Vectorization ratios | |
| all | 100% |
| load | 100% |
| store | 100% |
| mul | NA (no mul vectorizable/vectorized instructions) |
| add-sub | 100% |
| fma | NA (no fma vectorizable/vectorized instructions) |
| div/sqrt | NA (no div/sqrt vectorizable/vectorized instructions) |
| other | NA (no other vectorizable/vectorized instructions) |
| Vector-efficiency ratios | |
| all | 50% |
| load | 50% |
| store | 50% |
| mul | NA (no mul vectorizable/vectorized instructions) |
| add-sub | 50% |
| fma | NA (no fma vectorizable/vectorized instructions) |
| div/sqrt | NA (no div/sqrt vectorizable/vectorized instructions) |
| other | NA (no other vectorizable/vectorized instructions) |
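The 50% vector-efficiency figures above mean the loop runs 256-bit YMM code; the 2.00x "fully vectorized" projection assumes 512-bit vectors are available on the target. Below is a hedged sketch of two ways to ask gfortran (the name mangling points at GCC) for 512-bit code; the loop bounds are assumptions taken from the CloverLeaf reference source, and wider vectors may simply shift the pressure from P1/P5 to the load ports, so this is a change to measure rather than a guaranteed gain.

```fortran
! Sketch, assuming GCC/gfortran and an AVX-512 capable CPU.
! Option 1 (whole file): add -mprefer-vector-width=512 to the existing -O3/-march flags.
! Option 2 (this loop only): request an 8-double SIMD length on the inner loop.
!$OMP SIMD SIMDLEN(8)
DO j = x_min-2, x_max+2                        ! bounds assumed, not taken from this report
  pre_vol(j,k)  = volume(j,k) + (vol_flux_x(j+1,k) - vol_flux_x(j,k)   &
                               + vol_flux_y(j,k+1) - vol_flux_y(j,k))
  post_vol(j,k) = pre_vol(j,k) - (vol_flux_x(j+1,k) - vol_flux_x(j,k))
ENDDO
```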
| Instruction | Nb FU | P0 | P1 | P2 | P3 | P4 | P5 | P6 | P7 | P8 | P9 | P10 | P11 | Latency | Recip. throughput |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VMOVUPD (%R8,%RAX,1),%YMM5 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VMOVUPD (%RCX,%RAX,1),%YMM6 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VADDPD (%RSI,%RAX,1),%YMM5,%YMM8 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD (%R12,%RAX,1),%YMM6,%YMM7 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VSUBPD %YMM7,%YMM8,%YMM9 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VADDPD (%RBX,%RAX,1),%YMM9,%YMM3 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VMOVUPD %YMM3,(%R11,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD (%RCX,%RAX,1),%YMM11 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VSUBPD (%RSI,%RAX,1),%YMM11,%YMM10 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD %YMM3,%YMM10,%YMM15 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMOVUPD %YMM15,(%R10,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x20(%R8,%RAX,1),%YMM12 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VMOVUPD 0x20(%RCX,%RAX,1),%YMM0 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VADDPD 0x20(%RAX,%RSI,1),%YMM12,%YMM14 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD 0x20(%R12,%RAX,1),%YMM0,%YMM2 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VSUBPD %YMM2,%YMM14,%YMM13 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VADDPD 0x20(%RBX,%RAX,1),%YMM13,%YMM4 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VMOVUPD %YMM4,0x20(%R11,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x20(%RCX,%RAX,1),%YMM1 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VSUBPD 0x20(%RAX,%RSI,1),%YMM1,%YMM5 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD %YMM4,%YMM5,%YMM8 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMOVUPD %YMM8,0x20(%R10,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x40(%R8,%RAX,1),%YMM6 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VMOVUPD 0x40(%RCX,%RAX,1),%YMM9 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VADDPD 0x40(%RAX,%RSI,1),%YMM6,%YMM7 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD 0x40(%R12,%RAX,1),%YMM9,%YMM3 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VSUBPD %YMM3,%YMM7,%YMM11 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VADDPD 0x40(%RBX,%RAX,1),%YMM11,%YMM10 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VMOVUPD %YMM10,0x40(%R11,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x40(%RCX,%RAX,1),%YMM15 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VSUBPD 0x40(%RAX,%RSI,1),%YMM15,%YMM12 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD %YMM10,%YMM12,%YMM14 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMOVUPD %YMM14,0x40(%R10,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x60(%R8,%RAX,1),%YMM0 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VMOVUPD 0x60(%RCX,%RAX,1),%YMM13 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| VADDPD 0x60(%RAX,%RSI,1),%YMM0,%YMM2 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD 0x60(%R12,%RAX,1),%YMM13,%YMM4 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VSUBPD %YMM4,%YMM2,%YMM1 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VADDPD 0x60(%RBX,%RAX,1),%YMM1,%YMM5 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VMOVUPD %YMM5,0x60(%R11,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| VMOVUPD 0x60(%RCX,%RAX,1),%YMM8 | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 0-1 | 0.33 |
| MOV 0xa0(%RSP),%RDX | 1 | 0 | 0 | 0.33 | 0.33 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.33 | 1 | 0.33 |
| VSUBPD 0x60(%RAX,%RSI,1),%YMM8,%YMM6 | 1 | 0 | 0.50 | 0.33 | 0.33 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.33 | 3 | 0.50 |
| VADDPD %YMM5,%YMM6,%YMM7 | 1 | 0 | 0.50 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 0.50 |
| VMOVUPD %YMM7,0x60(%R10,%RAX,1) | 1 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0.50 | 0.50 | 0.50 | 0 | 0 | 0-1 | 0.50 |
| SUB $-0x80,%RAX | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.17 |
| CMP %RDX,%RAX | 1 | 0.20 | 0.20 | 0 | 0 | 0 | 0.20 | 0.20 | 0 | 0 | 0 | 0.20 | 0 | 1 | 0.20 |
| JNE 41f96e <__advec_cell_kernel_module_MOD_advec_cell_kernel._omp_fn.0+0x295e> | 1 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0.50 | 0 | 0 | 0 | 0 | 0 | 0 | 0.50 |
