
I am confused by the perf events cache-misses, L1-icache-load-misses, L1-dcache-load-misses, and LLC-load-misses. When I perf stat all of them, the numbers don't seem consistent:

%$: sudo perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations,L1-dcache-load-misses,L1-dcache-loads,L1-dcache-stores,L1-icache-load-misses,LLC-loads,LLC-load-misses,LLC-stores,LLC-store-misses,LLC-prefetches ./my_app

       523,288,816      cache-references                                              (22.89%)
       205,331,370      cache-misses              #   39.239 % of all cache refs      (31.53%)
    10,163,373,365      cycles                                                        (39.62%)
    13,739,845,761      instructions              #    1.35  insn per cycle           (47.43%)
     2,520,022,243      branches                                                      (54.90%)
            20,341      faults
               147      migrations
       237,794,728      L1-dcache-load-misses     #    6.80% of all L1-dcache hits    (62.43%)
     3,495,080,007      L1-dcache-loads                                               (69.95%)
     2,039,344,725      L1-dcache-stores                                              (69.95%)
       531,452,853      L1-icache-load-misses                                         (70.11%)
        77,062,627      LLC-loads                                                     (70.47%)
        27,462,249      LLC-load-misses           #   35.64% of all LL-cache hits     (69.09%)
        15,039,473      LLC-stores                                                    (15.15%)
         3,829,429      LLC-store-misses                                              (15.30%)

The L1-* and LLC-* events are easy to understand, as I can tell they are read from hardware counters in the CPU.

But how does perf calculate the cache-misses event? From my understanding, if cache-misses counts the number of memory accesses that cannot be served by the CPU cache, then shouldn't it be equal to LLC-load-misses + LLC-store-misses? Clearly, in my case, cache-misses is much higher than the last-level-cache miss numbers.

The same confusion applies to cache-references: it is much lower than L1-dcache-loads and much higher than LLC-loads + LLC-stores.
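To put numbers on it, from the run above: LLC-load-misses + LLC-store-misses = 27,462,249 + 3,829,429 ≈ 31.3 million, while cache-misses is 205,331,370, roughly 6.6 times higher. Likewise, cache-references (523,288,816) is about 6.7 times lower than L1-dcache-loads (3,495,080,007) and about 5.7 times higher than LLC-loads + LLC-stores (77,062,627 + 15,039,473 = 92,102,100).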

My Linux kernel and CPU info:

%$: uname -r

4.10.0-22-generic

%$: lscpu

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 158
Model name:            Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz
Stepping:              9
CPU MHz:               885.754
CPU max MHz:           4200.0000
CPU min MHz:           800.0000
BogoMIPS:              7584.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              6144K
NUMA node0 CPU(s):     0-3
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp
LouisYe
  • [so] is for programming questions, not questions about using or configuring Unix and its utilities. [unix.se] or [su] would be better places for questions like this. – Barmar Mar 07 '19 at 02:53
  • @Barmar The question is not about configuring anything. `perf` is a tool for measuring performance-related metrics, and the question is about what some of these metrics mean. The Linux tag may not be very relevant to the question, but perf is a Linux tool, so it's at least marginally relevant. – Hadi Brais Mar 07 '19 at 04:04
  • @HadiBrais I said "using or configuring Unix and its utilities", and he's "using its utilities" (it's a canned comment, I don't tailor it to each question). Actually, the question seems to be more about the design of Linux. But it's not about programming (he didn't post any code). – Barmar Mar 07 '19 at 15:34
  • @Barmar Thanks for providing the links, but I don't think Stack Overflow should be limited to just "programming questions". My question here is about CPU architecture and related tools. It is about how programmers collect performance data, and Linux just happens to be the most popular platform. I believe any good programmer, especially one who programs in C/C++, should be aware of the features provided by the CPU, especially the CPU cache, in order to produce programs with good performance. It is definitely worth posting if any of the related tools is confusing. – LouisYe Mar 07 '19 at 19:00
  • BTW, I made this post because I didn't find the answer in another [related StackOverflow post](https://stackoverflow.com/questions/14674463/why-doesnt-perf-report-cache-misses). – LouisYe Mar 07 '19 at 19:00

1 Answer


The built-in perf events that you are interested in map to the following hardware performance monitoring events on your processor:

  523,288,816      cache-references        (architectural event: LLC Reference)                             
  205,331,370      cache-misses            (architectural event: LLC Misses) 
  237,794,728      L1-dcache-load-misses   L1D.REPLACEMENT
3,495,080,007      L1-dcache-loads         MEM_INST_RETIRED.ALL_LOADS
2,039,344,725      L1-dcache-stores        MEM_INST_RETIRED.ALL_STORES                     
  531,452,853      L1-icache-load-misses   ICACHE_64B.IFTAG_MISS
   77,062,627      LLC-loads               OFFCORE_RESPONSE (MSR bits 0, 16, 30-37)
   27,462,249      LLC-load-misses         OFFCORE_RESPONSE (MSR bits 0, 17, 26-29, 30-37)
   15,039,473      LLC-stores              OFFCORE_RESPONSE (MSR bits 1, 16, 30-37)
    3,829,429      LLC-store-misses        OFFCORE_RESPONSE (MSR bits 1, 17, 26-29, 30-37)

All of these events are documented in the Intel SDM Volume 3. For more information on how perf events map to native events, see Hardware cache events and perf and How does perf use the offcore events?
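As a quick sanity check (just a sketch), you can count the two architectural LLC events directly via raw encodings and compare them against the generic events. Per the SDM, LLC Reference is event 0x2E with umask 0x4F and LLC Misses is event 0x2E with umask 0x41, which in perf's raw rUUEE notation become r4f2e and r412e:

%$: sudo perf stat -e cache-references,cache-misses,r4f2e,r412e ./my_app

If the mapping above is right, the raw counts should track cache-references and cache-misses closely (small differences are possible when the events are multiplexed onto the counters).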

But how does perf calculate the cache-misses event? From my understanding, if cache-misses counts the number of memory accesses that cannot be served by the CPU cache, then shouldn't it be equal to LLC-load-misses + LLC-store-misses? Clearly, in my case, cache-misses is much higher than the last-level-cache miss numbers.

LLC-load-misses and LLC-store-misses count only cacheable data read requests and RFO requests, respectively, that miss in the L3 cache. LLC-load-misses also includes reads for page walking. Both exclude hardware and software prefetches (unlike on Haswell, where some types of prefetch requests are counted).

cache-misses also includes prefetch requests and code fetch requests that miss in the L3 cache. All of these events only count core-originating requests. They include requests from uops irrespective of whether they end up retiring and irrespective of the source of the response. It's unclear to me how a prefetch promoted to demand is counted.

Overall, I think cache-misses is always larger than LLC-load-misses + LLC-store-misses and cache-references is always larger than LLC-loads + LLC-stores.
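If you want to check this inequality on your workload without the multiplexing visible in your output (the percentages in the last column), you could count just these three events in a single run, for example:

%$: sudo perf stat -e cache-misses,LLC-load-misses,LLC-store-misses ./my_app

With only three events, perf should not need to multiplex them, so you would be comparing directly measured counts rather than scaled estimates.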

The same confusion applies to cache-references: it is much lower than L1-dcache-loads and much higher than LLC-loads + LLC-stores.

It's only guaranteed that cache-references is larger than cache-misses, because the former counts requests irrespective of whether they miss in the L3. It's normal for L1-dcache-loads to be larger than cache-references, because core-originated loads usually occur only when you have load instructions and because of the cache locality exhibited by many programs. But it's not necessarily always the case, because of hardware prefetches.
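Again, a minimal sketch for observing this ordering directly on your program, measuring the relevant events together in one run:

%$: sudo perf stat -e cache-references,L1-dcache-loads,LLC-loads,LLC-stores ./my_app

You would typically see L1-dcache-loads well above cache-references, and cache-references above LLC-loads + LLC-stores, as in your original output.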

The L1-* and LLC-* events are easy to understand, as I can tell they are read from hardware counters in the CPU.

No, it's a trap. They are not easy to understand.

Hadi Brais
  • Thank you for the answer; now I understand why `cache-references` is higher than `llc-loads`+`llc-stores`, as the former counts both demand and speculative requests. It looks like you suggest that `cache-references` doesn't count any L1 cache access, am I right? – LouisYe Mar 08 '19 at 00:00
  • @LouisYe If a cacheable memory access misses in both the L1 and the L2, then it will be counted by `cache-references`. Otherwise, if it hits in the L1, then no, it will not be counted by `cache-references`. – Hadi Brais Mar 08 '19 at 00:05
  • Note that there's also `longest_lat_cache.miss` and `longest_lat_cache.reference`, which, at least on my system, count exactly the same as `cache-misses` and `cache-references`, and `offcore_response.demand_data_rd.any_response`, which corresponds to `LLC-loads`. – Zulan Mar 08 '19 at 11:01
  • May I ask what you mean by cacheable and non-cacheable memory accesses in this situation? – Billy Sep 03 '20 at 14:58
  • @Billy These terms are defined and discussed in the Intel SDM Volume 3. They are basically about how a memory access is handled by the caches. Uncacheable accesses usually don't cause cache lines to be filled in the caches. – Hadi Brais Sep 03 '20 at 15:43