In the private cache organization, most L1 cache misses can be handled by the local L2 cache, which reduces the number of remote on-chip L2 cache accesses; since such requests do not have to cross the interconnection network, the miss latency is reduced.
The Joseph and Grunwald study focused primarily on data cache misses and did not compare Markov prefetching with techniques designed specifically for prefetching instructions.
Many of these cache misses can be avoided if we augment the demand fetch policy of the cache with a data prefetch operation.
While the total gap amount has no impact on first-level cache misses, it does have an effect on higher levels of the memory hierarchy.
The experiments show, through running-time and processor-event measurements, that some matrices cause many more cache misses in the left-looking algorithm than in the multifrontal one, while others cause many more cache misses in the multifrontal algorithm than in the left-looking one.
The first approach tends to increase the number of coherence messages per coherence event, as well as the number of cache misses, in those cases in which several memory lines share a directory entry.
Server cache misses cause disk accesses, which are an order of magnitude slower than server cache hits.
Given these trends in branch predictions and cache misses, we would expect all benchmarks but go, perl, and ijpeg to improve in performance under SCBP.
We assume the level-L cache is large enough to hold all of the needed data; therefore there are never any level-L cache misses.
Memory performance can also be measured with hardware-based counters that keep track of events such as cache misses in a running system.
In the trace-driven simulation model, however, instructions may wait at different stages in the pipeline because of resource conflicts, incorrect speculative execution, data dependencies, serialization, cache misses, and many other reasons.