Smith found that tagged prefetching reduces cache miss ratios in a unified (instruction and data) cache by between 50% and 90% for a set of trace-driven simulations.
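Tagged prefetching, as described by Smith, marks each prefetched block with a tag bit; a demand miss or the first reference to a tagged block triggers a prefetch of the next sequential block. A minimal sketch, assuming a direct-mapped cache and one-block lookahead (the cache geometry and trace here are illustrative, not from the source):

```python
# Sketch of tagged (one-block-lookahead) prefetching in a direct-mapped
# cache. Block numbering, cache size, and the trace are assumptions.

class TaggedPrefetchCache:
    def __init__(self, num_blocks):
        self.num_blocks = num_blocks          # direct-mapped: block -> slot
        self.slots = [None] * num_blocks      # block number held in each slot
        self.tag = [False] * num_blocks       # True if the block was prefetched
        self.misses = 0
        self.refs = 0

    def _install(self, block, prefetched):
        slot = block % self.num_blocks
        self.slots[slot] = block
        self.tag[slot] = prefetched

    def access(self, block):
        self.refs += 1
        slot = block % self.num_blocks
        if self.slots[slot] == block:
            # Hit: the first touch of a prefetched block triggers the
            # next sequential prefetch, then the tag is cleared.
            if self.tag[slot]:
                self.tag[slot] = False
                self._install(block + 1, prefetched=True)
        else:
            # Miss: demand-fetch the block and prefetch its successor.
            self.misses += 1
            self._install(block, prefetched=False)
            self._install(block + 1, prefetched=True)

cache = TaggedPrefetchCache(64)
for b in range(100):          # purely sequential trace: ideal case
    cache.access(b)
print(cache.misses, cache.refs)
```

On a purely sequential trace only the first reference misses, which illustrates why the technique helps most on streaming access patterns.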
We found that the achievable cache miss rate varies only slightly between different chunk sizes, and that there is no benefit to using chunks smaller than 256 bytes for our benchmarks.
On the other hand, the long cache miss latencies of directory protocols stem from the inefficiencies implied by the distributed nature of the protocols and of the underlying scalable network.
1) On a local cache miss, the client consults its hint information for the block.
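The hint-lookup step can be sketched as a table mapping block ids to the node believed to hold the block, with a fallback to the home/directory node when no hint exists. All names here (`hints`, `home_node`, `locate_block`) are illustrative assumptions, not the paper's API:

```python
# Hedged sketch: on a local cache miss, consult the hint table first;
# fall back to the home node when no hint is recorded. Hints may be
# stale, so a real protocol would validate the reply and retry via the
# home node on a wrong hint.

def locate_block(block_id, hints, home_node):
    """Return the node to query for `block_id` after a local cache miss."""
    return hints.get(block_id, home_node)

hints = {"b17": "node3", "b42": "node7"}
print(locate_block("b17", hints, "home"))   # hinted holder: node3
print(locate_block("b99", hints, "home"))   # no hint -> home node
```

The point of the hint is to avoid a round trip through the home node on the common case where the hint is still accurate.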
Mispredict Rates and Cache Miss Rates for Programs Transformed with Overpruning SCBP.
This hit-bypassing code requires about 25 instructions, compared with the 320 to 510 cycles for a full call into the MemSpy simulator on a cache miss.
When a cache miss occurs, the larger routing table takes more search time than the original one.
2. L1 cache miss rates for the SPARC machine (OB: miss rate for the compilation-optimized 16 x 16 blocked version; OL: miss rate for the compilation-optimized 16 x 16 data-laid version; Imp: percentage miss-rate improvement of the data-laid version over the blocked version). Grid Size OB OL Imp 33 x 32 0.
Our experimental results (Sections 5 and 6) show that more than 8 threads can increase the cache miss rate too much, so we assume a maximum of 8 threads.
When this occurs, the memory available to the file system cache decreases, and the cache miss
While solving these equations is difficult in general, we show that it is also unnecessary, since mathematical techniques for manipulating Diophantine equations let us relatively easily compute the possible solutions, or reduce their number, where each solution corresponds to a potential cache miss.
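The tractability claim rests on a standard property of linear Diophantine equations: an equation a*x + b*y = c has integer solutions iff gcd(a, b) divides c, and when it does, the full solution set is a one-parameter family derived from a single particular solution. A hedged sketch of that step (not the paper's actual method; the conflict equation itself is an illustrative assumption):

```python
# Illustrative: conflict conditions of the form a*i + b*j = c reduce to a
# linear Diophantine equation. Solvability is decided by a gcd test, and
# the particular solution below generates the whole solution family --
# which is what allows counting or pruning potential cache misses
# without brute-force enumeration of index pairs.

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve_linear_diophantine(a, b, c):
    """One solution (x0, y0) of a*x + b*y == c, or None if unsolvable."""
    g, x, y = ext_gcd(a, b)
    if c % g != 0:
        return None                      # no integer solutions at all
    k = c // g
    return x * k, y * k                  # others: (x0 + t*b//g, y0 - t*a//g)

print(solve_linear_diophantine(6, 10, 8))   # gcd(6, 10) = 2 divides 8
print(solve_linear_diophantine(4, 6, 7))    # gcd(4, 6) = 2 does not divide 7
```

The `None` case is the pruning step: whole classes of index pairs can be ruled out as miss candidates without testing any of them individually.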
Otherwise, the cache miss queries are forwarded by the cache server to the authoritative servers.
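The forwarding step above can be sketched as a cache server that answers hits locally, forwards misses to an authoritative server, and caches the reply. The record store and server structure are illustrative assumptions, not the source's implementation:

```python
# Hedged sketch of miss forwarding: answer from the local cache when
# possible; otherwise forward the query upstream and cache the answer.

AUTHORITATIVE = {"example.com": "93.184.216.34"}   # toy authoritative data

class CacheServer:
    def __init__(self, authoritative):
        self.authoritative = authoritative
        self.cache = {}
        self.forwarded = 0

    def resolve(self, name):
        if name in self.cache:              # cache hit: answer locally
            return self.cache[name]
        self.forwarded += 1                 # cache miss: forward upstream
        answer = self.authoritative.get(name)
        if answer is not None:
            self.cache[name] = answer       # populate cache for next time
        return answer

server = CacheServer(AUTHORITATIVE)
print(server.resolve("example.com"))   # first query: forwarded
print(server.resolve("example.com"))   # second query: served from cache
print(server.forwarded)
```

Only the first query for a name reaches the authoritative server; subsequent queries are absorbed by the cache until the entry is evicted or expires.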