An Associativity Threshold Phenomenon in Set-Associative Caches

04/11/2023
by   Michael A. Bender, et al.

In an α-way set-associative cache, the cache is partitioned into disjoint sets of size α, and each item can be cached in only one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits over fully associative caches, e.g., in terms of latency or concurrency, but they often incur more cache misses. As the set size α decreases, these benefits increase, but the paging cost worsens. In this paper we characterize the performance of an α-way set-associative LRU cache of total size k, as a function of α = α(k). We prove the following, assuming that sets are selected using a fully random hash function:

- For α = ω(log k), the paging cost of an α-way set-associative LRU cache is within additive O(1) of that of a fully-associative LRU cache of size (1-o(1))k, with probability 1 - 1/poly(k), for all request sequences of length poly(k).

- For α = o(log k), and for all c = O(1) and r = O(1), the paging cost of an α-way set-associative LRU cache is not within a factor c of that of a fully-associative LRU cache of size k/r, for some request sequence of length O(k^1.01).

- For α = ω(log k), if the hash function can be occasionally changed, the paging cost of an α-way set-associative LRU cache is within a factor 1 + o(1) of that of a fully-associative LRU cache of size (1-o(1))k, with probability 1 - 1/poly(k), for request sequences of arbitrary (e.g., super-polynomial) length.

Some of our results generalize to paging algorithms other than LRU, such as least-frequently-used (LFU).
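The cache model described in the abstract can be sketched as a small simulation. This is an illustrative sketch, not the paper's construction: the class and method names are invented here, and Python's built-in hash stands in for the fully random hash function the paper assumes.

```python
from collections import OrderedDict

class SetAssociativeLRU:
    """Sketch of an alpha-way set-associative cache of total size k,
    with LRU eviction within each set. The set for a key is chosen
    by hashing the key (a stand-in for a fully random hash)."""

    def __init__(self, k, alpha):
        assert k % alpha == 0, "total size must be a multiple of alpha"
        self.alpha = alpha
        self.num_sets = k // alpha
        # One recency-ordered dict per set; leftmost entry is least recent.
        self.sets = [OrderedDict() for _ in range(self.num_sets)]
        self.misses = 0

    def access(self, key):
        """Request `key`; return True on a hit, False on a miss."""
        s = self.sets[hash(key) % self.num_sets]
        if key in s:
            s.move_to_end(key)        # hit: refresh recency
            return True
        self.misses += 1              # miss: fetch, evicting if the set is full
        if len(s) >= self.alpha:
            s.popitem(last=False)     # evict the least-recently used item in the set
        s[key] = None
        return False
```

The point of the partition is visible here: an eviction decision only consults the α items in one set, never the full cache of size k, which is what buys the latency and concurrency benefits while risking extra misses when a hot set overflows.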

