An Imitation Learning Approach for Cache Replacement

by   Evan Zheran Liu, et al.

Program execution speed critically depends on increasing cache hits, as cache hits are orders of magnitude faster than misses. To increase cache hits, we focus on the problem of cache replacement: choosing which cache line to evict upon inserting a new line. This is challenging because it requires planning far ahead and currently there is no known practical solution. As a result, current replacement policies typically resort to heuristics designed for specific common access patterns, which fail on more diverse and complex access patterns. In contrast, we propose an imitation learning approach to automatically learn cache access patterns by leveraging Belady's, an oracle policy that computes the optimal eviction decision given the future cache accesses. While directly applying Belady's is infeasible since the future is unknown, we train a policy conditioned only on past accesses that accurately approximates Belady's even on diverse and complex access patterns, and call this approach Parrot. When evaluated on 13 of the most memory-intensive SPEC applications, Parrot increases cache hit rates by 20% over the current state of the art. In addition, on a large-scale web search benchmark, Parrot increases cache hit rates by 61% over a commonly implemented LRU policy. We release a Gym environment to facilitate research in this area, as data is plentiful, and further advancements can have significant real-world impact.





1 Introduction

Caching is a universal concept in computer systems that bridges the performance gap between different levels of data storage hierarchies, found everywhere from databases to operating systems to CPUs (Jouppi, 1990; Harty and Cheriton, 1992; Xu et al., 2013; Cidon et al., 2016). Correctly selecting what data is stored in caches is critical for latency, as accessing the data directly from the cache (a cache hit) is orders of magnitude faster than retrieving the data from a lower level in the storage hierarchy (a cache miss). For example, Cidon et al. (2016) show that improving cache hit rates of web-scale applications by just 1% can decrease total latency by as much as 35%.

Thus, general techniques for increasing cache hit rates would significantly improve performance at all levels of the software stack. Broadly, two main avenues for increasing cache hit rates exist: (i) avoiding future cache misses by proactively prefetching the appropriate data into the cache beforehand; and (ii) strategically selecting which data to evict from the cache when making space for new data (cache replacement). Simply increasing cache sizes is a tempting third avenue, but is generally prohibitively expensive.

This work focuses on single-level cache replacement (Figure 1). When a new block of data (referred to as a line) is added to the cache (i.e., due to a cache miss), an existing cache line must be evicted from the cache to make space for the new line. To do this, during cache misses, a cache replacement policy takes as inputs the currently accessed line and the lines in the cache and outputs which of the cache lines to evict.

Figure 1: Cache replacement. At t=0, line D is accessed, causing a cache miss. The replacement policy chooses between lines A, B, and C in the cache and in this case evicts C. At t=1, line A is accessed and is already in the cache, causing a cache hit. No action from the replacement policy is needed. At t=2, line C is accessed, causing another cache miss. The replacement policy could have avoided this miss by evicting a different line at t=0.

Prior work frequently relies on manually-engineered heuristics to capture the most common cache access patterns, such as evicting the most recently used (MRU) or least recently used (LRU) cache lines, or trying to identify the cache lines that are frequently reused vs. those that are not (Qureshi et al., 2007; Jaleel et al., 2010; Jain and Lin, 2016; Shi et al., 2019). These heuristics perform well on the specific simple access patterns they target, but they only target a small fraction of all possible access patterns, and consequently they perform poorly on programs with more diverse and complex access patterns. Current cache replacement policies resort to heuristics as practical theoretical foundations have not yet been developed (Beckmann and Sanchez, 2017).

We propose a new approach for learning cache replacement policies by leveraging Belady's optimal policy (Belady, 1966) in the framework of imitation learning (IL), and name this approach Parrot (parrots are known for their ability to imitate others). Belady's optimal policy (Belady's for short) is an oracle policy that computes the theoretically optimal cache eviction decision based on knowledge of future cache accesses, which we propose to approximate with a policy that only conditions on the past accesses. While our main goal is to establish (imitation) learned replacement policies as a proof-of-concept, we note that deploying such learned policies requires solving practical challenges, e.g., model latency may overshadow gains due to better cache replacement. We address some of these challenges in Section 4.5 and highlight promising future directions in Section 7.

Hawkeye (Jain and Lin, 2016) and Glider (Shi et al., 2019) were the first to propose learning from Belady's. They train a binary classifier to predict if a cache line will soon be reused (cache-friendly) or not (cache-averse), evicting the cache-averse lines before the cache-friendly ones and relying on a traditional heuristic to determine which lines are evicted first within the cache-friendly and cache-averse groups. Training such a binary classifier avoids the challenges (e.g., compounding errors) of directly learning a policy, but relying on the traditional heuristic heavily limits the expressivity of the policy class that these methods optimize over, which prevents them from accurately approximating Belady's. In contrast, our work is the first to propose cache replacement as an IL problem, which allows us to directly train a replacement policy end-to-end over a much more expressive policy class to approximate Belady's. This represents a novel way of leveraging Belady's and provides a new framework for learning end-to-end replacement policies.

Concretely, this paper makes the following contributions:

  • We cast cache replacement as an imitation learning problem, leveraging Belady’s in a new way (Section 3).

  • We develop a neural architecture for end-to-end cache replacement and several supervised tasks that further improve its performance over standard IL (Section 4).

  • Our proposed approach, Parrot, exceeds the state-of-the-art replacement policy’s hit rates by over 20% on memory-intensive CPU benchmarks. On an industrial-scale web search workload, Parrot improves cache hit rates by 61% over a commonly implemented LRU policy (Section 5).

  • We propose cache replacement as a challenging new IL/RL (reinforcement learning) benchmark involving dynamically changing action spaces, delayed rewards, and significant real-world impact. To that end, we release an associated Gym environment.


2 Cache Preliminaries

We begin with cache preliminaries before formulating cache replacement as learning a policy over a Markov decision process in Section 3. We describe the details relevant to CPU caches, which we evaluate our approach on, but as caching is a general concept, our approach can be extended to other cache structures as well.

A cache is a memory structure that maintains a portion of the data from a larger memory. If the desired data is located in the cache when it is required, this is advantageous, as smaller memories are faster to access than larger memories. Given a memory structure, there is a question of how to best organize it into a cache. In CPUs, caches operate in terms of atomic blocks of memory, or cache lines (typically 64 bytes). This is the minimum granularity of data that can be accessed from the cache.

During a memory access, the cache must be searched for the requested data. Fully-associative caches lay out all data in a single flat structure, but this is generally prohibitively expensive, as locating the requested data requires searching through all data in the cache. Instead, CPU caches are often W-way set-associative caches of size N x W, consisting of N cache sets, where each cache set holds W cache lines. Each line maps to a particular cache set (typically determined by the lower-order bits of the line's address), so only the W lines within that set must be searched.

During execution, programs read from and write to memory addresses by executing load or store instructions. These load/store instructions have unique identifiers known as program counters (PCs). If the address is located in the cache, this is called a cache hit. Otherwise, this is a cache miss, and the data at that address must be retrieved from a larger memory. Once the data is retrieved, it is generally added to the appropriate cache set (as recently accessed lines could be accessed again). Since each cache set can only hold W lines, if a new line is added to a cache set already containing W lines, the cache replacement policy must choose an existing line to replace. This is called a cache eviction, and selecting the optimal line to evict is the cache replacement problem.
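The mechanics above can be made concrete with a small simulator. The following is a minimal sketch (not the paper's evaluation infrastructure): a W-way set-associative cache with a pluggable replacement decision, here hard-coded to LRU; the 64-byte line size and the low-order-bit set mapping follow the description above.

```python
LINE_SIZE = 64  # bytes per cache line, as is typical for CPU caches

class SetAssociativeCache:
    def __init__(self, num_sets, ways):
        self.num_sets = num_sets
        self.ways = ways
        # Each set stores its lines in recency order: index 0 = least recently used.
        self.sets = [[] for _ in range(num_sets)]
        self.hits = self.misses = 0

    def access(self, address):
        """Simulate one memory access; returns True on a cache hit."""
        line = address // LINE_SIZE                   # align to line granularity
        cache_set = self.sets[line % self.num_sets]   # low-order bits pick the set
        if line in cache_set:                         # hit: refresh recency
            cache_set.remove(line)
            cache_set.append(line)
            self.hits += 1
            return True
        self.misses += 1                              # miss: insert, evicting if full
        if len(cache_set) >= self.ways:
            cache_set.pop(0)                          # evict the LRU line
        cache_set.append(line)
        return False
```

For example, on a tiny 2-set, 2-way cache, the access sequence 0, 64, 0 maps to lines 0, 1, 0 and yields miss, miss, hit.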

Belady’s Optimal Policy.

Given knowledge of future cache accesses, Belady's computes the optimal cache eviction decision. Specifically, at each timestep t, Belady's computes the reuse distance d_t(l) for each line l in the cache set, which is defined as the number of total cache accesses until the next access to l. Then, Belady's chooses to evict the line with the highest reuse distance, effectively the line used furthest in the future, i.e., argmax_l d_t(l).
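This definition translates directly into code. The sketch below (function names are ours, not the paper's) computes reuse distances from a future access sequence and evicts the line reused furthest in the future, with lines never reused again given infinite reuse distance:

```python
def reuse_distance(line, future_accesses):
    """Number of accesses until `line` is next accessed; infinite if never."""
    for dist, future_line in enumerate(future_accesses, start=1):
        if future_line == line:
            return dist
    return float("inf")

def belady_evict(cache_lines, future_accesses):
    """Belady's decision: evict the line whose next use is furthest away."""
    return max(cache_lines, key=lambda l: reuse_distance(l, future_accesses))
```

For instance, with cache lines A, B, C and future accesses B, A, C, the reuse distances are 2, 1, 3, so Belady's evicts C.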

3 Casting Cache Replacement as Imitation Learning

We cast cache replacement as learning a policy on an episodic Markov decision process in order to leverage techniques from imitation learning. Specifically, the state s_t at the t-th timestep consists of three components s_t = (m_t, C_t, h_t), where:

  • m_t is the current cache access, consisting of the currently accessed cache line address and the unique program counter of the access.

  • C_t is the cache state, consisting of the W cache line addresses currently in the cache set accessed by m_t (the replacement policy does not require the whole cache state, including other cache sets, to make a decision). A cache set can hold fewer than W cache lines during the first W cache accesses (a small fraction of program execution); in this case, no eviction is needed to insert the line.

  • h_t is the history of all past cache accesses. In practice, we effectively only condition on the past H accesses.

The action set available at a state s_t is defined as follows: During cache misses, i.e., when m_t is not in C_t, the action set consists of the integers {1, …, W}, where action i corresponds to evicting line l_i. Otherwise, during cache hits, the action set consists of a single no-op action, since no line must be evicted.

The transition dynamics are given by the dynamics of the three parts of the state. The dynamics of the next cache access and the cache access history are independent of the action and are defined by the program being executed. Specifically, the next access m_{t+1} is simply the next memory address the program accesses and its associated PC. The t-th access is appended to the history, i.e., h_{t+1} = (h_t, m_t).

The dynamics of the cache state C_t are determined by the actions taken by the replacement policy. A cache hit does not change the cache state, i.e., C_{t+1} = C_t, as the accessed line is already available in the cache. A cache miss replaces the selected line with the newly accessed line, i.e., C_{t+1} = (C_t \ {l_{a_t}}) ∪ {m_t}, where a_t is the chosen action.

The reward r_t is 0 for a cache miss (i.e., m_t not in C_t) and is 1 otherwise, for a cache hit. The goal is to learn a policy π_θ that maximizes the undiscounted total number of cache hits (the reward), Σ_t r_t, for a sequence of cache accesses (m_1, …, m_T).

In this paper, we formulate this task as an imitation learning problem. During training, we can compute the optimal policy π* (Belady's) by leveraging the fact that the future accesses are fixed and known. Then, our approach learns a policy π_θ to approximate the optimal policy without using the future accesses, as future accesses are unknown at test time.

Figure 2: Normalized cache hit rates of Belady's vs. the number of accesses it looks into the future. Achieving 80% of the performance of Belady's with an infinite window size requires accurately computing reuse distances for lines 2600 accesses into the future.

To demonstrate the difficulty of the problem, Figure 2 shows the amount of future information required to match the performance of Belady's on a common computer architecture benchmark (omnetpp, Section 5). We compute this by imposing a future window of size n on Belady's, producing a windowed variant. Within the window, this variant observes exact reuse distances, and it sets the reuse distances of the remaining cache lines (those next used beyond the window) to infinity. Then, it evicts the line with the highest reuse distance, breaking ties randomly. The cache hit rate of this windowed variant is plotted on the y-axis, normalized so that 0 and 1 correspond to the cache hit rates of LRU and of the normal unconstrained version of Belady's, respectively. As the figure shows, a significant amount of future information is required to fully match Belady's performance.
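The windowed variant described above can be sketched as follows; the function name and tie-breaking interface are our own, with ties among beyond-window lines broken randomly as in the figure:

```python
import random

def windowed_belady_evict(cache_lines, future_accesses, window, rng=random):
    """Belady's with lookahead limited to `window` accesses: lines not
    reused within the window get infinite reuse distance."""
    visible = future_accesses[:window]
    distances = {
        line: (visible.index(line) + 1) if line in visible else float("inf")
        for line in cache_lines
    }
    best = max(distances.values())
    # Break ties among the furthest-reused lines randomly.
    return rng.choice([l for l, d in distances.items() if d == best])
```

With a window of 1 and cache lines A, B followed by future accesses A, B, only A is visible, so B (infinite reuse distance) is evicted; with a window of 0, the choice degenerates to a random eviction.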

4 Parrot: Learning to Imitate Belady’s

4.1 Model and Training Overview


Below, we overview the basic architecture of the Parrot policy (Figure 3), which draws on the Transformer (Vaswani et al., 2017) and BiDAF (Seo et al., 2016) architectures. See Appendix A for the full details.

Figure 3: Neural architecture of Parrot.
  1. Embed the current cache access m_t to obtain a memory address embedding and a PC embedding, and pass them through an LSTM to obtain cell state c_t and hidden state h_t.

  2. Keep the past H hidden states [h_{t-H+1}, …, h_t], representing an embedding of the cache access history and current cache access.

  3. Form a context g_i for each cache line l_i in the cache state by embedding each line and attending over the past H hidden states with the line embedding as the query.

  4. Apply a final dense layer and softmax on top of these line contexts to obtain the policy π_θ(a | s_t).

  5. Choose the highest-probability line, argmax_a π_θ(a | s_t), as the replacement action to take at timestep t.


Algorithm 1 summarizes the training algorithm for the Parrot policy π_θ. The high-level strategy is to visit a set of states D_train and then update the parameters θ so that the policy makes the same eviction decision as the optimal policy on each state, via the loss function L(θ).

1:  Initialize policy π_θ
2:  for each training step do
3:     if D_train is uninitialized or stale (every 5000 parameter updates) then
4:         Collect data set D_train of visited states by following π_θ on all accesses
5:     end if
6:     Sample a batch of contiguous accesses from D_train
7:     Warm up policy π_θ on the initial H accesses of the batch
8:     Compute loss L(θ)
9:     Update policy parameters θ based on loss L(θ)
10:  end for
Algorithm 1 Parrot training algorithm

First, we convert a given sequence of consecutive cache accesses into states s_t (Section 4.2), on which we can compute the optimal action with Belady's (lines 3–5). Given the states, we train Parrot with truncated backpropagation through time (lines 6–9). We sample batches of consecutive states and initialize the LSTM hidden state of our policy by warming it up on the first H cache accesses of the batch. Then, we apply our replacement policy to each of the remaining states in order to compute the loss (Sections 4.3 and 4.4), which encourages the learned replacement policy to make the same decisions as Belady's.

4.2 Avoiding Compounding Errors

Since we are only given the cache accesses and not the states, we must determine which replacement policy to follow on these cache accesses to obtain the states in D_train. Naively, one natural policy to follow is the optimal policy π*. However, this leads to compounding errors (Ross et al., 2011; Daumé et al., 2009; Bengio et al., 2015), where the distribution of states seen during test time (when following the learned policy) differs from the distribution of states seen during training (when following the oracle policy). At test time, since Parrot learns an imperfect approximation of the oracle policy, it will eventually make a mistake and evict a suboptimal cache line. This leads to cache states that are different from those seen during training, which the learned policy has not trained on, leading to further mistakes.

To address this problem, we leverage the DAgger algorithm (Ross et al., 2011). DAgger avoids compounding errors by following the current learned policy π_θ instead of the oracle policy to collect D_train during training, which forces the distribution of training states to match that of test states. As Parrot updates the policy, the current policy becomes increasingly different from the policy used to collect D_train, causing the training state distribution to drift from the test state distribution. To mitigate this, we periodically update D_train every 5000 parameter updates by re-collecting it under the current policy. Following the recommendation of Ross et al. (2011), we follow the oracle policy the first time we collect D_train, since at that point the policy is still random and likely to make poor eviction decisions.
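The collection step can be sketched as below. This is a simplified, illustrative stand-in, not the paper's implementation: the cache is a single set, `act` is whichever policy (oracle or learned) is being rolled out, and `oracle_action` is Belady's decision used to label the collected states.

```python
def collect_states(accesses, act, cache, ways):
    """Roll a replacement policy over an access trace, recording the
    (cache contents, remaining future accesses) state at every eviction."""
    states = []
    for t, line in enumerate(accesses):
        if line in cache:                  # cache hit: refresh recency, no eviction
            cache.remove(line)
            cache.append(line)
            continue
        future = accesses[t + 1:]
        if len(cache) >= ways:
            states.append((list(cache), future))   # a state requiring an eviction
            cache.remove(act(cache, future))       # evict per the rolled-out policy
        cache.append(line)
    return states

def oracle_action(cache, future):
    """Belady's labeling: evict the line reused furthest in the future."""
    return max(cache, key=lambda l: future.index(l) if l in future else float("inf"))
```

On the first collection pass, `act=oracle_action`; on later passes, the learned policy is passed as `act`, so the training state distribution matches the states the learned policy actually visits.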

Notably, this approach is possible because we can compute our oracle policy (Belady’s) at any state during training, as long as the future accesses are known. This differs from many IL tasks (Hosu and Rebedea, 2016; Vecerik et al., 2017), where querying the expert is expensive and limited.

4.3 Ranking Loss

Once the states in D_train are collected, we update our policy to better approximate Belady's on these states via the loss function L(θ). A simple log-likelihood (LL) behavior cloning loss (Pomerleau, 1989) encourages the learned policy to place probability mass on the optimal action a*_t. However, in our setting, the reuse distance of every cache line is known during training, not just the optimal action, so optimizing the policy to match the full ranking over lines can provide more supervision, similar to the intuition behind distillation (Hinton et al., 2015). Thus, we propose an alternate ranking loss to leverage this additional supervision.

Concretely, Parrot uses a differentiable approximation (Qin et al., 2010) of normalized discounted cumulative gain (NDCG), with reuse distance as the relevancy metric:

L_rank(θ) = −(1/Z) Σ_i (2^{d_t(l_i)} − 1) / log₂(1 + r̂(l_i))

Here, r̂(l_i) is a differentiable approximation of the rank of line l_i, ranked by how much probability the policy places on evicting l_i:

r̂(l_i) = 1 + Σ_{j≠i} σ((π_θ(l_j | s_t) − π_θ(l_i | s_t)) / τ),

where τ is a temperature hyperparameter and σ is the sigmoid function. Z is a normalization constant chosen so that the minimum value of L_rank(θ) is −1, attained when the policy correctly places probability mass on the lines in descending order of reuse distance. This loss function improves cache hit rates by heavily penalizing the policy for placing probability on lines with low reuse distance, which will likely lead to cache misses, and only lightly penalizing it for placing probability on lines with higher reuse distance, which are closer to optimal and less likely to lead to cache misses.

Optimizing our loss function is similar to optimizing the Kullback–Leibler (KL) divergence (Kullback and Leibler, 1951) between a smoothed version of Belady's, which evicts a line with probability proportional to its exponentiated reuse distance, and our policy π_θ. Directly optimizing the KL divergence between the non-smoothed oracle policy and our policy just recovers the normal LL loss, since Belady's places all of its probability on a single line.
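The key ingredient of such an approximate-NDCG loss is the soft rank, which replaces the hard "how many lines score higher" count with a sum of sigmoids. A minimal sketch, assuming eviction probabilities as scores and τ as the temperature (function names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def soft_ranks(scores, tau=0.1):
    """Differentiable approximation of 1-based ranks: a line's rank is one
    plus a soft count of the other lines that score higher than it."""
    return [
        1.0 + sum(sigmoid((other - s) / tau)
                  for j, other in enumerate(scores) if j != i)
        for i, s in enumerate(scores)
    ]
```

As τ shrinks, the sigmoids approach step functions and the soft ranks approach the true ranks, e.g. scores (0.9, 0.5, 0.1) yield ranks close to (1, 2, 3); unlike hard ranks, the soft version admits gradients with respect to the scores.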

4.4 Predicting Reuse Distance

To add further supervision during training, we propose predicting the reuse distance of each cache line as an auxiliary task (Jaderberg et al., 2016; Mirowski et al., 2016; Lample and Chaplot, 2017). Concretely, we add a second fully-connected head to Parrot's network that takes as input the per-line context embeddings g_i and outputs predictions of the log-reuse distance log d_t(l_i). We train this head with a mean-squared error loss L_reuse(θ). Intuitively, since the reuse-distance head shares the same network body as the policy head, learning to predict reuse distances helps the rest of the network learn better representations. Overall, we train our policy with the combined loss L(θ) = L_rank(θ) + L_reuse(θ).
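The auxiliary objective itself is a plain mean-squared error on log reuse distances. A one-function sketch (names are ours; the network producing the predictions is elided):

```python
import math

def reuse_distance_loss(predicted_log_dists, true_dists):
    """Mean-squared error between predicted log reuse distances and the
    log of the true reuse distances (which are always >= 1)."""
    return sum((p - math.log(d)) ** 2
               for p, d in zip(predicted_log_dists, true_dists)) / len(true_dists)
```

Predicting in log space compresses the wide range of reuse distances (from a few accesses to thousands), so the head is not dominated by the largest distances.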

4.5 Towards Practicality

Figure 4: Byte embedder, taking only a few kilobytes of memory.

The goal of this work is to establish directly imitating Belady’s as a proof-of-concept. Applying approaches like Parrot to real-world systems requires reducing model size and latency to prevent overshadowing improved cache replacement. We leave these challenges to future work, but highlight one way to reduce model size in this section, and discuss further promising directions in Section 7.

In the full-sized Parrot model, we learn a separate embedding for each PC and memory address, akin to word vectors (Mikolov et al., 2013) in natural language processing. While this approach performs well, these embeddings can require tens of megabytes to store for real-world programs that access hundreds of thousands of unique memory addresses.

To reduce model size, we propose learning a byte embedder shared across all memory addresses, only requiring several kilobytes of storage. This byte embedder embeds each memory address (or PC) by embedding each byte separately and then passing a small linear layer over their concatenated outputs (Figure 4). In principle, this can learn a hierarchical representation that separately represents large memory regions (upper bytes of an address) and finer-grained objects (lower bytes).
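The parameter-count argument can be illustrated with a pure-Python sketch (sizes, names, and the random initialization below are illustrative, not the paper's configuration): one 256-entry table per byte position of a 32-bit address, followed by a single linear layer over the concatenated byte embeddings.

```python
import random

EMBED_DIM = 4
rng = random.Random(0)
# One 256-entry table per byte position: 4 * 256 * EMBED_DIM parameters in
# total, a few kilobytes, versus one row per unique address in the full model.
byte_tables = [
    [[rng.gauss(0, 1) for _ in range(EMBED_DIM)] for _ in range(256)]
    for _ in range(4)
]
linear = [[rng.gauss(0, 1) for _ in range(4 * EMBED_DIM)] for _ in range(EMBED_DIM)]

def embed_address(address):
    """Embed a 32-bit address by embedding each byte and mixing with a linear layer."""
    bytes_ = [(address >> (8 * i)) & 0xFF for i in range(4)]  # split into 4 bytes
    concat = [x for pos, b in enumerate(bytes_) for x in byte_tables[pos][b]]
    return [sum(w * x for w, x in zip(row, concat)) for row in linear]
```

Because the upper-byte tables are shared across all addresses in a memory region, nearby addresses reuse most of their representation, which is what enables the hierarchical interpretation above.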

5 Experiments

5.1 Experimental Setup

Following Shi et al. (2019), we evaluate our approach on a three-level cache hierarchy with a 4-way L1 cache, an 8-way L2 cache, and a 16-way last-level cache. We apply our approach to the last-level cache while using the LRU replacement policy for the L1/L2 caches.

For benchmark workloads, we evaluate on the memory-intensive SPEC CPU2006 (Henning, 2006) applications used by Shi et al. (2019). In addition, we evaluate on Google Web Search, an industrial-scale application that serves billions of queries per day, to further evaluate the effectiveness of Parrot on real-world applications with complex access patterns and large working sets.

For each of these programs, we execute them and collect raw memory access traces over a 50-second interval using dynamic binary instrumentation tools (Bruening et al., 2003). This produces the sequence of all memory accesses that the program makes during that interval. Last-level cache access traces are obtained from this sequence by passing the raw memory accesses through the L1 and L2 caches using an LRU replacement policy.

As this produces a large amount of data, we then sample the resultant trace for our training data (Qureshi et al., 2007). We randomly choose 64 sets and collect the accesses to those sets on the last-level cache, totaling an average of about 5M accesses per program. Concretely, this yields a single sequence of cache accesses per program. We train replacement policies on the first 80% of this sequence, validate on the next 10%, and report test results on the final 10%.

Our evaluation focuses on two key metrics representing the efficiency of cache replacement policies. First, as increasing cache hit rates is highly correlated with decreasing program latency (Qureshi et al., 2007; Shi et al., 2019; Jain and Lin, 2016), we evaluate our policies using raw cache hit rates. Second, we report normalized cache hit rates, representing the gap between LRU (the most common replacement policy) and Belady's (the optimal replacement policy). For a policy with hit rate h, we define the normalized cache hit rate as (h − h_LRU) / (h_opt − h_LRU), where h_LRU and h_opt are the hit rates of LRU and Belady's, respectively. The normalized hit rate represents the effectiveness of a given policy with respect to the two baselines, LRU (normalized hit rate of 0) and Belady's (normalized hit rate of 1).
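The normalized metric is a one-line computation; the sketch below applies it to the omnetpp hit rates reported in Table 1 (LRU 26.1%, Belady's 45.1%):

```python
def normalized_hit_rate(h, h_lru, h_opt):
    """Normalized cache hit rate: 0 matches LRU, 1 matches Belady's."""
    return (h - h_lru) / (h_opt - h_lru)

# For omnetpp in Table 1, Parrot's raw hit rate of 41.4% normalizes to
# roughly 0.81 of the LRU-to-Belady's gap.
omnetpp_normalized = normalized_hit_rate(41.4, h_lru=26.1, h_opt=45.1)
```

Note that a policy worse than LRU would receive a negative normalized hit rate, so the metric is not bounded below by 0.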

We compare the following four approaches:

  1. Parrot: trained with the full-sized model, learning a separate embedding for each PC and address.

  2. Parrot (byte): trained with the much smaller byte embedder (Section 4.5).

  3. Glider (Shi et al., 2019): the state-of-the-art cache replacement policy, based on the results reported in their paper.

  4. Nearest Neighbors: a nearest-neighbors version of Belady's, which finds the longest matching PC and memory address suffix in the training data and follows the Belady's decision made at that point in the training data.

The SPEC2006 program accesses we evaluate on may slightly differ from those used by Shi et al. (2019) to evaluate Glider, as their traces are not publicly available. However, to ensure a fair comparison, we verified that the measured hit rates for LRU and Belady's on our cache accesses are close to the numbers reported by Shi et al. (2019), and we only compare on normalized cache hit rates. Since Glider's hit rates are not available on Web Search, we compare Parrot against LRU, the policy frequently used in production CPU caches. The reported hit rates for Parrot, LRU, Belady's, and Nearest Neighbors are measured on the test sets. We apply early stopping on Parrot, based on the cache hit rate on the validation set. For Parrot, we report results averaged over 3 random seeds, using the same minimally-tuned hyperparameters in all domains. These hyperparameters were tuned exclusively on the validation set of omnetpp (full details in Appendix B).

5.2 Main Results

astar bwaves bzip cactusadm gems lbm leslie3d libq mcf milc omnetpp sphinx3 xalanc Web Search
Optimal 43.5% 8.7% 78.4% 38.8% 26.5% 31.3% 31.9% 5.8% 46.8% 2.4% 45.1% 38.2% 33.3% 67.5%
LRU 20.0% 4.5% 56.1% 7.4% 9.9% 0.0% 12.7% 0.0% 25.3% 0.1% 26.1% 9.5% 6.6% 45.5%
Parrot 34.4% 7.8% 64.5% 38.6% 26.0% 30.8% 31.7% 5.4% 41.4% 2.1% 41.4% 36.7% 30.4% 59.0%
Table 1: Raw cache hit rates. Optimal is the hit rate of Belady’s. Averaged over all programs, Parrot (3 seeds) outperforms LRU by 16%.
Figure 5: Comparison of Parrot with the state-of-the-art replacement policy, Glider. We evaluate two versions of Parrot, the full-sized model (Parrot) and the byte embedder model (Parrot (byte)), and report the mean performance over 3 seeds with 1-standard-deviation error bars. On the SPEC2006 programs (left), Parrot with the full-sized model improves hit rates over Glider by 20% on average.

Table 1 compares the raw cache hit rate of Parrot with that of Belady's and LRU. Parrot achieves significantly higher cache hit rates than LRU on every program, ranging from 2% to 30%. Averaged over all programs, Parrot achieves 16% higher cache hit rates than LRU. According to a prior study on the cache sensitivity of SPEC2006 workloads (Jaleel, 2010), achieving the same cache hit rates as Parrot with LRU would require increasing the cache capacity by 2–3x (e.g., omnetpp and mcf) to 16x (e.g., libquantum).

On the Web Search benchmark, Parrot achieves a 61% higher normalized cache hit rate and 13.5% higher raw cache hit rate than LRU, demonstrating Parrot’s practical ability to scale to the complex memory access patterns found in datacenter-scale workloads.

Figure 5 compares the normalized cache hit rates of Parrot and Glider. With the full-sized model, Parrot outperforms Glider on 10 of the 13 SPEC2006 programs, achieving a 20% higher normalized cache hit rate averaged over all programs; on the remaining 3 programs (bzip, bwaves, and mcf), Glider performs marginally better. Additionally, Parrot achieves consistent performance with low variance across seeds.

Reducing model size.

Though learning Parrot from scratch with the byte embedder does not perform as well as the full-sized model, the byte embedder model is significantly smaller and still achieves an average of 8% higher normalized cache hit rate than Glider (Figure 5). In Section 7, we highlight promising future directions to reduce the performance gap and further reduce model size and latency.


Generalization.

An effective cache replacement policy must be able to generalize to unseen code paths (i.e., sequences of accesses) from the same program, as there are exponentially many code paths and encountering them all during training is infeasible. We test Parrot's ability to generalize to new code paths by comparing it to the nearest-neighbors baseline (Figure 5). The performance of the nearest-neighbors baseline shows that merely memorizing code paths seen during training achieves near-optimal cache hit rates on simpler programs (e.g., gems, lbm), which just repeatedly execute the same code paths, but fails for more complex programs (e.g., mcf, Web Search), which exhibit highly varied code paths. In contrast, Parrot maintains high cache hit rates even on these more complex programs, showing that it can generalize to new code paths not seen during training.

Additionally, some of the programs require generalizing to new memory addresses and program counters at test time. In mcf, 21.6% of the test-time memory addresses did not appear in the training data, and in Web Search, 5.3% of the test-time memory addresses and 6% of the test-time PCs did not appear in the training data (full details in Appendix B), but Parrot performs well despite this.

5.3 Ablations

Below, we ablate each of the following from Parrot: predicting reuse distance, on-policy training (DAgger), and ranking loss. We evaluate on four of the most memory-intensive SPEC2006 applications (lbm, libq, mcf, and omnetpp) and Web Search and compare each ablation with Glider, Belady’s, and two versions of Parrot. Parrot is the full-sized model with no ablations. Parrot (base) is Parrot’s neural architecture, with all three additions ablated. Comparing Parrot (base) to Glider (e.g., Figure 6) shows that in some programs (e.g., omnetpp and lbm), simply casting cache replacement as an IL problem with Parrot’s neural architecture is sufficient to obtain competitive performance, while in other programs, our additions are required to achieve state-of-the-art cache hit rates.

Predicting Reuse Distance.

Figure 6: Comparison between different mechanisms of incorporating reuse distance into Parrot. Including reuse distance prediction in our full model (Parrot) achieves 16.8% higher normalized cache hit rates than ablating reuse distance prediction (Parrot (no reuse dist.)).

Figure 6 compares the following three configurations to show the effect of incorporating reuse distance information: (i) Parrot (no reuse dist.), where reuse distance prediction is ablated, (ii) Parrot (evict highest reuse dist.), where our fully ablated model (Parrot (base)) predicts reuse distance and directly evicts the line with the highest predicted reuse distance, and (iii) Parrot (reuse dist. aux loss), where our fully ablated model learns to predict reuse distance as an auxiliary task.

Comparing Parrot (no reuse dist.) to Parrot shows that incorporating reuse distance greatly improves cache hit rates. Between different ways to incorporate reuse distance into Parrot, using reuse distance prediction indirectly as an auxiliary loss function (Parrot (reuse dist. aux loss)) leads to higher cache hit rates than using the reuse distance predictor directly to choose which cache line to evict (Parrot (evict highest reuse dist.)). We hypothesize that in some cache states, accurately predicting the reuse distance for each line may be challenging, but ranking the lines may be relatively easy. Since our reuse distance predictor predicts log reuse distances, small errors may drastically affect which line is evicted when the reuse distance predictor is used directly.

Training with DAgger.

Figure 7 summarizes the results when ablating training on-policy with DAgger. In theory, training off-policy on roll-outs of Belady’s should lead to compounding errors, as the states visited during training under Belady’s differ from those visited during test time. Empirically, we observe that this is highly program-dependent. In some programs, like mcf or Web Search, training off-policy performs as well or better than training on-policy, but in other programs, training on-policy is crucial. Overall, training on-policy leads to an average 9.8% normalized cache hit rate improvement over off-policy training.

Figure 7: Ablation study for training with DAgger. Training with DAgger achieves 9.8% higher normalized cache hit rates than training off-policy on the states visited by the oracle policy.
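The on-policy training procedure can be sketched as a generic DAgger loop (Ross et al., 2011): roll out the current learned policy, label every visited state with the oracle's action, aggregate, and retrain. The callables below are placeholders for the real cache simulator, Belady's oracle, and the learner's update; this is a schematic, not the paper's implementation.

```python
def dagger_train(env_reset, env_step, oracle_action, train_on,
                 num_iters=3, rollout_len=100):
    """Schematic DAgger loop. States are visited under the *learned* policy
    (after the first iteration), but every visited state is labeled with the
    oracle's action, so the training distribution matches the test-time
    distribution and compounding errors are avoided."""
    dataset = []   # aggregated (state, oracle_action) pairs
    policy = None  # learned policy; None means "not trained yet"
    for _ in range(num_iters):
        state = env_reset()
        for _ in range(rollout_len):
            # Act with the learned policy once one exists (on-policy states).
            action = oracle_action(state) if policy is None else policy(state)
            dataset.append((state, oracle_action(state)))
            state = env_step(state, action)
        policy = train_on(dataset)  # retrain on the aggregated dataset
    return policy, dataset
```

Ablating DAgger corresponds to always taking `action = oracle_action(state)`, i.e., training only on states visited by Belady's.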

Ranking Loss.

Figure 8 summarizes the results when ablating our ranking loss. Using our ranking loss over a log-likelihood (LL) loss introduces some bias, as the true optimal policy places all of its probability on the line with the highest reuse distance. However, our ranking loss better optimizes cache hit rates, as it more heavily penalizes evicting lines with lower reuse distances, which lead to misses. In addition, a distillation perspective of our loss, where the teacher network is an exponentially-smoothed version of Belady’s with the probability of evicting a line set proportional to the exponential of its reuse distance, suggests that our ranking loss provides greater supervision than LL. Tuning a temperature on the exponential smoothing of Belady’s could interpolate between less bias and greater supervision. Empirically, we observe that our ranking loss leads to an average 3.5% normalized cache hit rate improvement over LL.

Figure 8: Ablation study for our ranking loss. Using our ranking loss improves normalized cache hit rate by 3.5% over a LL loss.
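The contrast between the two losses under the distillation view can be made concrete with a small sketch. The teacher distribution here (eviction probability proportional to the exponentiated, temperature-scaled reuse distance) is one plausible reading of the exponential smoothing described above; the function names and temperature are illustrative.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def ll_loss(evict_scores, reuse_dists):
    # Log-likelihood loss: all probability on the single line with the
    # highest reuse distance (the unbiased but sparse supervision signal).
    target = int(np.argmax(reuse_dists))
    return -np.log(softmax(evict_scores)[target])

def smoothed_belady_loss(evict_scores, reuse_dists, temperature=1.0):
    # Distillation view: the teacher evicts each line with probability
    # proportional to exp(reuse_dist / temperature). KL(teacher || learner)
    # penalizes evicting low-reuse-distance lines more heavily and supplies
    # a gradient for every line, not just the argmax.
    teacher = softmax(np.asarray(reuse_dists, dtype=float) / temperature)
    learner = softmax(evict_scores)
    return float(np.sum(teacher * (np.log(teacher) - np.log(learner))))
```

Raising the temperature flattens the teacher toward uniform (more supervision spread across lines, more bias); lowering it recovers the one-hot LL target.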

5.4 History Length

One key question is: how much past information is needed to accurately approximate Belady’s? We study this by varying the number of past accesses that Parrot attends over from 20 to 140. In theory, Parrot’s LSTM hidden state could contain information about all past accesses, but the LSTM’s memory is limited in practice.

The results are summarized in Figure 9. We observe that the past accesses become an increasingly better predictor of the future as the number of past accesses increases, until about 80. Beyond that point, additional past information does not appear to help approximate Belady’s. Interestingly, Shi et al. (2019) show that Glider experiences a similar saturation in improvement from additional past accesses, but at around 30 past accesses. This suggests that learning a replacement policy end-to-end with Parrot can effectively leverage more past information than simply predicting whether a cache line is cache-friendly or cache-averse.

Figure 9: Performance of Parrot trained with different numbers of past accesses. As the number of past accesses increases, normalized cache hit rates improve, until reaching a history length of 80. Beyond that point, additional past accesses have little impact.

6 Related Work

Cache Replacement.

Traditional approaches to cache replacement rely on heuristics built upon intuition for cache access behavior. LRU is based on the assumption that most recently used lines are more likely to be reused. More sophisticated policies target a handful of manually classified access patterns based on simple counters (Qureshi et al., 2007; Jaleel et al., 2010) or try to predict instructions that tend to load zero-reuse lines based on a table of saturating counters (Wu et al., 2011; Khan et al., 2010).
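For reference, the classic LRU heuristic can be implemented in a few lines; this is the standard textbook policy, included only to contrast hand-designed heuristics with learned policies.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement policy for one cache set: evict the least
    recently used line. This is the classic heuristic, not Parrot."""
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.lines = OrderedDict()  # line address -> None, ordered by recency

    def access(self, addr):
        """Returns True on a hit, False on a miss (inserting the line)."""
        if addr in self.lines:
            self.lines.move_to_end(addr)     # mark as most recently used
            return True
        if len(self.lines) >= self.num_ways:
            self.lines.popitem(last=False)   # evict least recently used
        self.lines[addr] = None
        return False
```

LRU performs well on recency-friendly access patterns but fails on, e.g., cyclic scans slightly larger than the cache, which is exactly the kind of pattern the more sophisticated policies cited above target.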

Several recent approaches instead focus on learning cache replacement policies. Wang et al. (2019) also cast cache replacement as learning over a Markov decision process, but apply reinforcement learning instead of imitation learning, which results in lower performance. More closely related to ours are Hawkeye (Jain and Lin, 2016) and Glider (Shi et al., 2019), which also learn from Belady’s. They train a binary classification model based on Belady’s to predict if a line is cache-friendly or cache-averse, but rely on a traditional replacement heuristic to determine which line to evict when several lines are cache-averse. Relying on the traditional heuristic to produce the final eviction decisions heavily constrains the expressivity of the policy class they learn over, so that even the best policy within their class of learnable policies may not accurately approximate Belady’s, yielding high cache miss rates for some access patterns.

In contrast, our work is the first to propose learning a cache replacement policy end-to-end with imitation learning. Framing cache replacement in this principled framework is important as much prior research has resorted to heuristics for hill climbing specific benchmarks. In addition, learning end-to-end enables us to optimize over a highly expressive policy class, achieving high cache hit rates even on complex and diverse access patterns.

Imitation Learning.

Our work builds on imitation learning (IL) techniques (Ross and Bagnell, 2014; Sun et al., 2017), where the goal is to approximate an expert policy. Our setting exhibits two distinctive properties. First, the oracle policy (Belady’s) can be cheaply queried at any state during training, which differs from a body of IL work (Vecerik et al., 2017; Hosu and Rebedea, 2016; Hester et al., 2018) focusing on learning with limited samples from an expensive expert (e.g., a human). The ability to arbitrarily query the oracle enables us to avoid compounding errors with DAgger (Ross et al., 2011). Second, the distribution over actions of the oracle policy is available, enabling more sophisticated loss functions. Prior work (Sabour et al., 2018; Choudhury et al., 2017) also studies settings with these two properties, although in different domains. Sabour et al. (2018) show that an approximate oracle can be computed in some natural-language sequence generation tasks; Choudhury et al. (2017) learn to imitate an oracle computed from data only available during training, similar to Belady’s, which requires future information.

7 Conclusion and Future Directions

We develop a foundation for learning end-to-end cache replacement policies with imitation learning, which significantly bridges the gap between prior work and Belady’s optimal replacement policy. Although we evaluate our approach on CPU caches, due to the popularity of SPEC2006 as a caching benchmark, we emphasize that our approach applies to other caches as well, such as software caches, databases, and operating systems. Software caches may be an especially promising area for applying our approach, as they tolerate higher latency in the replacement policy and implementing more complex replacement policies is easier in software. We highlight two promising future directions:

First, this work focuses on the ML challenges of training a replacement policy to approximate Belady’s and does not explore the practicality of deploying the learned policy in production, where the two primary concerns are the memory and latency overheads of the policy. To address these concerns, future work could investigate model-size reduction techniques, such as distillation (Hinton et al., 2015), pruning (Janowsky, 1989; Han et al., 2015; Sze et al., 2017), and quantization, as well as domains tolerating greater latency and memory use, such as software caches. Additionally, cache replacement decisions can be made at any time between misses to the same set, which provides a reasonably long latency window (e.g., on the order of seconds for software caches) for our replacement policy to make a decision. Furthermore, the overall goal of cache replacement is to minimize latency. While minimizing cache misses minimizes latency to a first approximation, cache misses incur variable amounts of latency (Qureshi et al., 2006), which could be addressed by fine-tuning learned policies to directly minimize latency via reinforcement learning.

Second, while Belady’s algorithm provides an optimal replacement policy for a single-level cache, there is no known optimal policy for multiple levels of caches (as is common in CPUs and web services). This hierarchical cache replacement policy is a ripe area for deep learning and RL research, as is exploring the connection between cache replacement and prefetching, as both involve selecting the optimal set of lines to be present in the cache. Cache replacement is backward looking (based on the accesses so far), while prefetching is forward looking (predicting future accesses directly (Hashemi et al., 2018; Shi et al., 2020)).

To facilitate further research in this area, we release a Gym environment for cache replacement, which easily extends to the hierarchical cache replacement setting, where RL is required as the optimal policy is unknown. We find cache replacement an attractive problem for the RL/IL communities, as it has significant real-world impact and data is highly available, in contrast to many current benchmarks that only have one of these two properties. In addition, cache replacement features several interesting challenges: rewards are highly delayed, as evicting a particular line may not lead to a cache hit/miss until thousands of timesteps later; the semantics of the action space dynamically changes, as the replacement policy chooses between differing cache lines at different states; the state space is large (e.g., 100,000s of unique addresses) and some programs require generalizing to new memory addresses at test time, not seen during training, similar to the rare words problem (Luong et al., 2014) in NLP; and as our ablations show, different programs exhibit wildly different cache access patterns, which can require different techniques to address. In general, we observe that computer systems exhibit many interesting machine learning (ML) problems, but have been relatively inaccessible to the ML community because they require sophisticated systems tools. We take steps to avoid this by releasing our cache replacement environment.
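To make the MDP structure concrete, here is a schematic single-set environment with a Gym-style reset/step interface. The released environment's actual API almost certainly differs; the state, action semantics, and reward here are deliberately minimal and purely illustrative.

```python
class CacheReplacementEnv:
    """Schematic single-set cache replacement environment. The action is
    the index of the line to evict on a miss; the reward is 1 for a hit
    and 0 for a miss. Note how the action space's semantics change across
    states, since the set holds different lines at different times."""
    def __init__(self, accesses, num_ways):
        self.accesses = accesses  # trace of line addresses for one set
        self.num_ways = num_ways

    def reset(self):
        self.t = 0
        self.lines = []
        return self._obs()

    def _obs(self):
        return {"cache_lines": list(self.lines),
                "next_access": self.accesses[self.t]}

    def step(self, action):
        addr = self.accesses[self.t]
        if addr in self.lines:
            reward = 1.0                      # cache hit
        else:
            reward = 0.0                      # cache miss
            if len(self.lines) >= self.num_ways:
                self.lines.pop(action)        # action selects the victim
            self.lines.append(addr)
        self.t += 1
        done = self.t >= len(self.accesses)
        return (None if done else self._obs()), reward, done, {}
```

In the hierarchical setting, each level would be such an environment, with the lower level's misses generating the upper level's access trace.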


Code for Parrot and our cache replacement Gym environment is available at


We thank Zhan Shi for insightful discussions and for providing the results for Glider, which we compare against. We also thank Chelsea Finn, Lisa Lee, and Amir Yazdanbakhsh for their comments on a draft of this paper. Finally, we thank the anonymous ICML reviewers for their useful feedback, which helped improve this paper.

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518.


  • N. Beckmann and D. Sanchez (2017) Maximizing cache performance under uncertainty. In Proceedings of the International Symposium on High Performance Computer Architecture, Cited by: §1.
  • L. A. Belady (1966) A study of replacement algorithms for a virtual-storage computer. IBM Systems Journal. Cited by: §1.
  • S. Bengio, O. Vinyals, N. Jaitly, and N. Shazeer (2015) Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171–1179. Cited by: §4.2.
  • D. Bruening, T. Garnett, and S. Amarasinghe (2003) An infrastructure for adaptive dynamic optimization. In Proceedings of the International Symposium on Code Generation and Optimization, Cited by: §5.1.
  • S. Choudhury, A. Kapoor, G. Ranade, S. Scherer, and D. Dey (2017) Adaptive information gathering via imitation learning. arXiv preprint arXiv:1705.07834. Cited by: §6.
  • A. Cidon, A. Eisenman, M. Alizadeh, and S. Katti (2016) Cliffhanger: scaling performance cliffs in web memory caches. In Proceedings of the USENIX Symposium on Networked Systems Design and Implementation, Cited by: §1.
  • H. Daumé, J. Langford, and D. Marcu (2009) Search-based structured prediction. Machine Learning 75 (3), pp. 297–325. Cited by: §4.2.
  • X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 249–256. Cited by: Appendix A.
  • S. Han, H. Mao, and W. J. Dally (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Cited by: §7.
  • K. Harty and D. R. Cheriton (1992) Application-controlled physical memory using external page-cache management. In Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems, Cited by: §1.
  • M. Hashemi, K. Swersky, J. A. Smith, G. Ayers, H. Litz, J. Chang, C. Kozyrakis, and P. Ranganathan (2018) Learning memory access patterns. In Proceedings of the International Conference on Machine Learning, Cited by: §7.
  • J. L. Henning (2006) SPEC CPU2006 benchmark descriptions. ACM SIGARCH Computer Architecture News 34 (4), pp. 1–17. Cited by: §5.1.
  • T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband, et al. (2018) Deep Q-learning from demonstrations. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §6.
  • G. Hinton, O. Vinyals, and J. Dean (2015) Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §4.3, §7.
  • I. Hosu and T. Rebedea (2016) Playing Atari games with deep reinforcement learning and human checkpoint replay. arXiv preprint arXiv:1607.05077. Cited by: §4.2, §6.
  • M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu (2016) Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397. Cited by: §4.4.
  • A. Jain and C. Lin (2016) Back to the future: leveraging Belady’s algorithm for improved cache replacement. In Proceedings of the International Symposium on Computer Architecture, Cited by: §1, §1, §5.1, §6.
  • A. Jaleel, K. B. Theobald, S. C. Steely, and J. Emer (2010) High performance cache replacement using re-reference interval prediction (RRIP). In Proceedings of the International Symposium on Computer Architecture, Cited by: §1, §6.
  • A. Jaleel (2010) Memory characterization of workloads using instrumentation-driven simulation. Web Copy: Cited by: §5.2.
  • S. A. Janowsky (1989) Pruning versus clipping in neural networks. Physical Review A 39 (12), pp. 6600. Cited by: §7.
  • N. P. Jouppi (1990) Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers. ACM SIGARCH Computer Architecture News. Cited by: §1.
  • S. M. Khan, Y. Tian, and D. A. Jimenez (2010) Sampling dead block prediction for last-level caches. In Proceedings of the International Symposium on Microarchitecture, Cited by: §6.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Appendix A.
  • S. Kullback and R. A. Leibler (1951) On information and sufficiency. The Annals of Mathematical Statistics 22 (1), pp. 79–86. Cited by: §4.3.
  • G. Lample and D. S. Chaplot (2017) Playing FPS games with deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Cited by: §4.4.
  • M. Luong, H. Pham, and C. D. Manning (2015) Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025. Cited by: item 2.
  • M. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba (2014) Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206. Cited by: §7.
  • T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean (2013) Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111–3119. Cited by: §4.5.
  • P. Mirowski, R. Pascanu, F. Viola, H. Soyer, A. J. Ballard, A. Banino, M. Denil, R. Goroshin, L. Sifre, K. Kavukcuoglu, et al. (2016) Learning to navigate in complex environments. arXiv preprint arXiv:1611.03673. Cited by: §4.4.
  • A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035. Cited by: Appendix A.
  • D. A. Pomerleau (1989) Alvinn: an autonomous land vehicle in a neural network. In Advances in neural information processing systems, pp. 305–313. Cited by: §4.3.
  • T. Qin, T. Liu, and H. Li (2010) A general approximation framework for direct optimization of information retrieval measures. Information Retrieval 13 (4), pp. 375–397. Cited by: §4.3.
  • M. K. Qureshi, A. Jaleel, Y. N. Patt, S. C. Steely, and J. Emer (2007) Adaptive insertion policies for high performance caching. In Proceedings of the International Symposium on Computer Architecture, Cited by: §1, §5.1, §5.1, §6.
  • M. K. Qureshi, D. N. Lynch, O. Mutlu, and Y. N. Patt (2006) A case for mlp-aware cache replacement. In 33rd International Symposium on Computer Architecture (ISCA’06), pp. 167–178. Cited by: §7.
  • S. Ross and J. A. Bagnell (2014) Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979. Cited by: §6.
  • S. Ross, G. Gordon, and D. Bagnell (2011) A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Cited by: §4.2, §4.2, §6.
  • S. Sabour, W. Chan, and M. Norouzi (2018) Optimal completion distillation for sequence learning. arXiv preprint arXiv:1810.01398. Cited by: §6.
  • M. Seo, A. Kembhavi, A. Farhadi, and H. Hajishirzi (2016) Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Cited by: §4.1.
  • Z. Shi, X. Huang, A. Jain, and C. Lin (2019) Applying deep learning to the cache replacement problem. In Proceedings of the International Symposium on Microarchitecture, Cited by: §1, §1, item 3, §5.1, §5.1, §5.1, §5.1, §5.4, §6.
  • Z. Shi, K. Swersky, D. Tarlow, P. Ranganathan, and M. Hashemi (2020) Learning execution through neural code fusion. In Proceedings of the International Conference on Learning Representations, Cited by: §7.
  • W. Sun, A. Venkatraman, G. J. Gordon, B. Boots, and J. A. Bagnell (2017) Deeply aggrevated: differentiable imitation learning for sequential prediction. In Proceedings of the International Conference on Machine Learning, pp. 3309–3318. Cited by: §6.
  • V. Sze, Y. Chen, T. Yang, and J. S. Emer (2017) Efficient processing of deep neural networks: a tutorial and survey. Proceedings of the IEEE 105 (12), pp. 2295–2329. Cited by: §7.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in Neural Information Processing Systems, Cited by: item 1, item 2, §4.1.
  • M. Vecerik, T. Hester, J. Scholz, F. Wang, O. Pietquin, B. Piot, N. Heess, T. Rothörl, T. Lampe, and M. Riedmiller (2017) Leveraging demonstrations for deep reinforcement learning on robotics problems with sparse rewards. arXiv preprint arXiv:1707.08817. Cited by: §4.2, §6.
  • H. Wang, H. He, M. Alizadeh, and H. Mao (2019) Learning caching policies with subsampling. Cited by: §6.
  • C. Wu, A. Jaleel, W. Hasenplaugh, M. Martonosi, S. C. Steely Jr, and J. Emer (2011) SHiP: signature-based hit predictor for high performance caching. In Proceedings of the International Symposium on Microarchitecture, Cited by: §6.
  • Y. Xu, E. Frachtenberg, S. Jiang, and M. Paleczny (2013) Characterizing Facebook’s memcached workload. IEEE Internet Computing 18 (2), pp. 41–49. Cited by: §1.

Appendix A Architecture Details

Our model is implemented in PyTorch (Paszke et al., 2019) and is optimized with the Adam optimizer (Kingma and Ba, 2014).


In the full-sized model, to embed memory addresses and cache lines, we train a unique embedding for each unique memory address observed during training, sharing the same embedder across memory addresses and cache lines. We similarly embed PCs.

Concretely, we initialize an embedding matrix via Glorot uniform initialization (Glorot and Bengio, 2010), with one row per unique memory address in the training set plus one extra row, and with the number of columns equal to the embedding dimension. Each unique memory address is then dynamically assigned a unique id, and its embedding is set to the corresponding row of the embedding matrix. At test time, all memory addresses unseen during training are mapped to a special UNK embedding, equal to the last row of the matrix. We embed PCs with a similar embedding matrix.
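A sketch of this dynamic-vocabulary embedding scheme is below; the class name, the freezing flag, and the exact initialization details are illustrative assumptions rather than details of our implementation.

```python
import numpy as np

class DynamicVocabEmbedder:
    """Each address seen during training gets its own row, assigned on
    first sight; addresses unseen during training map to a shared UNK
    row (the last row of the table)."""
    def __init__(self, num_train_addresses, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        # Glorot-uniform-style bound for a (vocab+1) x dim table.
        limit = np.sqrt(6.0 / (num_train_addresses + 1 + dim))
        self.table = rng.uniform(-limit, limit,
                                 size=(num_train_addresses + 1, dim))
        self.ids = {}        # address -> row id, assigned dynamically
        self.frozen = False  # set True at test time

    def embed(self, addr):
        if addr not in self.ids:
            if self.frozen or len(self.ids) >= len(self.table) - 1:
                return self.table[-1]       # UNK embedding
            self.ids[addr] = len(self.ids)  # assign the next free row
        return self.table[self.ids[addr]]
```

Sharing one such embedder across memory addresses and cache lines (as described above) means a line's embedding and a recent access to the same address resolve to the same vector.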


After computing embeddings (with either the full-sized model or the byte embedder) for each line in the cache state and hidden states representing the past accesses, we compute a context for each cache line by attending over the hidden states with the line embeddings as queries as follows:

  1. Following Vaswani et al. (2017), we compute sinusoidal positional embeddings, where the entry at even dimension 2i of the embedding for position pos is sin(pos / 10000^(2i/d)) and the entry at odd dimension 2i+1 is cos(pos / 10000^(2i/d)), with d the positional embedding dimension.

    We concatenate these positional embeddings with the hidden states to encode how far in the past each access is. Although the LSTM hidden states can in theory encode positions, we found that explicitly concatenating a positional embedding helped optimization.

  2. We apply General Attention (Luong et al., 2015) with each cache line embedding as the query and the concatenated hidden states and positional embeddings as keys, scoring each key against the query with a learned bilinear form and taking the softmax-weighted combination as the context.

    The bilinear weight matrix is learned, and the attention over all positions can be computed in parallel (Vaswani et al., 2017).
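The two steps above can be sketched as follows. The sinusoidal formula follows Vaswani et al. (2017) and the bilinear scoring follows Luong et al. (2015); the shapes and names here are illustrative, not our exact implementation.

```python
import numpy as np

def positional_embeddings(num_positions, dim):
    """Sinusoidal positional embeddings (Vaswani et al., 2017):
    sin at even dimensions, cos at odd dimensions. dim must be even."""
    pos = np.arange(num_positions)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    pe = np.zeros((num_positions, dim))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def general_attention(query, keys, W):
    """General (bilinear) attention (Luong et al., 2015): score each key
    against the query with a learned matrix W, then return the softmax-
    weighted combination of the keys as the context vector."""
    scores = keys @ W @ query                 # (num_keys,)
    weights = np.exp(scores - scores.max())   # stable softmax
    weights /= weights.sum()
    return weights @ keys                     # context vector
```

In Parrot's setting, `query` would be a cache line embedding and `keys` the hidden states concatenated with their positional embeddings, producing one context per cache line.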

Appendix B Experimental Details

b.1 Detailed Results

To provide further insight into our learned policy, we also report results on two additional metrics:

  • Top-k Accuracy: The percentage of the time that the optimal line to evict according to Belady’s is in the top-k lines with the highest probability of eviction under our learned policy. This indicates how frequently our learned policy outputs decisions like those of Belady’s. We report top-1 and top-5 accuracy. Note that since each cache set in the last-level cache can only hold a fixed number of lines (the cache associativity), top-k accuracy is trivially 100% when k equals the associativity.

  • Reuse Distance Gap: The average difference between the reuse distance of the optimal line to evict (the line with the largest reuse distance) and the reuse distance of the line evicted by Parrot. This metric roughly captures how sub-optimal the decision made by Parrot is at each timestep, as evicting a line with a smaller reuse distance is more likely to lead to a cache miss. A policy with a reuse distance gap of zero is optimal, while a high reuse distance gap leads to more cache misses.
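Both metrics are straightforward to compute from the learned policy's per-line eviction probabilities and the oracle's reuse distances; the sketch below uses illustrative function names.

```python
import numpy as np

def topk_accuracy(evict_probs, oracle_lines, k):
    """Fraction of timesteps where the oracle's evicted line is among the
    learner's k highest-probability lines."""
    hits = 0
    for probs, oracle in zip(evict_probs, oracle_lines):
        topk = np.argsort(probs)[::-1][:k]   # indices sorted by descending prob
        hits += int(oracle in topk)
    return hits / len(oracle_lines)

def reuse_distance_gap(reuse_dists, evicted_lines):
    """Average gap between the largest reuse distance in the set (the
    optimal eviction) and the reuse distance of the line actually evicted;
    zero means every decision matched Belady's."""
    gaps = [max(d) - d[e] for d, e in zip(reuse_dists, evicted_lines)]
    return sum(gaps) / len(gaps)
```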

We report results of our full model on these metrics in Table 2. In mcf, libquantum, and lbm, our replacement policy frequently chooses to evict the same cache line as Belady’s, with a high top-1 accuracy and an even higher top-5 accuracy. In other programs (e.g., omnetpp, Web Search), our replacement policy’s top-1 and top-5 accuracies are significantly lower, even though its normalized cache hit rate in these programs is similar to its normalized cache hit rate in mcf. Intuitively, this can occur when several cache lines have reuse distances similar to that of the optimal cache line, so evicting any of them is roughly equivalent. Thus, top-k accuracy is an interesting but imperfect metric. Note that this is the same intuition behind our ranking loss, which roughly measures the relative suboptimality of evicting a line as a function of the suboptimality of its reuse distance.

The differences in the reuse distance gaps between the programs emphasize the differences in the program behaviors and roughly indicate how frequently cache lines are reused in each program. For example, Parrot achieves wildly different average reuse distance gaps in omnetpp and mcf, while maintaining similar normalized and raw cache hit rates (Table 1), due to differences in their access patterns. In omnetpp, evicting a line with a reuse distance hundreds of accesses smaller than the reuse distance of the optimal line sometimes does not lead to a cache miss, as both lines may eventually be evicted before being reused anyway. On the other hand, in mcf, evicting a line that is used only slightly earlier than the optimal line more frequently leads to cache misses.

Program Top-1 Acc. (%) Top-5 Acc. (%) Reuse Dist. Gap
Web Search
Table 2: Mean top-1/5 accuracy and reuse distance gap of Parrot, averaged over 3 seeds with single standard deviation intervals.

b.2 Hyperparameters

The following lists the values of all the hyperparameters used in our final experiments. For hyperparameters listed with multiple values in parentheses, we tuned over those values; the remaining hyperparameters were fixed. These values were used in all of our final experiments, including the ablations, except the history length experiments (Section 5.4), where we varied the history length.

  • Learning rate: (0.001, 0.003)

  • Address embedding dimension: 64

  • PC embedding dimension: 64

  • PC embedding vocab size: 5000

  • Position embedding dimension: 128

  • LSTM hidden size: 128

  • Frequency of recollecting training data for DAgger: (5000, 10000)

  • History length: (20, 40, 60, 80, 100, 120, 140)

We used the same hyperparameter values in all 5 programs (omnetpp, mcf, libq, lbm, and Web Search) we evaluated on, where the address embedding vocab size was set to the number of unique addresses seen in the train set of each program (see Table 3). For most hyperparameters, we selected a reasonable value and never changed it. We tuned the rest of the hyperparameters exclusively on the validation set of omnetpp.

b.3 Program Details

Train Test Unseen Test
Program Cache Accesses Addresses PCs Addresses PCs Addresses PCs
astar 879,040 13,047 25 8,235 9 46 (0.6%) 0 (0%)
bwaves 2,785,280 443,662 436 209,231 155 12 (0.1%) 38 (5.8%)
bzip 3,899,840 6,086 415 3,571 28 0 (0%) 0 (0%)
cactusadm 1,759,040 298,213 243 83,046 191 0 (0%) 0 (0%)
gems 7,298,880 423,706 5,305 363,313 1,395 1 (0%) 0 (0%)
lbm 10,224,960 206,271 44 206,265 34 0 (0%) 0 (0%)
leslie3d 6,956,160 38,507 1,705 38,487 1,663 0 (0%) 0 (0%)
libquantum 4,507,200 16,394 14 16,386 5 0 (0%) 0 (0%)
mcf 4,143,360 622,142 73 81,666 67 77,608 (21.6%) 0 (0%)
milc 3,048,000 203,504 254 90,933 82 0 (0%) 2 (0.5%)
omnetpp 3,414,720 16,912 402 14,079 315 275 (0.6%) 3 (1.0%)
sphinx3 2,372,800 20,629 528 4,586 199 0 (0%) 1 (0%)
xalanc 4,714,240 15,515 217 11,600 161 507 (1%) 5 (0%)
Web Search 3,636,800 241,164 32,468 66,645 15,893 10,334 (5.3%) 948 (6%)
Table 3: Program details in terms of the number of last-level cache accesses and unique addresses/PCs. ‘Train’ and ‘Test’ show the number of unique addresses and PCs appearing in each domain at train and test time, whereas ‘Unseen Test’ indicates the number of addresses and PCs appearing at test time, but not train time. The given percentages indicate what portion of all test accesses had unseen addresses or PCs.

Table 3 reports the number of unique addresses and PCs contained in the train/test splits of each program, including the number of unique addresses and PCs in the test split that were not in the train split. Notably, in some programs, new addresses and PCs appear at test time, that are not seen during training, requiring the replacement policy to generalize.

In Table 3, we also report the total number of last-level cache accesses collected for each of the five programs in our 50s collection interval. These accesses were split into the 80% train, 10% validation, and 10% test sets. Since different programs exhibited varying levels of cacheability at the L1 and L2 cache levels, different numbers of last-level cache accesses resulted for each program. These varying numbers also indicate how different programs exhibit highly different behavior.

b.4 Randomly Chosen Cache Sets

We randomly chose 64 sets and collected accesses to those sets on the last-level cache. The 64 randomly chosen sets were: 6, 35, 38, 53, 67, 70, 113, 143, 157, 196, 287, 324, 332, 348, 362, 398, 406, 456, 458, 488, 497, 499, 558, 611, 718, 725, 754, 775, 793, 822, 862, 895, 928, 1062, 1086, 1101, 1102, 1137, 1144, 1175, 1210, 1211, 1223, 1237, 1268, 1308, 1342, 1348, 1353, 1424, 1437, 1456, 1574, 1599, 1604, 1662, 1683, 1782, 1789, 1812, 1905, 1940, 1967, and 1973.