On Resource Pooling and Separation for LRU Caching

08/04/2017
by Jian Tan et al.

Caching systems based on the Least Recently Used (LRU) principle are now ubiquitous. A fundamental question for these systems is whether the cache space should be pooled together or separated to serve multiple flows of data item requests so as to minimize the miss probabilities. In this paper, we show that there is no simple yes-or-no answer: the outcome depends on complex combinations of critical factors, including request rates, the overlap of data items across different request flows, data item popularities, and data item sizes. Specifically, we characterize the asymptotic miss probabilities of multiple competing request flows under resource pooling and separation for LRU caching when the cache size is large. Analytically, we show that it is asymptotically optimal to jointly serve multiple flows if their data item sizes and popularity distributions are similar and their arrival rates do not differ significantly; the self-organizing property of LRU caching then automatically optimizes the resource allocation among them asymptotically. Otherwise, separating these flows can be better, e.g., when data item sizes vary significantly. We also quantify critical points beyond which resource pooling is better than separation for each of the flows when the overlapped data items exceed certain levels. Technically, we generalize existing results on the asymptotic miss probability of LRU caching to a broad class of heavy-tailed distributions and extend them to multiple competing flows with varying data item sizes, which also validates the Che approximation under certain conditions. These results provide new insights into improving the performance of caching systems.
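
For context, the Che approximation mentioned in the abstract is commonly stated as follows for unit-size items under the independent reference model; this is the standard textbook form with our own notation, not an expression taken from the paper. For an LRU cache of capacity C serving items requested by independent Poisson processes with rates \lambda_i, the miss probability of item i satisfies

    P(\text{miss of item } i) \approx e^{-\lambda_i t_C},
    \qquad \text{where the characteristic time } t_C \text{ solves } \sum_j \bigl(1 - e^{-\lambda_j t_C}\bigr) = C.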
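To make the pooling-versus-separation question concrete, here is a minimal Python simulation sketch (ours, not the paper's code or analysis): two disjoint request flows with Zipf popularity are served either by one pooled LRU cache of size C or by two separate caches of size C/2. All parameter values (item counts, Zipf exponents, capacities, request counts) are illustrative assumptions.

import random
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used item."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def request(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)        # mark as most recently used
            self.hits += 1
        else:
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)  # evict the LRU item

    def miss_ratio(self):
        return 1.0 - self.hits / self.requests

def zipf_keys(n_items, alpha, n_requests, rng, prefix):
    """Draw request keys from a truncated Zipf(alpha) popularity law."""
    weights = [1.0 / (rank ** alpha) for rank in range(1, n_items + 1)]
    picks = rng.choices(range(n_items), weights=weights, k=n_requests)
    return [f"{prefix}{i}" for i in picks]

rng = random.Random(42)
C = 200  # total cache capacity, in unit-size items (assumed)

# Two disjoint flows with different popularity skew (illustrative values).
flow_a = zipf_keys(5000, alpha=0.9, n_requests=200_000, rng=rng, prefix="a")
flow_b = zipf_keys(5000, alpha=0.6, n_requests=200_000, rng=rng, prefix="b")

# Pooled: one cache of size C serves the interleaved flows.
pooled = LRUCache(C)
for key in (k for pair in zip(flow_a, flow_b) for k in pair):
    pooled.request(key)

# Separated: each flow gets a static half of the capacity.
sep_a, sep_b = LRUCache(C // 2), LRUCache(C // 2)
for key in flow_a:
    sep_a.request(key)
for key in flow_b:
    sep_b.request(key)

sep_overall = 1.0 - (sep_a.hits + sep_b.hits) / (sep_a.requests + sep_b.requests)
print(f"pooled miss ratio:    {pooled.miss_ratio():.4f}")
print(f"separated miss ratio: {sep_overall:.4f}")

With these particular assumed parameters the comparison can go either way; varying the Zipf exponents, the interleaving ratio between the flows, or the overlap of their item sets shifts which arrangement achieves the lower miss ratio, which is precisely the trade-off the paper characterizes asymptotically.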


Related research:

- Asymptotic Miss Ratio of LRU Caching with Consistent Hashing (01/08/2018)
  To efficiently scale data caching infrastructure to support emerging big...

- Computing the Hit Rate of Similarity Caching (09/07/2022)
  Similarity caching allows requests for an item i to be served by a simil...

- A New Upper Bound on Cache Hit Probability for Non-anticipative Caching Policies (06/11/2021)
  Caching systems have long been crucial for improving the performance of ...

- Performance Model for Similarity Caching (09/21/2023)
  Similarity caching allows requests for an item to be served by a similar...

- A caching system with object sharing (05/18/2019)
  We consider a public content caching system that is shared by a number o...

- Learning to Cache and Caching to Learn: Regret Analysis of Caching Algorithms (04/01/2020)
  Crucial performance metrics of a caching algorithm include its ability t...

- Worst-case Bounds and Optimized Cache on M^th Request Cache Insertion Policies under Elastic Conditions (12/18/2018)
  Cloud services and other shared third-party infrastructures allow indivi...
