Adaptive Offline and Online Similarity-Based Caching

10/15/2020 ∙ by Jizhe Zhou, et al. ∙ King's College London ∙ IEEE

With similarity-based content delivery, the request for a content can be satisfied by delivering a related content under a dissimilarity cost. This letter addresses the joint optimization of caching and similarity-based delivery decisions across a network so as to minimize the weighted sum of average delay and dissimilarity cost. A convergent alternate gradient descent ascent algorithm is first introduced for an offline scenario with prior knowledge of the request rates, and then extended to an online setting. Numerical results validate the advantages of the approach with respect to standard per-cache solutions.




I Introduction

Caching systems provide the underlying architecture for content-centric networks [1], content distribution networks [2], and edge networks [3]. In conventional systems, a request for a content is satisfied by forwarding it to a node that permanently stores the requested content. Caching networks can reduce the delivery delay by serving the request from one of the intermediate nodes in the forwarding path that stores the requested content (see Fig. 1).

In many applications, a request can also be satisfied by delivering a content similar to the requested one [4]

. Examples include video and image retrieval as well as advertising

[5, 6]. For example, when a user searches for a video, a related video cached locally may be delivered instead, as long as the resulting lower downloading latency offsets the “dissimilarity cost” associated with receiving a different content.

Fig. 1: Similarity-based delivery: The green line depicts the forwarding path for the requested content. While the request is for a “cat” image (as depicted over the green arrowed line), a similar image of a “tiger cub” cached at an intermediate node is delivered instead (as depicted over the orange arrowed line).

Motivated by these considerations, we study similarity-based content caching and delivery in a cache-enabled network. As illustrated in Fig. 1, a request for a content is routed over a path to a designated node that permanently stores the requested content. Ioannidis and Yeh [10] studied the conventional case in which the request is satisfied by delivering the requested content from one of the caches along the path, if a “cache hit” occurs, or from the end node otherwise. In contrast, in this paper, we allow for similarity-based delivery. Accordingly, a similar content can be delivered if it is found in one of the caches along the path – an event known as “soft cache hit” [7, 8]. To the best of our knowledge, prior work on similarity caching focuses on per-cache strategies that deliver the most similar content from a fixed local cache for each request [4, 5]. As in [10], we allow instead for coordination across the caches in the network, and consider the joint optimization of caching and delivery decisions, where a hit can occur at any of the caches along a path for a request.

Specifically, we first study the offline optimization problem over cache allocation and delivery decisions, such that the weighted sum of delivery delay and dissimilarity cost is minimized under prior knowledge of request rates. To this end, we apply integer relaxation and we tackle a minimax primal-dual formulation of the relaxation problem via a variant of gradient descent ascent, namely the Hybrid Block Successive Approximation (HiBSA) introduced in [9]. HiBSA is known to converge to a stationary point of the relaxed minimax problem [9]. Moreover, for the scenario in which the request rates are unknown, we present an online stochastic version of the algorithm that adapts to the requests observed over time.

The rest of this paper is organized as follows. In Sec. II, we describe the network model. The offline optimization problem is formulated and addressed in Sec. III. In Sec. IV, we consider the online version of the similarity-based caching problem, and introduce the online scheme. Numerical results are presented in Sec. V. Finally, we offer some conclusions in Sec. VI.

II System Model

As illustrated in Fig. 1, we consider a network consisting of a set $\mathcal{V}$ of nodes and a set $\mathcal{E}$ of undirected transmission links between pairs of nodes. The network delivers contents from a given set $\mathcal{C}$ of popular contents to devices connected to one of the network nodes. For every content $c \in \mathcal{C}$, there is a subset of nodes, referred to as source nodes, that permanently store content $c$. We denote as $w_{u,v}$ the average delivery delay between adjacent nodes $u$ and $v$ for any content in $\mathcal{C}$. A request $r = (c, p)$ consists of a content $c \in \mathcal{C}$ and of a fixed path $p$ through the network. Path $p$ is an acyclic sequence of nodes $(p_1, \dots, p_{|p|})$, where $p_1$ is the node receiving the request, $(p_k, p_{k+1}) \in \mathcal{E}$ for all $k < |p|$, and $p_{|p|}$ is a source node of content $c$. Note that there may exist distinct requests with paths sharing an arbitrary subset of nodes. Once a request $r = (c, p)$ is received by the network, it is routed through path $p$ until a suitable content is found and delivered, through the same path, to the requesting device. Unlike [10], in which the request must be satisfied by delivering the requested content $c$, here we allow for similarity-based delivery [4]: A content different from the requested one may be delivered as long as the content selection satisfies a desirable trade-off between delivery latency and content similarity.

A similarity matrix of nonnegative values describes the similarity between pairs of contents in $\mathcal{C}$. Similarity may account for properties such as language, authors, genres, and so on. In this paper, following [4], we define a dissimilarity matrix $D = (d_{c,c'})$, where $d_{c,c'} \geq 0$ denotes the cost of delivering content $c'$ when the requested content is $c$. Naturally, we have $d_{c,c} = 0$ for all contents $c \in \mathcal{C}$.
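As a concrete illustration of the relation between similarity scores and the dissimilarity matrix, the following sketch derives one from the other; the `1 - s` mapping and all numerical values are illustrative assumptions, not taken from the letter.

```python
import numpy as np

# Hypothetical similarity scores s[c, c'] in [0, 1] for a 4-content
# catalogue; the values and the 1 - s mapping are illustrative only.
S = np.array([
    [1.0, 0.8, 0.1, 0.0],
    [0.8, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.6],
    [0.0, 0.1, 0.6, 1.0],
])

# One simple dissimilarity model: d[c, c'] = 1 - s[c, c'], which is
# nonnegative and satisfies d[c, c] = 0 for every content c, as required.
D = 1.0 - S
```

Any other monotonically decreasing map from similarity to cost would serve equally well, as long as the diagonal stays at zero.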

The caching policy of node $v \in \mathcal{V}$ is defined by a vector $(x_{v,c})_{c \in \mathcal{C}}$, where $x_{v,c} \in \{0, 1\}$ indicates whether node $v$ stores content $c$: we have $x_{v,c} = 1$ if node $v$ stores content $c$, and otherwise we set $x_{v,c} = 0$. The overall caching policy of the network is defined by the matrix $X = (x_{v,c})$. Due to cache capacity and source node cache constraints, we have the inequalities

$$\sum_{c \in \mathcal{C}} x_{v,c} \leq C_v \quad \forall v \in \mathcal{V}, \qquad (1)$$
$$x_{v,c} = 1 \quad \forall c \in \mathcal{C},\ v \in S_c, \qquad (2)$$
where $C_v$ is the cache capacity of node $v$ and $S_c$ is the set of source nodes of content $c$.
Let $\mathcal{R}$ denote the set of requests that can be received by the network. Instances of requests are received according to independent Poisson processes, with arrival rate $\lambda_r$ (requests/s) for request $r \in \mathcal{R}$. We are interested in optimizing the caching decision matrix $X$, along with the delivery decision matrix $Y = (y_{r,c'})$. Variable $y_{r,c'} \in \{0, 1\}$ indicates the network decision to deliver content $c'$ for request $r = (c, p)$: We set $y_{r,c'} = 1$ if the network delivers content $c'$ in lieu of the requested content $c$ through path $p$, and otherwise we have $y_{r,c'} = 0$. Only contents that are cached or permanently stored at nodes in $p$ can be selected for delivery for request $r$. Accordingly, we set $y_{r,c'} = 0$ if $x_{v,c'} = 0$ for all $v \in p$. This constraint can be expressed as

$$y_{r,c'} \leq \min\Big\{1, \sum_{v \in p} x_{v,c'}\Big\} \quad \forall r = (c, p) \in \mathcal{R},\ c' \in \mathcal{C}. \qquad (3)$$
Moreover, only one content is selected to be delivered for each request $r$, which can be expressed as

$$\sum_{c' \in \mathcal{C}} y_{r,c'} = 1 \quad \forall r \in \mathcal{R}. \qquad (4)$$
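A feasibility check for a candidate pair $(X, Y)$ against the constraints above can be sketched as follows; the function and variable names are illustrative, and permanent storage at source nodes is assumed to be folded into $X$ so that constraint (2) need not be tested separately.

```python
import numpy as np

def is_feasible(X, Y, caps, paths):
    """Check constraints (1), (3) and (4) for binary X and Y.
    X: (nodes, contents) caching matrix (source storage folded in).
    Y: (requests, contents) delivery matrix.
    caps: per-node cache capacities; paths[r]: node list of request r.
    (Illustrative sketch; names are not from the paper.)"""
    if np.any(X.sum(axis=1) > caps):          # (1) cache capacity
        return False
    for r, path in enumerate(paths):
        if Y[r].sum() != 1:                   # (4) one content per request
            return False
        c = int(np.argmax(Y[r]))
        if X[path, c].sum() == 0:             # (3) available on the path
            return False
    return True

X = np.array([[1, 0], [0, 1]])                # two nodes, two contents
Y = np.array([[0, 1]])                        # single request, content 1
print(is_feasible(X, Y, caps=np.array([1, 1]), paths=[[0, 1]]))  # True
```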
III Offline Optimization

In this section, we study the problem of minimizing the weighted sum of average delivery latency and dissimilarity cost with respect to the caching decision matrix $X$ and the similarity-based delivery decision matrix $Y$. We assume here that the arrival rates $\{\lambda_r\}$ are known, and that the problem is solved offline before the runtime delivery phase.

III-A Problem Formulation

As in [10], we assume that the delay of delivering the response message is much larger than that of forwarding a request. Therefore, the delay of delivering content $c'$ for request $r = (c, p)$ under a given caching matrix $X$ is written as

$$L_{r,c'}(X) = \sum_{k=1}^{|p|-1} w_{p_{k+1}, p_k} \prod_{j=1}^{k} \big(1 - x_{p_j, c'}\big), \qquad (5)$$
where the product equals one only if no node among $p_1, \dots, p_k$ holds content $c'$.
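The path-delay expression above can be evaluated by accumulating hop delays until the first node on the path that holds the delivered content. The following is a minimal sketch with illustrative names.

```python
def delivery_delay(X, path, edge_delay, c):
    """Delay of serving content c along `path` under binary caching X:
    hop delays are summed until the first node holding c. The terminal
    node of a valid path is a source node of the requested content, so
    for the requested content the loop always stops.
    X[v][c] in {0, 1}; edge_delay[(u, v)] is the hop delay."""
    delay = 0.0
    for k, node in enumerate(path):
        if X[node][c] == 1:          # (soft) cache hit: deliver from here
            return delay
        if k + 1 == len(path):
            raise ValueError("content not available on the path")
        delay += edge_delay[(node, path[k + 1])]
```

For instance, if the second node on a three-node path caches the content, only the first hop delay is counted.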
We consider the problem of minimizing the weighted sum of the average delay (5) and the average dissimilarity cost under constraints (1)-(4). The cost of delivering content $c'$ for request $r = (c, p)$ consists of delay and dissimilarity cost, written as $L_{r,c'}(X) + \gamma d_{c,c'}$, where parameter $\gamma$ is a nonnegative constant that quantifies the penalty, in terms of latency cost, incurred by one unit of dissimilarity cost. The optimization problem is defined as the minimization

$$\min_{X, Y}\ \sum_{r=(c,p) \in \mathcal{R}} \lambda_r \sum_{c' \in \mathcal{C}} y_{r,c'} \big( L_{r,c'}(X) + \gamma d_{c,c'} \big) \qquad (6a)$$
s.t. constraints (1)-(4), restated as (6b)-(6f),

where all constraints in (6) were introduced before. Reference [10] studied the special case of problem (6) in which only the requested content may be delivered. Under this assumption, the only feasible solution for matrix $Y$ is given as $y_{r,c} = 1$ for all $r = (c, p) \in \mathcal{R}$, i.e., the requested content $c$ is delivered for any request $r$. Therefore, the optimization is only over the caching matrix $X$.

In the following subsections, we tackle problem (6) through the following steps: (i) the integer constraints on matrices $X$ and $Y$ are relaxed, and a minimax formulation is introduced; (ii) a variant of the gradient descent ascent algorithm, namely HiBSA [9], is applied to define an iterative procedure that converges to a stationary point of the minimax problem; and (iii) a greedy rounding method is applied to obtain integer solutions for variables $X$ and $Y$.

III-B Integer Relaxation and Problem Reformulation

In order to address problem (6), we first relax the binary variables in matrices $X$ and $Y$ to lie in the interval $[0, 1]$. The relaxed problem is still non-convex on account of the objective function (6a) and the constraint (6f). We proceed by defining the Lagrangian function

$$\mathcal{L}(X, Y, M) = \sum_{r=(c,p) \in \mathcal{R}} \lambda_r \sum_{c' \in \mathcal{C}} \Big[ y_{r,c'} \big( L_{r,c'}(X) + \gamma d_{c,c'} \big) + \mu_{r,c'} \Big( y_{r,c'} - \min\Big\{1, \sum_{v \in p} x_{v,c'}\Big\} \Big) \Big], \qquad (7)$$
where we wrote $M = (\mu_{r,c'})$, and $\mu_{r,c'} \geq 0$ are the Lagrangian multipliers for constraint (6f). The multiplication by the requests' rates $\lambda_r$ is introduced in the Lagrangian in order to simplify the online design presented in Sec. IV. We then consider the problem

$$\min_{X, Y} \max_{M \geq 0}\ \mathcal{L}(X, Y, M), \qquad (8)$$
where the minimization is subject to the remaining convex constraints.

Note that the optimal solution of problem (8) coincides with that of the mentioned relaxation of problem (6) [11, Chapter 5]. Problem (8) is a nonconvex-concave minimax optimization problem, which is non-convex in the primal variables $X$ and $Y$ and concave (affine) in the dual variables $M$.

III-C A Variant of the Gradient Descent Ascent Algorithm

1:  Input: initial iterates $X^0$, $Y^0$, $M^0$; step sizes;
2:  Output: integer caching matrix $X$ and delivery matrix $Y$
3:  repeat
4:     Compute the primal update $(X^{k+1}, Y^{k+1})$ by projected gradient descent on the Lagrangian
5:     Compute the dual update $M^{k+1}$ by perturbed gradient ascent on the Lagrangian
6:  until stopping criterion is satisfied
7:  Each node $v$ allocates cache to the uncached content $c$ with the largest $x_{v,c}$ until no cache is available
8:  For each request $r$, set $y_{r,c'} = 1$ for the content $c'$ with the largest $y_{r,c'}$ among the contents available on the path
Algorithm 1 HiBSA Algorithm with Rounding

To tackle problem (8), we apply the HiBSA algorithm introduced in [9], which leverages gradient descent for the minimization over the primal variables $X$ and $Y$, and gradient ascent for the maximization over the dual variables $M$. The HiBSA algorithm is proved in [9] to converge to a stationary solution by solving a sequence of convex minimization and concave maximization problems with suitable regularization terms. To apply this scheme, we first need to identify strongly convex and strongly concave approximation functions for the primal and dual variables, respectively, that satisfy the conditions in [9].

Let $(X^k, Y^k)$ and $M^k$ denote the $k$-th iterates of the primal and dual variables. Since the Lagrangian function is twice-differentiable, its gradient is Lipschitz continuous with respect to the primal variables, with a constant $L$ given by the largest eigenvalue of the Hessian of $\mathcal{L}$. As approximation function in the primal variables, we consider the quadratic surrogate obtained by adding the proximal term $(\beta/2)\,\|(X, Y) - (X^k, Y^k)\|^2$ to the Lagrangian, for some constant $\beta > L$. Since the Lagrangian function is linear in $M$, the approximation function for the dual variables can be directly defined by subtracting the term $(\rho/2)\,\|M - M^k\|^2$, for some constant $\rho > 0$. The introduced approximation functions satisfy the Assumptions in [9], since they are respectively strongly convex and strongly concave; they provide respectively an upper bound and a lower bound for the Lagrangian function at the current iterate; they guarantee gradient consistency; and they have Lipschitz continuous gradients. The convex minimization problem for the primal variables solved at iteration $k$ by HiBSA is defined as

$$(X^{k+1}, Y^{k+1}) = \Pi\Big[ (X^k, Y^k) - \tfrac{1}{\beta}\, \nabla_{(X,Y)} \mathcal{L}(X^k, Y^k, M^k) \Big], \qquad (9)$$
where $\nabla_{(X,Y)} \mathcal{L}$ denotes the overall gradient with respect to the primal variables, and $\Pi[\cdot]$ is the projection onto the convex subset of primal variables defined by constraints (6c)-(6e) [11, Chapter 8]. Let function $\sigma_p(v)$ return the position of node $v$ in path $p$, so that we have $\sigma_p(v) = k$ if $v = p_k$, and $\sigma_p(v) = 0$ otherwise. The derivatives of $\mathcal{L}$ evaluated at the $k$-th iterate can be computed as

$$\frac{\partial \mathcal{L}}{\partial x_{v,c'}} = -\sum_{r=(c,p):\, v \in p} \lambda_r \Big[ y^k_{r,c'} \sum_{k' \geq \sigma_p(v)} w_{p_{k'+1}, p_{k'}} \prod_{j \leq k',\, j \neq \sigma_p(v)} \big(1 - x^k_{p_j, c'}\big) + \mu^k_{r,c'}\, \mathbb{1}\Big\{ \sum_{u \in p} x^k_{u,c'} < 1 \Big\} \Big], \qquad (10)$$
$$\frac{\partial \mathcal{L}}{\partial y_{r,c'}} = \lambda_r \big( L_{r,c'}(X^k) + \gamma d_{c,c'} + \mu^k_{r,c'} \big). \qquad (11)$$
Similarly, the concave maximization problem for the dual variables at iteration $k$ is defined as

$$M^{k+1} = \Big[ \big(1 - \rho\, \epsilon^k\big) M^k + \rho\, \nabla_{M} \mathcal{L}(X^{k+1}, Y^{k+1}, M^k) \Big]_{+}, \qquad (12)$$
where $\epsilon^k$ is a perturbation parameter satisfying Assumption C in [9], and $[\cdot]_{+}$ denotes the projection onto the nonnegative orthant. The gradient with respect to the dual variables at iteration $k$ is denoted as $\nabla_M \mathcal{L}$, and is computed as

$$\frac{\partial \mathcal{L}}{\partial \mu_{r,c'}} = \lambda_r \Big( y^{k+1}_{r,c'} - \min\Big\{ 1, \sum_{v \in p} x^{k+1}_{v,c'} \Big\} \Big). \qquad (13)$$
The overall HiBSA algorithm is summarized in Algorithm 1. The stopping criterion is given as $\|(X^{k+1}, Y^{k+1}) - (X^k, Y^k)\| \leq \epsilon$, where $\epsilon > 0$ is the desired accuracy. We now discuss how to perform rounding.
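The descent/ascent structure of Algorithm 1 can be illustrated on a toy constrained problem; the objective, step sizes, and vanishing perturbation below are illustrative stand-ins, not the paper's Lagrangian.

```python
# Toy sketch of the gradient descent ascent loop in Algorithm 1:
#   min_{x in [0, 1]} max_{m >= 0}  (x - 1)^2 + m * (x - 0.5),
# i.e., minimize (x - 1)^2 subject to x <= 0.5 via its Lagrangian.
x, m = 0.0, 0.0
alpha, rho = 0.05, 0.05            # illustrative step sizes
for k in range(1, 2001):
    grad_x = 2.0 * (x - 1.0) + m   # primal gradient of the Lagrangian
    x = min(max(x - alpha * grad_x, 0.0), 1.0)   # descent + projection
    eps_k = 1.0 / k                # vanishing dual perturbation (cf. [9])
    m = max((1.0 - rho * eps_k) * m + rho * (x - 0.5), 0.0)  # ascent
# The iterates approach the constrained optimum x = 0.5, with m near
# the optimal multiplier m = 1.
```

Plain descent ascent on the bilinear coupling alone would oscillate; the strongly convex primal term and the shrinking dual perturbation damp the iterates toward the saddle point, which mirrors the role of the regularization terms in HiBSA.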

III-D Rounding Method

In order to obtain an integer solution from the output of the HiBSA algorithm, a greedy rounding algorithm is applied to round first the caching decision matrix $X$ and then the delivery decision matrix $Y$. In the first step, each node $v$ selects contents to cache by following the order of decreasing values of $x_{v,c}$ until the cache capacity constraint is met with equality. Then, in a similar manner, for each request $r$, the delivery decision is set to $y_{r,c'} = 1$ for the content $c'$ with the largest value of $y_{r,c'}$ among the contents available along the path. Note that quantifying the loss due to rounding remains an open problem, which does not seem tractable with standard tools such as those used in [10].
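The two-stage greedy rounding can be sketched as follows; names and shapes are illustrative, and source storage is assumed to be folded into the fractional caching matrix so that every path offers at least one available content.

```python
import numpy as np

def greedy_round(X_frac, Y_frac, caps, paths):
    """Round fractional HiBSA outputs (Sec. III-D): each node keeps its
    top-valued contents up to capacity, then each request delivers the
    content with the largest fractional y among those on its path.
    (Illustrative sketch; names are not from the paper.)"""
    X = np.zeros_like(X_frac, dtype=int)
    for v in range(X_frac.shape[0]):
        top = np.argsort(-X_frac[v])[: caps[v]]   # decreasing x_{v,c}
        X[v, top] = 1
    Y = np.zeros_like(Y_frac, dtype=int)
    for r, path in enumerate(paths):
        on_path = np.flatnonzero(X[path].sum(axis=0) > 0)
        Y[r, on_path[np.argmax(Y_frac[r, on_path])]] = 1
    return X, Y
```

Rounding $X$ before $Y$ matters: the delivery choice is restricted to contents that survived the caching round, so the final pair automatically satisfies the availability constraint.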

IV Online Optimization

In this section, we consider the scenario in which the arrival rates $\{\lambda_r\}$ are a priori unknown. The requests are received according to the independent Poisson processes described in Sec. II. In order to optimize the caching and delivery decision matrices in this scenario, we introduce an online algorithm that solves the minimax problem (8). Following the offline solution presented in Sec. III, the algorithm leverages stochastic gradient descent for the primal variables and stochastic gradient ascent for the dual variables. Moreover, an online greedy rounding method is used to determine the cache allocation at the nodes.

Time is partitioned into periods of equal length $T$. In each time period, the number of instances of each request $r$ is a Poisson variable with mean $\lambda_r T$. Denote as $\mathcal{R}^t$ the multi-set of requests received in the $t$-th time slot. Note that a request may appear multiple times in $\mathcal{R}^t$. For each received request $r$, the network delivers the content $c'$ with the largest value of the current delivery decision variable $y_{r,c'}$, subject to cache availability along the path. We denote as $\mathcal{D}^t$ the multi-set of triples containing request $r$ and the associated delivered content $c'$.

Since the only measured delays are for the requests in $\mathcal{R}^t$, the derivatives in Sec. III-C are computed only for the variables associated with requests $r \in \mathcal{R}^t$. A stochastic estimate of the derivative with respect to the caching variables can be specifically obtained by summing, over the received request instances, the corresponding terms in the square brackets of the offline derivative expressions. In a similar way, stochastic estimates of the derivatives with respect to the delivery variables and the dual variables can be obtained from the respective terms in square brackets. Following the same arguments as in [10, Lemma 1], these stochastic derivatives are unbiased estimates of the true derivatives and they have finite variance. At the end of each time slot, the estimates of the derivatives are computed as discussed above and applied using steps 4 and 5 in Algorithm 1, by replacing the exact gradients with the discussed stochastic estimates. These two steps are followed by greedy rounding, as in steps 7 and 8 of Algorithm 1.
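The unbiasedness of such per-slot estimates can be checked numerically with a minimal simulation; the rate, slot length, and gradient term below are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the stochastic-derivative idea above: a derivative of the
# form lam * g is estimated from one slot of Poisson arrivals as
# (count / T) * g, which is unbiased because E[count] = lam * T.
# (lam, T, and g are illustrative stand-ins for a rate and a gradient term.)
lam, T, g = 2.0, 5.0, -0.7
estimates = [rng.poisson(lam * T) / T * g for _ in range(20000)]
print(abs(np.mean(estimates) - lam * g) < 0.05)   # mean is close to lam * g
```

This is why multiplying the Lagrangian by the request rates simplifies the online design: the rate factor is supplied automatically by the empirical arrival counts, so the algorithm never needs to know $\lambda_r$.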

Fig. 2: Expected delay of similarity-based caching compared with adaptive caching [10], along with the dissimilarity cost obtained by similarity-based caching, for two values of the content popularity parameter.

V Numerical Experiments

In this section, we provide numerical results concerning a 2D-grid network topology [10]. The average delay over an edge follows a uniform distribution. Each node has a fixed caching capacity. The total number of contents in $\mathcal{C}$ is 10, and for each content $c$, a node is randomly selected as the source node that permanently stores $c$. The set $\mathcal{R}$ of requests is generated as follows. We first select a subset of nodes that can generate requests. For each request, a content $c$ is selected from $\mathcal{C}$ following a Zipf popularity distribution, and the forwarding path is selected as the shortest path from a randomly selected starting node in this subset to the source node of the requested content. With the set $\mathcal{R}$ fixed, we set a common arrival rate for every request $r \in \mathcal{R}$. The dissimilarity $d_{c,c'}$ between contents $c$ and $c'$ is modeled as an increasing function of their index difference, scaled by a non-negative constant. The performance metric is the expected delay of the requests. We compare the obtained performance with the offline algorithm introduced in [10], which does not enable similarity caching and is referred to as adaptive caching. We will also provide a comparison with a state-of-the-art per-cache LRU-based similarity caching scheme, which always delivers the most similar content in the cache of the starting node for each request.

Fig. 3: Expected delay of the similarity-based caching scheme versus the cache capacity of all nodes, compared with the adaptive caching scheme in [10].

First, we evaluate the offline algorithm in Algorithm 1, referred to as similarity-based caching, as a function of the weight $\gamma$ given to the dissimilarity cost in (6a), for a fixed cache capacity at all nodes. In Fig. 2, when $\gamma$ is small, similarity-based caching is seen to obtain a significantly lower expected delay as compared with adaptive caching by delivering similar contents instead of the requested contents. As $\gamma$ grows larger, delivering different contents is increasingly penalized, and the performance converges to that of adaptive caching [10]. Fig. 2 also shows that the dissimilarity cost of similarity-based caching decreases with $\gamma$. We also observe more significant gains for similarity-based caching when the Zipf parameter is larger, corresponding to a request distribution more concentrated around the most popular contents.

Fig. 3 shows the expected delay performance versus the cache capacity, assumed to be equal for all nodes. It is observed that, as cache resources become abundant, the two schemes obtain similar results, while similarity-based caching is better able to use limited caching resources for the given value of $\gamma$.

Fig. 4: Average delay for the proposed online similarity-based caching scheme and the per-cache LRU-based scheme as a function of the number of time slots. The online performance converges to the average delay obtained by the offline scheme.

Finally, we evaluate the performance of the online HiBSA algorithm. To this end, we simulate the (Poisson) request processes and plot the average delay obtained with the current iterates as a function of the number of time slots. Note that we have found it useful to employ different step sizes for the different blocks of variables, so that some variables change more gradually than others. The delay is averaged over a window comprising the last ten time slots, and the plot corresponds to one realization of the request processes. It is seen that the online HiBSA algorithm can significantly outperform the per-cache LRU-based scheme thanks to network-wide coordination. We also observe that the online similarity-based caching scheme approaches the performance of the offline scheme as more requests are processed. Convergence is particularly fast for more concentrated popularity distributions, i.e., for a larger Zipf parameter. This is because, in this case, it is sufficient to optimize the caching and delivery decision variables only for the more popular contents in order to reap most of the benefits of caching.

VI Conclusions

In this work, we have studied a multi-hop caching network in which similarity-based delivery is allowed. Both offline and online optimization of the caching and delivery policies have been considered. The proposed solutions are based on a variant of gradient descent ascent that minimizes the weighted sum of delay and dissimilarity cost of the requests. Interesting future directions include integrating advanced wireless edge caching strategies [3] and experimenting with larger-scale networks.


  • [1] Y. Li, H. Xie, Y. Wen, C. Chow and Z. Zhang, “How Much to Coordinate? Optimizing In-Network Caching in Content-Centric Networks,” IEEE Trans. Netw. Service Manag., vol. 12, no. 3, pp. 420-434, Sep. 2015.
  • [2] S. Borst, V. Gupta and A. Walid, “Distributed Caching Algorithms for Content Distribution Networks,” in 2010 Proc. IEEE INFOCOM, USA, Mar. 2010, pp. 1-9.
  • [3] S. M. Azimi, O. Simeone, A. Sengupta and R. Tandon, “Online Edge Caching and Wireless Delivery in Fog-Aided Networks With Dynamic Content Popularity,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1189-1202, June 2018.
  • [4] M. Garetto, E. Leonardi and G. Neglia, “Similarity Caching: Theory and Algorithms,” in 2020 Proc. IEEE INFOCOM, China, April 2020.
  • [5] D. Zhang, J. Wang, D. Cai, and J. Lu, “Self-taught hashing for fast similarity search,” in Proc. 33rd Int. ACM SIGIR Conf. Res. Development Inf. Retrieval, July 2010, pp. 18–25.
  • [6] S. Pandey, A. Z. Broder, F. Chierichetti, V. Josifovski, R. Kumar, and S. Vassilvitskii, “Nearest-neighbor caching for content-match applications,” in Proc. 18th Int. Conf. World Wide Web, April 2009, pp. 441–450.
  • [7] P. Sermpezis, T. Giannakas, T. Spyropoulos and L. Vigneri, “Soft Cache Hits: Improving Performance Through Recommendation and Delivery of Related Content,” IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1300-1313, June 2018.
  • [8] P. Sermpezis, T. Spyropoulos, L. Vigneri, and T. Giannakas, “Femtocaching with soft cache hits: Improving performance with related content recommendation,” in 2017 Proc. IEEE GLOBECOM, Singapore, Dec. 2017, pp. 1–7.
  • [9] S. Lu, I. Tsaknakis, M. Hong, and Y. Chen, “Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications,” to appear in IEEE Trans. Signal Process.
  • [10] S. Ioannidis and E. Yeh, “Adaptive Caching Networks With Optimality Guarantees,” IEEE/ACM Trans. Netw., vol. 26, no. 2, pp. 737-750, April 2018.
  • [11] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.