The wireless edge caching architecture proposes to cache popular files at small-cell base stations (SBSs) in order to serve future user requests. This is a promising approach for accommodating the increasing mobile data traffic in a cost-efficient fashion, and it has rightfully spurred a flurry of related work. A weakness of these proactive caching solutions, however, is that they assume static and known file popularity. Practice has shown quite the opposite: file popularity changes fast and is challenging to learn. Here, we study these systems from a new perspective and propose an online caching policy that optimizes their performance under any popularity model. Our approach tackles the caching problem in its most general form and reveals a novel connection between (wireless or wired) caching networks and Online Convex Optimization (OCO).
Due to its finite capacity a cache can host only a small subset of the file library, and it is therefore necessary to employ a caching policy that selects which files should be stored. The main selection criterion is typically the fraction of file requests the cache can satisfy (cache hit ratio), and different policies employ different rules in order to maximize this metric. For instance, the Least-Recently-Used (LRU) policy inserts in the cache the newly requested file and evicts the one that has not been requested for the longest time period, while the Least-Frequently-Used (LFU) policy evicts the file that is least frequently requested. These widely adopted policies were designed empirically; hence, a natural question arises: under what conditions do they achieve a high hit ratio?
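For concreteness, the two eviction rules can be sketched as follows. This is a minimal Python sketch for unit-size files; the class and method names are ours, not the paper's.

```python
from collections import Counter, OrderedDict

class LRUCache:
    """Evicts the least-recently-used file when inserting into a full cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys ordered from oldest to newest access

    def request(self, f):
        if f in self.store:                 # cache hit: refresh recency
            self.store.move_to_end(f)
            return True
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict the least recently used file
        self.store[f] = True                # insert the newly requested file
        return False

class LFUCache:
    """Evicts the least-frequently-requested file when inserting into a full cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = set()
        self.freq = Counter()  # global request counts

    def request(self, f):
        self.freq[f] += 1
        if f in self.store:
            return True
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda g: self.freq[g])
            self.store.remove(victim)       # evict the least frequently used file
        self.store.add(f)
        return False
```

Both rules react to each request in O(1) or O(capacity) time, which is why they are so widely deployed despite their empirical design.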
The answer depends on the file popularity model. For instance, it has been shown that (i) for stationary requests, LFU achieves the highest hit ratio; (ii) an age-based-threshold policy maximizes the ratio when requests follow the Poisson Shot Noise model; and (iii) LRU has the highest hit ratio for more general request models [9, 10]. These policies, however, perform poorly when the request model is other than the one assumed; and indeed, in practice the requests follow unknown and possibly time-varying distributions. This renders imperative the design of a universal caching policy that works provably well for all possible request models.
This requirement is even more crucial for wireless edge caching networks, see Fig. 1, where the caches receive requests at low rates and therefore “see” processes with highly non-stationary behavior [11, 12]. Moreover, due to the wireless medium, a user might be within the range of multiple SBS caches, each one offering a different transmission rate and thus a different caching utility. This creates the need for explicit routing decisions, which are inevitably intertwined with the caching policy. In other words, the caching decisions across different SBSs are coupled, routing affects caching, and the requests might change both in space and time. Request models for this intricate case include random replacement models and inhomogeneous Poisson processes [7, 13], among others. However, such multi-parametric models are challenging to fit to data and rely on strong assumptions about the popularity evolution (see Sec. II). Our approach is orthogonal to these works, as we design an online learning mechanism for adaptive caching and routing decisions that reduce the MBS transmissions and maximize the utility offered by the SBS caches.
I-B Methodology and Contributions
We introduce a model-free caching model along the lines of the OCO framework. We assume that file requests are drawn from a general distribution, which is equivalent to caching against an adversary that chooses the requests arbitrarily. (The adversary might even select the requests so as to degrade the system performance, exploit our past caching decisions, and so on.) At each slot, (i) the adversary creates a new file request; (ii) a routing plan is deployed to retrieve the file from the SBS caches and/or the MBS; (iii) a (file, cache)-dependent utility is obtained; and (iv) the caching policy updates the stored files at each SBS. This generalizes the criterion of cache hit ratio and allows one to build policies that, for instance, minimize delay or give different priorities to different users.
In this setting we seek to design a policy with sublinear regret, i.e., one whose utility loss per slot vanishes as the time horizon increases when compared to the best static cache configuration (hindsight policy). To this end, we propose the Bipartite Supergradient Caching Algorithm (BSCA) policy and prove an upper bound on its regret for a network of caches that each can store up to of the library files. The constants and deg are independent of the parameters and ; therefore BSCA amortizes the average loss compared to the hindsight policy, i.e., , and it is oblivious to the library size. Moreover, for the single-cache scenario we derive the lowest attainable regret bound and prove that BSCA matches it. Our contributions can thus be summarized as follows:
Machine Learning (ML) caching: We provide a fresh ML angle for the design of wireless edge caching policies by reformulating this problem to handle time-varying file popularity and ensure its efficient solution. To the best of our knowledge this is the first time online convex optimization is used in the context of caching networks.
Universal caching policy: BSCA has zero loss over the hindsight policy under any request model and hits the sweet spot of complexity versus performance. It is applicable to a variety of settings, including general caching networks that can be modeled with a bipartite graph, and networks with time-varying parameters or file prefetching costs.
Single-cache performance: For the basic model of one cache, we prove that the lowest attainable regret is , where parameter is independent of . We show that BSCA achieves this bound by employing a smart combination of LFU- and LRU-type decisions.
Fast Cache Projection: BSCA requires at each slot projections onto the intersection of box and simplex constraints. We design a routine that performs each of them in steps. This simplifies the execution of BSCA and enables its application to large caching networks.
Trace-driven Evaluation: We evaluate BSCA using several request models and real traces, and we compare it with state-of-the-art competitor algorithms. We verify that BSCA has no regret and find that it outperforms previous policies by up to 45.8% in typical scenarios.
I-C Paper Organization
The rest of this paper is organized as follows. Section II presents the related work and Section III introduces the system model. The online wireless edge caching problem is formulated in Section IV, and Section V presents the BSCA algorithm for a network of caches. Section VI introduces our projection routine. Section VII focuses on the simpler but important case of one cache. We discuss model extensions in Section VIII, compare BSCA with key competitors in Section IX and conclude in Section X.
II Background and Related Work
The literature on caching policies cannot, by any means, be covered in a single section, and we refer the interested reader to [4, 14] for a thorough presentation. Here we focus on reactive policies and online algorithms for caching networks.
II-A Reactive Policies
The design of caching policies depends heavily on the file popularity model that is assumed to generate the requests. One option is to use an adversarial model, where a policy’s hit rate is compared to Belady’s dynamic hindsight policy, which evicts the file that will be requested farthest in the future [9, 10]. LRU performs better than other policies under this model (comparing against this very demanding benchmark requires restricting the capacity of the Belady cache to a portion of the actual cache; otherwise all policies perform very poorly), but its performance is actually comparable to that of any other marking policy, e.g., even a simple FIFO. In a sense, this dynamic hindsight policy is a “too strong” benchmark to help us identify a good caching policy. On the other hand, moderately stationary models like IRM are easy to fit to data, and LFU maximizes the cache hit rate in this case. However, IRM is accurate only when used to model requests within small time intervals where popularity is roughly static; hence it is not suitable for evaluating the long-term performance of a caching network.
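Belady's dynamic hindsight benchmark mentioned above admits a compact sketch: on every eviction, drop the cached file whose next request lies farthest in the future. This Python sketch (function name ours) counts hits for unit-size files.

```python
def belady_hits(requests, capacity):
    """Offline Belady policy: evict the cached file whose next request lies
    farthest in the future (files never requested again are evicted first).
    Requires the entire request sequence in advance, hence 'hindsight'."""
    cache, hits = set(), 0
    for t, f in enumerate(requests):
        if f in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            def next_use(g):
                # Position of the next request for g after slot t.
                for s in range(t + 1, len(requests)):
                    if requests[s] == g:
                        return s
                return float('inf')  # g is never requested again
            cache.remove(max(cache, key=next_use))
        cache.add(f)
    return hits
```

On the trace `[1, 2, 3, 1, 2]` with capacity 2, Belady scores one hit while LRU scores none, illustrating why this clairvoyant benchmark is so demanding.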
In fact, in real systems the requests are rarely stationary, and this has motivated the proposal of several non-stationary models. For instance,  uses the theory of variations,  makes random content replacements in the catalog,  proposes a time-dependent Poisson model, and  introduces the shot noise model for correlated requests in temporal proximity. Unfortunately, selecting and fitting these models to data is a time-demanding task, and thus not suitable for fast-changing environments. There are also several model-based/free approaches for predicting content popularity using statistical analysis, transfer learning, or social network properties; see [17, 18, 19, 20]. Yet, these works do not incorporate the predictions into the system operation. Unlike prior efforts, our proposal does not involve model selection, and the learning mechanism is fully embedded into the caching policy.
Instead of fitting models, another option is to learn the popularity without prior assumptions [21, 22]. For instance,  models the popularity evolution as a Markov process and employs Q-learning to estimate the transition probabilities, which are then used for proactive caching. Such model-free solutions work well if there are adequate data, but they have substantial computation and memory requirements. For instance, tabular Q-learning needs memory combinatorial in the catalog size and cache capacity; and Q-learning with function approximation requires more involved gradient computations, while its convergence can be slow. Following a different approach,  predicts file popularities using classification. This interesting approach, however, needs feature extraction, and it neither considers routing nor accounts for changes in utilities. Other online caching proposals include [24, 25, 26], which study the basic paging problem of hit maximization in one cache. Our approach works for networks of caches without requiring stationary or known request models.
II-B Caching Networks (CNs)
The first OCO-based caching policy was proposed in , which reformulated the caching problem and embedded a learning mechanism, while  studied how such policies can be used in device-to-device caching scenarios. In CNs one must additionally decide which cache will satisfy a request (routing) and which files will be evicted (caching), and these decisions become entangled when each user is connected to multiple caches. Thus, it is not surprising that online policies for CNs are under-explored. Placing more emphasis on the network,  introduced a joint routing and caching algorithm assuming that file popularity is stationary. On the other hand, proposals for reactive CN policies include: randomized caching policies for small-cell networks; joint caching and SBS transmission policies; distributed cooperative caching algorithms; and policies using a TTL-based utility-cache model. All these solutions presume that the popularity model is fixed and known.
Other works proposed the multi-LRU (mLRU) heuristic strategy, and the “lazy rule” extending -LRU to provide local optimality guarantees under stationary requests. These works pioneered the extension of the seminal LFU/LRU-type policies to the case of multiple connected caches and designed efficient caching algorithms with minimal overhead. Nevertheless, once the stationarity assumption is dropped, the problem of online routing and caching remains open. Our method is different, as we embed into the system operation a learning mechanism that adapts the caching and routing decisions to any request model and to network changes.
III System Model
Network Connectivity. The caching network consists of small-cell base stations (SBS) denoted with the set , and a macro-cell base station (MBS) indexed with 0; each station is equipped with a cache. There is a set of user locations , where file requests are created. The connectivity between user locations and SBSs is modeled by parameters , where only if cache can be reached from location . The MBS is within the range of all users in .
File Requests. The system evolves in slots, . Users submit requests for files from a library of files with unit size. (For simplicity, we assume that files have unit size; the results can be readily extended to files of arbitrary size.) We denote with  the event that a request for file has been submitted by a user at location during slot . At each slot we assume that there is exactly one request. (We can also consider batches of requests: a batch with one request from each location is biased toward equal request rates across locations, whereas an unbiased batch contains an arbitrary number of requests from each location. Our guarantees hold for unbiased batches of arbitrary, but finite, length.) From a different perspective, this means that the policy is applied after every request, exactly as happens with the standard LFU/LRU-type reactive policies; see [33, 34] and references therein. Hence, the request process can be described by a sequence of vectors drawn from:
The instantaneous file popularity is expressed by the probability distribution (with support ), which is considered unknown and arbitrary. The same holds for the joint distribution that describes the file popularity evolution within the time interval . This general model captures all request sequences studied in the literature, including stationary (i.i.d. or otherwise), non-stationary, and adversarial models. The latter are the most demanding models one can employ, as they include request sequences selected by an adversary aiming to disrupt the system performance, e.g., consider Denial-of-Service attacks. If a policy achieves a certain performance under this model, it is guaranteed to meet this benchmark under all request models.
Caching. Each SBS can cache only files, with , while the MBS can store the entire library, i.e., . One may also assume that the MBS has high-capacity direct access to the file server. Following the standard femtocaching model, we perform caching using Maximum Distance Separable (MDS) codes, where files are split into a fixed number of chunks, and each stored chunk is a pseudo-random linear combination of the original chunks. By the properties of MDS codes, a user is able to decode the file (with high probability) if it receives any  coded chunks, a property that greatly facilitates cache collaboration and improves efficiency.
The above model results in the following: the caching decision vector has elements, and each element denotes the amount of random coded chunks of file stored at cache . (The fractional model is justified by the observation that large files are composed of thousands of chunks, stored independently; hence, rounding the fractional decisions to the closest integer induces small errors.) Based on this, we introduce the set of eligible caching vectors:
which is convex. We can now define the online caching policy:
A caching policy is a (randomized) rule:
which at each slot maps past observations and configurations to a new caching vector .
Note that unlike previous strictly proactive caching policies, we assume here that files can be cached dynamically in response to submitted requests.
Routing. Since each location is possibly connected to multiple caches, we introduce routing variables to determine the cache from which the requested file will be fetched. Namely, let denote the portion of request that is served by cache , and we define the respective routing vector . There are two important remarks here. First, due to the coded caching model, the requests can be simultaneously routed from multiple caches. Second, the caching and routing decisions are coupled and constrained: (i) a request cannot be routed from an unreachable cache; (ii) we cannot route from a cache more data chunks than it has; and (iii) each request must be fully routed, i.e., satisfied.
Based on the above, we define the set of eligible routing vectors conditioned on caching policy as:
where the first constraint ensures that the entire request is routed, and the second constraint captures connectivity and caching limitations. We note that routing from MBS (variable ) does not appear in the second constraint because the MBS stores the entire file library and can serve all users. This last-resort routing option ensures that the set is non-empty for all . As it will become clear in the next section, the optimal routing decisions can be easily devised for a given caching and request vector. This is an inherent property of uncapacitated bipartite caching networks, and also appears in prior works, e.g., see .
IV Problem Statement & Formulation
We begin this section by defining the caching objective and then proving that the online wireless edge caching operation can be modeled as a regret minimization problem.
IV-A Cache Utility
We consider a utility-cache model that is more general than cache-hit maximization. We introduce the weights  to denote the utility when delivering a unit of file (i.e., a coded chunk) to location from cache instead of the MBS, and we trivially set . This detailed file-dependent utility model can capture bandwidth economization from cache hits, QoS improvement from using caches in proximity, or any other cache-related benefit such as transmission energy savings due to proximity to the SBSs. (We can obtain the special case of hit-ratio maximization from the above model by setting the weights accordingly.) Our model allows these benefits to differ for each cache and user location due to, for example, the impact of wireless links; we extend it in Sec. VIII to account for network dynamics such as link capacity variations.
We can then define the network utility accrued in slot as:
where index is used to remind us that  is affected by the request . It is easy to see that  states that utility is accrued when a unit of request is successfully routed to a cache where file is available. Note also that we have written  only as a function of caching, since for each  we have already included the selection of the optimal routing in the utility definition. As we will see next, this formulation facilitates the solution of the problem by simplifying the projection step.
IV-B Problem Formulation
Formulating the caching network operation as an OCO problem is non-trivial and requires certain conceptual innovations. For the discussion below please refer to Fig. 2. First, in order to model that the request sequence can follow any arbitrary and unknown probability distribution, we use the notion of an adversary that selects in each slot . In the worst case, this entity generates requests aiming to degrade the performance of the caching system. Going a step further, we model the adversary as selecting the utility function instead of the request. Namely, at each slot , the adversary picks from the family of functions by deciding the vector . We emphasize that these functions are piece-wise linear. In the next subsection we will show that they are concave in the caching vector , but not always differentiable.
It is important to emphasize that we consider here the practical online setting where  is decided after the request has arrived and the caching utility has been calculated. This timing naturally reflects the operation of reactive caching policies, where first a generated request yields some utility (based on whether there was a cache hit or miss), and then the system reacts by updating the cached files. In other words, caching decisions are made without knowing the future requests. The above steps allow us to reformulate the caching problem and place it squarely within the OCO framework.
Given the adversarial nature of our request model, the ability to extract useful conclusions depends crucially on the choice of the performance metric. Differently from the competitive ratio approach of , we introduce a new metric for designing our policies. Namely, we will compare how our policy fares against the best static cache configuration designed in hindsight. This benchmark is a hypothetical policy that makes one-shot caching decisions having a priori knowledge of the entire request sequence. This metric is commonly used in machine learning [36, 37] and is known as the worst-case static regret. In particular, we define the regret of policy as:
where  is the time horizon of reference. The maximization is over all possible adversary distributions, and the expectation is taken w.r.t. the possibly randomized  and . Essentially, this captures that the adversary can select any sequence of functions so as to deteriorate the effectiveness of our caching decisions. (In defining the regret, the maximization is taken w.r.t. the sequence of functions, which for our problem is determined by the sequence of requests.)
The best cache configuration is found by using the entire sample path of requests and solving:
Intuitively, measuring the performance of  w.r.t.  constrains the power of the adversary; for example, a rapidly changing request pattern will impact  but also . This comparison makes regret different from the standard competitive hit ratio (as explained in Sec. II, competitive ratio metrics typically use a dynamic benchmark that has full knowledge of the requests and can select the exact optimal sequence of caching decisions, not simply a static configuration), and it allows us to discern policies that learn high-utility caching configurations from those that fail to do so.
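The static benchmark is easy to make concrete for a single cache with hit-ratio utility and integral placements: the best static configuration in hindsight simply stores the most-requested files of the whole trace. A toy Python sketch (names ours; `online_utility` stands for the hits accumulated by whatever online policy is being evaluated):

```python
from collections import Counter

def static_regret(requests, capacity, online_utility):
    """Regret of an online policy over a trace versus the best static cache
    configuration chosen in hindsight (single cache, hit-ratio utility,
    integral placements)."""
    counts = Counter(requests)
    # Best static configuration: cache the `capacity` most-requested files.
    best_static = sum(c for _, c in counts.most_common(capacity))
    return best_static - online_utility
```

For instance, on the trace `[1, 1, 2, 3, 1]` with a one-file cache, the hindsight configuration stores file 1 and collects utility 3; a no-regret policy keeps this difference sublinear in the trace length.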
Our goal is to study how the regret scales with . A policy with sublinear regret produces vanishing average loss w.r.t. the optimal static cache configuration. This means that the two policies have the same average per-slot performance in the long run, a property that is called no-regret in OCO. In other words,  learns which file chunks to store and how to route requests without having a priori access to the file popularity. We can now formally define the online caching and routing problem as follows:
Online Caching Problem (OCP)
Given a file library ; a set of user locations and caches ; a set of links connecting them ; and utilities :
Determine the policy that selects at each slot caching decisions that incur no regret over horizon , i.e., , where  is defined in (2). We stress that while OCO typically focuses on the time horizon , in (OCP) the number and size of caches and, importantly, the library size are large enough to induce high utility loss themselves. Hence, it is crucial to study how the regret is affected by these parameters as well.
IV-C Problem Properties
We prove that (OCP) is an OCO problem by establishing the concavity of  with respect to . Note that we propose here a different formulation from the typical femtocaching model by including routing variables and the request arrival events. This reformulation is imperative in order to fit our problem into the OCO framework, but also because otherwise (e.g., if we were using ) we would need to perform a computationally challenging projection operation in each slot.
First, we simplify by exploiting the fact that there is only one request at each slot. Let be the file and location where the request in arrives. Then is zero except for . Denoting with the set of reachable SBS caches from , and simplifying the notation by setting , , and dropping subscript , eq. (1) reduces to:
Function is concave in its domain .
Consider two feasible caching vectors . We will show that:
We begin by denoting with and the routing vectors that maximize (3) for vectors , respectively. Immediately, it is . Next, consider a candidate vector , for some . We first show that routing is feasible for . By the feasibility of , , we have:
which proves that satisfies (4). Also, it is:
thus also satisfies (5), and . It follows:
Combining the above, we obtain:
which establishes the concavity of . ∎
Observe that the term of the regret definition is convex, and the operator applied for all possible request arrivals preserves this convexity. This makes (OCP) an OCO problem, and this holds even when we consider general graphs and other convex functions .
Finally, a simple example shows that  does not belong to the class , i.e., it is not always differentiable. Consider a network with a single file  and two caches with , serving one user with utility . Assume that  for some very small . Notice that the partial derivatives (equal to ). But if we suppose a slight increase in the caching variables such that the term  is removed, then the partial derivatives become zero. This is because extra caching of this file cannot improve the utility, which is already maximal. The same holds in many scenarios, which makes it impossible to guess when the objective changes in a non-smooth manner (having points of non-differentiability). Hence we will employ supergradients.
V Bipartite Supergradient Caching Algorithm
Our solution employs an efficient and lightweight gradient-based algorithm for the caching decisions, which incorporates the optimal routing as a subroutine. We start from the latter.
V-A Optimal Routing
Recall that file routing is naturally decided after a request is submitted, at which time the caching has already been determined; see Fig. 2. Thus, in order to decide  we assume  and  are given. The goal of routing is to determine which chunks of the file are fetched from each cache.
Specifically, let us fix a request for file submitted to location . Using the notation defined above, and letting be the optimal routing variables related to these caches, file, and user location, we may recover an optimal routing vector as one that maximizes the utility:
Ultimately, the routing at is set:
) is a Linear Program (LP) of dimension at most deg, where , and  is the number of caches reachable from location . This LP is computationally efficient and can be solved by the interior-point or the simplex method. Interestingly, however, due to its structure a solution can be found by inspection as follows. First, we order the reachable caches in decreasing utility, i.e., let  be a permutation such that . We set  for the first element, and then iteratively, for each round , we set:
until all reachable caches are visited or we obtain  for some , in which case the remaining caches have . Both approaches, i.e., solving the LP directly or using this iterative process, may be helpful in practice. By explicitly solving the LP we also obtain the values of the dual variables, which, as we will see, help us compute the supergradient.
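The by-inspection solution can be sketched as follows for a single unit request. This is a hypothetical Python sketch: `w` and `x` play the roles of the per-cache utilities and cached fractions of the requested file, and the names are ours.

```python
def greedy_routing(w, x):
    """Route one unit request across reachable caches in decreasing utility.
    w[j]: utility per chunk fetched from cache j (the MBS has utility 0);
    x[j]: cached fraction of the requested file at cache j, in [0, 1].
    Returns (routing dict, residual routed to the MBS, total utility)."""
    z, remaining, utility = {}, 1.0, 0.0
    for j in sorted(w, key=w.get, reverse=True):   # highest utility first
        take = min(x[j], remaining)                # cannot route more than is cached
        z[j] = take
        remaining -= take
        utility += w[j] * take
        if remaining <= 0.0:
            break
    return z, remaining, utility                   # residual is served by the MBS
```

The greedy order is optimal here because the LP constraints are only the per-cache caps and the covering constraint, so exchanging any chunk toward a higher-utility cache can never decrease the objective.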
V-B Optimal Caching - BSCA Algorithm
The general idea is to gradually update the caching decisions along the direction of the gradient. However, since  is not differentiable everywhere, we need to find a supergradient direction at each slot. We describe next how this can be achieved. Consider the partial Lagrangian of (3):
where , and define the auxiliary function:
From the strong duality property of linear programming , we may exchange and in the Lagrangian, and obtain:
We prove next the following lemma for the supergradients.
Lemma 2 (Supergradient).
Let be the vector of optimal multipliers corresponding to (5). Define
The vector is a supergradient of at , i.e., it holds .
First note that we can write:
where the equality holds since:
and by applying (8), where the optimization is independent of the variables  (or ), we obtain , with  and  being the same as those appearing in  (since their calculation is independent of ). Hence, we can subtract the two expressions (observe the linear structure of (7)), plug in a certain vector , and obtain:
where . Note also that it holds by definition of , hence:
which concludes our proof. ∎
Intuitively, the dual variable  (an element of vector ) is positive only if the respective constraint (5) is tight, which ensures that increasing the allocation will yield a benefit in case such a request occurs again in the future. The actual value of  is proportional to this benefit. The reason the algorithm emphasizes this request is that, in online gradient-type algorithms, the last function (here, a linear function parameterized by the last request) serves as a corrective step in the “prediction” of the future. Having this method for calculating a supergradient direction, we can extend the seminal online gradient ascent algorithm to design an online caching policy for (OCP). In detail:
Definition 2 (BSCA).
The Bipartite Supergradient Caching Algorithm adjusts the caching decisions with a supergradient:
where is the stepsize, can be taken as in Lemma 2, and
is the Euclidean projection of the argument vector onto .
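The update rule above can be sketched for the single-cache special case (unit-size files, hit-ratio utility). This is a minimal Python sketch under those assumptions: the bisection-based projection is a simple stand-in for the tailored routine of Sec. VI, the stepsize follows the diameter-over-bound form used in the analysis, and all names are ours.

```python
import math

def project_capped_simplex(y, C):
    """Euclidean projection onto {x : 0 <= x_i <= 1, sum_i x_i = C}.
    By the KKT conditions x_i = clip(y_i - rho, 0, 1); bisect on rho."""
    def clip(v):
        return min(1.0, max(0.0, v))
    lo, hi = min(y) - 1.0, max(y)
    for _ in range(60):                       # bisection to high precision
        rho = 0.5 * (lo + hi)
        if sum(clip(v - rho) for v in y) > C:
            lo = rho
        else:
            hi = rho
    rho = 0.5 * (lo + hi)
    return [clip(v - rho) for v in y]

def bsca_single_cache(requests, n_files, C):
    """BSCA sketch for one cache: the supergradient of slot t's utility is the
    indicator of the requested file (when it is not fully cached), and the
    ascent step is projected back onto the capped simplex."""
    T = len(requests)
    x = [C / n_files] * n_files               # start from the uniform placement
    eta = math.sqrt(2.0 * C) / math.sqrt(T)   # stepsize ~ diameter/(bound*sqrt(T))
    hits = 0.0
    for f in requests:
        hits += x[f]                          # fractional hit utility this slot
        g = [0.0] * n_files
        g[f] = 1.0 if x[f] < 1.0 else 0.0     # supergradient of u_t at x
        x = project_capped_simplex([xi + eta * gi for xi, gi in zip(x, g)], C)
    return hits, x
```

Running this on a trace that repeatedly requests one file shows the iterate concentrating its capacity on that file, i.e., the policy learns the popular content without being told the request distribution.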
Algorithm 1 explains how BSCA is incorporated into the network operation for devising the caching and routing decisions in an online fashion. The algorithm requires as input only the network parameters . The stepsize is computed using the set diameter , the upper bound  on the supergradient, and the time horizon ; the former two depend on the network parameters as well. Specifically, define first the diameter of the set  as the largest Euclidean distance between any two of its elements. To calculate this quantity for , we select vectors which cache exactly  different files at each cache , and hence:
where . Also, we denote with  the upper bound on the norm of the supergradient vector. By construction, this vector is non-zero only at the reachable caches, and only for the requested file. Further, its smallest value is zero by the non-negativity of the Lagrange multipliers, and its largest is no more than the maximum utility, denoted with . Thus, using  we can bound the supergradient norm:
The algorithm proceeds as follows. At each slot , the system receives a request and sets  for the requester and file (line 5). Given the cached files, the system finds the optimal routing for serving it (line 6), e.g., by solving an LP with at most deg variables and obtaining the dual variables. This yields utility  (line 7). The supergradient is calculated (line 8) and used to update the cache configuration (line 9). Finally, the decisions are projected onto the feasible set so as to satisfy the cache capacities (line 10).
It is interesting to note the following. Since the supergradient computation in line 8 and the optimal routing, explained in the previous subsection, require the solution of the same LP, it is possible to combine these as follows. When the optimal routing is found, the dual variables are stored and used for the direct computation of the supergradient in the next iteration of BSCA. Note that, given the cache update rule, the algorithm state needs to include only , and therefore its memory requirements are very small.
V-C Performance of BSCA
Following the rationale of the analysis in , we show that our policy achieves no regret, and we analyze how the various system parameters affect the regret expression.
Theorem 1. The regret of BSCA satisfies:
Using the non-expansiveness property of the Euclidean projection, we can bound the distance of each new iterate from the hindsight policy  as follows:
where we expanded the norm. If we fix the step size and sum telescopically over all slots until , we obtain:
Since , rearranging the terms and using that and we obtain:
Since our utility function is concave, it holds:
for every , and therefore also for the function that maximizes the regret; thus, we can remove the operator from (2) and rewrite it as:
We can minimize the regret bound by optimizing the step size. Using the first-order condition w.r.t. for the RHS of the above expression, we obtain which yields:
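The step-size optimization above can be written out in generic OCO notation, with $D$ standing for the diameter of the feasible set and $L$ for the supergradient bound (these symbols stand in for the paper's constants):

```latex
R_T \;\le\; \frac{D^2}{2\eta} + \frac{\eta L^2 T}{2},
\qquad
\frac{\partial}{\partial \eta}\left(\frac{D^2}{2\eta} + \frac{\eta L^2 T}{2}\right) = 0
\;\Rightarrow\;
\eta^\star = \frac{D}{L\sqrt{T}},
\qquad
R_T \;\le\; D\,L\,\sqrt{T}.
```

The first-order condition balances the two terms of the bound, which is why the resulting regret grows as the square root of the horizon.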
Theorem 1 shows that the regret of BSCA scales as , and therefore BSCA solves (OCP). The regret expression captures how fast the algorithm learns the right caching configuration, and therefore the detailed constants obtained in the theorem are of great importance. For example, we see that the bound is independent of the file library size . This is crucial in caching problems, where the library size drives the problem's dimension. Another interesting observation is that the learning rate of the algorithm might become slow (i.e., resembling the regret behavior of ) when  is comparable to . This is in line with empirical observations suggesting that, in order to extract safe conclusions about the performance of a policy, one should simulate datasets with size .
We stress also that Theorem 1 does not imply that BSCA outperforms all other possible policies; for example, if the requests have a particular structure, e.g., are highly correlated, then another policy might perform better. However, policies that exploit the structure of the requests tend to perform poorly when the request model assumptions do not hold. We present such examples in Sec. IX.
Finally, note that calculating  requires knowledge of , but this requirement can be relaxed by using the standard doubling trick . Alternatively, we can employ a diminishing step size. Namely, if we sum (14) telescopically over  slots, we obtain:
and if we set , then the two terms in (18) yield factors of order , hence:
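The doubling trick mentioned above can be sketched in a few lines. The helper below is hypothetical (not from the paper): it emits the step size used at each slot when time is split into phases of lengths 1, 2, 4, …, each phase restarting the algorithm with the step tuned for that phase's horizon, under assumed generic constants D and L:

```python
import math

def doubling_trick_steps(total_slots, D=1.0, L=1.0):
    """Step size used at each slot by the standard doubling trick:
    time is split into phases of lengths 1, 2, 4, ..., and within phase m
    the algorithm restarts with the step tuned for horizon 2^m."""
    steps = []
    phase_len, slot_in_phase = 1, 0
    for _ in range(total_slots):
        steps.append(D / (L * math.sqrt(phase_len)))
        slot_in_phase += 1
        if slot_in_phase == phase_len:  # phase over: double the horizon
            phase_len *= 2
            slot_in_phase = 0
    return steps

print(doubling_trick_steps(7))
```

Within each phase the step is constant, and the per-phase regrets sum to the same order as the optimally tuned bound, at the cost of a constant factor.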
VI Cache Projection Algorithm
BSCA involves a projection (line 9) that might significantly affect its complexity and runtime. Here, we develop a tailored algorithm that resolves this issue.
The Euclidean projection defined in (12) can be written as the equivalent quadratic program:
which might be computationally very expensive in some cases; see [40, 41] and references therein. Our problem has certain properties that facilitate this operation. First, the projection can be performed independently for each cache; namely, we project onto the intersection of a simplex-type constraint and a -dimensional box (capped simplex). Second,  and  differ in only one element. Exploiting these properties, we design an algorithm for (20) with complexity , which uses the Karush-Kuhn-Tucker (KKT) conditions  to navigate the solution space quickly.
We first introduce the Lagrangian:
where , , are the non-negative Lagrange multipliers introduced when relaxing the constraints above. The KKT conditions of (20) at the optimal point are:
where we have omitted the primal constraints of (20) for brevity. In order to solve the projection problem, we will use a simple algorithm that tests, in a systematic fashion, combinations of the complementary slackness conditions (22)-(23) until it finds a solution that is primal and dual feasible. An important observation is the following: since , the simplex constraint will be tight at the optimal point (the cache is filled), and hence we only need to check the  cases for (23).
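Because the extraction dropped the inline symbols, the conditions above can be restated with placeholder notation (our assumptions): v is the point to project, y the caching vector of one cache, C its capacity, and λ, μₙ, κₙ the multipliers of the simplex, lower-bound, and upper-bound constraints, respectively. A generic form of the Lagrangian and KKT system is then:

```latex
\mathcal{L}(y,\lambda,\mu,\kappa)
  = \tfrac{1}{2}\,\lVert y - v \rVert_2^2
  + \lambda\Big(\sum_{n} y_n - C\Big)
  - \sum_{n} \mu_n\, y_n
  + \sum_{n} \kappa_n\,(y_n - 1),
```
```latex
y_n = v_n - \lambda + \mu_n - \kappa_n,
\qquad
\mu_n\, y_n = 0,
\qquad
\kappa_n\,(y_n - 1) = 0,
\qquad
\mu_n,\kappa_n \ge 0,
```

which combine into the clipped-shift form yₙ = min{1, max{0, vₙ − λ}}, with λ tuned so that the capacity constraint is tight.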
First, note that the caching decisions for each cache are partitioned at the optimal point into three sets defined as follows:
where contains the files that will be stored in their entirety, the partially cached files () in cache , and the evicted files. Due to full utilization of cache capacity, it holds for each cache :
In order to solve the projection problem, it suffices to determine for each cache a partition of the files into the sets . Note that we can check in linear time whether a candidate partition satisfies all KKT conditions (and only the optimal one will). Additionally, one can show that the ordering of the files in  is preserved at the optimal , hence prior approaches, e.g., , that search exhaustively over all possible ordered partitions need  steps. Here, however, we expedite the solution by exploiting the property that all elements of  satisfy , except at most one (hence also for every cache ). This allows us to reduce the runtime to  steps for each cache. Furthermore, our algorithm can also operate without sorting the files, and therefore the runtime for one cache is , and the overall runtime is .
The details are presented in Algorithm 2. The initial partition places all files in the set of partially cached files (line 2). For the given partition, we compute the Lagrange multiplier  (line 4) and calculate a tentative caching allocation (line 5). The indices of all files whose tentative allocation is negative are stored in a set (line 6), removed from the middle set, and added to the set of files to be evicted (line 7). If there exists a file with allocation larger than 1, it is moved to the set of fully cached files, and the procedure is repeated. We exploit the structure of our problem: since in the previous slot all files had allocation at most 1, it follows that, after adding the supergradient element and taking into account the multiplier , the new allocation of all files (except the one in the supergradient) will be strictly smaller than 1. Therefore,  can contain either one file or none, and we search between these two possibilities (line 8). The set operation we perform in line 7 is proven in  to be monotone, and therefore we will at most search all possibilities, resulting in a worst-case runtime that matches previously known results . Finally, we observed in simulations that each loop was visited at most twice (instead of  times), resulting in an extremely fast projection.
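As a sanity check of the projection step, the following is a minimal sketch (not the paper's Algorithm 2) that computes the same Euclidean projection onto the capped simplex by bisecting on the simplex multiplier. By the KKT conditions the optimum is a clipped shift of the input, so a one-dimensional search over the multiplier suffices:

```python
def project_capped_simplex(v, C, tol=1e-9):
    """Euclidean projection of v onto {y : sum(y) = C, 0 <= y_n <= 1}.
    By the KKT conditions the optimum is y_n = clip(v_n - lam, 0, 1) for a
    scalar lam that makes the capacity constraint tight; since the clipped
    sum is non-increasing in lam, bisection finds it."""
    assert 0 < C <= len(v)
    clip = lambda x: min(1.0, max(0.0, x))
    lo = min(v) - 1.0  # every coordinate clips to 1 here, so the sum is len(v) >= C
    hi = max(v)        # every coordinate clips to 0 here, so the sum is 0 <= C
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        if sum(clip(x - lam) for x in v) > C:
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2.0
    return [clip(x - lam) for x in v]
```

This bisection runs in O(N log(1/tol)) time per cache, so it is slower than the tailored update described above, but it is a convenient way to verify the output of Algorithm 2 on small instances.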
VII The Single Cache Case
The problem is simplified for a single cache, as there are no routing decisions. Nevertheless, even for this basic version, we lack a policy that can achieve no-regret caching performance for any request sequence. BSCA not only fills this gap, but in fact achieves the best learning rate that any possible policy (based on OCO or not) can achieve; indeed, as it will become clear in the simulations, BSCA ensures no regret for any request sequence, and in the case of a single cache we prove that its learning rate is the best possible. However, this does not mean that there are no policies which can achieve better performance for specific request patterns.
VII-A BSCA for One Cache
We denote by  the size of our single cache and by  the request arriving at slot ; we no longer consider different user locations, as all requests are served by the same cache. The cache utility can be written as:
which states that a request for file  yields utility proportional to a file-specific parameter  per unit of its cached fraction . There are no routing variables in this case. Moreover, the gradient at  exists, and it is the -dimensional vector with coordinates:
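With placeholder notation (our assumptions, since the inline math was lost: wₙ is the utility weight of file n, f_t the file requested at slot t, e_{f_t} the corresponding standard basis vector, and N the library size), the utility and gradient described above read:

```latex
f_t(y) \;=\; w_{f_t}\, y_{f_t},
\qquad
\big(\nabla f_t(y)\big)_n \;=\;
\begin{cases}
w_n, & \text{if } n = f_t,\\[2pt]
0, & \text{otherwise},
\end{cases}
\quad n = 1,\dots,N.
```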