The Optimal Memory-Rate Trade-off for the Non-uniform Centralized Caching Problem with Two Files under Uncoded Placement

08/23/2018 ∙ by Saeid Sahraei, et al. ∙ EPFL ∙ University of Southern California

We propose a novel caching strategy for the problem of centralized coded caching with non-uniform demands. Our placement strategy can be applied to an arbitrary number of users and files, and can be easily adapted to the scenario where file popularities are user-specific. The distinguishing feature of the proposed placement strategy is that it allows for equal sub-packetization for all files while requiring the users to allocate more cache to the more popular files. This creates natural broadcasting opportunities in the delivery phase which are simultaneously helpful for users who have requested files of different popularities. For the case of two files, we propose a delivery strategy which meets a natural extension of the existing converse bounds under uncoded placement to non-uniform demands, thereby establishing the optimal expected memory-rate trade-off for the case of two files with arbitrary popularities under the restriction of uncoded placement.


I Introduction

Wireless traffic has been dramatically increasing in recent years, mainly due to the growing popularity of video streaming services. Caching is a mechanism for Content Distribution Networks (CDNs) to cope with this increasing demand by placing content closer to the users during off-peak hours. The attractive possibility of replacing expensive bandwidth with cheap memory has spurred a flurry of research [1, 2, 3, 4, 5, 6, 7, 8, 9]. Coded caching [1] is a canonical formulation of a two-stage communication problem between a server and many clients which are connected to the server via a shared broadcast channel. The two stages consist of filling the caches of the users during off-peak hours and transmitting the desired data to them at their request, typically during peak hours. The local caches of the users act as side information for an instance of the index coding problem where different users may have different demands. Naturally, if a file is more likely to be requested, it is desirable to cache more of it during the first stage. Furthermore, by diversifying the contents cached at different users, broadcasting opportunities can be created which are simultaneously helpful for several users [1].

In general, there exists a trade-off between the amount of cache that each user has access to and the delivery rate at the second stage. Significant progress has been made towards characterizing this trade-off for worst case and average case demands under uniform file popularities [1, 10, 3, 11, 2]. The optimal memory-rate region under uncoded placement is known [3, 12], and the general optimal memory-rate region has been characterized within a factor of 2 [2]. Furthermore, many achievability results have been proposed based on coded placement [11, 13, 14]. Some of these schemes outperform the optimal caching scheme under uncoded placement [3], establishing that uncoded placement is in general sub-optimal.

By contrast, the coded caching problem with non-uniform file popularities, an arguably more realistic model, has remained largely open. The existing achievability schemes are, generally speaking, straightforward generalizations of caching schemes that were specifically tailored to the uniform case. Here we briefly review some of these works.

I-A Related Work

The main body of work on non-uniform coded caching has been concentrated around the decentralized paradigm, where there is no coordination among different users [10]. The core idea here is to partition the files into groups such that the files within each group have similar popularities [15]. Within each group, one performs the decentralized coded caching strategy of [10] as if all the files had the same probability of being requested. In the delivery phase, coding opportunities among different groups are ignored; as a result, the total delivery rate is the sum of the delivery rates for the individual groups. It was subsequently suggested to use only two groups [16, 17, 18] and to allocate no cache at all to the group which contains the least popular files. This simple scheme was proven to be within a constant multiplicative and additive gap of the optimal delivery rate for arbitrary file popularities [18].

The problem of centralized coded caching with non-uniform demands has also been extensively studied [19, 20, 21, 22, 23]. Here, in order to create coding opportunities among files with varying popularities, a different approach has been taken. Each file is partitioned into subfiles corresponding to all possible subsets of users that may cache a given subfile. This creates coding opportunities among subfiles that are shared among an equal number of users, even if they belong to files with different popularities. The delivery rate can be minimized by solving an optimization problem that decides what portion of each file must be shared among each possible number of users. It was proven in [21] that if the size of the cache belongs to a certain finite set of base-cases, the best approach is to allocate no cache at all to the least popular files while treating the other files as if they were equally likely to be requested. Memory-sharing among such points must be performed if the cache size is not a member of this set. The set of base-cases depends on the popularities of the files, the number of users, and the number of files, and can be computed via an efficient algorithm [21].

The graph-based delivery strategies that are predominantly used in this line of research [17, 21] are inherently limited, in that they do not capture the algebraic properties of summation over finite fields. For instance, one can easily construct examples where the chromatic-number index coding scheme in [17] is sub-optimal, even for uniform file popularities. Suppose we have only one file $W$, split into three equal subfiles $W_1, W_2, W_3$, and 3 users, each with a cache of size $1/3$ of a file. Assume user $i$ caches $W_i$. In this case, the optimal delivery rate is $2/3$, achieved by broadcasting $W_1 \oplus W_2$ and $W_2 \oplus W_3$, but the clique-cover (or chromatic-number) approach in [17] only provides a delivery rate of $1$. This is due to the fact that from $W_1 \oplus W_2$ and $W_2 \oplus W_3$ one can recover $W_1 \oplus W_3$, a property that the graph-based model in [17] fails to reflect. This issue was addressed in a systematic manner by Yu et al. in [3], which introduced the concept of leaders. Our delivery scheme in this paper provides an alternative mechanism for overcoming this limitation, one which transcends uniform file popularities and can be applied to non-uniform demands. This is accomplished via interference alignment, as outlined in the proof of achievability of Lemma 1 in Section VI.
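To make the example concrete, the following sketch (our own illustration, not code from [17]; the subfile labels $W_1, W_2, W_3$ follow the example above) verifies numerically that the two XOR transmissions suffice for all three users, for a rate of $2/3$, whereas a clique-cover solution uses three transmissions (rate $1$):

```python
# Minimal check: broadcasting W1^W2 and W2^W3 lets every user recover the
# whole file (rate 2/3), while a clique cover would use three XORs (rate 1).
import os

W = [os.urandom(16) for _ in range(3)]            # subfiles W1, W2, W3
xor = lambda a, b: bytes(s ^ t for s, t in zip(a, b))

tx = [xor(W[0], W[1]), xor(W[1], W[2])]           # the two broadcast symbols
combos = [(0, 1), (1, 2)]                         # which subfiles each XOR mixes

for k in range(3):                                # user k caches subfile W[k]
    known = {k: W[k]}
    for _ in range(3):                            # peel until nothing is left
        for (i, j), t in zip(combos, tx):
            if i in known and j not in known:
                known[j] = xor(t, known[i])
            elif j in known and i not in known:
                known[i] = xor(t, known[j])
    assert all(known[i] == W[i] for i in range(3)), f"user {k} failed"
print("all 3 users decode the full file from 2 transmissions (rate 2/3)")
```

The third pairwise combination $W_1 \oplus W_3$ is never transmitted; it is implicitly available as the sum of the two transmitted symbols, which is exactly the algebraic structure that a coloring of the index coding graph cannot exploit.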

In [23] it was proven that a slightly modified version of the decentralized scheme in [16] is optimal for the centralized caching problem when there are only two users. In [22], a centralized caching strategy was proposed for the case where the number of users is prime, and for the case where the number of users divides the subpacketization of each file. The placement scheme allows for equal subpacketization of all the files, while more stringent caching requirements are imposed on the subfiles of the less popular files. This concept is closely related to the scheme presented in [24], which serves as the placement scheme for the current paper.

I-B Our Contributions

In this paper, we first propose a centralized placement strategy for an arbitrary number of users and files, which allows for equal subpacketization of all the files while allocating less cache to the files which are less likely to be requested. This creates natural coding opportunities in the delivery phase among all the files regardless of their popularities. Next, we propose a delivery strategy for the case of two files and an arbitrary number of users. This delivery strategy consists of two phases. First, each file is compressed down to its entropy conditioned on the side information available at the users who have requested it. Simultaneously, this encoding aims at aligning the subfiles which are unknown to the users who have not requested them. In the second phase of the delivery strategy, the two encoded files are further encoded with an MDS code and broadcast to the users. Each user will be able to decode his desired file following a two-layer peeling decoder. By extending the converse bound for uncoded placement first proposed in [3] to the non-uniform case, we prove that our joint placement and delivery strategy is optimal for two files with arbitrary popularities under uncoded placement. To summarize, our main contributions are the following:

  • A new placement strategy is developed for non-uniform caching with $K$ users and $N$ files (Section V). This scheme allows for equal sub-packetization of every file, while allocating more cache to files that are more popular. A simple modification of the proposed scheme can be applied to user-dependent file popularities. More broadly, the proposed multiset indexing approach to subpacketization can be expected to find applications in other coding problems of a combinatorial nature with heterogeneous objects, such as Coded Data Shuffling [25], Coded MapReduce [26], and Fractional Repetition codes [27] for Distributed Storage.

  • An extension of the converse bound under uncoded placement first proposed in [3] to non-uniform caching with $K$ users and $N$ files is established (Section VII).

  • A new delivery strategy is presented for the case of two files which relies on source coding and interference alignment (Section VI). The achievable expected delivery rate meets the extended converse bound under uncoded placement, hence establishing the optimal memory-rate trade-off for non-uniform demands for the case of two files. If each file is requested with probability $1/2$, this approach yields an alternative delivery strategy for uniform caching of two files, which can be of independent interest.

The rest of the paper is organized as follows. We introduce the notation used throughout the paper and the formal problem statement in Sections II and III. In Section IV we explain the main ideas behind our caching strategy via a case study. The general placement and delivery strategies are presented in Sections V and VI. We then propose our converse bound under uncoded placement in Section VII. In Section VIII we prove that our proposed caching strategy is optimal for the case of two files. Finally, in Section IX we provide numerical results, and we conclude the paper in Section X.

II Notation

For two integers $n$ and $k$, define $\binom{n}{k} = 0$ if $k < 0$ or $k > n$. For non-negative integers $n, k_1, \dots, k_N$ that satisfy $n \ge k_1 \ge k_2 \ge \cdots \ge k_N$, define

$$\binom{n}{k_1, k_2, \ldots, k_N} = \binom{n}{k_1} \binom{k_1}{k_2} \cdots \binom{k_{N-1}}{k_N}. \qquad (1)$$

For a positive integer $n$, define $[n] = \{1, 2, \dots, n\}$. For two integers $a \le b$, define $[a : b] = \{a, a+1, \dots, b\}$. For two column vectors $u$ and $v$, denote their vertical concatenation by $[u; v]$. For a real number $x$, define $\lfloor x \rfloor$ as the largest integer no greater than $x$. Similarly, define $\lceil x \rceil$ as the smallest integer no less than $x$. For $q \ge 2$ and a discrete random variable $X$ with support $\mathcal{X}$, define $H_q(X)$ as the entropy of $X$ in base $q$:

$$H_q(X) = -\sum_{x \in \mathcal{X}} \Pr[X = x] \log_q \Pr[X = x]. \qquad (2)$$

Suppose we have a function $f : \mathcal{D} \to \mathbb{R}$, where $\mathcal{D}$ is a discrete set of points in $\mathbb{R}^n$. Let $\mathcal{C}$ be the convex hull of $\mathcal{D}$. For $x \in \mathcal{C}$, define

$$f^{\text{env}}(x) = \min \Big\{ \sum_i \lambda_i f(x_i) \,:\, \lambda_i \ge 0, \ \sum_i \lambda_i = 1, \ x_i \in \mathcal{D}, \ \sum_i \lambda_i x_i = x \Big\} \qquad (3)$$

as the lower convex envelope of $f$ evaluated at the point $x$.
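For intuition, the following one-dimensional sketch (our own illustration; the function values are arbitrary) evaluates the lower convex envelope of definition (3) by walking the lower hull of the graph $\{(x_i, f(x_i))\}$:

```python
# Illustrative sketch (not from the paper): the lower convex envelope of a
# function given on a discrete 1-D point set, evaluated by interpolating
# along the lower convex hull of the graph {(x, f(x))}.
def lower_envelope(points, x):
    """points: list of (x_i, f(x_i)); returns the envelope value at x."""
    pts = sorted(points)
    hull = []                                   # lower hull via monotone chain
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (y2 - y1) * (p[0] - x1) >= (p[1] - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append(p)
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):   # linear interpolation
        if x1 <= x <= x2:
            return y1 + (y2 - y1) * (x - x1) / (x2 - x1)
    raise ValueError("x outside the convex hull of the points")

print(lower_envelope([(0, 2), (1, 1.8), (2, 0.5), (3, 0)], 1.5))  # -> 0.875
```

Here the point $(1, 1.8)$ lies above the segment joining $(0, 2)$ and $(2, 0.5)$, so it does not contribute to the envelope; this is exactly the role memory-sharing will play later, discarding achievable points that are dominated by convex combinations of others.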

III Model Description and Problem Statement

We follow the canonical model in [1], except that here we concentrate on the expected delivery rate as opposed to the worst-case delivery rate. For the sake of completeness, we repeat the model description here. We have a network consisting of $K$ users that are connected to a server through a shared broadcast link. The server has access to $N$ files $W_1, \dots, W_N$, each of size $F$ symbols over a sufficiently large field $\mathbb{F}_q$. Therefore, $H_q(W_n) = F$ for all $n \in [N]$. Each user has a cache of size $MF$ symbols over $\mathbb{F}_q$. An illustration of the network is provided in Figure 1. The communication between the server and the users takes place in two phases, placement and delivery.

In the placement phase, each user stores some function of all the files in his local cache. Therefore, for a fixed (normalized) memory size $M$, a placement strategy consists of $K$ placement functions $\phi_1, \dots, \phi_K$ such that $Z_k = \phi_k(W_1, \dots, W_N) \in \mathbb{F}_q^{MF}$ for all $k \in [K]$. After the placement phase, each user requests one file from the server. We represent the request of the $k$'th user with $d_k \in [N]$, which is drawn from a known probability distribution $(p_1, \dots, p_N)$. Furthermore, the requests of all the users are independent and identically distributed. After receiving the request vector $\mathbf{d} = (d_1, \dots, d_K)$, the server transmits a delivery message $X_{\mathbf{d}}$ through the broadcast link to all the users. User $k$ then computes a function $\psi_k(X_{\mathbf{d}}, Z_k)$ in order to estimate $W_{d_k}$. For a fixed placement strategy $(\phi_1, \dots, \phi_K)$, fixed file size $F$, and fixed request vector $\mathbf{d}$, we say that a delivery rate of $R_{\mathbf{d}}$ is achievable if a delivery message and decoding functions exist such that

$$X_{\mathbf{d}} \in \mathbb{F}_q^{R_{\mathbf{d}} F} \qquad (4)$$

and

$$\psi_k(X_{\mathbf{d}}, Z_k) = W_{d_k} \quad \text{for all } k \in [K]. \qquad (5)$$

For a fixed placement strategy $(\phi_1, \dots, \phi_K)$, we say that an expected delivery rate $\bar{R}$ is achievable if there exists a sequence of achievable delivery rates $\{R_{\mathbf{d}}\}$, one for each request vector, such that

$$\bar{R} = \mathbb{E}_{\mathbf{d}}[R_{\mathbf{d}}] = \sum_{\mathbf{d} \in [N]^K} \Big( \prod_{k=1}^{K} p_{d_k} \Big) R_{\mathbf{d}}. \qquad (6)$$

Finally, for a memory of size $M$, we say that an expected delivery rate $\bar{R}$ is achievable if there exists a placement strategy with $Z_k \in \mathbb{F}_q^{MF}$ for all $k \in [K]$ for which an expected delivery rate of $\bar{R}$ is achievable.
Our goal in this paper is to characterize the minimum expected delivery rate for all $M$ under the restriction of uncoded placement. In other words, the placement functions must be of the form

$$\phi_k(W_1, \dots, W_N) = \big( W_1^{(\mathcal{M}_{1,k})}, \dots, W_N^{(\mathcal{M}_{N,k})} \big) \qquad (7)$$

where $\sum_{n=1}^{N} |\mathcal{M}_{n,k}| \le MF$ for all $k \in [K]$, and $W_n^{(\mathcal{M})}$ refers to the subset of symbols of the file $W_n$ which are indexed by the set $\mathcal{M} \subseteq [F]$.
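As a toy illustration of constraint (7) (the parameters and index sets below are our own, not the paper's), an uncoded placement is fully specified by per-user, per-file index sets whose total size fits in the cache:

```python
# Toy illustration of uncoded placement (7): each user caches a plain subset
# of the symbols of each file, specified by index sets M_sets[n][k] (ours).
F, N, K, M = 12, 2, 3, 0.75                    # file size, files, users, memory

# a hypothetical placement: user k caches every K-th symbol of each file
M_sets = {n: {k: set(range(k, F, K)) for k in range(K)} for n in range(N)}

for k in range(K):
    total = sum(len(M_sets[n][k]) for n in range(N))
    assert total <= M * F, f"user {k} exceeds the cache budget MF"
print("placement satisfies the cache constraint in (7)")
```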

Fig. 1: An illustration of the caching network. A server is connected to $K$ users via a shared broadcast link. Each user has a cache of size $MF$ symbols where he can store an arbitrary function of the files $W_1, \dots, W_N$.

IV Motivating Example: The Case of Four Users and Two Files

Consider the caching problem with two files $A$ and $B$ and $K = 4$ users. Assume the probability of requesting $A$ is lower than the probability of requesting $B$. In this section we will demonstrate how to find the optimal expected delivery rate for any memory size $M$ for this particular choice of parameters, while explaining the main principles behind our joint placement and delivery strategy. We start by fixing two integers $t_1, t_2$ such that $K \ge t_1 \ge t_2 \ge 0$. As we will see soon, any choice of $(t_1, t_2)$ corresponds to a particular pair $(m_B, m_A)$, where $m_n$ is the amount of cache that each user allocates to file $n$, normalized by the size of one file. For the sake of brevity, we will explain our strategy only for $(t_1, t_2) = (2, 1)$. The delivery rates for the other possible choices of $(t_1, t_2)$ will be summarized at the end of this section. Next, we will characterize the entire achievable region of our algorithm. Finally, we will illustrate how to find the optimal choice of $(t_1, t_2)$ for a particular cache size $M$.
Define the parameter $F' = \binom{K}{t_1}\binom{t_1}{t_2} = \binom{4}{2}\binom{2}{1} = 12$. We divide each of the two files into $F'$ subfiles and index them as $A_{S_1, S_2}$ and $B_{S_1, S_2}$ such that $S_2 \subseteq S_1 \subseteq [4]$ and $|S_1| = t_1 = 2$, $|S_2| = t_2 = 1$. In this example, the 12 subfiles of $B$ are thus denoted by $B_{\{1,2\},\{1\}}, B_{\{1,2\},\{2\}}, B_{\{1,3\},\{1\}}, \dots, B_{\{3,4\},\{4\}}$. In our placement strategy, user $k$ stores the subfiles $B_{S_1, S_2}$ for which $k \in S_1$, as well as the subfiles $A_{S_1, S_2}$ for which $k \in S_2$. Since $t_2 < t_1$, the users naturally store fewer subfiles of $A$ than of $B$. In our running example, each user stores six subfiles of $B$ but only three subfiles of $A$. The cache contents of each user are summarized in Table I.

TABLE I: The proposed placement scheme for $K = 4$, $t_1 = 2$, $t_2 = 1$.

Note that this placement scheme results in a memory of size $M = \frac{6}{12} + \frac{3}{12} = \frac{3}{4}$. As we will see soon, the memory size is in general $M = \sum_n t_n / K$, where $m_n = t_n / K$ is the amount of cache dedicated by each user to file $n$. It is important to note that despite the fact that each user has allocated more cache to file $B$, all the subfiles are of equal size. This is a key property of the proposed placement scheme which allows us to efficiently transmit messages in the delivery phase which are simultaneously helpful for users who have requested files of different popularities.
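The example placement is easy to reproduce programmatically. The sketch below (our own illustration) enumerates the 12 labels $(S_1, S_2)$ and confirms that every user caches six subfiles of $B$ and three subfiles of $A$, for a normalized memory of $3/4$:

```python
# Reproducing the example placement (K = 4 users, t1 = 2, t2 = 1).
# Subfiles are indexed by nested sets S2 ⊆ S1 ⊆ {1,2,3,4} with |S1| = 2 and
# |S2| = 1; user k stores B_{S1,S2} iff k ∈ S1 and A_{S1,S2} iff k ∈ S2.
from itertools import combinations

K, t1, t2 = 4, 2, 1
labels = [(S1, S2)
          for S1 in combinations(range(1, K + 1), t1)
          for S2 in combinations(S1, t2)]
assert len(labels) == 12                  # equal subpacketization for A and B

for k in range(1, K + 1):
    cache_B = [l for l in labels if k in l[0]]
    cache_A = [l for l in labels if k in l[1]]
    print(f"user {k}: {len(cache_B)} subfiles of B, {len(cache_A)} of A")
    # -> 6 subfiles of B, 3 of A; memory = (6 + 3) / 12 = 3/4 of a file
```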

Let us now turn to the delivery phase. To make matters concrete, let us suppose that the first three users have demanded $B$ and the last user is interested in $A$. Therefore, our demand vector is $\mathbf{d} = (B, B, B, A)$. We define $U_n$ as the subset of users who have requested file $n$. In this case, $U_B = \{1, 2, 3\}$ and $U_A = \{4\}$.

For the delivery phase, we construct a compressed description $Y_n$ for each file $n \in \{A, B\}$. For users in $U_B$, recovering $Y_B$ implies recovering $B$; that is, $H_q(B \mid Y_B, Z_k) = 0$ for every $k \in U_B$. Moreover, among all $Y_B$ that satisfy this property, our particular construction minimizes both $H_q(Y_B)$ and $H_q(Y_B \mid Z_k)$ for $k \in U_A$ at the same time. The general construction of $Y_B$ is presented in Section VI, along with proofs of its properties. For the example at hand, our construction specializes to a description $Y_B$ consisting of six blocks, each a linear combination of the twelve subfiles of $B$ (equations (8)-(10)), which can be represented compactly in matrix form (equation (11)).

If a user in $U_B$ successfully receives $Y_B$, he can, with the help of the side information already stored in his cache, recover the entire file $B$. For instance, user 1 only needs to solve the linear system (12) for his six missing subfiles of $B$. This is possible since user 1 knows the left-hand side of the equation, and the matrix on the right-hand side is invertible. Therefore, our goal boils down to transferring the entire $Y_B$ to all the users in $U_B$. Following a similar process, we construct the description $Y_A$ (equations (13) and (14)). In this example, $Y_A$ simply consists of the nine subfiles of $A$ which are unknown to user 4, namely all $A_{S_1, S_2}$ with $4 \notin S_2$. Again, transferring the entire $Y_A$ to user 4 guarantees his successful recovery of $A$.
To simplify matters, we will require every user to recover the entire vector $[Y_B; Y_A]$. In order to accomplish this, we transmit $G \cdot [Y_B; Y_A]$ over the broadcast link. The matrix $G$ here is an MDS matrix of 12 rows and 15 columns. The number of rows of this matrix is determined by the maximum number of blocks of $[Y_B; Y_A]$ which are unknown to any given user. In this example, a user in $U_B$ has precisely 12 unknowns in $[Y_B; Y_A]$ (6 blocks of $Y_B$ and 6 blocks of $Y_A$). On the other hand, a user in $U_A$ knows 4 out of the 6 blocks of $Y_B$. Therefore, a total of 11 blocks of $[Y_B; Y_A]$ are unknown to him. Hence, the matrix $G$ must have 12 rows. Once a user receives $G \cdot [Y_B; Y_A]$, he can remove the columns of $G$ which correspond to the blocks he already knows. The resulting matrix will be square (or overdetermined) and is invertible owing to the MDS structure of $G$. This allows every user to decode $[Y_B; Y_A]$. Subsequently, each user in $U_B$ can proceed to decode $B$ (and user 4 can decode $A$) with the help of his side information. Recall that we started by dividing each file into 12 subfiles, and the delivery message consists of 12 linear combinations of such subfiles. Therefore, the delivery rate for this particular request vector is $12/12 = 1$.
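The following sketch illustrates the MDS step, with a Cauchy matrix over a prime field standing in for $G$ (the paper does not commit to a specific MDS construction here, and the block contents are random stand-ins). Every square submatrix of a Cauchy matrix is invertible, so a user who already knows any 3 of the 15 blocks can delete those columns and solve the remaining $12 \times 12$ system:

```python
# Hedged sketch of the MDS delivery step: broadcast G @ [Y_B; Y_A] with G a
# 12 x 15 Cauchy matrix over GF(p); a user subtracts the contribution of the
# blocks it knows and inverts the remaining square system.
import random

p = 2_147_483_647                               # prime field GF(p)
rows, cols = 12, 15
x = list(range(1, rows + 1))                    # Cauchy parameters, distinct
y = list(range(rows + 1, rows + cols + 1))
G = [[pow(xi + yj, p - 2, p) for yj in y] for xi in x]  # G[i][j] = 1/(x_i+y_j)

blocks = [random.randrange(p) for _ in range(cols)]     # the 15 coded blocks
tx = [sum(g * b for g, b in zip(row, blocks)) % p for row in G]

known = set(random.sample(range(cols), 3))      # a user knowing any 3 blocks
unknown = [j for j in range(cols) if j not in known]

# subtract known contributions, then solve the 12 x 12 system mod p
rhs = [(t - sum(G[i][j] * blocks[j] for j in known)) % p
       for i, t in enumerate(tx)]
A = [[G[i][j] for j in unknown] for i in range(rows)]
for c in range(rows):                           # Gauss-Jordan elimination
    piv = next(r for r in range(c, rows) if A[r][c])
    A[c], A[piv], rhs[c], rhs[piv] = A[piv], A[c], rhs[piv], rhs[c]
    inv = pow(A[c][c], p - 2, p)
    A[c] = [a * inv % p for a in A[c]]
    rhs[c] = rhs[c] * inv % p
    for r in range(rows):
        if r != c and A[r][c]:
            f = A[r][c]
            A[r] = [(a - f * b) % p for a, b in zip(A[r], A[c])]
            rhs[r] = (rhs[r] - f * rhs[c]) % p
assert rhs == [blocks[j] for j in unknown]
print("user recovers all 12 unknown blocks from 12 MDS-coded transmissions")
```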
As we will see in the next section, the delivery rate of our strategy depends on the request vector only through the set of files that have been requested at least once. Therefore, we have shown that with $(t_1, t_2) = (2, 1)$, and assuming both files are requested at least once, we can achieve a delivery rate of $1$. We can perform the same process for every choice of $(t_1, t_2)$ that satisfies $K \ge t_1 \ge t_2 \ge 0$. The result is summarized in Table II. Note that if only one of the two files is requested, the delivery rate takes an even simpler form, also given in Table II.

TABLE II: The set of delivery rates of our proposed scheme for all possible choices of $(t_1, t_2)$ and all possible sets of requested files. We have $K = 4$ and $N = 2$.

By performing memory-sharing among all such points, we are able to achieve the lower convex envelope of the points in Table II. The expected delivery rate as a function of $(m_A, m_B)$ for a fixed popularity profile $(p_A, p_B)$ is plotted in Figure 2. Note that the dotted half of the figure, where $m_A > m_B$, would correspond to switching the roles of the two files $A$ and $B$ and allocating more cache to the less popular file. The next question is how to find the best delivery rate for a particular cache size $M$. For this, we first have to restrict Figure 2 to the trajectory $m_A + m_B = M$. As an example, the thick red curve plotted on the figure corresponds to one such trajectory. In order to find the best caching strategy for a cache size of $M$, we need to choose the global minimum of this red curve. This can be done efficiently due to the convexity of the curve and, as we will see in Section VIII-A, can even be performed via binary search over the set of break points of the curve. The red circle marks this minimum, namely the optimal expected delivery rate and the corresponding split of the cache between the two files. Theorem 3 from Section VIII will tell us that under the restriction of uncoded placement, this is the best expected delivery rate that one can achieve for the given $M$.
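As a small illustration of this search step (the break points below are placeholders, not the actual rates of Table II), the minimizer of a convex piecewise-linear curve can be located by binary search on the sign of the slope:

```python
# Illustrative sketch: the expected rate restricted to the line
# m_A + m_B = M is convex and piecewise linear, so its minimum over the
# break points is found by binary search on where the slope turns positive.
def argmin_convex(breaks):
    """breaks: [(m_A, expected_rate)] sorted by m_A, convex in m_A."""
    lo, hi = 0, len(breaks) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if breaks[mid][1] <= breaks[mid + 1][1]:   # slope now non-negative
            hi = mid
        else:                                      # still descending
            lo = mid + 1
    return breaks[lo]

curve = [(0.0, 1.9), (0.25, 1.4), (0.5, 1.1), (0.75, 1.2), (1.0, 1.6)]
print(argmin_convex(curve))                        # -> (0.5, 1.1)
```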

Fig. 2: The expected delivery rate for the caching problem with 4 users and 2 files, plotted versus $(m_A, m_B)$ for file popularities $(p_A, p_B)$. The thick red curve traces the set of pairs $(m_A, m_B)$ which result in a cache of size $M$. The red circle on the curve is the minimizer of the red curve, which provides the optimal delivery rate under uncoded placement for the given $M$.

V The Placement Strategy

In this section we describe our general placement strategy. Note that our placement strategy can be applied to an arbitrary number of files and users, and can even be adapted to user-specific file popularities (see Remark 2). Without loss of generality, suppose that the files are indexed in decreasing order of their popularity. In other words, file $n$ is at least as popular as file $n+1$ for all $n \in [N-1]$. The placement strategy begins with selecting $N$ integers $t_1, \dots, t_N$ such that $K \ge t_1 \ge t_2 \ge \cdots \ge t_N \ge 0$. Each $t_n$ is proportional to the amount of cache that we are willing to allocate to file $n$. We divide each file into

$$F' = \binom{K}{t_1} \binom{t_1}{t_2} \cdots \binom{t_{N-1}}{t_N} \qquad (15)$$

subfiles of equal size. We label each subfile by $N$ sets $(S_1, \dots, S_N)$, where $|S_n| = t_n$ for all $n \in [N]$ and $S_N \subseteq S_{N-1} \subseteq \cdots \subseteq S_1 \subseteq [K]$. It should be evident that there are exactly $F'$ such subfiles. Next, for file $n$, we require each user $k$ to store the subfile $W_{n,(S_1, \dots, S_N)}$ if and only if $k \in S_n$. This process is summarized in Algorithm 1, and an illustration for the case of $N = 2$ is provided in Figure 3. We can compute the amount of cache dedicated by each user to file $n$ as follows:

$$m_n = \frac{t_n}{K} \qquad (16)$$

since, by symmetry, user $k$ appears in $S_n$ for exactly a fraction $t_n/K$ of the labels.

This results in a total normalized cache size of

$$M = \sum_{n=1}^{N} m_n = \frac{1}{K} \sum_{n=1}^{N} t_n. \qquad (17)$$
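A compact sketch of the placement rule (in the spirit of Algorithm 1, whose exact pseudocode appears in the paper; variable names are ours) for arbitrary $K$ and $t_1 \ge \cdots \ge t_N$:

```python
# Sketch of the general placement: subfiles of every file are labelled by
# nested sets S_N ⊆ ... ⊆ S_1 ⊆ [K] with |S_n| = t_n; user k stores the
# subfile of file n labelled (S_1, ..., S_N) iff k ∈ S_n.
from itertools import combinations
from math import comb

def nested_labels(K, t):
    """All chains S_1 ⊇ ... ⊇ S_N with S_1 ⊆ {1..K} and |S_n| = t[n-1]."""
    chains = [[S] for S in combinations(range(1, K + 1), t[0])]
    for tn in t[1:]:
        chains = [c + [S] for c in chains for S in combinations(c[-1], tn)]
    return chains

K, t = 4, [2, 1]                               # the running example
labels = nested_labels(K, t)
F_sub = 1
for a, b in zip([K] + t, t):                   # product of binomials, eq. (15)
    F_sub *= comb(a, b)
assert len(labels) == F_sub == 12

cache = {k: [(n, c) for c in labels for n in range(1, len(t) + 1)
             if k in c[n - 1]] for k in range(1, K + 1)}
M = sum(t) / K                                 # total normalized cache, eq. (17)
print(len(cache[1]), "subfiles cached per user; normalized memory M =", M)
# -> 9 subfiles per user, each of size 1/12 of a file, i.e. M = 3/4
```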

Fig. 3: Venn diagram of the placement strategy for the case of two files. Users whose indices appear in $S_1$ cache the subfile $W_{1,(S_1,S_2)}$, and users whose indices appear in $S_2$ cache the subfile $W_{2,(S_1,S_2)}$.
Remark 1.

In the special case of $t_1 = t_2 = \cdots = t_N$, all the sets $S_1, \dots, S_N$ will be equal, and can be represented by only one set $S$. In this case, our placement phase is equivalent to the uniform placement strategy proposed in [1].

Remark 2.

More generally, each user $k$ could choose a permutation $\pi_k : [N] \to [N]$ and store the subfile $W_{n,(S_1, \dots, S_N)}$ if and only if $k \in S_{\pi_k(n)}$. This would allow different users to have different preferences in terms of the popularities of the files, while still keeping all the cache sizes equal and maintaining the same sub-packetization for all files. To provide a simple example, suppose we have two users and two files $W_1$ and $W_2$ with $(t_1, t_2) = (2, 1)$, resulting in a cache size of $M = 3/2$. In this case, user 1 could cache $W_{1,(\{1,2\},\{1\})}$ and $W_{1,(\{1,2\},\{2\})}$ but only $W_{2,(\{1,2\},\{1\})}$. On the other hand, user 2 could cache $W_{2,(\{1,2\},\{1\})}$ and $W_{2,(\{1,2\},\{2\})}$ but only $W_{1,(\{1,2\},\{2\})}$.
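A sketch of this permutation-based variant for the two-user example above (the dictionary encoding of $\pi_k$ is ours):

```python
# Remark 2 sketch: user k applies a permutation pi_k to the file indices
# before the caching rule, so each user favours their own preferred file
# while cache sizes and subpacketization stay equal across users.
from itertools import combinations

K, t = 2, [2, 1]
labels = [(S1, S2) for S1 in combinations(range(1, K + 1), t[0])
                   for S2 in combinations(S1, t[1])]
pi = {1: {1: 1, 2: 2},                        # user 1 prefers file 1
      2: {1: 2, 2: 1}}                        # user 2 prefers file 2
for k in (1, 2):
    stored = [(n, c) for c in labels for n in (1, 2) if k in c[pi[k][n] - 1]]
    print(f"user {k} stores", stored)
```

Both users end up storing three of the four subfiles (a cache of $3/2$ files), each fully caching their own preferred file and half of the other, which matches the cache size computed from (17).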