Global mobile data traffic is predicted to increase sevenfold by 2021, of which over three-fourths will be multimedia . The ever-increasing mobile data traffic has imposed a huge burden on the networks. Recently, increasingly available and inexpensive cache memory has provided an alternative means to accommodate the explosive data traffic. In fact, by prefetching video contents at the end users, locally cached contents can be served directly once they are requested; thus, caching reduces the data traffic, and this saving is referred to as the local caching gain.
In order to fully exploit the potential benefit of the cache, Maddah-Ali and Niesen proposed a centralized coded caching scheme (referred to as the MN scheme for brevity) in , in which a single server containing N files of equal length coordinates K users over a shared link, and every user is assumed to be provisioned with an identical cache of size M files. The coded caching scheme consists of two independent phases, namely, the placement phase and the delivery phase, which together are referred to as one round in this paper. The placement phase occurs during off-peak hours, in which each user is able to access the server to fill its cache without knowledge of the users' demands. If a user prefetches only portions of the files at the server, the placement is called uncoded; if the user fills its cache with linear combinations of sub-packets from multiple files, the placement is called coded. The delivery phase follows during peak hours, when the users' demands are revealed. The server designs and multicasts coded messages through an error-free shared link to sets of users simultaneously, by which a global multicasting gain is obtained. By the end of the delivery phase, each user reconstructs its requested content on the basis of the received coded messages and its own cached contents. For such a coded caching scheme, by jointly optimizing the placement and delivery phases, the worst-case traffic load over all possible demands is to be minimized.
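To make the two phases concrete, the following is a minimal Python sketch of the MN placement and delivery, under the standard assumption that t = KM/N is an integer; the function name `mn_scheme` and the symbolic packet representation (file index, user subset) are our own illustration, not the authors' notation.

```python
from itertools import combinations

def mn_scheme(K, M, N, demands):
    """Sketch of the Maddah-Ali--Niesen scheme, assuming t = K*M/N is an integer."""
    t = K * M // N
    assert t * N == K * M, "this sketch requires integer t = KM/N"
    users = range(K)
    # Placement: split each file into C(K, t) packets, one per t-subset of users;
    # user k caches packet (n, S) of every file n for each subset S containing k.
    subsets = list(combinations(users, t))
    cache = {k: {(n, S) for n in range(N) for S in subsets if k in S}
             for k in users}
    # Delivery: one XORed message per (t+1)-subset of users; each message is
    # listed symbolically as the packets (d_k, S \ {k}) it combines.
    messages = [[(demands[k], tuple(u for u in S if u != k)) for k in S]
                for S in combinations(users, t + 1)]
    rate = len(messages) / len(subsets)   # = (K - t) / (t + 1)
    return cache, messages, rate
```

For K = 4 users, N = 4 files, and cache size M = 2, this gives t = 2, six packets per file, four coded messages, and rate 2/3.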
Motivated by the MN scheme, how to further reduce the required transmission load has attracted much research attention. An improved lower bound on the transmission load was derived from the combinatorial problem of optimally labeling the leaves of a directed tree in . By interference elimination, a new scheme with a smaller transmission load was proposed in . More generally, the transmission load for various demand patterns was derived by modifying the delivery phase of the MN scheme in . Note that it was shown in  and , via graph theory and an optimization framework, that the MN scheme achieves the minimum transmission load under a specific uncoded placement rule. Moreover, the MN scheme has been extended to different network scenarios, for instance, multi-server systems [8, 9], D2D networks , hierarchical networks , combination networks , and heterogeneous networks .
It should be noted that all the aforementioned works considered the coded caching scheme design within one round, namely, with only one placement-then-delivery operation. Nonetheless, in practical applications, the coded caching system should be devised to operate over multiple rounds, in which the number of users may be time-varying. For instance, residents (fixed users) and visiting guests (mobile users) may coexist in one network. Intuitively, the residents may stay in the network for a long time (multiple rounds), while the visiting guests may dynamically move in or out in different rounds of the coded caching operation. (If some fixed users request the same file as in a previous round, they can be removed from the coded caching design since their traffic requirements have already been fulfilled.) For such a dynamic network, applying the coded caching scheme to all the users at each round separately means that variations in the participating users lead to frequent updates in both the content caching and the signal transmission, in order to make the placement and the delivery fit the variations. This may become undesirable and resource-inefficient, especially when most of the users are fixed while only a few users join or leave. In this dynamic setup with multiple rounds of service requests, how to tailor the coded caching design such that the content updating in the placement phase is minimized and the full caching gain in the delivery phase is retained is a very interesting problem. This is exactly the motivation of our work in this paper.
In order to effectively handle the dynamic coded caching requirement, we need to rethink the content caching in the placement phase and the coded signal generation in the delivery phase over multiple rounds, such that the content updating at the fixed users is minimized, while the coded caching gain for all participating users in the delivery phase is maximized. Intuitively, the more users join the network in the same round, the higher the possibility of achieving a larger coding gain. Therefore, both the fixed and the mobile users should be considered when we design a coded caching scheme. However, in practice we have no knowledge of the mobile users in forthcoming rounds. To handle this issue, in this paper we propose a Concatenating based placement and Saturating Matching based delivery design (CSM) that requires no knowledge of the mobile users. The placement employs a concatenating method, which has been widely used to cope with asynchronism. With this method, we can keep the cache contents unchanged for those fixed users who have already participated in previous rounds of coded caching. For the newly joined mobile users, the server only needs to decide on the cache content placement by further sub-dividing the packets utilized by the fixed users. In this way, the amount of content updating is minimized. Moreover, matching over a bipartite graph allows us to combine the coded multicast transmissions intended for different groups (i.e., fixed and mobile users). Motivated by this, the saturating matching based delivery scheme is proposed. Our analysis reveals that the proposed CSM coded caching scheme is order-optimal.
The rest of the paper is organized as follows. In Section II, the system model and some results on the original coded caching system in  are reviewed to introduce the dynamic coded caching design problem. Then the proposed CSM coded caching scheme and its order-optimality are presented in Sections III and IV, respectively. Finally, we conclude our work in Section V.
II System Model and Problem Formulation
II-A The Centralized Coded Caching Model
Let us consider the centralized coded caching system (Fig. 1),
in which a server containing N files, denoted by W_1, ..., W_N, connects through an error-free shared link to K users, with K <= N, and every user k has a cache of size M files, for k = 1, ..., K. The system contains two independent phases:
Placement phase: each file is divided into packets of equal size, and then each user caches some packets of each file, subject to its cache size M. Let Z_k denote the cache contents at user k, which are assumed to be known to the server.
Delivery phase: each user randomly requests one file from the server. The file requested by user k is denoted by W_{d_k}, and the request vector of all users is denoted by d = (d_1, ..., d_K). Based on the cache contents and the requests of all the users, the server transmits coded messages to all users such that every user's request can be satisfied.
In such a system, the amount of worst-case transmission over all possible requests, referred to as the transmission rate R, is expected to be as small as possible.
(MN scheme ) For any positive integers K, M, and N with M <= N, there exists a scheme with transmission rate R_MN = K(1 - M/N) * 1/(1 + KM/N).
For better understanding, a sketch of the MN scheme is depicted in Algorithm 1, which includes both the placement and the delivery phases.
From Algorithm 1, it is clear that each file is divided into C(K, t) nonoverlapping equal-sized packets, where t = KM/N, and for a given request vector there are in total C(K, t+1) coded messages. To sum up, the transmission rate of the MN scheme can thus be derived as R_MN = C(K, t+1) / C(K, t) = (K - t) / (t + 1) = K(1 - M/N) * 1/(1 + KM/N).
II-B Dynamic Coded Caching Problem Formulation
As illustrated in Fig. 2, we now consider a network configuration similar to the above model, except that there are two sets of users, namely, fixed users, each of which has a cache of a given size, and mobile users, each of which has a cache of a possibly different size. It should be noted that the cache sizes of the two user sets are not necessarily the same. For notational brevity, throughout the paper we refer to this model as the dynamic coded caching system.
The above coded caching system can be used to characterize the dynamic network at any round, in which mobile users join the system and want to perform coded caching with the fixed users that are already in the system. The first problem is related to the placement design. We aim to design a placement scheme intended only for the newly joined users in order to minimize the cache content update, which makes sense when the files at the server do not change and the fixed users have already filled their caches in previous rounds. (Note that if some new users move into the network during the current round, their caches will be filled in the placement phase of the following round.) On the basis of the cache contents designed for these users, how to design the coded multicasting among all the users so as to minimize the required transmission load is the second design objective.
In order to derive a placement and delivery scheme suitable for the aforementioned dynamic coded caching applications, a bipartite graph representation of coded caching will be utilized, with the following notation used in the subsequent analysis. A graph is denoted by G = (V, E), where V is the set of vertices and E is the set of edges; a subset of edges is a matching if no two edges share a common vertex. A bipartite graph, denoted by G = (A, B; E), is a graph whose vertices are divided into two disjoint parts A and B such that every edge in E connects a vertex in A to one in B. For a set S of vertices in A, let N(S) denote the set of all vertices in B adjacent to some vertex of S. The degree of a vertex is defined as the number of vertices adjacent to it. If every vertex of A has the same degree, we call it the degree of A and denote it by deg(A).
III The Concatenating Based Placement and the Saturating Matching Based Delivery Scheme
In this section, we mainly focus on the coded caching design and the transmission rate it achieves. Before introducing our proposed scheme, note that the original MN scheme can be applied directly to the considered system by regarding the fixed users and the mobile users as two separate groups. Thus, for ease of comparison, we first give the following trivial example of directly utilizing the MN scheme.
III-A Baseline Scheme
Intuitively, when adopting the MN scheme without updating the cache contents of the fixed users, all the
users can be classified into two groups, fixed and mobile, and the MN coded caching scheme can be performed for the two groups separately, as illustrated in the following Example 1.
Consider a system consisting of four fixed users, three mobile users, and the files at the server. Given the assumed cache sizes, the MN parameters of the two groups are obtained accordingly.
Firstly, by applying the placement scheme in Algorithm 1 to the four fixed users, each file is split into packets of equal size, i.e.,
and the four fixed users will cache the following contents in the placement phase:
Similarly, for the three mobile users, each file is split into packets of the same size, i.e.,
and the mobile users will cache the following contents in the placement phase:
Without loss of generality, fix a request vector. By using the delivery scheme in Algorithm 1, the server respectively sends
to the fixed users and the mobile users.
By Lemma 1, we obtain the transmission rate of each of the two groups. Summing them up, the total transmission rate of the baseline scheme follows.
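Numerically, the baseline rate is simply the sum of two MN rates. In the sketch below, the group sizes (four fixed and three mobile users) follow Example 1, but the t parameters are illustrative assumptions, since the cache sizes in the example setup are not reproduced here.

```python
from math import comb

def mn_rate(k, t):
    """MN worst-case rate for k users with integer parameter t = k*M/N."""
    return comb(k, t + 1) / comb(k, t)   # = (k - t) / (t + 1)

# Baseline: run the MN scheme on the two groups separately and add the rates.
r_fixed = mn_rate(4, t=2)    # assumed t = 2 for the fixed group -> 2/3
r_mobile = mn_rate(3, t=1)   # assumed t = 1 for the mobile group -> 1
print(r_fixed + r_mobile)    # total baseline rate = 5/3 under these assumptions
```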
From the above Example 1, it can be observed that, when directly applying the MN scheme to the considered system, the users need to be divided into two groups, and the MN scheme is utilized in each group separately. Although this strategy is applicable, during delivery the multicasting opportunities between the fixed users and the mobile users are lost, which is less efficient. At this point, how to design a scheme that maximally exploits the multicasting gain among all the users is an interesting problem and is worth investigating.
III-B The Proposed Dynamic Coded Caching Design and The Main Results
Unlike applying the MN scheme to the two user sets separately, for the considered caching system we propose a new design that concatenates the two user groups, such that the coding gain is enlarged. More specifically, the coded multicasting gain of our proposed scheme is, in a typical regime, only one less than the maximum gain of the MN scheme applied jointly to all users. The transmission rate of our proposed scheme is given by Theorem 1.
For positive integers satisfying the cache size constraints of the two user groups, there exists a coded caching scheme that achieves the following transmission rate:
In Algorithm 2, PLACEMENT 1 is for the fixed users, and may have been performed in any previous round, while PLACEMENT 2 is designed for the mobile users. It can be observed that PLACEMENT 1 is independent of PLACEMENT 2, whereas the placement for the mobile users depends on the number of fixed users, so as to maximally utilize the coded multicasting opportunities among all the users. After this placement phase, the cache contents of all the users are assumed to be known by the server. Then the delivery phase follows, as described in Algorithm 3.
From Algorithm 3, it can be observed that the whole delivery phase can be realized by first finding the vertices of the bipartite graph and then constructing the graph according to the requirements described in Line 15. Finally, based on the constructed graph, the coded messages are found as described in Procedure Delivery. The workflow of Algorithm 3 can be briefly summarized as follows: in Lines 2-13, the vertices of a bipartite graph are generated from the required sub-packets, i.e., the per-group multicasting gains are created; in Line 15, its edges are defined, i.e., the total coded multicasting gain is created according to the users' requests; finally, in Procedure Delivery, based on a saturating matching, messages with the full multicasting gain are broadcast as much as possible.
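The following is a schematic Python sketch of this idea, not the authors' exact Algorithm 3: messages built for the fixed group and for the mobile group form the two vertex classes, compatible pairs are connected by edges, a maximum matching pairs them, and each matched pair is sent as a single combined transmission. The names `deliver` and `compatible` and the message representation are hypothetical.

```python
def kuhn_match(adj, left):
    """Maximum bipartite matching via augmenting paths; returns left->right pairs."""
    match = {}  # maps a right vertex to its matched left vertex
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in left:
        augment(u, set())
    return {u: v for v, u in match.items()}

def deliver(fixed_msgs, mobile_msgs, compatible):
    """Pair fixed-group and mobile-group messages that can be combined into one
    transmission; matched pairs go out together, the rest are sent alone."""
    adj = {i: [j for j in range(len(mobile_msgs))
               if compatible(fixed_msgs[i], mobile_msgs[j])]
           for i in range(len(fixed_msgs))}
    pairs = kuhn_match(adj, list(range(len(fixed_msgs))))
    sent = [(fixed_msgs[i], mobile_msgs[j]) for i, j in pairs.items()]
    sent += [(fixed_msgs[i],) for i in range(len(fixed_msgs)) if i not in pairs]
    sent += [(mobile_msgs[j],) for j in range(len(mobile_msgs))
             if j not in set(pairs.values())]
    return sent
```

The saturating matching guarantees that, whenever Hall's condition holds, every vertex on one side is paired, so the number of standalone transmissions is minimized.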
III-C The Illustration of The Proposed Dynamic Coded Caching Design
To compare with the baseline scheme in Example 1, we assume the same system setup here. According to Algorithm 2 and Algorithm 3, the placement and delivery phases are explicated as follows. It should be highlighted that the placement for the fixed users and that for the mobile users are assumed to be performed in different rounds, as addressed before.
Placement phase: The placement phase can be completed by the following two steps.
By the corresponding lines in Algorithm 2, each user caches
After this step, each user caches a certain number of sub-packets from each packet. Since each file is first divided into packets, the total number of sub-packets cached by each user can be counted accordingly. Dividing this count by the total number of sub-packets in each file shows that each user caches an amount of data equal to its cache size, which satisfies the cache size limitation.
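The counting argument above can be checked in general for the standard MN packetization, which we assume here in place of the elided numbers: a user caches C(K-1, t-1) of the C(K, t) packets of every file, i.e., exactly a fraction t/K = M/N, so the cache constraint is met with equality. The parameters below are generic, not those of the example.

```python
from math import comb

def cached_fraction(K, t):
    """Fraction of each file a user stores under MN placement with parameter t:
    it caches comb(K-1, t-1) of the comb(K, t) packets of every file."""
    return comb(K - 1, t - 1) / comb(K, t)

# The fraction equals t/K = M/N for all valid (K, t), meeting the cache
# size constraint exactly.
for K in range(2, 8):
    for t in range(1, K + 1):
        assert abs(cached_fraction(K, t) - t / K) < 1e-12
```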
Delivery phase: Also assume that , and from the caching result in Algorithm 2, we have
By the first procedure in Algorithm 3, we have the following vertices, where the elements in the two vertex sets are labeled in shorthand, respectively.