On Decentralized Multi-Transmitter Coded Caching

09/14/2021
by Mohammad Mahmoudi, et al.
University of Tehran

This paper investigates a setup in which multiple transmitters serve multiple cache-enabled clients through a linear network, a model that covers both wired and wireless transmission scenarios. We study decentralized coded caching settings in which there is either no cooperation or limited cooperation between the clients during the cache content placement phase. For the fully decentralized case (i.e., no cooperation), we analyze the performance of the system in terms of the Coding Delay metric. We then investigate a hybrid cache content placement scenario with two groups of users in different placement situations (i.e., limited cooperation). Finally, we examine the effect of finite file size in the above scenarios.



I Introduction

In recent years, due to the unprecedented growth in demand for multimedia services, the use of distributed memories across the network to replicate requested content, also known as Caching [1], [2], has been proposed as a promising method to alleviate network bandwidth bottlenecks. In particular, video delivery has emerged as the main driver of the steep increase in wireless traffic in modern networks. As an example of deploying caches in wireless networks, the authors of [3] propose a system in which helpers with low-rate backhaul but high storage capacity store popular video files. In the delivery phase, the files available at the helpers are delivered locally to the requesting users, and the main base station transmits only the files not available from the helpers, which relieves the burden on network backhaul links.

Along another research line, the combined use of wireless edge caching and coded multicasting has been suggested to serve multiple unicast demands concurrently via coded multicast transmissions. This approach, known as Coded Caching, is a breakthrough idea in this paradigm and was first proposed in [6]. In a few words, the caches are first filled during the network's low-traffic hours, without knowledge of the actual demands of the users. In peak traffic hours, after the users' requests are revealed, coded messages are multicast to groups of users, completing the delivery of the distinct contents requested. Following this paradigm, a decentralized coded caching scheme based on independent random content placement was introduced by Maddah-Ali and Niesen [12]. It was shown that, in the large file size regime, the multicasting gain for large networks is almost the same as in the centralized caching scenario. The finite file size regime was considered in [14]. Also, in [15] the authors proposed a caching scheme that outperforms the scheme of [12] when the file size is not large.
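The independent random placement of [12] described above can be sketched as follows; the user count, file size, and cache fraction below are illustrative values, not parameters from the paper:

```python
import random

def decentralized_placement(num_users, file_bits, cache_fraction, seed=0):
    """Each user independently keeps each bit of a file with probability
    cache_fraction = M/N -- no coordination between users is needed.
    Returns, per user, the set of cached bit indices of one file."""
    rng = random.Random(seed)
    return [
        {b for b in range(file_bits) if rng.random() < cache_fraction}
        for _ in range(num_users)
    ]

caches = decentralized_placement(num_users=4, file_bits=10_000,
                                 cache_fraction=0.3)
# Law of large numbers: every cache ends up holding roughly 30% of the
# file's bits, even though no user coordinated with any other.
for c in caches:
    assert abs(len(c) / 10_000 - 0.3) < 0.02
```

Because each bit is kept independently, the overlap pattern of the caches is random, which is exactly what the coded delivery phase later exploits.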

In order to boost the performance of coded caching, many works have proposed using multiple transmitters/antennas, such as [7], [10], and [11]. The main design goal in such works is to benefit from a combined multi-antenna multiplexing and global coded caching gain, by carefully borrowing ideas from Zero-Forcing (ZF) techniques. It has been shown that, in such a scenario, one can obtain an additive gain from both the coded caching and the multi-antenna multiplexing domains. Further works have explored the finite-SNR performance of multi-antenna coded caching in [8] and [9], which suggest similar gains.
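The zero-forcing idea these designs borrow from can be illustrated with a toy real-valued example (a plain 2-user, 2-antenna channel with an arbitrarily chosen matrix; this is a generic ZF sketch, not the finite-field scheme of the paper):

```python
def mat_inv_2x2(m):
    """Inverse of a 2x2 real matrix via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# A fixed 2-user, 2-antenna channel (one row per user), chosen arbitrarily.
H = [[2.0, 1.0], [1.0, 3.0]]
W = mat_inv_2x2(H)          # zero-forcing precoder: H @ W = I

s = [5.0, -7.0]             # one intended symbol per user
x = mat_vec(W, s)           # precoded transmit vector
y = mat_vec(H, x)           # received symbols: interference is nulled

assert all(abs(yi - si) < 1e-9 for yi, si in zip(y, s))
```

Inverting the channel at the transmitter means each user receives only its own symbol, which is the multiplexing mechanism the multi-transmitter caching schemes combine with coded multicasting.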

In this paper, we investigate a coded caching scenario with multiple transmitters (as in the above works), but under a decentralized cache content placement assumption. The decentralized setting was first investigated in a single-transmitter setup in [12] (and many follow-up works), which we extend to the multi-transmitter setup. To do so, we first derive the so-called Coding Delay of a content delivery algorithm that combines the decentralized content delivery algorithm of [12] with the ZF-based delivery algorithm of [7], and discuss the insights it provides. Then, we consider a heterogeneous setup in which there are two groups of users at the placement phase. The first group can coordinate their caching and follow a centralized coded caching setup, while the users in the second group follow the fully decentralized caching scenario. At the delivery phase, the groups are merged and the transmitters must fulfill all the demands. For this setup we analyze two delivery strategies: the first serves the groups in a TDMA fashion, while the second serves them altogether. Furthermore, we investigate the effect of limited subpacketization in these scenarios and see how fast the performance converges to the infinite file size case, as a function of the allowed file size.

The most relevant work to ours is [13], which considers a similar problem in the wireless context, but with a focus on finite-SNR analysis and beamformer design for the multi-antenna transmitter. Since we consider the Coding Delay (which is equivalent to a high-SNR analysis of the wireless setup), we arrive at a much simpler closed-form performance expression, which makes the role of multiple transmitters in the decentralized setting easy to observe. Furthermore, in this work we consider a hybrid cache content placement scenario that was not considered in [13]. Moreover, here we analyze the finite file size regime as well.

Finally, let us review the notation used in this paper. We use lower-case bold-face symbols for column vectors, upper-case bold-face symbols for matrices, and calligraphic symbols for sets. The transpose of a matrix or vector is denoted by the usual superscript notation. For any two sets, the set difference contains those elements of the first set absent from the second. We also use the standard notation for the set of natural numbers, for a finite field with a given number of elements, and for the set of all matrices whose elements belong to that field. Finally, a random linear combination of a collection of symbols is one whose random coefficients are chosen uniformly from the field, with addition performed in the corresponding finite field.
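The random linear combination operator just described can be sketched over a small prime field; the field size and symbol values below are illustrative:

```python
import random

def rand_lin_comb(symbols, q, rng):
    """A random linear combination over GF(q), q prime: coefficients are
    drawn uniformly from the field and addition is performed modulo q."""
    return sum(rng.randrange(q) * s for s in symbols) % q

rng = random.Random(0)
q = 101                     # a small prime field, for illustration
symbols = [17, 42, 99]      # field elements to be combined
c = rand_lin_comb(symbols, q, rng)
assert 0 <= c < q           # the result is again a field element
```

With a large enough field, independently drawn combinations of this kind are linearly independent with high probability, which is the property the delivery scheme relies on.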

II System Model and Assumptions

In this paper, we consider a content delivery scenario where transmitters are connected to users via a Linear Network, as shown in Figure 1. All the transmitters have full access to a content library of files, where each file consists of a given number of bits. Also, each user is equipped with a cache of a given size in bits. We represent data by multi-bit symbols, treated as members of a finite field.

Fig. 1: System Model: transmitters are connected to users via a linear network with transfer matrix .

The system experiences two different traffic conditions, namely off-peak and peak hours, and thus operates in two phases. In the first phase, called the Cache Content Placement phase, which takes place during the network's off-peak hours, each user caches data, which can be any function of the library, without prior knowledge of its actual requests. If the users cooperate in this phase, we call it a centralized coded caching scenario; otherwise, it is called a decentralized coded caching setup.

In the second phase, called the Content Delivery phase, which takes place during the network's peak hours, each user reveals a single content request, and the requested contents can be represented by a demand vector. In order to fulfill the users' demands, the transmitters employ the linear network to deliver the remaining portions of the files not cached at the users. For the sake of presentation simplicity we assume ; extending the results to the other cases is straightforward.

We assume a slotted time model for the delivery phase, where each slot represents a single channel use. More specifically, in each time slot every transmitter sends a symbol, which can be an arbitrary function of the library and depends on the users' requests. The linear network connecting the transmitters and users is assumed to be an error-free, zero-delay network; thus, after the transmitters inject their data, each user receives its channel output in the same slot. Due to the linearity of the channel, we have the following relation between the symbols sent by the transmitters and the symbols received by the users:

(1)

where

    ,   

where the channel matrix represents the linear transformation imposed by the channel, which is assumed to be static during the delivery phase. For simplicity, we assume that its elements are chosen uniformly at random from the field, and that the field size is large enough to fulfill all the independence requirements used throughout the paper.

The primary purpose of the delivery phase is to reduce the number of time slots required to respond to user requests. Let us denote the number of time slots (channel uses) required to satisfy a specific demand vector. Then the Coding Delay, which is our main performance metric in this paper, is defined as

(2)

It should be noted that the linear network model considered in this paper is very general and accommodates both wired and wireless content delivery scenarios. For example, if the transmitters and receivers are connected via a wired network modeled by a Directed Acyclic Graph (DAG) whose intermediate nodes perform a low-complexity, topology-oblivious Random Linear Network Coding (RLNC) scheme, the wired content delivery scenario translates to this linear network model (see, e.g., [7]). On the other hand, if we assume a wireless Multiple-Input Single-Output Broadcast (MISO-BC) setup, the high-SNR Degrees-of-Freedom (DoF) analysis translates to the coding delay analysis used in this paper (see, e.g., [8]).
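The topology-oblivious RLNC behavior mentioned above can be sketched end to end; the field size, packet sizes, and two-hop topology below are illustrative assumptions, not taken from [7]:

```python
import random

Q = 257  # prime field size; all arithmetic is modulo Q

def rlnc_mix(tagged, num_out, rng):
    """Emit num_out random GF(Q)-combinations of (coeff_vector, payload)
    pairs.  Works at a source (tags start as unit vectors) and at any
    intermediate node, which simply re-mixes whatever it received."""
    out = []
    for _ in range(num_out):
        w = [rng.randrange(Q) for _ in tagged]
        coeffs = [sum(wi * t[0][j] for wi, t in zip(w, tagged)) % Q
                  for j in range(len(tagged[0][0]))]
        data = [sum(wi * t[1][j] for wi, t in zip(w, tagged)) % Q
                for j in range(len(tagged[0][1]))]
        out.append((coeffs, data))
    return out

def rlnc_decode(tagged, k):
    """Gaussian elimination over GF(Q) on the augmented [coeffs | data]
    rows; returns the k original payloads."""
    rows = [t[0][:] + t[1][:] for t in tagged]
    for col in range(k):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], -1, Q)
        rows[col] = [v * inv % Q for v in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % Q for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]

rng = random.Random(7)
src = [[rng.randrange(Q) for _ in range(5)] for _ in range(3)]   # 3 packets
tagged = [([int(i == j) for j in range(3)], p) for i, p in enumerate(src)]
relay = rlnc_mix(tagged, 4, rng)   # a topology-oblivious relay re-mixes
sink = rlnc_mix(relay, 4, rng)     # a second hop mixes again
assert rlnc_decode(sink, 3) == src
```

The point of the sketch is that intermediate nodes never need to know the topology: as long as the end-to-end coefficient matrix stays full rank (which holds with high probability over a large field), the sink can decode.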

III Decentralized Multi-Transmitter Coded Caching

This section shows how adding transmitters can reduce the coding delay defined in the previous section in the decentralized coded caching setup. To do so, we adapt the zero-forcing-based transmission scheme introduced in [7] to the decentralized setup introduced in [12] and analyze its performance in terms of coding delay. The gain of the proposed algorithm comes from exploiting vector, parallel transmissions in a decentralized coded caching system; we refer to it as the decentralized multi-transmitter coded caching algorithm.

The primary approach is presented in Algorithm 1 which provides the details of the cache content placement and content delivery phases.

In Algorithm 1, one term represents the piece of the required file for a user that is stored in the caches of all members of a subset except that user, and the operator denotes linear combinations with random coefficients of the file components. Finally, the resulting term represents the linear combination of the file components whose coefficients are generated by this operator.

In Theorem 1 we derive the performance of Algorithm 1 in terms of coding delay, which clearly illustrates the benefits of using multiple transmitters in this setting.

Theorem 1.

For a decentralized multi-transmitter coded caching setup, Algorithm 1 achieves the following coding delay:

(3)

where

is a random variable whose distribution we approximate by a Gamma distribution as follows:


(4)

Here, the remaining parameter is estimated numerically.

Proof.

Please refer to the Appendix in Section VII. ∎

Theorem 1 characterizes the performance of the system in the finite file size regime, which results in a random coding delay, as highlighted by the random variables involved. Following the concentration-of-measure approach used in [12], one can arrive at a much simpler result for the large file size regime, formally stated in the following corollary.

Corollary 1.

In the limit of large file size, Algorithm 1 achieves the following coding delay (with high probability):


(5)
Proof.

Please refer to the Appendix in Section VII. ∎

1:  Procedure PLACEMENT
2:  for   do
3:     user independently caches a subset of bits of file , chosen uniformly at random.
4:  end for
5:  end Procedure
6:  Procedure DELIVERY
7:  
8:  for   do
9:     
10:     for   do
11:        for  do
12:           
13:           for  do
14:              Design for all
15:              
16:           end for
17:           Server sends
18:        end for
19:        
20:     end for
21:  end for
22:  end Procedure
Algorithm 1 Decentralized Multi-Transmitter Coded Caching

The formal proofs of Theorem 1 and Corollary 1 are provided in the Appendix; here we first discuss the implications of the results and then go through the main concepts behind the corresponding algorithm. The only difference between (5) and the result of [12] is that here, in each term of the summation, we have a spatial-multiplexing-based delay reduction factor, which accounts for the benefit of adding multiple transmitters to increase the multicast group size of each transmission. Therefore, it is easy to see that in the single-transmitter case the coding delay in (5) reduces to the one in [12].

The proposed algorithm consists of two main phases. The first phase is identical to the cache content placement introduced in [12], where each user, randomly and independently of the other users, caches a fraction of the bits of each file in its memory. In the delivery phase, we adapt the delivery algorithm proposed in [12] to the multi-transmitter setup. To respond to all user requests, we consider a parameter such that, for each of its values, a common coded message is helpful to a group of users whose size includes both the global coded caching gain and the multi-transmitter spatial multiplexing gain. The parameter spans all possible values so as to benefit from all coded multicasting opportunities.

In order to gain more insight into the above result, one can derive the following lower bound on the coding delay:

where the first factor is the coding delay when we have only a single transmitter (the same as the result in [12]), and the second is the coding-delay reduction factor due to deploying multiple transmitters (i.e., the multiplexing gain).
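As a numeric sanity check, the following sketch evaluates the single-transmitter decentralized delay of [12] term by term and, as one reading of the reduction factor described above, scales the term for multicast groups of size s by s / (s + L - 1); this scaling is an illustrative assumption, not the paper's exact expression:

```python
from math import comb

def delay_decentralized(K, p, L=1):
    """File-size-normalized coding delay under decentralized placement:
    K users, caching fraction p = M/N, L transmitters.  The L = 1 terms
    are the per-subset message sizes from [12]; the s / (s + L - 1)
    factor is an assumed multi-transmitter scaling, for illustration."""
    return sum(
        comb(K, s) * p ** (s - 1) * (1 - p) ** (K - s + 1) * s / (s + L - 1)
        for s in range(1, K + 1)
    )

K, p = 20, 0.25
t1 = delay_decentralized(K, p, L=1)
t4 = delay_decentralized(K, p, L=4)
assert t4 < t1  # adding transmitters strictly reduces every term

# Sanity check: for L = 1 the sum collapses to the known closed form
# (1/p)(1 - p)(1 - (1 - p)^K) of the decentralized scheme in [12].
closed = (1 / p) * (1 - p) * (1 - (1 - p) ** K)
assert abs(t1 - closed) < 1e-9
```

Increasing L shrinks every term of the sum, mirroring the multiplexing-gain reduction factor discussed above.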

IV Hybrid Cache Content Placement: Limited Cooperation Setup

The main distinction between centralized and decentralized coded caching schemes lies in the cache content placement phase. In the centralized setting we assume full cooperation between the users, while in the decentralized setting no cooperation is assumed. In this section we propose an in-between setting which we call the limited cooperation setup. Under this setup, the users are partitioned into two groups. The first group (group A) consists of users who can fully cooperate during the cache content placement phase, while the users in the second group (group B) do not cooperate at all and use a completely random content placement strategy. In the delivery phase, the two groups merge and request files from the server, and the total time the server needs to satisfy the requests of all users is the corresponding coding delay.

As the baseline delivery strategy, one can serve the two groups one after the other, which we call the Hybrid-TDMA strategy. The total coding delay of this strategy is

(6)

In order to gain more insight into the problem, let us consider the single-transmitter scenario, for which the above equation reduces to:

(7)

By focusing on the low-memory regime and using a Taylor series expansion, we can derive

(8)
(9)
(10)

which clearly illustrates the caching gain as a reduction in coding delay in all three settings, namely hybrid, centralized, and decentralized. First, one can note that, by appropriate choices of the group sizes, the hybrid delay reduces to the centralized and decentralized delays, respectively. The main question is whether the hybrid setting is always superior to the decentralized setting. One can see that this is the case only when the following condition

is satisfied.

As an alternative to the baseline TDMA scheme discussed above, we use Algorithm 1 for the delivery phase of the hybrid cache content placement scenario. The performance of this scheme is explored in the next section.
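The Hybrid-TDMA baseline can be sketched numerically, using the standard centralized [6] and decentralized [12] delivery rates as stand-ins for the coding delays; the parameter values below are illustrative:

```python
def delay_centralized(K, p):
    """Centralized coded caching delay (Maddah-Ali-Niesen [6]) for
    K users and caching fraction p = M/N (K * p assumed integral)."""
    return K * (1 - p) / (1 + K * p)

def delay_decentralized(K, p):
    """Decentralized coded caching delay, closed form from [12]."""
    return (1 / p) * (1 - p) * (1 - (1 - p) ** K)

def delay_hybrid_tdma(K_a, K_b, p):
    """Hybrid-TDMA baseline: serve the centralized group A first,
    then the decentralized group B, one after the other."""
    return delay_centralized(K_a, p) + delay_decentralized(K_b, p)

K, p = 20, 0.25
full_cen = delay_centralized(K, p)      # all 20 users cooperate
full_dec = delay_decentralized(K, p)    # no cooperation at all
hyb = delay_hybrid_tdma(10, 10, p)      # an even 10/10 split, TDMA

assert full_cen < full_dec              # full cooperation is best
# With an even split, the TDMA baseline loses the multicast
# opportunities spanning the two groups and here costs more than the
# pure decentralized scheme, consistent with the behavior of Fig. 4.
assert hyb > full_dec
```

This already hints at the message of the simulations: whether the hybrid scheme beats the pure decentralized one depends on how the users are split between the groups.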

V Simulation Results

In this section, we first illustrate the role of multiple transmitters in reducing the coding delay in the decentralized setting, then investigate the proposed hybrid caching scenario, and finally justify our choice of the Gamma distribution in Theorem 1.

V-A The Effect of the Number of Transmitters on the Performance of Decentralized Coded Caching Systems

Figure 2 shows the effect of the number of transmitters on the performance of decentralized coded caching systems, for both finite and infinite file sizes and for different cache sizes. The figure clearly shows the benefits of the multiplexing gain provided by multiple transmitters in the decentralized setting, similar to the centralized setting. It also shows the price paid, in terms of coding delay, for limited subpacketization in the finite file size regime.

On the other hand, Figure 3 plots the coding delay as a function of the file size, which clearly illustrates the convergence to the infinite file size result as the file size increases.

Fig. 2: The effect of the number of transmitters on the performance of the decentralized coded caching system, for a fixed number of users and files and different parameter values.
Fig. 3: The effect of the number of transmitters on the performance of the decentralized coded caching system, for a fixed number of users, files, and cache size, and different file-size values.

V-B Comparison between different caching strategies

Here, we compare the coding delay of the centralized, decentralized, and hybrid schemes for different parameters in the finite file size regime. In Fig. 4, the coding delay is plotted for these setups, for a fixed number of users, based on the TDMA scheme; for the hybrid scheme we assume two groups of users. As expected, the delays of all three schemes decrease as the varied parameter increases, and the centralized scheme has the lowest delay of all. Interestingly, however, the decentralized setup outperforms the hybrid scheme. In Fig. 5 we keep the same parameters as in Fig. 4, changing only the size of the second group in the hybrid scheme. Here we see that the hybrid scheme outperforms the decentralized scheme.

Fig. 4: Comparison between the performance of the presented method in [3], Algorithm 1, and the Hybrid Strategy, for a system with a fixed number of users, files, and file size; here it is assumed that half of the users are members of group A and half of them are members of group B.
Fig. 5: Comparison between the performance of the presented method in [3], Algorithm 1, and the Hybrid Strategy, for the same system; here it is assumed that a given percentage of the users are members of group A and the rest are members of group B.

As Figures 4 and 5 show, the performance of our Hybrid Strategy lies between that of the fully centralized and fully decentralized schemes. Consequently, the hybrid strategy can outperform the fully decentralized one in some regimes. The simulation results make clear that the hybrid strategy's performance depends on the number of users in groups A and B: the more users in group A, the better the hybrid strategy performs, and the closer its behavior gets to the fully centralized scheme.

Finally, Figure 6 plots the coding delay of the hybrid scheme as a function of the number of users in the centralized group (for a fixed total number of users), which shows that the hybrid scheme outperforms the decentralized scheme only in certain regimes. In this figure, in addition to the baseline TDMA scheme, we have also included the delivery coding delay based on Algorithm 1 (labeled "Hybrid" in the figure).

Fig. 6: Comparison between the hybrid and pure strategies, for a system with a fixed number of users and files, cache size, and file size.

V-C Fitting distribution to the data

Fig. 7: Fitting Gamma distribution to the data

As stated in Theorem 1, the coding delay in the finite file size regime is a random variable, being a function of other random variables introduced there. Here we justify our choice of the Gamma distribution to approximate their distribution via a numerical data-fitting approach, shown in Figure 7.
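Such a fit can be reproduced in spirit with a simple moment-matching sketch; the sample model below (Binomial counts) and the parameter values are illustrative assumptions, not the exact subfile-count law of Theorem 1:

```python
import random

# Illustrative samples: Binomial(n, q) counts, as arise when each of n
# bits is cached independently with probability q.  These stand in for
# the subfile-count random variables whose law Theorem 1 approximates.
rng = random.Random(42)
n, trials, q = 1_000, 500, 0.3
samples = [sum(rng.random() < q for _ in range(n)) for _ in range(trials)]

# Moment-matched Gamma fit: choose shape k and scale theta so that the
# Gamma distribution reproduces the sample mean and variance.
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / (trials - 1)
theta = var / mean      # Gamma scale
k = mean / theta        # Gamma shape

assert abs(k * theta - mean) < 1e-6         # mean matched
assert abs(k * theta ** 2 - var) < 1e-6     # variance matched
```

Matching the first two moments is the simplest way to pin down the two Gamma parameters; a full maximum-likelihood fit would refine them further.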

VI Conclusion

In this paper we have considered downlink transmission in a multi-transmitter linear network in which the transmitters are connected to multiple cache-enabled end-users. We have introduced a new scheme that can be used with any type of caching strategy (centralized, decentralized, and hybrid). We examined the performance of this scheme specifically for decentralized multi-transmitter coded caching systems, using coding delay as the main metric. We also investigated a new practical scheme, called the hybrid strategy, which stands between fully centralized and fully decentralized caching. The simulation results illustrate that the performance of the decentralized scheme improves as transmitters are added, and that the hybrid strategy outperforms the fully decentralized algorithm in certain regimes. As future work, we plan to consider more complex hybrid settings and investigate their performance.

References

  • [1] E. Bastug, M. Bennis, and M. Debbah, “Living on the edge: The role of proactive caching in 5G wireless networks,” IEEE Commun. Mag., vol. 52, no. 8, pp. 82–89, Aug. 2014.
  • [2] N. Golrezaei, A. Molisch, A. Dimakis, and G. Caire, “Femtocaching and device-to-device collaboration: A new architecture for wireless video distribution,” IEEE Commun. Mag., vol. 51, no. 4, pp. 142–149, April 2013.
  • [3] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless video content delivery through distributed caching helpers,” in INFOCOM, March 2012, pp. 1107–1115.
  • [4] Y. Fadlallah et al., "Coding for Caching in 5G Networks," IEEE Commun. Mag., vol. 55, no. 2, pp. 106–113, Feb. 2017.
  • [5] D. Liu, B. Chen, C. Yang and A. F. Molisch, "Caching at the wireless edge: Design aspects, challenges and future directions," IEEE Commun. Mag., vol. 54, no. 9, pp. 22–28, Sep. 2016.
  • [6] M. A. Maddah-Ali and U. Niesen, "Fundamental limits of caching," IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [7] S. P. Shariatpanahi, S. A. Motahari, and B. H. Khalaj, “Multi-server coded caching,” IEEE Trans. Inf. Theory, vol. 62, no. 12, pp. 7253-7271, 2016.
  • [8] S. P. Shariatpanahi, G. Caire and B. Hossein Khalaj, ”Physical-Layer Schemes for Wireless Coded Caching,” in IEEE Transactions on Information Theory, vol. 65, no. 5, pp. 2792-2807, May 2019
  • [9] A. Tölli, S. P. Shariatpanahi, J. Kaleva and B. H. Khalaj, ”Multi-Antenna Interference Management for Coded Caching,” in IEEE Transactions on Wireless Communications, vol. 19, no. 3, pp. 2091-2106, March 2020
  • [10] N. Naderializadeh, M. A. Maddah-Ali and A. S. Avestimehr, “Fundamental Limits of Cache-Aided Interference Management,” in IEEE Trans. Inf. Theory, vol. 63, no. 5, pp. 3092-3107, 2017.
  • [11] E. Lampiris and P. Elia, ”Adding Transmitters Dramatically Boosts Coded-Caching Gains for Finite File Sizes,” in IEEE Journal on Selected Areas in Communications, vol. 36, no. 6, pp. 1176-1188, June 2018
  • [12] M. A. Maddah-Ali and U. Niesen, ”Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Transactions on Networking, vol. 23, no. 4, pp. 1029-1040, 2015.
  • [13] S. T. Thomdapu and K. Rajawat, "Decentralized Multi-Antenna Coded Caching with Cyclic Exchanges," ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019.
  • [14] K. Shanmugam, M. Ji, A. M. Tulino, J. Llorca and A. G. Dimakis, "Finite-Length Analysis of Caching-Aided Coded Multicasting," IEEE Transactions on Information Theory, vol. 62, no. 10, pp. 5524–5537, Oct. 2016, doi: 10.1109/TIT.2016.2599110.
  • [15] S. Jin, Y. Cui, H. Liu and G. Caire, ”Order-Optimal Decentralized Coded Caching Schemes with Good Performance in Finite File Size Regime,” 2016 IEEE Global Communications Conference (GLOBECOM), 2016, pp. 1-7, doi: 10.1109/GLOCOM.2016.7842115.

VII Appendix

VII-A Proof of Theorem 1

Proof.

By analyzing the performance of Algorithm 1, we obtain the coding delay for a linear network with multiple transmitters and users, and we show that it equals the value stated in Theorem 1.
Algorithm 1 consists of two steps, placement and delivery. The first step is the same as in [12]: each user, randomly and independently of the other users, caches a fraction of the bits of each file in its memory, so the total memory used at each user equals:

Delivery phase: To respond to all users' requests, we consider a parameter that determines the number of users targeted by each transmission and ranges over all admissible values. At its maximum value, the corresponding transmission is a full multicast, and the coding delay can no longer be improved; this transmission is specified differently from the other ones, in line 11 of Algorithm 1.

For the other values of this parameter, an arbitrary subset of the user set is considered, whose size is specified as follows:

(11)

For a specific , all its -member subsets are specified as follows:

(12)

For each given subset and the corresponding channel matrix, we design the precoding vector in such a way that:

(14)

Then we define for each :

(15)
(16)

Here, one term represents the piece of the required file for a given user that is cached in the memory of all members of the subset except that user, and the indexing shows that each subfile is further divided into several mini-files. The operator in this equation represents linear combinations with random coefficients of the file components. As a result, the transmitted vector corresponding to the subset is defined, using equations (14) and (15), as follows:

(17)

This transmission is useful for all members of the subset: each of them can extract the desired parts of its file from each transmission. For each such subset, the corresponding transmission vector is repeated the required number of times with different linear coefficients.
Thus, since these are linear combinations of file fragments with random coefficients, which are independent of each other with high probability, the final expression for the transmissions corresponding to any subset is obtained as:

(18)
(19)

As a result, for a specific subset, the transmitters send the following block:

(20)
(21)

In the above equation, one term represents the linear combination of the file components whose coefficients are generated by the random-coefficient operator and which benefits the users of the corresponding subset. The transmission corresponding to the intended subset is beneficial for all of its users: using these transmissions, the members of the subset are able to receive some of the requested parts of their files. After the above process is carried out in full for the specified subset, we repeat the same process for all subsets. Eventually, all users receive their requested files completely.
Having established the correctness of the content delivery strategy of Algorithm 1, we now calculate the corresponding coding delay. For a specific subset, each transmitted vector has the following size:

(22)

which matches the result of [12]. Hence, the transmitted block corresponding to the subset has size:

(23)

Since every such subset contributes a block of this size, the total coding delay equals:

(24)

VII-B Proof of Corollary 1

In the infinite file size limit, the total coding delay equals:

(25)

In the above equation, the leading factor counts the number of subsets of the user set. We now show that equation (25) equals the value stated in Corollary 1.

To simplify the proof, we split equations (25) and (5) into two parts according to the same parameter.
Rewriting equation (5):

Rewriting equation (25):

Therefore, the equality of the two equations is established.