Queuing Theoretic Models for Multicast and Coded-Caching in Downlink Wireless Systems

04/27/2018 ∙ by Mahadesh Panju, et al.

We consider a server connected to L users over a shared finite capacity link. Each user is equipped with a cache. File requests at the users are generated as independent Poisson processes according to a popularity profile from a library of M files. The server has access to all the files in the library. Users can store parts of the files or full files from the library in their local caches. The server should send missing parts of the files requested by the users. The server attempts to fulfill the pending requests with minimal transmissions exploiting multicasting and coding opportunities among the pending requests. We study the performance of this system in terms of queuing delays for the naive multicasting and several coded multicasting schemes proposed in the literature. We also provide approximate expressions for the mean queuing delay for these models and establish their effectiveness with simulations.


I Introduction

Future wireless networks (e.g., 5G) are expected to deliver high volumes of data ([1]). Video on demand accounts for a major fraction of the network traffic, and the current network architecture cannot scale cost effectively to meet the exploding demands. However, in recent years storage has become inexpensive, and a small number of contents constitute a major proportion of the traffic [2]. These two factors suggest caching content close to the users as a potential solution for meeting user demands. This is the topic of this paper.
There are two main approaches to caching in the current literature. In the first, conventional approach ([3, 4]), the focus is on eviction policies at individual caches, such as First-In-First-Out (FIFO), Least-Frequently-Used (LFU), Least-Recently-Used (LRU), and Time-To-Live (TTL), where the hit probability at a local cache is the quantity of primary interest. In [5], a hierarchical network based on TTL caches is analyzed. A general cache network is studied in [6]. In [4], the performance of different caching policies under independent reference model (IRM) traffic and renewal traffic is analyzed. Another group of works considers caching in wireless heterogeneous networks. The works in [7, 8, 9] study caches in small cell networks. Some works ([10, 11]) consider caching in the context of Device-to-Device (D2D) communication.
The second, more recent approach ([12, 13]) considers static caches sharing a common link with a server. These systems have two phases: content placement and coded delivery. In the content placement phase, the caches are populated with files (usually during periods of low network activity). In the delivery phase, requests from all the nodes are received by the server and are fulfilled using coded multicast. This has been shown to reduce the total file transmission rate from the base station substantially compared with the conventional schemes above. In [12, 14], an information theoretic approach is taken, where the minimum rate required to fulfill requests from all the users is studied. The work in [13] extends similar results to D2D communication. [15] studies coded caching in a system with two layers of caches. In [16], an online coded scheme is presented. These schemes have been widely studied under uniform popularity traffic and have been extended to general random request processes as well ([17, 18]).
The above coded caching works ignore an important aspect: queuing at the server. The queuing delay can be the dominant component of the overall delay experienced by a user in a content delivery network. The work in [19] addresses these issues, proposing a few queuing models for cache aided coded multicasting schemes.

A queue with multicasting and network coding is also studied in ([20, 21]) in a different setting where there is no finite library of files to download from and each arriving packet is different and must be received by each receiver.
As in [19], we also consider queuing delays at the server. Our major contributions are as follows:

  1. We consider a new type of queue, called the multicast queue, in which new requests for a file are merged with the ones already in the queue. All pending requests for a file are served by one transmission of the file. This exploits the broadcast nature of the wireless channel more effectively than in [19] and reflects the practical scenario more realistically. An immediate impact of this model is that, unlike in [19], our queue at the server is always stable for any request arrival rate from the users and any finite number of files. Furthermore, the queue is quite different from the multicast queue studied in [20, 21] because of the difference in the traffic model considered.

  2. We show the existence and uniqueness of the stationary distribution for the multicast queue. Next, we develop an approximate expression for its mean queuing delay. We also consider the case when there are errors in transmission.

  3. Next, we combine our multicast queue with coded delivery schemes such as those in [19] to further reduce the queuing delay. We show that, due to merging, our schemes are stable for all request arrival rates. We prove stationarity of this system and provide an approximate mean delay expression. We also show that our schemes provide lower mean delays than the schemes in [19], except at low request rates. At high request rates, with LRU caches at the users, the multicast queue without coded delivery performs better than several schemes with coded delivery.

The rest of the paper is organized as follows. Section II describes the system model. In Section III, we derive an approximate expression for the mean delay of the multicast queue. We use this to analyze a basic system with LRU caches and a multicast queue at the server, and extend the analysis to the case where transmissions are not error free, e.g., due to fading. In Section IV, we study the multicast queue with coded transmission and provide an approximate mean delay for that system. In Section V, we compare the performance of the multicast queue with the coded multicast queue. Section VI concludes the paper.

II System Model

We consider a network with one server and multiple users (Figure 1). The users share a common link with the server. This may model the downlink in a cellular network. Each user is equipped with a cache. The system has a total of $M$ files. The complete library of files, denoted $\mathcal{F} = \{1, \dots, M\}$, is accessible to the server. File $i$ is of size $F_i$ bits. The set of users is denoted by $\mathcal{U} = \{1, \dots, L\}$. The request process at each user is assumed to be IRM, i.e., the request process for file $i$ at user $u$ is a Poisson process with rate $\lambda_{u,i} = \lambda_u p_i$, independent of the other request processes at this and all other users. Thus, user $u$ requests file $i$ with probability $p_i$. If the popularity profile is Zipf with exponent $\alpha$, then $p_i \propto 1/i^{\alpha}$. A user forwards its request to the server if the file requested is not fully available in its cache.
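To make the request model concrete, the following minimal sketch (our code, not the paper's; the parameter values are illustrative assumptions) generates a Zipf popularity profile and the corresponding per-user IRM request rates.

```python
import numpy as np

def zipf_popularity(M: int, alpha: float) -> np.ndarray:
    """Zipf popularity: p_i proportional to 1/i^alpha, i = 1, ..., M."""
    w = 1.0 / np.arange(1, M + 1) ** alpha
    return w / w.sum()

# Illustrative (assumed) parameters: M = 100 files, Zipf exponent 1,
# each user generates requests at total rate lambda_u = 0.1 req/sec.
M, alpha, lam_u = 100, 1.0, 0.1
p = zipf_popularity(M, alpha)
rates = lam_u * p          # lambda_{u,i} = lambda_u * p_i (IRM)
```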
In the following, we consider different schemes at the server for fulfilling requests from multiple users. The simplest and well studied system stores all requests from the different users in a request queue at the server. The server transmits the requested files in first-in-first-out (FIFO) fashion over the shared link. We assume that the rate on the shared link is $C$ bits/second without errors (generalization to a channel with errors will also be considered). We call this system the FIFO queue.

Figure 1: System model: $L$ users with caches share a common link with one server that has access to $M$ files; a queuing system with one or more queues sits at the server. User $u$ requests file $i$ according to a Poisson process of rate $\lambda_{u,i}$.

Next, we consider the case where the multicast nature of the channel is exploited: when the server transmits a file, all users with a pending request for that file receive it, and the corresponding requests are removed from the queue. We describe this queue in more detail in Section III-B. This queue has not been modeled previously; we call it the multicast queue. We will see that it leads to significant improvement in performance over the FIFO queue.
It has been demonstrated recently ([12]) that with cache enabled users and coded transmissions, the multicast nature of the shared channel can be exploited further: it is possible to serve multiple users requesting different files simultaneously. Each user recovers its requested file from the server's transmission together with the contents of its own cache. We provide more details in Section IV.
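As a toy illustration of how one coded transmission can serve two different requests, consider two users and two files A and B, each split into halves, with user 1 caching the first half of each file and user 2 the second half. This mirrors the placement idea of [12] in miniature; the sketch below is ours, not the paper's.

```python
# Toy coded delivery: files A, B split into halves (A1, A2), (B1, B2);
# user 1 caches (A1, B1), user 2 caches (A2, B2). Suppose user 1 requests
# file A (missing A2) and user 2 requests file B (missing B1).
def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

coded = xor(A2, B1)          # one multicast transmission serves both users
assert xor(coded, B1) == A2  # user 1 cancels its cached B1 to recover A2
assert xor(coded, A2) == B1  # user 2 cancels its cached A2 to recover B1
```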
In the following, we theoretically study each of the above schemes and compare their performance.

III Performance Analysis: FIFO and Multicast Queue

In Section III-A, we analyze the FIFO queue. Section III-B describes the multicast queue in more detail and provides its performance analysis. Section IV presents several coded caching schemes and studies their queuing performance.

III-A FIFO Queue

We study the system without caches at the users. The transmission time for file $i$ is $F_i/C$ sec. Then, the request queue is an M/G/1 queue with arrival rate $\lambda = \sum_{u \in \mathcal{U}} \lambda_u$ and i.i.d. service times $S$ with distribution

$$P\left(S = \frac{F_i}{C}\right) = p_i, \quad i = 1, \dots, M. \qquad (1)$$

The queue is stable if $\rho = \lambda E[S] < 1$. Under this condition, the queue has a unique stationary distribution and the mean delay under stationarity is given by the Pollaczek-Khinchine formula ([22])

$$E[D] = E[S] + \frac{\lambda E[S^2]}{2(1-\rho)}. \qquad (2)$$

If $\rho \geq 1$, the queue is unstable: it does not have a stationary distribution and the queue length tends to infinity.
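A direct numerical evaluation of Eqs. (1)-(2) is straightforward; the sketch below (our code, with illustrative parameters) computes the FIFO mean delay and reports instability when $\rho \geq 1$.

```python
import numpy as np

def fifo_mean_delay(lam, p, F, C):
    """Pollaczek-Khinchine mean delay for the M/G/1 FIFO queue:
    E[D] = E[S] + lam * E[S^2] / (2 * (1 - rho)), with S = F_i/C w.p. p_i."""
    S = np.asarray(F) / C
    ES, ES2 = np.dot(p, S), np.dot(p, S ** 2)
    rho = lam * ES
    return np.inf if rho >= 1 else ES + lam * ES2 / (2 * (1 - rho))

# Illustrative: 3 equally popular unit-size files, C = 1 file/sec.
print(fifo_mean_delay(lam=0.5, p=[1 / 3] * 3, F=[1.0] * 3, C=1.0))
```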
Below, we study the multicast queue with caches at the users, and also the case when transmissions from the server suffer errors, perhaps because of fading. The same ideas can be used to study the FIFO queue with caches and a fading channel.

III-B Multicast Queue without Caches

In this queue, we exploit the broadcast nature of the link: all users with a pending request for a given file are served simultaneously. For simplicity, we first study the system when there are no caches at the users. The system with caches is studied in Section III-C.
For this queue, the request forwarded by a user is described by the tuple $(i, u)$, where $i$ is the index of the file and $u$ is the user making the request. A request queued at the server is described by $(i, \mathcal{S}_i)$, where $i$ is the file index and $\mathcal{S}_i$ is the set of users who have requested the file and not yet been served. When a new request for file $i$ arrives at the queue, it is merged with the corresponding entry in the queue if one already exists, i.e., the user list $\mathcal{S}_i$ is updated with the new user. If file $i$ is being served at that time, the request is treated as new and is added to the tail of the queue: the request arrived after part of the file had already been transmitted, so the requesting user will not receive the full file from the ongoing transmission (we assume that a user does not receive a file being transmitted unless it has generated a request for it). If, at the time of arrival of a request for file $i$, no entry for it is in the queue, the new request is appended at the tail of the queue. The queued requests are serviced one after the other from head to tail. The transmission of the file at the head of the queue serves all the users whose requests for that file have been merged.
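The following sketch (ours, not the paper's) implements this queue discipline: one waiting entry per file, merging of later requests into that entry, and a fresh tail entry when the file is currently in service and has no waiting entry.

```python
from collections import OrderedDict

class MulticastQueue:
    """Request queue with merging, as described above (a sketch)."""
    def __init__(self):
        self.waiting = OrderedDict()   # file index -> set of waiting users
        self.in_service = None         # file index currently being transmitted

    def arrive(self, file_id: int, user: int) -> None:
        if file_id in self.waiting:
            self.waiting[file_id].add(user)    # type 2: merge with entry
        else:
            # type 1: no waiting entry; this also covers a request for the
            # file in service, which cannot use the ongoing transmission
            self.waiting[file_id] = {user}

    def serve_next(self):
        """Pop the head-of-line entry; one transmission serves all its users."""
        file_id, users = self.waiting.popitem(last=False)
        self.in_service = file_id
        return file_id, users
```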
We study this queue in considerable detail as it is a basic, natural model in wireless networks. Even though this queue has been exploited in previous scenarios, a detailed study is missing in the literature. By first studying this system, we isolate the benefits of multicasting alone, which requires no extra overhead in terms of initial content placement, coding at delivery time and decoding at the receivers. By then studying the coded multicasting scheme, we can compare the additional gains obtained via coded caching. Furthermore, the analysis of the multicast queue will be useful in the analysis of the coded multicasting system as well.
We say that a request is of type 1 if, upon arrival, there was no candidate for merging already in the queue. If the request was merged with an already existing request in the queue, we say it is of type 2.

III-B1 Stationarity

We analyze this system and derive an approximate expression for the mean delay under stationarity. Because requests for each file are merged, the queue holds at most one waiting entry per file (plus possibly the file in service), so its length never exceeds $M+1$. Hence the queue is always stable.
We describe the state $X(t)$ of this system by associating a file index with each position in the queue at time $t$: $X(t) = (f_1(t), \dots, f_{N(t)}(t))$, where $N(t)$ is the queue length (including the file being served) and $f_k(t)$ is the file index requested at position $k$ in the queue. Let $D_n$ denote the queuing delay of the $n$th arrival to the queue. We start with the following proposition.

Proposition 1.

For the multicast queue, $\{D_n\}$ is an aperiodic, regenerative process with finite mean regeneration interval and hence has a unique stationary distribution. Also, starting from any initial distribution, $\{D_n\}$ converges in total variation to the stationary distribution.

Proof.

Let the state of the queue just after the $n$th departure be $X_n$. By the IRM assumption, $\{X_n\}$ is a finite state, irreducible discrete time Markov chain (DTMC) ([22]). We claim that it is also aperiodic. To see this, consider the empty state $x_0$, i.e., the system is empty just after a departure. The transition probability $P(x_0, x_0) > 0$, as there is a positive probability of exactly one arrival before the next departure. Thus the DTMC has a unique stationary distribution.

The arrival epochs just after the epochs at which $X_n = x_0$ are also regeneration epochs for $\{D_n\}$. The regeneration intervals of $\{D_n\}$ also have finite mean (although larger than those of $\{X_n\}$ due to the merging effect), and $\{D_n\}$ can be shown to be aperiodic, as for $\{X_n\}$. Thus, $\{D_n\}$ has a unique stationary distribution and, starting from any initial conditions, converges to the stationary distribution in total variation. ∎

Using the above methods, we can show that $X(t)$ also has a unique stationary distribution and, starting from any initial distribution, converges in total variation to the stationary distribution.
The queue length is upper bounded by $M+1$ and hence the waiting times are upper bounded by $(M+1)\max_i F_i/C$. Thus, unlike the FIFO queue, the multicast queue studied here has the distinct advantage of always being stable, with bounded delays.
The state space of the Markov chain can be very large even for modest $M$ and $L$. Computing its stationary distribution can be prohibitive, and computing the mean delay under stationarity is even harder. Next, we provide an approximation for the mean queuing delay using an M/G/1 queue, which is easy to compute.

III-B2 M/G/1 Approximation

Let $D_i^{(1)}$ denote a random variable with the stationary distribution of the queuing delay of type 1 requests for file $i$, across all users. We use the approximation that $E[D_i^{(1)}]$ is the same for all $i$ and denote it by $d$ (we have checked via simulations that this is a good approximation). Since all the requests for file $i$ that arrive during the delay of a type 1 request are merged and served by a single transmission, the effective arrival rate of file $i$ into the queue is

$$\lambda_i^{\text{eff}} = \frac{\Lambda_i}{1 + \Lambda_i d}, \qquad (3)$$

where $\Lambda_i = \sum_{u \in \mathcal{U}} \lambda_{u,i}$. The utilization factor for this queue is

$$\rho = \sum_{i=1}^{M} \lambda_i^{\text{eff}} \, \frac{F_i}{C}. \qquad (4)$$

We use the M/G/1 mean delay expression [22] to approximate $d$ as

$$d = E[S] + \frac{\lambda^{\text{eff}} E[S^2]}{2(1-\rho)}, \quad \text{where } \lambda^{\text{eff}} = \sum_{i=1}^{M} \lambda_i^{\text{eff}}, \quad P\left(S = \frac{F_i}{C}\right) = \frac{\lambda_i^{\text{eff}}}{\lambda^{\text{eff}}}. \qquad (5)$$
Figure 2: Solution to the M/G/1 approximation, Zipf exponent 1. The plot shows $g(d)$ for two total request rates (blue and red); the corresponding fixed points are also indicated.

We use Eqs. (3), (4) and (5) to solve for $d$. From these equations, taking $F_i = F$ for all $i$ for notational simplicity, we can write the fixed point equation $d = g(d)$, where

$$g(d) = \frac{F}{C} + \frac{\lambda^{\text{eff}}(d)\,(F/C)^2}{2\left(1 - \lambda^{\text{eff}}(d)\,F/C\right)}, \qquad \lambda^{\text{eff}}(d) = \sum_{i=1}^{M} \frac{\Lambda_i}{1 + \Lambda_i d}. \qquad (6)$$

From this equation, we can conclude the following:

  1. $g$ is a continuous function of $d$ for all $d > d_0$, where $d_0$ satisfies $\lambda^{\text{eff}}(d_0) F/C = 1$ (take $d_0 = 0$ if $\lambda^{\text{eff}}(0) F/C < 1$). For $d > d_0$, $g$ is strictly decreasing; as $d \downarrow d_0$, $g(d) \to \infty$, and as $d \to \infty$, $g(d) \to F/C$. See Figure 2. The region where $\rho \geq 1$ is of no interest to us. For any value of the rates $\Lambda_i$, $F$ and $C$, $d = g(d)$ has a unique solution in the region $d > d_0$, and this solution is positive. This is the solution we take as $d$ for our system; at this $d$, $\rho < 1$.

  2. If we increase any $\Lambda_i$, keeping all the other parameters fixed, $\rho$ in Eq (4) increases and thus $g(d)$ increases for a fixed $d$ (see Figure 2). Therefore, the resulting fixed point increases. If all $\Lambda_i \to \infty$, then $\lambda_i^{\text{eff}} \to 1/d$ for each $i$, so $\lambda^{\text{eff}}(d) \to M/d$. Thus, in the limit, the fixed point is the $d$ solving $d = g(d)$ with $\lambda^{\text{eff}}(d) = M/d$, which gives $d = \frac{F}{2C}\left(M + 1 + \sqrt{M^2+1}\right)$. For large $M$, $d \approx MF/C$. This is an upper bound on the fixed point for all arrival rates.

Let $D_i$ denote a random variable corresponding to the queuing delay of file $i$ (including both type 1 and type 2 requests) under stationarity. We approximate $E[D_i]$ by

$$E[D_i] \approx \frac{1}{1 + \Lambda_i d}\, d + \left(1 - \frac{1}{1 + \Lambda_i d}\right) \frac{d}{2}. \qquad (7)$$

This reflects the fact that a fraction $1/(1+\Lambda_i d)$ of the requests (type 1) experience a delay of $d$, while the requests that get merged with already existing ones in the queue (type 2) experience a mean delay of approximately $d/2$.
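The fixed point $d = g(d)$ is easy to compute numerically. The sketch below (our code, equal file sizes as in Eq (6)) bisects on $g(d) - d$, which is monotone, and then evaluates the per-file mean delay of Eq (7).

```python
import numpy as np

def solve_d(Lam, F, C, tol=1e-9):
    """Fixed point of d = g(d) from Eq. (6), equal file sizes S = F/C,
    with lam_eff(d) = sum_i Lam_i / (1 + Lam_i * d)."""
    Lam, S = np.asarray(Lam, dtype=float), F / C
    def g(d):
        lam_eff = np.sum(Lam / (1.0 + Lam * d))
        rho = lam_eff * S
        return np.inf if rho >= 1 else S + lam_eff * S ** 2 / (2 * (1 - rho))
    lo, hi = 0.0, S
    while g(hi) > hi:              # g decreases toward S, so a crossing exists
        hi *= 2.0
    while hi - lo > tol:           # bisection on g(d) - d
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > mid else (lo, mid)
    return hi

def mean_delay_file(Lam_i, d):
    """Eq. (7): type 1 fraction sees d; merged requests see about d/2."""
    f1 = 1.0 / (1.0 + Lam_i * d)
    return f1 * d + (1.0 - f1) * d / 2.0
```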

Figure 3: Validity of the M/G/1 approximation for the multicast queue. Per-user request rate 0.1 request/sec, Zipf exponent 1.

In Figure 3, we compare the above approximation of the mean delay of the multicast queue with simulations. In this example, all files are of the same size, the popularity profile is Zipf with exponent 1, and each user generates requests at 0.1 request/sec. We see a good match.
In this figure, we also plot the mean delay of the FIFO queue for comparison. The FIFO queue becomes unstable as the number of users increases, whereas the multicast queue always remains stable and has quite a low mean delay even at large arrival rates. Consequently, for any given mean delay target, the maximum total request rate supported by the multicast queue is substantially larger than that supported by the FIFO queue.

III-C Multicast Queue with LRU Caches

Now, we extend our analysis to the multicast system with caches. All users employ LRU at their caches; this is the most studied file replacement scheme and has good performance ([4]). Our analysis extends easily to other eviction policies. Requests generated by a user that are not met at the local cache are forwarded to the server, where they are queued and served as described in Section III-B.

III-C1 Stationarity

Consider the system state $Y(t) = (X(t), C_1(t), \dots, C_L(t))$, where $X(t)$ is the state of the multicast queue and $C_u(t)$ is a vector with the position of each file in the cache of user $u$ (if a file is not in the cache, its position is set to 0). Let $D_n$ be the delay of the $n$th request arriving at the multicast queue.

Proposition 2.

The processes $\{Y(t)\}$ and $\{D_n\}$ have unique stationary distributions. Also, starting from any initial conditions, these processes converge in total variation to their stationary distributions.

Proof.

Denote by $Y_n$ the state $Y(t)$ just after the $n$th departure from the multicast queue. Due to the IRM assumption, $\{Y_n\}$ is a finite state, aperiodic, irreducible Markov chain. Thus, $\{Y_n\}$ has a unique stationary distribution.
Fix a state of $Y_n$ in which the multicast queue is empty. The revisits to this state are regeneration epochs, so $\{Y_n\}$ has finite mean regeneration times. Now, consider the continuous time process $(Y(t), R(t))$, where $R(t)$ is the residual transmission time of the file being transmitted at the server at time $t$. This process also regenerates at the regeneration epochs of $\{Y_n\}$ mentioned above, and its regeneration times have a finite mean due to Poisson arrivals. Thus $(Y(t), R(t))$ has a unique stationary distribution.
Since the arrival epochs to the system form a Poisson process (IRM), by PASTA ([22]) these arrivals see the same stationary distribution of the system as $(Y(t), R(t))$. Now, consider the arrivals at the server; these are the miss requests from the caches. Let $(Y'_n, R'_n)$ be the process sampled at these epochs. The Palm distribution ([22]) of $(Y(t), R(t))$ at these epochs is the stationary distribution seen by them. The queuing delay seen by the $n$th arrival at the server is $D_n = h(Y'_n, R'_n)$, where $h$ is a deterministic function bounded by $(M+1)\max_i F_i/C$. Thus $\{D_n\}$ has a stationary distribution.
These results also imply that, starting from any initial conditions, $(Y(t), R(t))$ and $\{D_n\}$ converge to their stationary distributions in total variation. ∎

III-C2 Approximation of Delay at the Multicast Queue

Since the miss requests come from LRU caches, if the IRM requests for file $i$ at user $u$ arrive at rate $\lambda_{u,i}$, the missed traffic for file $i$ from user $u$, $\lambda'_{u,i}$, is given by

$$\lambda'_{u,i} = \lambda_{u,i} \, q_{u,i}, \qquad (8)$$

where $q_{u,i}$ is the miss probability for file $i$ at the cache of user $u$. Using Che's approximation from [4], $q_{u,i} = e^{-\lambda_{u,i} T_u}$, where $T_u$ is Che's constant for the cache of user $u$. Also, the arrival process at the multicast queue can often be approximated by a Poisson process ([4]). Thus the effective arrival process at the server, as in (3), can be approximated by a Poisson process with rates

$$\lambda_i^{\text{eff}} = \frac{\Lambda'_i}{1 + \Lambda'_i d}, \qquad \Lambda'_i = \sum_{u \in \mathcal{U}} \lambda'_{u,i}. \qquad (9)$$
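Che's constant for each LRU cache solves a one-dimensional equation; a sketch (ours) that computes it and the resulting miss rates of Eq. (8) follows. The cache size is in files, assuming equal-size files, and must be smaller than the number of files.

```python
import numpy as np

def che_constant(rates, cache_size):
    """Characteristic time T of an LRU cache (Che's approximation):
    solve sum_i (1 - exp(-lambda_i * T)) = cache_size (in files)."""
    rates = np.asarray(rates, dtype=float)
    occ = lambda T: np.sum(1.0 - np.exp(-rates * T))
    lo, hi = 0.0, 1.0
    while occ(hi) < cache_size:    # requires cache_size < number of files
        hi *= 2.0
    for _ in range(100):           # bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occ(mid) < cache_size else (lo, mid)
    return hi

def miss_rates(rates, cache_size):
    """Eq. (8): lambda'_i = lambda_i * q_i with q_i = exp(-lambda_i * T)."""
    T = che_constant(rates, cache_size)
    return rates * np.exp(-rates * T)
```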

Figure 4: Validity of the theoretical mean delay for the multicast queue with transmission errors and for the LRU-multicast queue. Per-user request rate 0.6 request/sec, Zipf exponent 1.

Figure 4 shows the comparison between theory and simulation for the LRU-multicast queue. All files are of the same size, the arrival rate per user is 0.6 request/sec and the file popularity distribution is Zipf with exponent equal to one. We plot the mean delay over all requests arriving at a user; if a request is met at the local cache, its delay is taken as zero. We see that theory and simulation match well.
If an eviction policy other than LRU is used, the only change needed in the above approximation is in the miss probabilities $q_{u,i}$, which can be obtained from the literature ([4]) for various policies.

III-D Multicast Queue with Transmission Errors

Now, we consider a more realistic scenario where the channel from the server to the users experiences fading. This models a wireless cellular network in which the Base Station (BS) is the server and the users are mobile customers. The fading processes of different users are assumed independent of each other. The channel of a given user experiences block fading: the channel gain stays constant during the transmission of a file and changes independently, with the same distribution, from one transmission to the next. These are commonly made assumptions in the literature [23].
We consider only the system without caches, with the multicast queue described in Sec III-B. We assume that perfect knowledge of the channel gains from the BS to the mobile users is available at the BS before each transmission, and that the BS transmits at a constant power.
We consider two schemes for transmission of a file:

  1. Worst rate: The server transmits at the rate corresponding to the user with the worst channel gain. After the transmission, all the intended users receive and decode the transmitted file without errors.

  2. Retransmission, fixed rate: There is an appropriately selected fixed transmission rate $R$. The intended users whose current channel supports a rate greater than $R$ receive and decode correctly; all other users cannot. The server retransmits the content to the remaining users until all the intended users have received and decoded it correctly. In this case, the BS does not need the channel gains; a NACK from each user that fails to decode is sufficient. Selecting $R$, however, requires the statistics of the fading channel.

Figure 5: Mean delay for the multicast queue with transmission errors. All files of the same size, Rayleigh fading.

We compare the performance of the two schemes in an example. The channel is an additive white Gaussian noise channel with Rayleigh fading, and all files are of the same size. We plot the mean delay of the queue in Figure 5. For comparison, we also plot the mean delay when there is no fading. The retransmission scheme outperforms the worst-user-rate scheme substantially; hence, in the following we study only the retransmission scheme.

III-D1 M/G/1 Approximation

For the theory, we again use the M/G/1 approximation of Sec III-B. We need to compute the mean and variance of the service times. For this, we need the distribution of the number of merged users who have requested the file being serviced. On average, a type 1 request for file $i$ from a user spends a time of about $d$ in the multicast queue from the time it enters the queue till its service begins. The probability that any other user $v$ makes a (type 2) request for the same file during this time is $1 - e^{-\lambda_{v,i} d}$. The request processes of the users are independent. From this, we can get the distribution of the number of users merged into a given request being serviced. Let the type 1 request corresponding to the file in service be from user $u$. The number of merged users is $N = 1 + \sum_{v \neq u} B_v$, where $B_v$ has a Bernoulli distribution with parameter $1 - e^{-\lambda_{v,i} d}$. Also, the probability that the type 1 request was generated by user $u$ is $\lambda_{u,i}/\Lambda_i$.
Let $p_u$ be the probability that user $u$ receives and decodes a transmission correctly. For the AWGN channel with transmit power $P$, channel gain $h_u$, noise variance $\sigma^2$ and fixed rate $R$ over bandwidth $W$, $p_u = \Pr\left(W \log_2\left(1 + P|h_u|^2/\sigma^2\right) \geq R\right)$. The number of transmissions required for successful reception at user $u$ is geometrically distributed with parameter $p_u$. If $\mathcal{A}$ denotes the set of users requesting the file being transmitted, the total number of transmissions $T$ required to serve all the users in $\mathcal{A}$ has the distribution

$$P(T \leq k) = \prod_{u \in \mathcal{A}} \left(1 - (1 - p_u)^k\right), \quad k = 1, 2, \dots \qquad (10)$$

From this and the distribution of the number of users in $\mathcal{A}$, we can compute $E[S]$ and $E[S^2]$ in Eq (5). The resulting set of equations can be solved numerically.
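Eq. (10) gives the CDF of the number of transmissions $T$ as the maximum of independent geometric random variables; its first two moments follow from tail sums. The sketch below (ours) truncates the support at kmax.

```python
import numpy as np

def retx_moments(p, kmax=1000):
    """Moments of T from Eq. (10): P(T <= k) = prod_u (1 - (1 - p_u)^k).
    Uses E[T] = sum_k P(T > k) and E[T^2] = sum_k (2k + 1) * P(T > k)."""
    p = np.asarray(p, dtype=float)
    k = np.arange(kmax + 1)
    cdf = np.prod(1.0 - (1.0 - p[:, None]) ** k[None, :], axis=0)
    tail = 1.0 - cdf               # P(T > k), k = 0, 1, ..., kmax
    return tail.sum(), np.sum((2 * k + 1) * tail)

# Illustrative: three merged users with decoding probabilities 0.9, 0.8, 0.7.
ET, ET2 = retx_moments([0.9, 0.8, 0.7])
```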
In Figure 4, we also compare the theoretically computed mean delay with simulations for the multicast queue with transmission errors. All files are of the same size, the decoding error probability is the same for all users, the arrival rate per user is 0.6 request/sec and the popularity profile of the files follows a Zipf distribution with exponent 1. We see that the theory matches well with simulations.
If the users also have caches, the above analysis extends to this system by replacing the request rates with those in (8) and (9).

IV Coded Multicasting Schemes with Queueing

In this section, we consider the system where users are equipped with caches and coded multicasting is used for transmission. We assume all files have the same size ($F_i = F$ for all $i$) and that each user has a cache of the same size, holding a fixed number of files. These assumptions are made because we use the coding scheme of [12], but they can be relaxed.
The caches at the users are populated with parts of the files as in [12]. Since each user stores a part of every file, no request can be met completely from the local cache; hence, all requests are sent to the server. At the server, a separate queue of requests is maintained for each user. Within each user queue, requests for the same file are merged as in the multicast queue. At each transmission, the head-of-line requests of all the queues are considered together and the delivery scheme of [12] is used. Some of the queues may be empty; for such queues, a dummy request (for an arbitrary file) is assumed to be present. Note that assuming dummy requests when some queues are empty is quite wasteful; in the next section, we consider policies that address this. We call this scheme the Partition Coded Scheme with Merging (PCS-M). We will show in Sec V that it outperforms the multicast queue substantially at low arrival rates. At high rates, however, most of the users are merged into each file request of the multicast queue, and the multicasting gains take over.
In the following, we study this system theoretically.

IV-A Stationarity

Let the system state be $Z(t) = (Q_1(t), \dots, Q_L(t))$, where $Q_u(t)$ is a vector listing the index of each file in the queue of user $u$ at the BS. Let $D_n$ be the delay of the $n$th arrival to the BS.

Proposition 3.

For the system with caches and coding, the queuing delay process $\{D_n\}$ is an aperiodic, regenerative process with a finite mean regeneration interval and hence has a unique stationary distribution.

Proof.

Let $Z_n$ be the state of the system just after the $n$th departure from the BS. $\{Z_n\}$ is an irreducible, finite state, aperiodic DTMC; thus it has a unique stationary distribution. Further, the epochs at which $Z_n = \emptyset$, i.e., all the queues are empty, are regeneration epochs for the process $\{Z_n\}$, and the regeneration intervals have finite mean.
Now, since all the IRM arrivals at the users are forwarded to the BS, the forwarded requests also form a Poisson process. The system states seen by these arrivals also have regeneration epochs at which the system is seen empty (the arrivals just after the epochs with $Z_n = \emptyset$ see an empty system). Thus, $\{D_n\}$ has a unique stationary distribution. ∎

IV-B Approximation

To compute the mean queuing delay under stationarity, we consider each user queue separately and use the M/G/1 approximation from Sec III-B. The effective arrival rate of file $i$ into the queue of user $u$ is

$$\lambda_{u,i}^{\text{eff}} = \frac{\lambda_{u,i}}{1 + \lambda_{u,i} d_u}, \qquad (11)$$

where $d_u$ is the mean queuing delay of type 1 requests from user $u$. The service time is the time it takes to transmit the coded packets for all the users. The transmission rate of the channel is $C$ files/sec. In the scheme considered, the service times are deterministic and given by ([12])

$$S = \frac{1}{C} \cdot \frac{L(1 - \mu)}{1 + L\mu}, \qquad (12)$$

where $\mu$ is the fraction of the file library stored in each cache. We use these in Eq (4) and Eq (5) to compute $d_u$ for each queue (with $\lambda_u^{\text{eff}} = \sum_i \lambda_{u,i}^{\text{eff}}$ in place of $\lambda^{\text{eff}}$ in Eq (5)).
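Putting Eqs. (11)-(12) together, a per-user fixed point gives $d_u$. The sketch below (ours) uses the deterministic service time above, with $\mu$ the cached fraction of the library, and a simple damped iteration; the bisection used earlier works as well.

```python
import numpy as np

def coded_service_time(L_users, mu, C=1.0):
    """Eq. (12): one coded transmission carries L(1 - mu)/(1 + L*mu) files
    at C files/sec; mu is the fraction of the library cached per user."""
    return (L_users * (1.0 - mu) / (1.0 + L_users * mu)) / C

def solve_user_delay(rates_u, S, iters=500):
    """Fixed point d_u for one user queue (deterministic service S):
    lam_eff(d) = sum_i lam_ui / (1 + lam_ui * d),
    d = S + lam_eff * S^2 / (2 * (1 - lam_eff * S))."""
    rates_u, d = np.asarray(rates_u, dtype=float), S
    for _ in range(iters):         # damped iteration (bisection also works)
        lam_eff = np.sum(rates_u / (1.0 + rates_u * d))
        rho = lam_eff * S
        d_new = np.inf if rho >= 1 else S + lam_eff * S ** 2 / (2 * (1 - rho))
        d = 2 * d if np.isinf(d_new) else 0.5 * (d + d_new)
    return d
```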

Figure 6: M/G/1 approximation for the coded multicasting queue at user #1. Transmission rate = 1 file/sec, Zipf exponent 1.

Figure 6 shows the mean queuing delay for user #1 as the request rate is varied. The transmission rate is 1 file/sec and the file popularity follows a Zipf distribution with exponent 1. We see that the M/G/1 approximation agrees well with the simulations.
These results can be extended to the fading channel, as for the multicast queue in Section III-D.

V Comparison of coded and uncoded caching schemes

In this section, we compare the mean delays of several schemes via simulations. These include the schemes studied above, schemes from [19] and a few additional variants.

  1. LRU with merging and multicast (LRU-M): This scheme is described in Section III-C.

  2. Partition coded scheme with merging (PCS-M): This scheme is described in Section IV.

  3. Modified PCS with merging (MPCS-M): This is an improvement over PCS-M in which dummy requests are not used; delivery of the head-of-line requests of all non-empty queues is performed as in WPCS in [19].

  4. Coded delivery LRU scheme (CDLS): This is the same as CDLS in [19]. The users have LRU caches. There is a separate queue of pending requests for each user at the server. At each transmission epoch, the head-of-line requests of all user queues are checked for a coding opportunity. If there is none, the request with the largest waiting time is fulfilled.

  5. Coded delivery LRU scheme with merging (CDLS-M): In this scheme, we include merging of requests for the same file in each user queue. The delivery procedure is the same as in CDLS.

  6. Uncoded pre-fetch optimal with merging (UPO-M): This scheme uses the coded-caching scheme in [14]. Also, there is merging of requests for the same file in the queue from each user.

  7. LRU with merging and coded delivery (LRU-CM): In this scheme, users have LRU caches. The missed requests from all the users are queued in one multicast queue at the server. At delivery time, a coding opportunity between the head-of-line request and the one immediately behind it is sought, and the better of coded or uncoded delivery, in terms of the number of users served, is chosen. This scheme should work at least as well as LRU-M.

Figure 7: Mean delay for various schemes (first parameter set); all files of the same size, Zipf popularity.
Figure 8: Mean delay for various schemes (second parameter set); all files of the same size, Zipf popularity.

Figure 7 shows the mean sojourn times, which include the transmission time at the BS queue, because the transmission times of coded and uncoded deliveries differ across schemes. If a request is satisfied at a local cache, its sojourn time is zero. All files are of the same size and the popularity profile is Zipf.
The schemes UPO-M and PCS-M have similar mean delay. MPCS-M is slightly better at low arrival rates, but eventually has almost the same performance as UPO-M and PCS-M. CDLS is somewhat better than the other schemes at low rates but becomes unstable as the total arrival rate at the server increases. All the other schemes are stable because merging of requests bounds the number of requests in the queue. The mean delay of the three coded multicasting schemes (PCS-M, MPCS-M, UPO-M) is much lower at low arrival rates than that of LRU-M, but as the rates increase, LRU-M and LRU-CM perform better. At low rates, the performance of CDLS-M and CDLS is similar; CDLS-M does not become unstable as the arrival rates increase, but it performs worse than the other schemes.
For a given mean delay limit, the approximate maximum arrival rate supported is smallest for CDLS, followed by CDLS-M, and then PCS-M, MPCS-M and UPO-M; the maximum supportable arrival rates are largest for LRU-M and LRU-CM.
In Figure 8, we again compare the above schemes for a different set of system parameters, with file popularity following a Zipf distribution. Here also, we see a comparison of the different schemes similar to that in Figure 7.

VI Conclusion

We have studied a system where a server transmits files to multiple users over a shared wireless channel, with caches at the users. New queuing models of the system are obtained when the broadcast nature of the channel is fully exploited and, in addition, when network coding is also used. Our models have the distinct advantage that the system is always stable, for all request arrival rates. The mean delays of the two systems are obtained theoretically and compared. For our models, we have observed that at lower arrival rates the coded multicasting schemes provide lower mean delay, but at higher rates uncoded multicasting eventually takes over.

References

  • [1] “Cisco visual networking index: global mobile data traffic forecast update 2016-2021 white paper,” 2016.
  • [2] M. Cha, H. Kwak, P. Rodriguez, Y.-Y. Ahn, and S. B. Moon, “I tube, you tube, everybody tubes: analyzing the world’s largest user generated content video system,” in Internet Measurement Conference, 2007.
  • [3] S. Podlipnig and L. Böszörmenyi, “A survey of web cache replacement strategies,” ACM Comput. Surv., vol. 35, no. 4, pp. 374–398, Dec. 2003.
  • [4] M. Garetto, E. Leonardi, and V. Martina, “A unified approach to the performance analysis of caching systems,” ACM Trans. Model. Perform. Eval. Comput. Syst., vol. 1, no. 3, pp. 12:1–12:28, May 2016.
  • [5] N. C. Fofack, P. Nain, G. Neglia, and D. Towsley, “Analysis of TTL-based cache networks,” Perform. Eval. Methodol. Tools (VALUETOOLS), 2012 6th Int. Conf., pp. 1–10, 2012.
  • [6] N. C. Fofack, D. Towsley, M. Badov, M. Dehghan, and D. Goeckel, “An approximate analysis of heterogeneous and general cache networks,” Inria, Research Report RR-8516, Apr. 2014.
  • [7] K. Poularakis, G. Iosifidis, and L. Tassiulas, “Approximation algorithms for mobile data caching in small cell networks,” IEEE Trans. Commun., vol. 62, no. 10, pp. 3665–3677, 2014.
  • [8] Y. Cui and J. Dongdong, “Analysis and optimization of caching and multicasting in large-scale cache-enabled heterogeneous wireless networks,” IEEE Trans. on Wireless Commun., vol. 16, no. 1, pp. 250–264, Jan 2017.
  • [9] C. Yang, Y. Yao, Z. Chen, and B. Xia, “Analysis on cache-enabled wireless heterogeneous networks,” IEEE Trans. Wirel. Commun., vol. 15, no. 1, pp. 131–145, 2016.
  • [10] M. Ji, G. Caire, and A. F. Molisch, “The throughput-outage tradeoff of wireless one-hop caching networks,” IEEE Trans. Inf. Theory, vol. 61, no. 12, pp. 6833–6859, 2015.
  • [11] L. Zhang, M. Xiao, G. Wu, and S. Li, “Efficient scheduling and power allocation for D2D assisted wireless caching networks,” IEEE Trans. Commun., vol. 64, no. 6, pp. 2438–2452, 2016.
  • [12] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, 2014.
  • [13] M. Ji, G. Caire, and A. F. Molisch, “Fundamental limits of caching in wireless D2D networks,” IEEE Trans. Inf. Theory, vol. 62, no. 2, pp. 849–869, 2016.
  • [14] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching,” IEEE Trans. on Inf. Theory, vol. 64, no. 2, pp. 1613–1617, Feb 2018.
  • [15] N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. Diggavi, “Hierarchical coded caching,” IEEE Trans. on Inf. Theory, vol. 62, no. 6, pp. 3212–3229, 2016.
  • [16] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, “Online coded caching,” IEEE/ACM Trans. Netw., vol. 24, no. 2, pp. 836–845, 2016.
  • [17] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, “Order-optimal rate of caching and coded multicasting with random demands,” IEEE Trans. Inf. Theory, vol. 63, no. 6, pp. 3923–3949, 2017.
  • [18] U. Niesen and M. A. Maddah-Ali, “Coded caching with non-uniform demands,” IEEE Trans. Inf. Theory, vol. 63, no. 2, pp. 1146–1158, 2017.
  • [19] F. Rezaei and B. H. Khalaj, “Stability, rate, and delay analysis of single bottleneck caching networks,” IEEE Trans. Commun., vol. 64, no. 1, pp. 300–313, 2016.
  • [20] N. Moghadam and H. Li, “Improving queue stability in wireless multicast with network coding,” IEEE Inter. Conf. on Commun. (ICC), pp. 3382–3387, 2015.
  • [21] I.-H. Hou, “Broadcasting delay-constrained traffic over unreliable wireless links with network coding,” IEEE/ACM Transactions on Networking, vol. 23, pp. 728–740, 2011.
  • [22] S. Asmussen, Applied Probability and Queues, 2nd ed.   Springer-Verlag New York, 2003.
  • [23] D. Tse and P. Viswanath, Fundamentals of wireless communication.   Cambridge Univ Press, 2005.