Within a few years, tens of exabytes of global data traffic are expected to be handled on a daily basis, with on-demand video streaming accounting for about 70% of it. In on-demand video streaming services, a relatively small number of popular contents is requested at ultra-high rates, and playback delay is one of the most important performance criteria [2, 3]. To cope with these characteristics, wireless caching technologies have been studied for video streaming services, whereby popular videos are stored during off-peak times in caching helpers located near users [4, 5, 6]. Storing and streaming video files are therefore major research interests in wireless caching networks.
There have been major research results on caching popular files in stochastic wireless caching networks [7, 8, 9, 10, 12]. The main goal of these works was to design optimal caching policies according to the content popularity distribution and the wireless network topology. The probabilistic caching policy was proposed in  to adapt to the characteristics of the stochastic network. Many probabilistic caching methods have been proposed for various optimization goals, e.g., maximization of cache hit probability, cache-aided throughput , average success probability of content delivery , density of successful receptions , and average video quality . The authors of  considered the joint optimization of caching and delivery when user demands are known in advance. In addition, the optimal caching policy maximizing the cache hit probability in two-tier networks with opportunistic spectrum access was designed in . However, these works on caching policies do not consider identical content with different qualities.
Since video files can be encoded into multiple versions that differ in quality level, video caching policies with different quality levels have been widely studied in [14, 15, 16, 17]. Many researchers have proposed static content placement policies that account for differentiated quality requests for the same content, given probabilistic quality requests ,  or minimum quality requirements . Further, a probabilistic caching policy for video files of various quality levels was presented in  using stochastic geometry, given the user preference for each quality level. The above works focus only on the content placement problem with different qualities; the delivery policy for contents with different qualities has not yet been studied much.
For video delivery/streaming, several decisions must be made: 1) which caching node will deliver the video, 2) which video quality will be provided, and 3) how many video chunks will be transmitted. The first is called the node association problem, and in most research contributions that do not consider different quality levels for the same file, the file-requesting user receives the desired video from the caching node with the strongest channel condition [9, 19]. Node association for video delivery in heterogeneous caching networks has been studied in [20, 21, 22]. On the other hand, when videos of different qualities are cached independently, a more elaborate node association algorithm is necessary, because node association is coupled with the decision on content quality. For this case, a video delivery policy was proposed in  to maximize the time-average video quality while avoiding playback delays.
Since dynamic video streaming allows each chunk to have a different quality depending on time-varying network conditions, some researchers addressed transmission schemes that serve the video by dynamically selecting the quality level . In  and , scheduling policies that maximize a network utility function of time-averaged video quality in small-cell networks and device-to-device networks were proposed. The authors of  considered scalable video coding (SVC) and proposed dynamic resource allocation and quality selection under a pricing strategy for interference. However, the video delivery policies of [23, 24, 25, 26] operate at the base station (BS) side, and the decision on the video quality level at the user side was not considered. The user-side scenario is consistent with practical real-world software implementations of dynamic adaptive streaming over HTTP (DASH) , in which users dynamically choose the most appropriate video quality. Although the work of  can choose the video quality at the user side, it cannot dynamically change the video quality without updating the node association.
Furthermore, control of the number of received chunks depending on stochastic network states has been largely neglected in the existing research on video delivery. Although the authors of  and  maximize the long-term time-average video quality under various constraints, their video quality metric is obtained by averaging the quality selections over time slots. This may not be sufficient to evaluate the user's quality of service, especially when the transmission rate varies over the streaming session. In practice, when the channel experiences deep fading and only low-quality video is available, receiving as many chunks as the channel can provide may not be the best choice. Rather than receiving many low-quality chunks, the user may prefer to wait for better channel conditions and then receive high-quality chunks, provided that no playback delay is incurred. Therefore, by considering the joint decision on video quality and chunk amounts, we can formulate an optimization problem that maximizes the average video quality per received chunk.
This paper proposes a video delivery policy in the wireless caching network for dynamic streaming services. The main contributions are as follows:
This paper proposes a dynamic video delivery policy that depends on stochastic network states. The proposed policy makes three different but necessary decisions for the streaming user: 1) the caching node for video delivery, 2) the video quality, and 3) the quantity of video chunks to receive. To the best of the authors' knowledge, no prior research has considered all of these video delivery decisions jointly.
Caching node association and the decisions on video quality and the amount of received chunks are conducted on different timescales. Since wireless link activation for video delivery is time-consuming, it is reasonable that caching node association is performed more slowly than the decisions on video quality and chunk amounts.
The optimization framework of the video delivery policy is constructed based on frame-based Lyapunov optimization theory  and a Markov decision process. The optimal caching node is found by Lyapunov optimization, while the decisions on video quality and chunk amounts are made by dynamic programming .
We perform simulations to verify the proposed video delivery policy and to show the advantages of using Lyapunov optimization theory and Markov decision process.
The rest of the paper is organized as follows. The wireless video caching network model is described in Sec. II. The optimization problem for dynamic video delivery is formulated in Sec. III. The rule for caching node association and the control policies for quality level and receiving chunk amounts are proposed in Sec. IV and Sec. V, respectively. Simulation results are presented in Sec. VI, and Sec. VII concludes this paper.
II Network Model
II-A Wireless caching network model
This paper considers a wireless caching network model in which a user requests a certain video file from one of the caching nodes around the user, as shown in Fig. 1. The BS has already pushed popular video files to caching nodes of finite storage size during off-peak hours. Since we focus on video delivery, the caching policy is out of the scope of this paper, and only the desired video is considered. Suppose that the desired video has  quality levels. Therefore, there are  types of caching nodes, and a type- caching node can deliver the video at any quality , where  is the set of qualities that the type- caching node can provide. Thus, the type- caching nodes can provide all quality levels from the quality set . Note that a simple definition of  is assumed, but the proposed technique can accommodate any arbitrary quality sets as long as multiple versions of the same video with different qualities are stored at caching nodes.
Identical files of different qualities are stored in multiple caching nodes, and the type- caching nodes are distributed according to independent Poisson point processes (PPPs) with intensity , where  denotes the caching probability of the requested video encoded to provide any quality . Suppose that the caching policy is already determined, i.e.,  for all  are given. In addition, videos of different qualities have different file sizes, and  denotes the file size of quality  in bits, satisfying  for all  and .
User mobility is also captured in the network model. The user moves in a certain direction and periodically searches for a caching node from which to receive the desired video file. As shown in Fig. 1, the geographical distribution of caching nodes around the user varies at each time slot, so the caching node decision should be updated appropriately. Furthermore, this paper also considers how many chunks of which quality level the user should request depending on the stochastic network environment. When other users exploit the wireless caching network on the same resource, the target streaming user interferes with them. We adopt distance-based interference management to keep the interference power below a certain threshold; details are explained in Section II-C.
II-B User queue model and channel model
A video file consists of many sequential chunks. The user receives the video file from a caching node and processes data for the streaming service in units of chunks. Each chunk of a file covers some playback time of the entire stream. As long as all chunks are in the correct sequence, each chunk can have a different quality in dynamic streaming. Therefore, the user can dynamically choose the video quality level in each chunk processing time. In the queueing model, playback delay occurs when the queue has no chunk to be played. In this sense, the receiver queue dynamics collectively reflect the various factors that cause playback delay.
In general, the user model has its own arrival and departure processes. The user queue dynamics in each discrete time slot can be represented as follows:
where  stands for the queue backlog at time . In addition, the departure  is constant because the streaming user does not change the video playback rate in general. The arrival  denotes the number of chunks received at time .
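The per-slot queue update described above can be sketched as follows; the function and variable names are ours (the original notation did not survive in the text), and a unit playback departure per slot is assumed:

```python
def update_queue(q, arrivals, departure=1):
    """One slot of receiver-queue dynamics: play back up to `departure`
    chunks (if available), then enqueue the chunks received this slot."""
    return max(q - departure, 0) + arrivals

# Four slots of hypothetical arrivals, starting from an empty queue
q = 0
for a in [2, 0, 3, 1]:
    q = update_queue(q, a)
print(q)  # → 3
```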
Let  be the caching node that the user chooses for video delivery. Then,  represents the Rayleigh fading channel between the user and the caching node at time , where  controls the path loss with  being the user-caching-node distance at time , and
represents the fast fading component following a complex Gaussian distribution. The link rate is simply given by , where , , and  are the bandwidth, transmit SNR, and interference-to-noise ratio (INR), respectively.
The number of received chunks necessarily depends on the caching node decision and its link rate. In this paper, each slot interval is set to the channel coherence time . Then, the number of received chunks is constrained by
Since and are nonnegative integers,
Therefore, the decision on  depends on the decisions on  and  as well as on the random network event .
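The per-slot chunk budget implied by this constraint follows from the link rate over one coherence interval; the following is a minimal sketch under a Shannon-rate model, with illustrative parameter values that are not taken from the paper:

```python
import math

def max_chunks(bandwidth_hz, tx_snr, channel_gain, inr, coherence_s, chunk_bits):
    """Largest integer number of chunks deliverable in one coherence
    interval: floor(link_rate * tau / chunk_size), with the link rate
    modeled as W * log2(1 + SNR * |h|^2 / (1 + INR))."""
    rate_bps = bandwidth_hz * math.log2(1.0 + tx_snr * channel_gain / (1.0 + inr))
    return int(rate_bps * coherence_s // chunk_bits)

# 1 MHz bandwidth, 20 dB SNR, unit fading gain, no interference,
# 5 ms coherence time, 4-kbit chunks (all illustrative numbers)
print(max_chunks(1e6, 100.0, 1.0, 0.0, 5e-3, 4000))  # → 8
```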
II-C Distance-based interference management
Although many existing works have investigated complex interference management schemes such as interference alignment and interference cancellation, most research on wireless caching and delivery policies still uses simple interference-avoidance-based schemes, e.g., spectrum sharing  or the protocol model . For simplicity, this paper considers distance-based interference control for node association (i.e., link activation) for video delivery. The design ideas can be extended to other, more sophisticated interference management schemes [32, 33].
Activating a new link for video delivery in the wireless caching network means that the network allows the new streaming user to interfere with existing users. A new user causes two types of interference: 1) from the caching nodes already serving existing users to the new user, and 2) from the caching node associated with the new user to existing users. Therefore, we define  and  as the safety distances for streaming users and their associated caching nodes, respectively, to keep the interference levels below the predetermined threshold . In other words, a new streaming user who wants to exploit the wireless caching network must appear outside the radius  of every caching node associated with the existing users. In addition, the new user has to find a caching node outside the radius  of all existing users from which to receive the desired content. The safety distances  and  should be chosen carefully; then a new caching node-user pair can be admitted only when its interference power is acceptable for every existing video delivery link, as shown in Fig. 2.
In this regard, a new pair of a caching node and a streaming user is admitted for video delivery through the following two steps. The first step is to confirm that the INR, say , at the new streaming user is lower than . Here,
is the ratio of the aggregated interference power from all activated caching nodes to the noise variance. If , the system does not admit the new user to the wireless caching network, and the new user should either directly request the desired content from the server holding the whole file library or wait for content delivery later. For example, suppose that the interference power from the nearest interfering caching node to the new user dominates . We further let  be the transmit SNR of the interfering node and  the distance from the interfering node to the new user. Then, the INR becomes , and  to guarantee .
Even if the interference power at the new user is low enough to exploit the wireless caching network, i.e., , the caching node associated with the new user may still degrade the signal-to-interference-plus-noise ratios (SINRs) of existing users. Thus, a second step is required, in which the new user should find a caching node that can deliver the desired content with a sufficiently large link rate while not significantly interfering with other users. Let each existing user have an INR margin to guarantee  before activating the new link, denoted by . Since the interference signal from the new caching node is independent of the other interfering nodes,  is obtained similarly to . Therefore, the new caching node should be chosen outside the radius  from every existing user, whose INR margins may differ from each other.
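For the dominant-interferer bound sketched above, the safety radius follows by inverting the path-loss law; a small sketch under a d^(-alpha) model, with illustrative names and numbers:

```python
def safety_distance(interferer_snr, inr_threshold, alpha):
    """Smallest distance at which a single dominant interferer with
    transmit SNR `interferer_snr` keeps the INR at or below
    `inr_threshold`, assuming a d^(-alpha) path-loss model."""
    return (interferer_snr / inr_threshold) ** (1.0 / alpha)

# 20 dB interferer SNR, 0 dB INR threshold, path-loss exponent 4
print(round(safety_distance(100.0, 1.0, 4.0), 2))  # → 3.16
```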
Even when the new user and its caching node are found while keeping all interference levels at users below , the newly activated link between them may not be strong enough for reliable content transmission due to bad channel conditions. Therefore, we investigate the existence of a caching node around the new user that stores the requested content and can deliver it reliably. Let  denote the minimum SINR for reliable video delivery. Then, the probability that at least one caching node can successfully deliver the desired content to the new user is given by
where  is the channel gain between the new user and the caching node whose channel condition is the strongest among the nodes storing the desired content of the user. According to order statistics and , the cumulative distribution function of the smallest reciprocal of channel power is , where  and  is the intensity of the PPP of nodes caching the desired content. According to (4),  can be found by
Then, by introducing the minimum probability of finding at least one caching node for reliable video delivery, denoted by , a set of  can be considered as a criterion for activating a new reliable link, satisfying . In this regard, we can verify how much interference power is acceptable to satisfy the criterion , as follows:
Thus, if all network parameters are given, the threshold of interference power can be determined by
On the other hand, if the network requires a target interference management criterion, i.e., , , and  are given, the system can determine how much transmit power is required and/or how many caching nodes should store the desired video.
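Since the closed-form probability above depends on symbols lost in the text, the reliability criterion can also be checked numerically. The following Monte-Carlo sketch estimates the probability that at least one PPP-distributed caching node in a disk exceeds an SINR threshold, under Rayleigh fading and d^(-alpha) path loss; all parameter values are illustrative:

```python
import math, random

def _poisson(rng, mean):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def p_reliable(lam, radius, snr, inr, gamma, alpha=4.0, trials=2000, seed=1):
    """Monte-Carlo estimate of the probability that at least one caching
    node in a disk of `radius` (PPP intensity `lam`) offers an SINR of at
    least `gamma`, with Rayleigh fading and d^(-alpha) path loss."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        n = _poisson(rng, lam * math.pi * radius ** 2)
        ok = False
        for _ in range(n):
            r = max(radius * math.sqrt(rng.random()), 1e-3)  # uniform in disk
            g = rng.expovariate(1.0)                         # Rayleigh power gain
            if snr * g * r ** -alpha / (1.0 + inr) >= gamma:
                ok = True
                break
        hits += ok
    return hits / trials

p = p_reliable(lam=0.1, radius=5.0, snr=100.0, inr=1.0, gamma=1.0)
print(0.0 <= p <= 1.0)  # → True
```

Sweeping `gamma` or `lam` in such a simulation is one way to check a candidate interference threshold against the target reliability criterion.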
In this paper, the minimum SINR threshold is set so that at least the chunk of the smallest size (i.e., the lowest quality) is deliverable, i.e., . Then, the caching nodes storing the desired content should be distributed with an intensity of at least , as follows:
III Dynamic Video Delivery Policies
III-A Video delivery decisions
The goal of this paper is to make the three appropriate decisions at each slot in the network model illustrated in Section II: 1) the caching node for video delivery , 2) the video quality level , and 3) the quantity of received chunks . However, updating the caching node association requires a time-consuming process in which the user sends a request signal for video delivery and the caching node approves it. Therefore, new caching node associations can hardly be performed as frequently as chunk reception, and we suppose that the decision on  is made on a larger timescale than the decisions on  and .
In this sense, the user decides  and  at time slots , but caching node decisions are made at time slots , where  is the time interval for caching node association. The time slot of the -th caching node decision is denoted by  for . The different timescales of the decisions on ,  and  are illustrated in Fig. 3. Let the -th frame for the caching node decision be . As shown in Fig. 3, after associating with caching node  at time , the decisions on quality level and chunk amounts are made over  to receive the desired video from . Therefore,  and  should be satisfied for , where  is the type of caching node .
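The two-timescale structure can be sketched as a simple control loop, with placeholder decision functions standing in for the policies developed in Sections IV and V:

```python
T = 5  # slots per caching-node frame (the association interval)

def choose_caching_node(frame_idx):
    """Placeholder for the slow-timescale association rule (Sec. IV)."""
    return f"node_{frame_idx}"

def choose_quality_and_chunks(node, slot):
    """Placeholder for the fast-timescale per-slot decisions (Sec. V)."""
    return 1, 1  # (quality level, chunks to receive)

associations = []
for t in range(12):
    if t % T == 0:                          # node association only at frame edges
        node = choose_caching_node(t // T)
    q_level, chunks = choose_quality_and_chunks(node, t)
    associations.append(node)

print(associations[:6])  # → ['node_0', 'node_0', 'node_0', 'node_0', 'node_0', 'node_1']
```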
The user can form a candidate set of caching nodes, denoted by , with . All caching nodes in  should be outside the radius  of all existing users to keep the interference power below . To avoid the situation in which no caching node can deliver the desired video, i.e., , the caching nodes providing SINRs larger than  are assumed to be outside the radius  of all existing users.  consists of up to  caching nodes, i.e., , where each caching node in  belongs to a different type. If there are several nodes of type-, the user takes the one with the strongest channel condition; there is no reason to choose another type- caching node while leaving the node with the strongest channel, provided no other streaming user requests the video from that node. In addition,  means that no type- caching node exists around the user. Suppose that the new streaming user has already appeared outside the radius  of all existing caching nodes and that the INR  is observed at the new user. Also, link activation by other users around the target user is prohibited due to interference. Then, we consider only the node association problem of the new streaming user with respect to the candidate set , given the observed INR.
III-B Problem formulation
To determine the appropriate video delivery policy, two performance metrics are considered: playback delay and average streaming quality. Based on these goals, we formulate an optimization problem that minimizes quality degradation subject to averting queue emptiness, as follows:
where  is the quality measure of  and  is the maximum quality measure, i.e., equation (9) is the time-averaged video quality degradation. The decision vectors are represented as  and . Specifically, the expectation in (9) is with respect to random channel realizations and the stochastic distribution of caching nodes.
As mentioned earlier, playback delay occurs when the next chunk has not arrived in the queue; therefore, constraint (10) plays the role of avoiding queue emptiness, where . Here,  is introduced to make  large enough to avert playback delay, and  is a sufficiently large parameter that affects the maximal queue backlog. From (1), the queue dynamics of  can be represented as follows:
Even though the update rules of  and  are different, both queue dynamics describe the same video chunk processing. Therefore, playback delay due to emptiness of  can be explained by the queueing delay of . By Little's law , the expected value of  is proportional to the time-averaged queueing delay. We aim to limit the queueing delay by addressing (10), and it is well known that Lyapunov optimization with (10) keeps  bounded.
From the optimization problem (9)-(11), we can intuitively see how the decisions depend on . Suppose that the queue is almost empty. In this case, the user prefers the caching node with a strong channel condition, pursues the low-quality file, and tries to receive as many chunks as possible to fill the queue. However, all of these decisions could degrade the average streaming quality. When the caching node with the strongest channel condition belongs to type-, it can be better in terms of average quality to associate with a caching node of another type. In addition, when low quality is chosen, receiving too many chunks may not be a good choice; the user may prefer to receive a small number of chunks in the current time step and wait for better channel conditions. If the channel improves at the next time step, the user can request many chunks of high-quality video. Thus, these decisions strongly depend on the queue state , the caching node distribution, and the channel conditions of the caching node candidates.
IV Caching Node Decision Policy
To avoid queue emptiness, i.e., to pursue stability of , the optimization problem (9)-(11) is solved based on Lyapunov optimization theory. However, since the timescale of the decision on  is larger than that of the decisions on  and , the frame-based Lyapunov optimization theory  is used for the caching node decision. The Lyapunov function is defined as . Then, let  be the frame-based conditional Lyapunov drift, formulated as , i.e., the drift over the time interval . The dynamic policy is designed to solve the optimization problem (9)-(11) by observing the current queue state  and determining the caching node that minimizes an upper bound on the frame-based drift-plus-penalty:
where is an importance weight for quality improvement.
First, an upper bound on the drift of the Lyapunov function can be found.
Summing (14) over , the upper bound on the frame-based Lyapunov drift is obtained as
Thus, according to (13), minimizing a bound on frame-based drift-plus-penalty is equivalent to minimizing
where , and recall that . The above minimum is conditioned on  for all . This frame-based algorithm is shown in  to satisfy the queue stability constraint (10) while minimizing the objective function (9). For any , the minimum bound on the frame-based drift-plus-penalty can be obtained by
In Section V, we will provide an efficient method to find the minimum achieving and .
The system parameter  in (16) is a weight factor for the term representing the measure of video quality degradation. The value of  is important for controlling the queue backlogs and quality measures over time. An appropriate initial value of  needs to be obtained by experiment because it depends on the distribution of caching nodes, the channel environments, the playback rate , and . Also,  should be satisfied: if , the optimization goal turns into maximizing the video quality degradation, and in the case of , the user aims only at stacking queue backlogs without considering video quality. On the other hand, when , the user does not consider the queue state and purely pursues minimizing the video quality degradation. Thus,  can be regarded as a parameter controlling the trade-off between quality and delay, capturing the fact that, under a given channel condition, the user can stack many low-quality chunks or relatively few high-quality chunks in the queue.
From (16), we can anticipate how the algorithm works. When the queue is almost empty, i.e., , large arrivals are necessary so that the user does not have to wait for the next chunk. In this case, the user prefers the caching node that provides many chunks. On the other hand, when the queue backlog is large enough to avoid playback delay, i.e., , the user requests the high quality level  without worrying about playback latency.
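The queue-dependent preference just described can be illustrated with a toy version of the node-selection score. The urgency weight max(Q_target - Q, 0) and the per-node expectation values below are our simplifications for illustration, not the paper's exact frame-based rule:

```python
def pick_caching_node(q_backlog, q_target, v_weight, candidates):
    """Toy drift-plus-penalty score: urgency * (-expected chunks)
    + V * (expected quality degradation); the candidate with the
    smallest score wins. Per-node expectations are assumed given."""
    def score(node):
        urgency = max(q_target - q_backlog, 0)
        return -urgency * node["exp_chunks"] + v_weight * node["exp_degradation"]
    return min(candidates, key=score)

nodes = [
    {"name": "low-quality, strong channel", "exp_chunks": 6.0, "exp_degradation": 2.0},
    {"name": "high-quality, weak channel",  "exp_chunks": 2.0, "exp_degradation": 0.5},
]
print(pick_caching_node(1, 10, 2.0, nodes)["name"])   # → low-quality, strong channel
print(pick_caching_node(20, 10, 2.0, nodes)["name"])  # → high-quality, weak channel
```

With a nearly empty queue the chunk-hungry node wins; once the backlog exceeds the target, the quality term dominates, matching the intuition above.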
With the initial condition , the user computes  for all . Then, the caching node minimizing  is chosen by the user as in (18).
V Decisions on Quality Level and Receiving Chunk Amounts
The goal of this section is to compute , given the associated caching node and initial queue backlogs .
V-A Stochastic shortest path problem
According to (16), we can formulate the drift-plus-penalty algorithm of the -th frame as follows:
where . The problem (19)-(21) is similar to a stochastic shortest path problem based on a Markov decision process. In the network model,  and  (i.e., ) are given before the decisions on  and  are made at every time .
The queue backlog  represents the current state, which satisfies the Markov property. Define  as the state space of the user queue. It is reasonable to set  as an arbitrarily predefined maximum queue backlog because the queue size is finite in practical systems. The action set is defined as . Then, the cost incurred at  can be formulated as
The transition probabilities from to can be defined for all states and as
Since the next state is deterministic given and action , it can be seen that .
V-B Probability mass function of
The constraint (20) indicates that the maximum number of chunks the user can receive depends on the random network event  and the quality decision . This means that the decisions on  and  should be made jointly, and that the probability distribution of the random network event  is required.
V-C Dynamic programming
Given and , the user observes the queue state and the random network event , and decides the action for each time slot . Then, the minimum incurred cost based on measurements of and is
conditioned on .
Let  be the marginalization of  over all possible , which can be approximated as
where is a nonnegative integer such that . The dynamic programming provides the action that minimizes the following cost as given by 
where the expectation of (29) is with respect to and . The minimum cost is obtained over all such that .
Given , the user can find the minimum value of (30) by greedily testing all joint combinations of decisions on  and . For example, suppose there exist  quality levels corresponding to file sizes of  Kbits; if  Kbits, then there are four possible decisions: 1) , 2) , 3) , and 4) . The user computes the costs of all these possible decisions and picks the minimum one as the optimal cost.
We set the end time slot of the -th frame as , which is the start time of the -th frame. To find the optimal costs for  by using the dynamic programming equation (30), the end costs  are required. Playback delay occurs at the end state when the accumulated chunk amount is smaller than the departure quantity, i.e.,  and . Therefore, the end costs for those states, i.e.,  for , should be very large. Even when , the more chunks are accumulated, the less likely playback delays become. In this sense,  for  is preferred for all . In particular, once a large number of chunks has accumulated in the queue, the effect of additional chunks on averting queue emptiness diminishes significantly. Therefore, the end costs for
are modeled in the truncated form of an exponential distribution. Thus, we can set the end costs for all states as follows:
where is a predefined large constant to give penalties for playback delay occurrences and is the exponential distribution coefficient.
Given the end costs , the optimal costs for all  can be obtained by backtracking the shortest path based on the dynamic programming equation (30). When the queue backlog at time  is ,  becomes the averaged drift-plus-penalty term of (16), i.e., . After the user finds the averaged drift-plus-penalty terms for all  by dynamic programming, the user determines the caching node from which to receive the desired video file by comparing all drift-plus-penalty terms, as described in (18).
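A minimal backward-induction sketch of this procedure is given below, using the truncated-exponential end cost of this subsection. For readability the per-slot feasible action set is deterministic here, whereas the paper additionally averages over the random network events; all numbers and names are illustrative:

```python
import math

C_END, COEF = 100.0, 1.0  # playback-delay penalty and exponential coefficient

def end_cost(q, departure=1):
    """Large penalty if playback would stall at the frame end; otherwise an
    exponentially decaying preference for larger backlogs."""
    return C_END if q < departure else math.exp(-COEF * q)

def backward_dp(frame_len, q_max, actions, slot_cost, departure=1):
    """Backward induction: J[t][q] = min over (quality, chunks) of
    slot_cost(quality) + J[t+1][q'], with
    q' = min(max(q - departure, 0) + chunks, q_max)."""
    J = [[0.0] * (q_max + 1) for _ in range(frame_len + 1)]
    policy = [[None] * (q_max + 1) for _ in range(frame_len)]
    for q in range(q_max + 1):
        J[frame_len][q] = end_cost(q, departure)
    for t in range(frame_len - 1, -1, -1):
        for q in range(q_max + 1):
            best_cost, best_act = float("inf"), None
            for quality, chunks in actions:
                nq = min(max(q - departure, 0) + chunks, q_max)
                cost = slot_cost(quality) + J[t + 1][nq]
                if cost < best_cost:
                    best_cost, best_act = cost, (quality, chunks)
            J[t][q], policy[t][q] = best_cost, best_act
    return J, policy

# Three hypothetical quality levels; higher quality => fewer chunks per slot
actions = [(1, 3), (2, 2), (3, 1)]
J, policy = backward_dp(5, 10, actions, slot_cost=lambda lvl: 3 - lvl)
print(policy[0][0])  # → (3, 1): keep the queue alive at the highest quality
```

Here one chunk at the top quality per slot exactly offsets the unit departure, so the sketch never pays a quality penalty while averting the end-state stall.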
V-D Decisions of quality and chunk amounts
After determining the caching node , the user should choose the video quality and the number of chunks to receive for every time slot , depending on time-varying channel conditions and the queue state. For this goal, we can simply use the principle of optimality of the dynamic programming algorithm , which states that if the optimal policy is a solution of the stochastic shortest path problem, then the truncated policy is optimal for the subproblem over , where .
Based on this principle of optimality, the user can make the optimal decisions on  and  for  by using the minimum costs obtained while performing dynamic programming for the caching node decision . When deciding  and , the channel gain can be observed, e.g., , so the optimal action is deterministic given  and , providing the minimum cost , as given by
conditioned on for .
Thus, the user should store the optimal actions for all ,  and  to deal with all possible random network events. In principle,  actions are required, but some channel realizations yield the same optimal action. Again, consider the example of  quality levels corresponding to file sizes of  in Kbits. Then, any  Kbits allows the four combinations of decisions on  and  explained in Section V-C, so the user only needs to store one optimal action for all  Kbits. In this sense, define subsets of , denoted by  for , as follows:
Thus, the user needs to store only  actions. The whole procedure for the video delivery decisions on caching node, video quality, and receiving chunk amounts is presented in Algorithm 1.
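The bucketing of channel realizations into subsets sharing one stored action can be sketched as follows; the chunk sizes and slot budgets are illustrative:

```python
def channel_bucket(slot_budget_bits, chunk_sizes_bits):
    """Map a channel realization (bits deliverable this slot) to the tuple
    of per-quality chunk budgets. All realizations with the same tuple
    admit the same (quality, chunks) combinations, so one stored optimal
    action suffices for the whole bucket."""
    return tuple(slot_budget_bits // s for s in chunk_sizes_bits)

sizes = (1000, 2000, 4000)  # hypothetical per-quality chunk sizes in bits
print(channel_bucket(4000, sizes))                            # → (4, 2, 1)
print(channel_bucket(4999, sizes) == channel_bucket(4000, sizes))  # → True
```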
V-E Computational complexity of dynamic programming
To determine the optimal policy at each time slot, it seems that at least  computations are required, but some channel realizations lead to the same computation, as seen in Section V-D. Since all realizations in  not only give the same  but also entail the same computations of  for all possible combinations of  and , at least  computations are required.
However, for most random network events, more computations are required to take the minimum in (30). As shown in the example of  quality levels corresponding to file sizes of  in Kbits, there are four combinations of decisions on  and  when  Kbits. Let  be the average number of these decision combinations of  and  over all ; then  computations in total are required at each time slot of the dynamic programming.
Here,  and  obviously depend on ,  and  for . There are not many versions of the identical video with different quality levels, i.e.,  is small in general, and  is not controllable unless the video encoding scheme is changed. On the other hand,  increases as the transmit SNR grows, so a large SNR could result in huge computational complexity as well as a large number of registers to store the optimal costs for the decisions on quality and chunk amounts. However, in the sufficiently large transmit SNR region, the streaming user can receive enough high-quality chunks to avoid queue emptiness. Considering that the proposed video delivery scheme targets streaming users who worry about playback delays as well as video quality degradation, the huge complexity burden at large transmit SNR falls outside the targeted scenarios. Thus,  and  are expected to be moderate in our target scenarios, where adjusting the trade-off between playback delay and video quality is necessary, so the computational complexity of the dynamic programming can be kept limited.
TABLE I: Simulation parameters
No. of quality levels: 3
Default PPP intensity: 0.4
Time interval of caching node decisions: 5
User radius: 50 m
Caching probabilities:
Transmit SNR: 20 dB
Minimum probability of finding the caching node: 0.99
Queue departure: 1
Bandwidth: 1 MHz
Coherence time: 5 ms
End cost coefficient:
End cost coefficient: 1
VI Simulation Results
In this section, we show that the proposed algorithm for the dynamic video delivery policy works well with video files of different quality levels in wireless caching networks. Simulation parameters are listed in Table I and are used unless otherwise noted. The proposed technique can be applied to any distribution model for caching nodes, but we suppose that the caching nodes storing the desired content are modeled as an independent PPP, which is a common assumption in research on wireless caching networks [7, 8, 9]. The PPP intensity of the type- caching nodes is then the product of the overall intensity and the caching probability of the video encoded at that quality level; therefore, the larger the caching probability, the more type- caching nodes around the streaming user. Based on the network model described in Fig. 1, the user is slowly moving in a certain direction. In practice, the channel between the user and the caching node delivering the desired video could vary due to the Doppler shift as the user moves, but this effect is not captured in this paper. Peak signal-to-noise ratio (PSNR) is considered as the video quality measure, and the quality measures and file sizes of the quality levels, in dB and Kbits respectively, are obtained from real-world video traces. Under these assumptions, the minimum PPP intensity required to satisfy the performance criterion follows accordingly.
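The caching-node layout assumed in the simulation can be sketched as follows. This is a minimal sketch under assumed placeholder numbers (the intensity, radius, and caching probabilities below are not the paper's Table I values): nodes storing the desired content form a homogeneous PPP in a disk around the user, and independent thinning with the caching probabilities yields the per-type intensities.

```python
import numpy as np

rng = np.random.default_rng(0)
intensity = 0.4e-3               # nodes per m^2 (placeholder value)
radius = 50.0                    # user radius in meters
caching_probs = [0.5, 0.3, 0.2]  # placeholder per-quality caching probabilities

# The number of nodes in the disk is Poisson with mean = intensity * area.
area = np.pi * radius ** 2
n_nodes = rng.poisson(intensity * area)

# Scatter the nodes uniformly in the disk centered at the user (origin).
r = radius * np.sqrt(rng.uniform(size=n_nodes))
theta = rng.uniform(0, 2 * np.pi, size=n_nodes)
xy = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

# Independent thinning: each node stores one quality type drawn from
# caching_probs, so type-i nodes form a PPP with intensity caching_probs[i]
# times the overall intensity.
types = rng.choice(len(caching_probs), size=n_nodes, p=caching_probs)
```

The thinning step is why a larger caching probability for a quality level directly translates into more caching nodes of that type near the streaming user.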
To verify the advantages of the proposed algorithm, this paper compares the proposed one with three other schemes:
‘Strongest’: The user receives the desired video file from the caching node whose channel condition is the strongest among the candidates at the corresponding time slots. Decisions on the quality and the number of receiving chunks are made based on the dynamic programming results.
‘Highest-Quality’: The user receives the desired video file from the caching node that can provide the highest-quality file among the candidates at the corresponding time slots. Decisions on the quality and the number of receiving chunks are made based on the dynamic programming results.
‘One-Step’: The user decides the caching node for video delivery based on the frame-based Lyapunov optimization theory. However, the decisions on the quality and the number of receiving chunks are made by minimizing only the cost incurred at each slot, without using the dynamic programming results.
In summary, the performance comparisons with ‘Strongest’ and ‘Highest-Quality’ show the effects of the caching node decision based on Lyapunov optimization, and the comparison with ‘One-Step’ isolates the advantage of using the Markov decision process and dynamic programming for the decisions on video quality and the amount of receiving chunks.
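The two baseline association rules can be sketched compactly. This is an illustrative sketch, not the paper's implementation: each candidate node is reduced to a hypothetical record of (channel gain, highest cached quality level), and ties in ‘Highest-Quality’ are broken by channel gain.

```python
def strongest(nodes):
    """'Strongest' baseline: pick the node with the best channel gain.

    nodes: list of (channel_gain, max_quality) tuples (illustrative record).
    """
    return max(nodes, key=lambda n: n[0])

def highest_quality(nodes):
    """'Highest-Quality' baseline: pick the node offering the highest cached
    quality level, breaking ties by channel gain."""
    return max(nodes, key=lambda n: (n[1], n[0]))

# Example candidate set: (gain, quality) pairs.
candidates = [(0.8, 1), (0.3, 3), (0.5, 3)]
best_channel = strongest(candidates)        # picks (0.8, 1)
best_quality = highest_quality(candidates)  # picks (0.5, 3)
```

The example makes the tension concrete: the strongest-channel node may cache only a low quality level, while the highest-quality node may sit behind a weak channel, which is exactly what the comparisons in the following subsections exercise.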
VI-A Caching node distribution
At first, the impact of the PPP intensity, i.e., how many caching nodes are distributed around the streaming user, is shown in Figs. 5 and 5, which plot the playback delay occurrence rates and the average video quality measures per received chunk versus the intensity, respectively. ‘Strongest’ is likely to receive many chunks from the caching node whose channel condition is the strongest, so this scheme accumulates enough queue backlog to avoid playback delays. Therefore, ‘Strongest’ shows the best delay performance, but its gain over the proposed scheme is very small, as shown in the enlarged plots in Fig. 5. There are two reasons. First, even though the channel condition of a certain caching node is the strongest at the association time, it may not remain the strongest afterwards due to time-varying channels and user mobility. Second, the delay performance does not improve in proportion to the number of chunks accumulated in the queue: if enough chunks are already queued to prevent playback delays, additional arrivals do not improve the delay performance dramatically. The similar delay occurrence rates of the proposed technique and ‘Strongest’ in Fig. 5 show that the proposed scheme can accumulate enough chunks in the queue to avert playback delays.
On the other hand, since ‘Highest-Quality’ pursues the video quality when choosing the caching node for video delivery, it gives better quality performance than ‘Strongest’ at large intensities. However, when the intensity is small, the caching node chosen by ‘Highest-Quality’ is likely to be far from the streaming user, and its channel condition is usually too poor to deliver the high-quality video. Therefore, even though the chosen caching node could provide the high quality level, the user requests a large number of low-quality chunks owing to the small accumulated backlog. Since we assume that the user cannot achieve any quality of service when a delay occurs, the quality performance of ‘Highest-Quality’ is even worse than that of ‘Strongest’ at small intensities. As the proposed technique associates with the caching node by balancing the video quality and the channel condition, it can provide better quality than both ‘Strongest’ and ‘Highest-Quality’, as shown in Fig. 5. ‘One-Step’ gives the highest average quality measure per received chunk, but it suffers from much more frequent delay occurrences than the other schemes. Considering that streaming users are much more sensitive to playback delays, ‘One-Step’ is not appropriate for practical systems. From the result of ‘One-Step’, we can see that the merit of using dynamic programming, which stochastically reflects future subsequent decisions, is very large when determining the video quality and the number of receiving chunks. In addition, the PPP intensity of the highest-quality videos remains small even at the largest considered intensity; therefore, the highest quality level is rarely selected, and the average quality measures of all schemes are much lower than the highest quality measure.
VI-B Uniform and nonuniform caching probabilities
We set three cases of caching probabilities for the video file with different quality levels, as follows:
Note that Case 2 corresponds to the uniform caching probability case, while Case 1 and Case 3 are nonuniform. In Case 3, the streaming user is more likely to receive high-quality video than in the other cases; in contrast, Case 1 represents an environment where few caching nodes around the user can provide the high-quality video. The playback delay and quality measure performances for these cases of caching probabilities are shown in Figs. 7 and 7, respectively.
In Fig. 7, the delay incidence of ‘Highest-Quality’ clearly decreases as the caching probabilities shift toward higher quality levels, because the caching nodes storing the high-quality video are then likely to be near the streaming user. However, since the overall density of caching nodes does not change under the probabilistic caching policy, the delay performance of ‘Strongest’ is not influenced much by the different caching policies. Under the ‘Strongest’ scheme, in any caching probability case, as many small low-quality chunks as possible can be delivered when there are too few chunks in the queue and a playback delay is about to occur. In this sense, the proposed technique shows almost the same delay performance as ‘Strongest’, because it strictly limits the playback delay before pursuing quality improvement.
The average quality measures of all schemes increase as the caching probabilities shift toward higher quality levels, as shown in Fig. 7. Even though ‘Highest-Quality’ pursues the video quality, its average quality measure per received chunk does not differ much from that of ‘Strongest’ in any caching probability case, owing to its poor delay performance. As seen in Section VI-A, queue backlogs do not accumulate much under the ‘Highest-Quality’ scheme, so the user usually requests a small number of low-quality chunks. Especially in Case 3, the caching nodes storing the highest-quality video are more densely distributed than the nodes of other types, so the caching node with the strongest channel condition among the candidates is highly likely to be of type 3. Thus, the difference between the quality performances of ‘Strongest’ and ‘Highest-Quality’ is not large.
The performance rankings among the comparison techniques in Figs. 7 and 7 are consistent with the results of Figs. 5 and 5. Compared to those schemes, the proposed technique provides quite high average video quality while limiting the delay occurrence rate to as low as that of ‘Strongest’. Thus, the proposed scheme can be said to smooth out the tradeoff between quality and playback delay and to achieve both goals. As observed here, ‘One-Step’ provides higher quality than the proposed scheme, but its delay performance is too poor to achieve user satisfaction.
VI-C System parameter
Since the system parameter weighs quality maximization against averting playback delay in the Lyapunov optimization problem, the delay occurrence rates increase and the expected quality measures of all techniques improve as it grows, as shown in Figs. 9 and 9, respectively. Therefore, we can control the tradeoff between video quality and playback latency by adjusting this system parameter. Among the comparison techniques, the proposed scheme improves the quality performance sufficiently while minimizing the increase in delay incidence when the parameter is large. The quality improvements of the other comparison techniques for a large parameter are comparable to that of the proposed scheme, but the delay performances of ‘Highest-Quality’ and ‘One-Step’ are still much worse than those of the proposed scheme and ‘Strongest’. As seen in Sections VI-A and VI-B, the proposed technique provides higher average video quality than ‘Strongest’ with almost the same delay performance. We can also see that ‘One-Step’ does not respond sensitively to changes in the parameter compared to the other techniques, because its role is not completely captured in this scheme. To reflect its effect properly, minimization of the frame-based drift-plus-penalty term is necessary, but the decisions of ‘One-Step’ on quality and chunk amounts are not frame-based; they are made independently at each time slot.
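The role of the tradeoff parameter can be sketched with a one-slot drift-plus-penalty decision. This is a minimal sketch under assumed placeholders (the cost model, chunk sizes, and budget below are illustrative, not the paper's frame-based formulation): a larger weight makes the quality penalty dominate the queue-drift term, so the decision shifts toward fewer, higher-quality chunks at the expense of queue backlog.

```python
def one_slot_decision(queue_backlog, budget_kbits, chunk_sizes, V):
    """Pick (quality, n_chunks) minimizing an illustrative drift-plus-penalty.

    V plays the role of the tradeoff parameter: it weighs the quality penalty
    against the queue-drift term.
    """
    best, best_obj = None, float("inf")
    top = len(chunk_sizes) - 1
    for q, size in enumerate(chunk_sizes):
        n = budget_kbits // size      # chunks deliverable at quality q
        drift = -queue_backlog * n    # more arrivals reduce the drift term
        penalty = V * (top - q)       # quality penalty shrinks as q grows
        obj = drift + penalty
        if obj < best_obj:
            best, best_obj = (q, n), obj
    return best

sizes = [100, 300, 900]  # placeholder per-quality chunk sizes (Kbits)
low_V = one_slot_decision(queue_backlog=4, budget_kbits=900, chunk_sizes=sizes, V=1)
high_V = one_slot_decision(queue_backlog=4, budget_kbits=900, chunk_sizes=sizes, V=50)
# low_V favors many low-quality chunks; high_V favors one high-quality chunk.
```

This mirrors the observed behavior: as the parameter grows, the expected quality improves while the queue fills more slowly, raising the delay occurrence rate.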
VI-D SINR level
The delay and quality performances over INR levels are shown in Figs. 11 and 11, respectively. As easily expected, the quality performances decrease and the delay occurrence rates increase as the INR grows, for all comparison techniques. Almost all the performance rankings among the comparison techniques remain as seen in the former subsections, but the performance of ‘Highest-Quality’ is influenced by the INR level much more than those of the proposed scheme and ‘Strongest’. It becomes more difficult for ‘Highest-Quality’ to accumulate video chunks in the queue as the INR grows, so the quality level chosen by the user becomes increasingly degraded. In contrast, ‘Strongest’ is not significantly affected by INR changes compared to ‘Highest-Quality’, because the channel condition of its caching node is much stronger than that of the node chosen by ‘Highest-Quality’. The proposed scheme still achieves improved video quality while guaranteeing a very low delay occurrence rate.
VII Conclusion
This paper studied the dynamic delivery policy for video files of various quality levels in a wireless caching network. When the caching node distribution around the streaming user varies, e.g., when the user is moving, the streaming user makes decisions on the caching node from which to receive the desired file, the video quality, and the number of receiving chunks. Different timescales are considered for the caching node association and for the decisions on quality and the number of receiving chunks. The optimization framework for these video delivery decisions conducted on different timescales is constructed based on Lyapunov optimization theory and the Markov decision process. By using dynamic programming and the frame-based drift-plus-penalty algorithm, the dynamic video delivery policy is proposed to maximize the average streaming quality while keeping the playback delay quite low. Further, the proposed technique can adjust the tradeoff between video quality and playback delay by controlling the system parameter.
This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2018-0-00170, Virtual Presence in Moving Objects through 5G).
-  “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016–2021 White Paper,” Cisco. [Online]. Available: https://www.cisco.com/c/en/us/solutions/collateral/serviceprovider/visual-networking-index-vni/mobile-white-paper-c11-520862.html
-  X. Cheng, J. Liu, and C. Dale, “Understanding the characteristics of Internet short video sharing: A YouTube-based measurement study,” IEEE Trans. Multimedia, vol. 15, no. 5, pp. 1184–1194, Aug. 2013.
-  J. Koo, J. Yi, J. Kim, M. A. Hoque, and S. Choi, “REQUEST: Seamless dynamic adaptive streaming over HTTP for multi-homed smartphone under resource constraints,” in Proc. of ACM Multimedia, Mountain View, CA, USA, 2017.
-  N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, “FemtoCaching: Wireless video content delivery through distributed caching helpers,” in Proc. of IEEE INFOCOM, Orlando, FL, USA, 2012.
-  E. Bastug, M. Bennis, and M. Debbah, “Living on the edge: The role of proactive caching in 5G wireless networks,” IEEE Communications Magazine, vol. 52, no. 8, pp. 82–89, Aug. 2014.
-  X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. M. Leung, “Cache in the air: Exploiting content caching and delivery techniques for 5G systems,” IEEE Communications Magazine, vol. 52, no. 2, pp. 131–139, Feb. 2014.
-  B. Blaszczyszyn and A. Giovanidis, “Optimal geographic caching in cellular networks,” in Proc. of IEEE Int’l Conf. Commun. (ICC), London, UK, 2015, pp. 3358–3363.
-  Z. Chen, N. Pappas, and M. Kountouris, “Probabilistic caching in wireless D2D networks: Cache hit optimal versus throughput optimal,” IEEE Communications Letters, vol. 21, no. 3, pp. 584–587, Mar. 2017.
-  S. H. Chae and W. Choi, “Caching placement in stochastic wireless caching helper networks: Channel selection diversity via caching,” IEEE Trans. Wireless Commun., vol. 15, no. 10, pp. 6626–6637, Oct. 2016.
-  D. Malak, M. Al-Shalash, and J. G. Andrews, “Optimizing content caching to maximize the density of successful receptions in device-to-device networking,” IEEE Trans. Commun., vol. 64, no. 10, pp. 4365–4380, Oct. 2016.
-  M. Choi, D. Kim, D.-J. Han, J. Kim, and J. Moon, “Probabilistic caching policy for categorized contents and consecutive user demands,” in Proc. of IEEE Int’l Conf. Commun. (ICC), 2019.
-  M. Gregori, J. Gómez-Vilardebó, J. Matamoros, and D. Gündüz, “Wireless content caching for small cell and D2D networks,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 5, pp. 1222–1234, May 2016.
-  M. Emara, H. Elsawy, S. Sorour, S. Al-Ghadhban, M. Alouini, and T. Y. Al-Naffouri, “Optimal caching in 5G networks with opportunistic spectrum access,” IEEE Trans. Wireless Commun., vol. 17, no. 7, pp. 4447–4461, Jul. 2018.
-  K. Poularakis, G. Iosifidis, A. Argyriou, and L. Tassiulas, “Video delivery over heterogeneous cellular networks: Optimizing cost and performance,” in Proc. of IEEE INFOCOM, Toronto