Exceedingly large amounts of data traffic generated by the rapidly growing number of wireless mobile devices in recent years have created formidable challenges for wireless communication. Within just a few years, tens of exabytes of global data traffic are expected to be handled on a daily basis, with on-demand video streaming services accounting for about 70% of it.
On-demand video streaming is characterized by a relatively small number of popular contents being requested at ultra-high rates; as such, playback delay is often a more important measure of goodness to the user than other typical performance metrics such as video quality. In this regard, the wireless caching technology discussed in [3, 4], wherein the base station (BS) pushes popular contents during off-peak time to cache-enabled nodes with limited storage so that these nodes can provide popular contents directly to nearby mobile users, is advantageous for video streaming services. By caching popular files on cache-enabled nodes, there is no need to repeatedly receive files from the BS every time users request them.
Caching popular contents on the finite storage of a helper node near mobile users, which acts like a small BS, has been proposed to reduce latency in file transmission . Further, device-to-device (D2D)-assisted caching networks have been studied [7, 8, 9, 10], where mobile devices can store popular contents and directly respond to the file requests of neighboring users. In wireless caching networks, there are two main issues: 1) the file placement problem - how to cache the popular contents at the caching nodes, e.g., caching helpers or cache-enabled devices, and 2) the node association problem - which caching node should deliver the requested file to the user to provide smooth video streaming services.
Video files can be encoded into multiple versions which differ in quality level, e.g., peak signal-to-noise ratio (PSNR) or spatial resolution [11, 12]. Since the file size of a video varies with quality, it is also important in a caching network to determine which file of what quality is stored at each caching node (the file placement problem) and what quality of video is requested from which caching node by the streaming user (the node association problem) .
The goal of the file placement problem is to find the optimal caching policy according to the popularity distribution of contents and the network topology. There have been some research efforts to find the optimal caching policy in stochastic wireless caching networks [14, 15, 16], but contents with different quality levels were not considered. Traditionally, caching strategies for videos of various qualities have been studied with radio access network (RAN) caches, which enable transcoding or transrating of video files [17, 18, 19]. However, deploying transcoders in mobile devices is inefficient; thus, it is reasonable to assume that cache-enabled devices can only deliver the video file of the specific quality pushed by the BS during off-peak time.
Due to the finite storage size of caching devices, there exists a trade-off between video quality and video diversity, i.e., if a device caches high-quality files, it cannot store many types of videos. The authors of  consider caching files of different sizes, but they assume that the different-sized files occupy the same unit of cache storage, so their model does not reflect the above trade-off. Many researchers have proposed static file placement policies that account for differentiated quality requests for the same file, given probabilistic quality requests [12, 22, 23] or minimum quality requirements . In , joint optimization of static file placement and routing is proposed. Further, a probabilistic caching policy for video files of various quality levels is presented in  by using stochastic geometry, given the user preference for each quality level.
The node association problem for video delivery in wireless caching networks has also been extensively researched. Most research works that do not consider different quality levels for the same file allow the file-requesting user to receive the content from the caching node with the strongest channel condition , . Node associations for video delivery in heterogeneous caching networks have been studied in [27, 28, 29]. In particular, dynamic video streaming allows each chunk, a segment of the whole video file that occupies a part of the playback time, to have a different quality depending on time-varying network conditions . There are some research results addressing transmission schemes that provide the video by dynamically selecting the quality level [33, 34], or scheduling policies that maximize a network utility function of time-averaged video quality in a network with caching helpers . However, while the video delivery policies of [33, 34, 35] operate at the BS side, decisions on video delivery requests at the user side have been largely neglected. The user-side scenario is consistent with the practical real-world software implementation of dynamic adaptive streaming over HTTP (DASH) , in which users dynamically choose the most appropriate video quality.
In this paper, we consider a stochastic D2D-assisted caching network for dynamic video streaming services. For the file placement problem, each BS has all video files and is equipped with a quality controller, which adjusts the video quality. However, deploying a video quality controller in small mobile devices is not desirable; we therefore assume that the BS pushes video files with certain quality levels and that each cache-enabled device can provide only the given video quality to mobile users who request its cached files.
For the node association problem, users dynamically request different quality levels for the same video and associate with one of the neighboring nodes caching the file of the desired quality. Assuming that delivered video chunks wait for playback in the user queue, the quality level of the next chunk should be chosen at the user side depending on the user's channel condition and queue state to avoid playback delay, which differs from the assumption made in . Depending on the desired quality, node association is updated for video delivery on a chunk-by-chunk basis over the playtime.
In this paper, file placement and node association operate on different time scales, unlike , which jointly optimizes caching and transmission policies at the BS side. In general, the BS pushes popular contents to caching nodes during off-peak time, and users request video files after file placement is completed. In addition, file popularity does not change as rapidly as quality requests do during video streaming, so file placement and node association are considered independently in this paper.
The main contributions of this paper can be summarized as follows:
This paper proposes a probabilistic caching policy for video files of different quality levels that maximizes the sum of successfully enjoyable video quality. Since the streaming user dynamically requests the quality level of the video, the expected quality of video that can be reliably delivered to the user, i.e., the successfully enjoyable video quality sum, is a reasonable metric. We derive closed-form caching probabilities for every video file of every quality level. The trade-off between video quality and video diversity is reflected in the proposed caching placement policy.
This paper models the node association cases that arise when video files of different quality levels are stored in cache-enabled devices. We specify the cases which require an advanced node association scheme that carefully chooses the cache-enabled device for video delivery at the desired quality.
This paper proposes a node association algorithm for file-requesting users to choose the appropriate quality and to associate with the device caching the requested file at the desired video quality. The proposed algorithm maximizes the sum of the time-averaged quality measures of all users while avoiding playback delay in streaming communications. In this paper, playback delay is interpreted based on the user queue model, and the algorithm aims at avoiding playback delay by preventing queue emptiness. Simply put, when there is no video chunk in the user queue, the user has to wait for the next chunk and video playback is inevitably delayed. Compared to policies that pursue only quality or only prevent playback latency, numerical results show that the proposed algorithm keeps enough video chunks stacked in the queue to maintain smooth playback while still pursuing high video quality.
We provide two ways to handle request collision, which occurs when multiple users simultaneously request video files from the same cache-enabled device. One is to schedule one of the file-requesting users for video delivery. The other is to utilize non-orthogonal multiple access (NOMA) to serve all file-requesting users at the expense of data rate degradation. In the proposed algorithm, the scheduling scheme maximizes the time-averaged video quality for the given user while preventing playback delay.
The rest of the paper is organized as follows. The D2D-assisted caching network model with different-quality video files is given in Section II. Caching policy for video files of various quality levels is proposed in Section III. Node association cases with different-quality video files and the node association algorithm are presented in Section IV. Simulation results are shown in Section V and Section VI concludes the paper.
II D2D Caching Network Model with Different-Quality Video Files
This paper considers a cellular model where cache-enabled devices exist and users enjoy video streaming services. When a certain user requests a particular video file, she searches through the candidate devices that cache the requested file within a radius of , as shown in Fig. 1. The user selects one of the candidates for file delivery. If there is no device caching the requested file within the radius of user , the BS can transmit the desired file via a cellular link. Since the caching devices are usually much closer to the file-requesting users than the BS, the users are assumed to prefer downloading the file from the caching devices rather than directly from the BS in order to reduce transmission delay. Therefore, direct transmission from the BS is not considered in this paper.
There is a file library , and each file  has a popularity probability , which follows the Zipf distribution : where  denotes the popularity distribution skewness. Let  be the index of the file requested by user . Assume that all files have  quality levels. Suppose that there is no quality controller in the cache-enabled devices, so a device can only transmit video files at the fixed quality which the BS pushed. In this case, user  can choose the quality level of the received video file; let  denote the desired quality level of file . The file size varies with video quality, so let  be the normalized file size of quality level  for every video. Each cache-enabled device has a limited storage size of .
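As a concrete illustration, the Zipf popularity model can be sketched in a few lines of Python; the function name and arguments are our own, not notation from the paper.

```python
def zipf_popularity(num_files, gamma):
    """Popularity of file f is proportional to f ** (-gamma) for
    f = 1..num_files, normalized to sum to one; gamma is the skewness."""
    weights = [f ** (-gamma) for f in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]
```

A larger skewness concentrates requests on the few most popular files, which is exactly the regime in which caching a small set of contents pays off.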
The cache-enabled devices are modeled using independent Poisson point processes (PPPs) with intensity . This paper utilizes the probabilistic caching placement method , whereby a cache-enabled device caches file  of quality  with probability . Let  be the intensity of the independent PPP of the devices caching file  of quality level . Suppose that the system does not allow any additional D2D link within the radius  of a user who is already downloading a file from a certain cache-enabled device. By taking  sufficiently large and/or exploiting orthogonal resources for each D2D coverage area, the system can guarantee negligible interference among multiple D2D links. When an additional user requests a video file within the coverage area, that user should download the file from the BS via the cellular link.
The Rayleigh fading channel is assumed for the communication links from the users to the cache-enabled devices. Denote the channel by , where  captures slow fading with  being the user-device distance, and  represents the fast fading component having a complex Gaussian distribution, .
The main research issues in the entire wireless caching network can be largely classified as follows:
File placement problem: When the BS pushes video files to cache-enabled devices during off-peak time, the BS determines which file of which quality level is cached in each cache-enabled device. This paper adopts the probabilistic caching placement method , whereby a device caches file  of quality  with probability .
Node association problem: Each file-requesting user should first find the candidate set of devices caching the requested video. Next, each user chooses one of the candidate devices for file delivery. A careful choice of the device to associate with is important to ensure good video quality and smooth playback without delay.
Request collision: When multiple users request video files from the same device, we say request collision occurs. In this instance, the device should determine how to serve those users. One way is to deliver the requested file to only one user, expecting that each of the remaining users finds another cache-enabled device from which to request the file. The other method is NOMA, which serves multiple users on the same time/frequency/code resource simultaneously, but a transmission rate reduction is then inevitable.
III Caching Policy for Different-Quality Video Files
III-A Probabilistic Caching with Different-Quality Video Files
As mentioned earlier, the file placement problem in this paper is based on the probabilistic caching method , where each file is independently placed in devices according to the same distribution. Since we consider video files of different sizes, however, a certain modification of the probabilistic placement policy of  is necessary. As in , we also start with continuous memory intervals of unit length, and then place all files of all quality levels one by one to fill the unit-length intervals with every . The main difference from the approach of  is that the file of quality level  occupies a vertical size of . Accordingly, we need to impose the following constraints:
The constraint (2) is obvious, and the constraint (1) is necessary and sufficient for the existence of a random file placement policy requiring no more storage than . The sufficiency of (1) is proven by obtaining a caching policy requiring no more storage than  in the following sections (see Table V). The necessity of (1) can also be proven by establishing that the left-hand side of (1) equals the expected required memory size of the caching device, similar to Fact 1 in . In addition, if a device caches file  of quality , the same file at another quality level, say , should preferably not be cached in the same device. However, preventing copies of the same file with different qualities from being cached on one device is not necessary for obtaining the caching policy.
Fig. 2 gives an example of the probabilistic caching method with files of different sizes, where , , , , , and . This example satisfies the equality of (1). As mentioned before, there are three kinds of blocks with vertical sizes , , and  for each file type. After obtaining , we have to build a rectangle consisting of element rectangles with heights of  and widths of  for all  and , as shown in Fig. 2. Each element rectangle corresponds to the file of a certain quality. The cache-enabled device uniformly generates a random number within  and draws a vertical line there. Finally, the device stores the files which the vertical line goes through. In the example of Fig. 2, assuming the vertical line is drawn at 0.38, the device stores File 2 of quality level 3, File 1 of quality level 1, and File 5 of quality level 1.
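The construction above can be sketched in Python. The shelf-style packing below is a simplified, not necessarily storage-optimal arrangement, and the block names and tolerances are our own; it only illustrates how drawing one uniform vertical line caches each file-quality block with probability equal to its width.

```python
import random

def pack_blocks(blocks, capacity):
    """Greedily pack (name, width, height) blocks into a strip of width 1
    and total height <= capacity; width is the caching probability q and
    height is the normalized file size s. A block may wrap onto the next
    shelf, so each one becomes one or more (name, x_start, x_end) pieces."""
    placements = []
    x, y, shelf_height = 0.0, 0.0, 0.0
    for name, width, height in sorted(blocks, key=lambda b: -b[2]):
        remaining = width
        while remaining > 1e-12:
            if x >= 1.0 - 1e-12:            # current shelf is full
                y += shelf_height
                x, shelf_height = 0.0, 0.0
            if shelf_height == 0.0:         # first block fixes the shelf height
                shelf_height = height
                if y + shelf_height > capacity + 1e-9:
                    raise ValueError("blocks do not fit into the storage")
            piece = min(remaining, 1.0 - x)
            placements.append((name, x, x + piece))
            x += piece
            remaining -= piece
    return placements

def sample_cache(placements, rng=random):
    """Draw one vertical line x ~ U[0, 1) and cache every block it crosses."""
    x = rng.random()
    return {name for name, x0, x1 in placements if x0 <= x < x1}
```

Because the pieces of one block never overlap in x (a width never exceeds 1), the marginal caching probability of each block equals its total width.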
Remark: Caching different-quality/different-size files can make storage use inefficient in some system environments. For example, consider the case of , , , and . Here, a device cannot store two files of quality levels 2 and 3. The only possible combinations are caching one file of quality 2 and three files of quality 1, or caching one file of quality 3 and two files of quality 1. In this scenario, it is highly likely that the placements of rectangles with heights of  and widths of  do not fit perfectly into the  rectangle, which indicates that storage is not being used efficiently. Therefore, the file sizes of the different qualities and the maximum storage size of the device should be carefully considered for efficient caching. However, this does not mean that the proposed constraints (1) and (2) fail to yield a random file placement policy with different qualities.
III-B Optimal File Placement Rule
There still remains an important question: how do we find the optimal ? Since we assume that multiple D2D links do not interfere with one another, we can refer to the file placement rule for the noise-limited network . The differences here are the constraint (1) of the probabilistic caching method and the optimization metric used. The method of  maximizes the average file delivery success probability, but since we are concerned with video quality, the successfully enjoyable video quality sum is chosen as the performance metric. The successfully enjoyable video quality sum is defined as
where is the measure of the quality , is the data rate of the user to download file of quality , and is the threshold of the data rate for reliable transmission of file of quality . Denoting the Rayleigh fading channel from the user to the associated device for downloading file of quality by , the data rate of the user for downloading file of quality in the noise-limited environment is given by
where  is the bandwidth, assuming a unit transmit power and a normalized noise variance of . If the user desires the file of quality  and there are multiple candidate devices caching file  of quality , it is reasonable for the user to download the file from the device whose channel condition is the strongest among the candidates.
Since the channel power  follows the chi-squared distribution, i.e., the Nakagami-1 fading channel, according to , the reliable transmission probability can be obtained by
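Under these assumptions (unit transmit power, unit noise variance, and Rayleigh fading, so that the channel power is exponentially distributed), the reliable transmission probability has a simple closed form that can be checked by Monte Carlo; the symbol names below are our own.

```python
import math, random

def success_prob_closed_form(dist, alpha, bandwidth, rate_threshold):
    """P(W*log2(1 + |h|^2 * d^-alpha) >= rho) under Rayleigh fading with
    unit transmit power and unit noise: exp(-(2^(rho/W) - 1) * d^alpha)."""
    gap = 2.0 ** (rate_threshold / bandwidth) - 1.0
    return math.exp(-gap * dist ** alpha)

def success_prob_monte_carlo(dist, alpha, bandwidth, rate_threshold,
                             trials=200000, seed=0):
    """Empirical check: draw |h|^2 ~ Exp(1) and count threshold crossings."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        power = rng.expovariate(1.0)        # channel power under Rayleigh fading
        rate = bandwidth * math.log2(1.0 + power * dist ** (-alpha))
        hits += rate >= rate_threshold
    return hits / trials
```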
Thus, we can formulate the optimization problem to find the optimal caching probabilities:
The Lagrangian function of the objective (7) is given by
and the derivative of (10) with respect to  is
where  and  are the nonnegative Lagrange multipliers. Then, the Karush-Kuhn-Tucker (KKT) conditions for the optimization problem (7) are given by
From (12), we can obtain the optimal caching probabilities:
We can easily see from (15) that the better the quality of the video file, the higher the probability that it is stored in the device. On the other hand, the larger file size of a higher-quality video makes its caching probability smaller and decreases video diversity. Thus, the trade-off between video quality and video diversity is observed in (15). This trade-off depends on the constant  and on the Lagrange multipliers  and .
The next step is to find the Lagrange multipliers. We can determine the intervals of the Lagrange multipliers by categorizing the caching probability value into three cases. First, when ,  because of (14). To satisfy (12), , but this is impossible because , , and  differ for  and . We can set  to guarantee . Therefore,
When , also, and is obtained for (12). Therefore,
Finally, when , if , , otherwise, , according to (15). To satisfy ,
However, if , , and
Thus, if ,
Assuming that , since  is decreasing with , we can find the optimal  and  by the bisection method. The details of the bisection method for optimal file placement are given in Algorithm 2.
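The bisection step can be sketched as follows. Since we are not reproducing the exact closed form of (15), the probability expression inside `probs()` is a hypothetical stand-in; any caching probability that is continuous and decreasing in the multiplier is found the same way, by bisecting until the storage constraint is met with equality.

```python
def optimal_caching_probs(weights, sizes, capacity, tol=1e-10):
    """Bisection on a Lagrange multiplier nu, assuming caching probabilities
    of the illustrative form q_l(nu) = clip(w_l/(nu*s_l) - 1, 0, 1); this is
    a stand-in for the closed form (15), not the paper's exact expression."""
    def probs(nu):
        return [min(1.0, max(0.0, w / (nu * s) - 1.0))
                for w, s in zip(weights, sizes)]
    def used(nu):                         # storage consumed at multiplier nu
        return sum(s * q for s, q in zip(sizes, probs(nu)))
    if sum(sizes) <= capacity:            # storage can hold everything
        return [1.0] * len(sizes)
    lo = tol                              # used(lo) = sum(sizes) > capacity
    hi = max(w / s for w, s in zip(weights, sizes))   # used(hi) == 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if used(mid) > capacity:          # still over budget: increase nu
            lo = mid
        else:
            hi = mid
    return probs(hi)
```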
IV Node Association Maximizing Video Quality with Playback Delay Constraint
The node association problem in this paper amounts to choosing the cache-enabled devices from which users request video files. After forming the candidate set of devices caching the requested file, the user has to choose a specific device as well as the quality level. This paper proposes a dynamic algorithm for users to associate with cache-enabled devices so as to maximize the time-averaged video quality measures under a playback delay constraint. The improvement in video playback latency can be explained based on the user queue model.
IV-A User Queue Model
A video file consists of many sequential chunks. User terminals receive video files from cache-enabled devices and process the data for video streaming services in units of chunks. Each chunk of a file is responsible for a portion of the playback time of the entire stream. As long as all chunks are in the correct sequence, each chunk can have a different quality in dynamic streaming. Therefore, users can dynamically choose video quality levels at every chunk processing time. In the queue model, playback delay occurs when the chunk to be played has not yet arrived at the queue. In this sense, the receiver queue dynamics collectively reflect the various factors which cause playback delay.
In general, user queue models have their own arrival and departure processes. For each user , the queue dynamics in each time slot can be represented as follows:
where , , and  stand for the queue backlog, the arrival process, and the departure process of user  at time , respectively. The queue states are updated and every user performs node association in each unit time slot . In this paper, the interval of each slot is set to the channel coherence time, . We suppose a block fading channel whose gain is static during the processing of multiple chunks, , where  is a chunk processing time and  is a positive integer.
In this paper, the queue backlog  counts the number of video chunks in the queue, and  and  are the numbers of received and processed chunks, respectively. Simply,  chunks are processed in each time slot, so . On the other hand,  obviously depends on the chunk size and the data rate of the communication link between user  and its associated device. The departure and arrival processes are given as follows:
where  denotes the cache-enabled device associated with user  at time , and  is the quality level of file  which user  requests from device . Also,  and  indicate the data rate of the D2D link between user  and device , and the chunk size of file  at the desired quality at time , respectively. Some video chunks may be only partially delivered as the channel condition varies and node association is updated at every time slot . Since a partial chunk transmission is useless in our algorithm, the floor operation is used in (24).
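One slot of this chunk-counting queue can be sketched as follows; the argument names are ours, and the floor implements the discarding of partially delivered chunks in (24).

```python
import math

def queue_update(backlog, link_rate, slot_time, chunk_size, chunks_played):
    """One slot of the receiver queue: departures are the chunks played
    back in the slot, arrivals are the whole chunks delivered over the
    link; partial chunks are dropped by the floor, as in (24)."""
    arrivals = math.floor(link_rate * slot_time / chunk_size)
    return max(backlog - chunks_played, 0) + arrivals
```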
Let the Rayleigh fading channel between user  and device  be denoted by . Then, the link rate between user  and device  is simply given by
For video streaming services, it is important to avoid playback delay. The user needs the next chunk in sequence during video playback; if that chunk has not yet arrived in the queue, playback is delayed. Therefore, stacking enough queue backlog, i.e., video chunks in sequence, is necessary to avert playback delay. Suppose that the queue is almost empty. In this case, a cache-enabled device whose channel is strong and which stores the requested file at low quality (i.e., small chunk size) is preferable for the user. On the other hand, when the queue holds many video chunks, the user can request the high-quality video file without worrying about playback delay.
Remark: If  is too long, it is better to update node associations more frequently than the channel varies. For example, consider a user whose queue is filled with many chunks and who is thus associated with the caching device delivering chunks smaller than . If this situation persists for a long time, the chunks in the queue will soon be emptied out and playback delay will occur; therefore, several updates of node association are required over the time interval . On the other hand, if  is too short, the requested video cannot be successfully delivered even when the data rate of the link is good, because of the floor operation in (24). Therefore, block fading is assumed with  large enough for the users to receive the video chunks.
IV-B Node Association Cases
Depending on the geographical locations of cache-enabled devices, the node association of a certain user with an appropriate cache-enabled device can be classified into a number of cases, illustrated in Fig. 3. Fig. 3 assumes that each user requests a video file from one of the cache-enabled devices within radius , and that there are quality levels 1, 2, and 3. Only the devices which cache the requested file are depicted in Fig. 3. The quality levels of requested videos in cache-enabled devices are written as , indicating that the device caches the video of quality level  requested by user . In particular, the devices which receive multiple file delivery requests are shown as shaded squares. The proposed dynamic algorithm for node association applies to Cases 4, 5, 6, and 7.
Case 1: When there is no device caching the requested file within radius  of the user, the user should download the video file from the BS via a cellular link. (user 2)
Case 2: When there is only one device caching the requested file within radius  of the user, the user simply downloads the video from this device, but only the fixed cached quality can be provided. If the user wants a higher-quality file, it could download the file from the BS, but this option is not considered in this paper. (users 3, 8)
Case 3: When there are multiple devices caching the requested file at the same quality within radius  of the user, the user requests the video from the one whose channel is the strongest. (user 9) As in Case 2, only the fixed quality can be provided.
Case 4: When there are multiple devices caching the requested file at different quality levels within radius  of the user, the proposed dynamic algorithm can be applied for node association. The proposed algorithm maximizes the expected video quality subject to a sufficiently large queue backlog to avoid playback delay. (user 1)
Case 5: When the cache-enabled device receives two or more file delivery requests including the target user's, i.e., request collision occurs, the device can serve multiple users by NOMA. (users 4, 7) The proposed algorithm determines whether to exploit NOMA.
Case 6: When the cache-enabled device receives two or more file delivery requests including the target user's, the proposed algorithm makes the device schedule the target user and ignore the other requests. (user 6)
Case 7: When the cache-enabled device receives two or more file delivery requests including the target user's, the proposed algorithm lets the device schedule another user and ignore the request of the target user. Then, the target user should find another cache-enabled device, and if no other device stores the requested file within radius , it has to download the file from the BS. (user 5)
IV-C Dynamic Node Association for Video File Delivery under Queue Stability
Specifically, we target the following optimization problem:
where  is the set of file-requesting users via D2D links, and . The optimization metric (27) is the sum of the time-averaged video quality measures of the file-requesting users, given by
Here, is introduced to make large enough to avoid playback delay, and is a sufficiently large parameter which affects the maximal queue backlog. From (22) and (23), the queue dynamics of can be represented as follows:
Even though the update rules of  and  are different, both queue dynamics describe the same video chunk processing. Therefore, playback delay due to the emptiness of  can be explained by the queuing delay of . By Little's theorem , the expected value of  is proportional to the time-averaged queuing delay. Therefore, we limit the queuing delay by imposing the constraint (28), and it is well known that Lyapunov optimization with the constraint (28) can keep  bounded.
Let  denote the column vector of  of all users at time , and define the quadratic Lyapunov function as follows:
Then, let  be the conditional quadratic Lyapunov function formulated as , i.e., the drift on . The dynamic policy is designed to solve the given optimization problem (27) by observing the current queue state, , and determining the node association to minimize an upper bound on the drift-plus-penalty :
First, we find the upper bound on the change in the Lyapunov function.
Then, the upper bound on the conditional Lyapunov drift is obtained as
According to (33), minimizing a bound on drift-plus-penalty is consistent with minimizing
We now use the concept of opportunistically minimizing the expectations; thus, (40) is minimized by an algorithm which observes the current queue state (i.e., given ) and chooses  for all  to minimize
where , and  is replaced by  to emphasize the decision parameter of .
From (41), we can anticipate how the algorithm works. When the queue of user  is almost empty, large arrivals are necessary so that user  does not have to wait for the next video chunk. In this case, user  prefers the device which provides many arrivals. On the other hand, when the queue backlog is stacked high enough to avoid playback delay, i.e., , user  requests the video of high quality without worrying about playback latency.
The system parameter  in (41) is a weight factor for the term representing the video quality measure. The value of  relative to  is important for controlling the queue backlogs and quality measures at every time slot. An appropriate initial value of  needs to be obtained by experiment because it depends on the distribution of the cache-enabled devices, the channel environments, and the queue backlog threshold . Also,  should be satisfied. If , users prefer low-quality videos even when many video chunks have already arrived at the user queue. Moreover, in the case of , the user only aims at stacking queue backlogs without any consideration of video quality. On the other hand, when , users do not consider the queue state, and thus they simply request the highest-quality files. Hence,  can be regarded as the parameter controlling the trade-off between video quality and playback delay.
Since streaming users cannot know other users' channel gains, each user independently finds the cache-enabled device which stores the video file of the desired quality. Therefore, (41) is treated separately, and each user minimizes its own objective function:
Since there is a finite number of cache-enabled devices within the radius  of each user, each user can easily find the device for video delivery, i.e., determine , by greedy search.
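The per-user greedy search can be sketched as follows. The score is a hypothetical stand-in for the objective (42), built from the two behaviors described above: below the backlog threshold the chunk-arrival term dominates, and above it the V-weighted quality term does; all names and the exact score form are our own.

```python
import math

def choose_device(queue_backlog, queue_threshold, penalty_v, candidates,
                  slot_time):
    """Greedy search over candidate devices, each summarized as
    (device_id, link_rate, chunk_size, quality_measure). The score is a
    stand-in for the per-user objective (42): queue pressure rewards chunk
    arrivals when the backlog is below the threshold, while the V-weighted
    term rewards video quality."""
    best, best_score = None, math.inf
    for device_id, rate, chunk_size, quality in candidates:
        arrivals = math.floor(rate * slot_time / chunk_size)
        score = -(queue_threshold - queue_backlog) * arrivals - penalty_v * quality
        if score < best_score:
            best, best_score = device_id, score
    return best
```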
However, if two or more users simultaneously request files from the same cache-enabled device, the objective functions of those users are no longer independent. The reason is that the users' data rates in (26) are obtained for one-to-one communication, and a device which receives multiple file requests cannot provide the rate of (26) to all file-requesting users. We call this situation request collision. Since a cache-enabled device experiencing request collision can receive channel information from all of its file-requesting users, the device should resolve the collision by jointly minimizing the sum of those users' objective functions.
Assume that there are  user sets , whose element users request files from the same device. Note that  is the same for all . Let . Then, (41) can be rewritten as
The first term of (43) is separable, so each user just minimizes its own objective function of (42). Likewise, the summations over users for different are also separable, so we can independently minimize
for every . However, the terms of the summation over a certain user set  are not independent, so additional steps are necessary to handle request collisions. There are two solutions: 1) scheduling the one user that minimizes the objective function (44), and 2) NOMA, to respond to the multiple requests simultaneously.
IV-D Approaches Against Request Collision
IV-D1 Scheduling of One User Minimizing Objective Function
In this approach, the cache-enabled device at which the request collision occurs simply schedules one of the file-requesting users for video delivery, by minimizing the value of (44). After only one user is scheduled, each of the other users separately finds another cache-enabled device within its radius. For these choices, the users follow the steps of Section IV-C, excluding the cache-enabled device chosen at first. If there is no device for video delivery except for that device, the user should request the file from the BS.
Then, the caching device at which the request collision occurs computes the metric of (44) for every candidate scheduled user and finds the one giving the minimum value. Thus, the choice of the scheduled user is obtained by (45), and the resulting minimum value is denoted as in (46).
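The scheduling step can be sketched as picking the user with the minimum metric of (44) and letting the rest fall back; the metric values below are placeholders, not computed from the paper's formulas.

```python
# Schedule the single user minimizing the metric of (44) at the
# colliding device; the remaining users must search for another
# in-range device (or fall back to the BS). Metrics are placeholders.

def schedule_one_user(metrics):
    """metrics: dict user -> value of (44) if that user is scheduled."""
    scheduled = min(metrics, key=metrics.get)   # the choice of (45)
    min_value = metrics[scheduled]              # the minimum value of (46)
    unscheduled = [u for u in metrics if u != scheduled]
    return scheduled, min_value, unscheduled

sched, val, rest = schedule_one_user({"u1": 2.5, "u2": 1.1, "u3": 3.0})
print(sched, val, rest)
```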
Unfortunately, scheduling of one user can conflict with the noise-limited constraint, which does not allow an additional D2D link within the radius of a streaming user whose D2D link is already constructed. If there are large overlaps among the users' coverage areas, the cache-enabled device newly found by an unscheduled user could be in the coverage of the scheduled user. If so, this newly found link cannot be activated, and the unscheduled user should find another device again or directly receive the file from the BS. Furthermore, when the intensity of cache-enabled devices is small, i.e., the devices are sparsely located, it is likely that the unscheduled users cannot find another neighboring device. To combat these problems, NOMA is proposed to handle the multiple requests simultaneously. Since receiving the file from a neighboring device is much more advantageous in terms of transmission latency than downloading from the BS via a cellular link, NOMA would be preferred in the above cases.
The cache-enabled device can respond to multiple file requests simultaneously by employing NOMA. Although the NOMA signals transmitted to the users interfere with each other, an advanced receiver, e.g., one performing successive interference cancellation (SIC), can successfully remove the interference. However, since multiple users are served within the same resource in NOMA, degradations of data rates are inevitable. Therefore, NOMA is useful when the system prefers to guarantee reduced transmission latency at the expense of data-rate degradation.
When the cache-enabled device utilizes power-multiplexing NOMA, different power ratios are weighted on the signals of the users. Larger power is usually allocated to the user which experiences the weaker channel condition, so the power allocation ratios for the file-requesting users are ordered inversely to their channel gains. The data rate of each user in the NOMA system is given by (47).
The data rate of (47) can be obtained by performing SIC on the signals of the users with weaker channels than the intended user. In this case, as the number of file-requesting users increases, the data rates of all of them are significantly degraded. The objective function for the users is accordingly changed as in (48), where the rate of (47) is substituted for the one-to-one rate in (42).
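The SIC-based rate computation can be illustrated with the standard downlink power-domain NOMA rate expression; this is a sketch used as a stand-in for (47), not necessarily the paper's exact formula, and the gains, power ratios, and SNR are illustrative.

```python
import math

# Standard downlink power-domain NOMA rates with SIC (a stand-in for
# (47)): users are sorted by channel gain ascending, power ratios are
# allocated descending (the weakest channel gets the most power), and
# each user cancels the larger-power signals of weaker-channel users,
# leaving only the smaller-power signals as residual interference.

def noma_rates(gains, power_ratios, snr):
    """gains sorted ascending, power_ratios descending, summing to 1."""
    rates = []
    for i in range(len(gains)):
        # residual interference: smaller-power signals of stronger users
        interference = gains[i] * snr * sum(power_ratios[i + 1:])
        sinr = power_ratios[i] * gains[i] * snr / (interference + 1.0)
        rates.append(math.log2(1.0 + sinr))
    return rates

gains = [0.2, 1.0]    # weak user first
powers = [0.8, 0.2]   # more power to the weak user
rates = noma_rates(gains, powers, snr=100.0)
print(rates)
```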
Finally, we decide which approach better handles the request collision for each user set by comparing the minimum objective value of the scheduling approach in (46) with the objective value of NOMA in (48). If the former is smaller, scheduling of one user is better than NOMA; otherwise, NOMA is preferred.
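The final comparison reduces to picking the smaller objective value per colliding user set; the values below are placeholders for (46) and (48).

```python
# For each user set with a request collision, compare the minimum
# scheduling objective (46) against the NOMA objective (48) and pick
# the approach with the smaller value. Values are placeholders.

def resolve_collisions(sched_values, noma_values):
    """Both dicts map a colliding user-set id to its objective value."""
    return {
        group: ("scheduling" if sched_values[group] < noma_values[group]
                else "NOMA")
        for group in sched_values
    }

decision = resolve_collisions({"d1": 1.1, "d2": 4.0}, {"d1": 2.0, "d2": 3.5})
print(decision)
```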
V Performance Evaluation
In this section, we show that the proposed algorithms for file placement and node association work well with video files of different quality levels. PSNR is considered as the video quality measure, and the quality measures and file sizes depending on the quality levels are given in dB and kbits, respectively. Especially for finding the optimal caching probabilities, the approximately normalized file sizes are used.
| 1 | 0.2222 (0.2438, 0.1972) | 0.2183 (0.2399, 0.1932) | 0.2126 (0.2474, 0.1689) |
| 2 | 0.1904 (0.2120, 0.1653) | 0.1865 (0.2080, 0.1614) | 0.1807 (0.2155, 0.1371) |
| 3 | 0.1717 (0.1933, 0.1467) | 0.1678 (0.1894, 0.1428) | 0.1621 (0.1969, 0.1185) |
| 4 | 0.1585 (0.1801, 0.1335) | 0.1546 (0.1762, 0.1296) | 0.1489 (0.1837, 0.1052) |
| 5 | 0.1483 (0.1699, 0.1233) | 0.1444 (0.1660, 0.1193) | 0.1387 (0.1735, 0.0950) |
V-A Optimal Caching Probabilities and Effects of Storage Size, Device Intensity, and SNR
According to (15), the optimal caching probabilities depend on the storage size, the device intensity, and SNR.
As an example, the optimal caching probabilities at SNR = 20 dB are shown in Table V.
In Table V, the caching probability of the popular and low-quality file is larger than that of the unpopular and high-quality file.
However, the caching probabilities for different quality levels are not very different in this system, which means that the trade-off between video quality and video diversity is not biased toward either side.
Actually, this trade-off depends on the relative values of the quality measures to the file sizes.
If we arbitrarily change the file size of quality level 3 while keeping its quality measure fixed, different caching probabilities are obtained.
In Table V, all the first values in parentheses are for