Delivery-Aware Cooperative Joint Multi-Bitrate Video Caching and Transcoding in 5G

05/18/2018
by   Sepehr Rezvani, et al.
Carleton University
European Union

This paper proposes a two-phase resource allocation framework (RAF) for parallel cooperative joint multi-bitrate video caching and transcoding (CVCT) in heterogeneous virtualized mobile-edge computing (HV-MEC) networks. In the cache placement phase, we propose delivery-aware cache placement strategies (DACPSs) based on the available video popularity distribution (VPD) and channel distribution information (CDI) to exploit the flexible delivery opportunities, i.e., the video transmission and transcoding capabilities. Then, for the delivery phase, we propose a delivery policy for a given caching status, the instantaneous requests of users, and the channel state information (CSI). The optimization problems corresponding to the two phases aim to maximize the total revenue of slices subject to the quality of service contracted between slices and end-users and to the system constraints, each under its own assumptions. Both problems are non-convex and computationally demanding; for each phase, we show how the problem can be solved efficiently. We also design a low-complexity RAF (LC-RAF) in which the complexity of the delivery algorithm is significantly reduced. Extensive numerical assessments demonstrate up to 30% performance improvement of our proposed DACPSs over traditional approaches.



I Introduction

Wireless edge caching has been developed as a candidate solution for next-generation wireless networks, e.g., 5G, to support high-data-rate and/or low-latency multimedia services by proactively storing contents at the edge of wireless networks, thereby offloading scarce and expensive backhaul links [1, 2, 3]. Among the various mobile services, mobile video services and applications are expected to account for a major share of global mobile data traffic in the coming years [4, 5]. For this reason, video caching at the network edge has recently drawn considerable attention [2, 6, 7, 8, 5, 9, 4].

In practical scenarios, due to the multiple bitrate variants of each unique video file, service providers often need to transcode video files into multiple bitrates [7, 8, 6, 9, 5]. To this end, adaptive bit rate (ABR) streaming techniques have been developed to enhance the quality of delivered video in radio access networks (RANs) where each video file is adjusted according to users’ requests based on their display size and network channel conditions [6, 7, 5].

Recently, mobile edge computing (MEC) networks have emerged as a promising technology for next-generation wireless networks, providing cloud caching and computing capabilities within the RAN [8, 10, 11, 12, 13]. Thanks to this paradigm, video files can be prefetched and/or transcoded in close proximity to end-users, leading to substantial latency and backhaul traffic reductions in wireless networks. One problem, however, is that duplicated video caching and transcoding across multiple resource-constrained MEC servers wastes both storage and processing resources. To tackle this issue, cooperative joint multi-bitrate video caching and transcoding (CVCT) has been proposed, where each MEC server can receive requested video files from neighboring MEC servers via fronthaul links [7]. In this architecture, each MEC server is deployed side-by-side with a base station (BS) using generic computing platforms that provide caching and computation capabilities in heterogeneous networks (HetNets) [8, 7, 6, 5]. By sharing both storage and processing resources among multiple MEC servers, more video files can be prefetched within RANs, which increases the cache hit ratio [8, 7]. However, transferring and transcoding video files non-simultaneously wastes additional time and physical resources in the CVCT system, which is detrimental to delay-sensitive services. To cope with this challenge, parallel video transmission and transcoding [9, 14] can be deployed. In the parallel CVCT system, video transcoding runs in parallel with video transmission, and all the multi-hop video transmissions (over backhaul, fronthaul, and wireless access links) also run in parallel.
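The latency benefit of pipelining can be illustrated with a small sketch: when fronthaul transfer, transcoding, and wireless access run segment-by-segment in parallel, the end-to-end latency is governed by the slowest stage rather than the sum of all stages. All rates and sizes below are hypothetical placeholders, not values from the paper.

```python
# Illustrative comparison of sequential vs. parallel (pipelined) serving of a
# multi-hop request: fronthaul transfer, then transcoding, then wireless access.
video_bits = 8e8          # video size in bits (hypothetical)
fronthaul_rate = 4e8      # fronthaul link rate, bit/s (hypothetical)
transcode_rate = 5e8      # transrating speed, bit/s (hypothetical)
access_rate = 2e8         # wireless access rate, bit/s (hypothetical)

stage_delays = [video_bits / r for r in (fronthaul_rate, transcode_rate, access_rate)]

sequential_latency = sum(stage_delays)  # stages run one after another
parallel_latency = max(stage_delays)    # stages pipelined segment-by-segment:
                                        # the bottleneck stage dominates

print(f"sequential: {sequential_latency:.1f} s, parallel: {parallel_latency:.1f} s")
```

Here the pipelined system finishes when the slowest (access) stage does, which is why the delay constraints later in the paper only compare adjacent stages.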

Non-orthogonal multiple access (NOMA) has recently been considered a promising technology for improving the spectral efficiency of 5G wireless networks [15, 16]. Unlike conventional orthogonal multiple access (OMA) techniques, NOMA can significantly improve the system throughput and support massive connectivity by using successive interference cancellation (SIC) at the receivers and superposing multiple messages at the transmitter [15, 16]. The spectral efficiency can be further improved by combining NOMA with multicarrier systems, called multicarrier NOMA (MC-NOMA), which exploits multicarrier diversity [17]. To reduce the capital expenses (CapEx) and operating expenses (OpEx) of RANs, wireless network virtualization has been developed, in which the infrastructure provider's (InP's) resources are abstracted and sliced into a number of virtual networks, also known as network slices. In this technology, InPs lease physical resources to slices according to resource availability and/or the service level agreements between InPs and slices. In turn, each slice acts as a service provider for its own users with specific QoS level agreements [18, 19, 20]. In this paper, the term slice refers to a virtual network unless otherwise indicated. Exploiting MC-NOMA in virtualized networks can further reduce the wireless bandwidth cost of slices by reusing each subcarrier for multiple users owned by each slice. In this way, the integration of the aforementioned technologies enables CVCT at the MEC servers and network infrastructure abstraction for cost-efficient wireless service provisioning.

Transcoding a large number of videos simultaneously at each resource-constrained MEC server poses another challenge for delay-sensitive services [6, 7, 5]. The significant performance gain of the CVCT system can only be achieved when a joint distributed video caching and transcoding strategy is designed [7]. Accordingly, the major question is: which bitrate variant of a video file should be cached, and which should be transcoded to a lower bitrate variant? An efficient joint power and subcarrier allocation design is required to realize the benefits of MC-NOMA in virtualized wireless networks, including the improved throughput. Additionally, the scheduler should be fast enough to readapt the video delivery policy to the arriving requests of users and the channel state information (CSI), especially in realistic ultra-dense 5G wireless networks with a large number of videos. To this end, the video delivery policy needs to be lightweight.

Recently, designing proactive cache placement strategies (CPSs) that take the transmission strategy into account has attracted growing interest, since exploiting physical-layer delivery opportunities leads to more efficient delivery performance [3, 5, 4, 21, 22]. Indeed, unlike conventional baseline popularity/bitrate video CPSs, the performance of a proactive video CPS can be improved by utilizing the delivery opportunities, i.e., by designing a proactive delivery-aware CPS (DACPS). Despite the huge potential, designing an efficient DACPS must address the following challenges: 1) A DACPS should be efficient over the long term of the next delivery phase. The algorithm should cover all delivery moments and remain efficient throughout. Therefore, appropriate methodologies have to be devised to handle the variations of CSI and user requests at different moments of the next delivery phase; in other words, estimation approaches are needed in the proactive DACPS design. A delivery-aware cache refreshment strategy (DACRS) is also useful for tackling unpredicted changes in the network's stochastic information during the delivery phase. 2) A DACPS should efficiently utilize the delivery opportunities. To account for all the transmission and transcoding technologies in the design of an efficient DACPS with a preferable delivery outcome, the available physical resources, such as storage, transmission, and processing, must be optimized jointly with the user association and request scheduling decisions. The output of this optimization is only the video placement.

I-A Related Works

Roughly speaking, research on wireless edge caching can be classified into two categories: 1) delivery performance analysis for different CPSs; and 2) designing CPSs for efficient delivery performance. In the first category, [23] and [24] investigate the benefits of designing joint user association and radio resource allocation for different CPSs. The authors in [23] propose a joint resource allocation and user association algorithm for orthogonal frequency division multiple access (OFDMA)-based, fronthaul-capacity-constrained cloud-RANs (C-RANs) in order to minimize the network delivery cost. In [24], the authors devise a joint resource allocation and user association algorithm for HetNets with device-to-device (D2D) communications to maximize the user revenues. In the second category, much work has investigated the benefits of considering transmission opportunities in the CPS design for different cache-assisted schemes, such as backhaul-limited networks [3, 2, 4, 25, 18], cooperative transmission networks [26, 22], cooperative caching networks [27], joint cooperative caching and transmission networks [21, 1], NOMA-assisted networks [15, 16, 28, 29], and network virtualization [19, 20, 18, 30]. However, none of these works utilizes the benefits of video transcoding.

In the context of cloud-based video transcoding, some research efforts investigate the advantages of cloud computing and devise joint processing resource allocation and scheduling policies to reduce transcoding delays in the delivery phase [31, 9]. In addition, [6, 5, 7] investigate joint multi-bitrate video caching and transcoding using ABR streaming in C-RANs. In [6], a transmission-aware joint multi-bitrate video caching and transcoding policy is devised to maximize the number and quality of concurrent video requests in each time slot in a single-cell scenario. In [5], the benefits of a joint caching and radio resource allocation policy are investigated for a multi-cell MEC network without any cooperation between MEC servers. Additionally, [7] investigates the design of a transcoding-aware cache replacement strategy in the online delivery phase of a non-parallel CVCT system based on the arriving video requests. Accordingly, designing an efficient proactive DACPS for CVCT systems is still an open problem. Furthermore, the parallel transmission and transcoding capability, which avoids wasting time and physical resources, is not applied to the CVCT system in [7]. Besides, prior works do not exploit joint physical resource allocation for designing an efficient DACPS. In addition, to the best of our knowledge, the impact of applying MC-NOMA in virtualized wireless networks in terms of bandwidth cost reduction has not yet been addressed in the related works. In this paper, we address all of these challenges.

I-B Our Contributions

In this paper, we consider a parallel CVCT system in an MC-NOMA-assisted heterogeneous virtualized MEC (HV-MEC) network. This network consists of multiple remote radio systems (RRSs), each equipped with a BS and a MEC server that enables the CVCT capability at the network edge. For this setup, we propose a virtualization model with a pricing scheme in which the network slices are isolated based on their QoS level agreements. In contrast to [7], whose main goal was only to decrease the network cost, we aim to maximize the slice revenues by jointly increasing slice incomes (obtained by providing access data rates to subscribed users) and decreasing slice costs. In this system, allocating more radio resources can increase the user data rates; moreover, increasing the users' data rates requires more processing and/or fronthaul/backhaul resources to avoid wasting resources in the parallel system. However, allocating more radio and physical resources increases the network cost, which degrades slice revenues. Accordingly, the inherent trade-off between slice incomes and costs must be handled carefully.

To address this, we propose a resource allocation framework (RAF) in which the network operational time is divided into two phases: a cache placement phase (Phase 1) and a delivery phase (Phase 2). To the best of our knowledge, this paper is the first in the literature to propose efficient DACPSs in a parallel CVCT system for an MC-NOMA-assisted HV-MEC network based on the available stochastic wireless channel distribution information (CDI) and video popularity distribution (VPD). This novel strategy jointly optimizes the available physical resources, such as storage, processing, and transmission (transmit power of RRSs, subcarriers, and backhaul and fronthaul capacities), with user association and request scheduling to maximize the slice revenues while avoiding over-utilization of the available network resources.

Since the benefits of each component of the CVCT system have not been well evaluated numerically in the literature, we investigate via simulation the performance gain of each of the caching, transcoding, and cooperation technologies when it is added to a baseline system in which the RRSs have no caching, transcoding, or cooperation capabilities. Some of these performance gains are summarized below (with our proposed DACPS adopted for the system):

  • Non-cooperative caching with no transcoding (only caching at RRSs without any transcoding and cooperation capabilities): 459.8% (see Fig. 8(b))

  • Non-cooperative caching and transcoding (only caching and transcoding at RRSs without any cooperation capability): 982.9% (see Fig. 8(b))

  • Cooperative caching with no transcoding (RRSs have storage and can collaborate with each other, but they have no processing capability.): 1076.8% (see Fig. 7(b))

  • Cooperative caching and transcoding (CVCT system): 1740.2% (see Fig. 6(b))

For the above results, the system settings are based on Table III and the simulation environment follows Fig. 4. This work can be considered a benchmark for future HV-MEC networks. Extensive numerical results show that MC-NOMA noticeably improves the system revenue compared to OMA. Moreover, we show that our proposed DACPSs achieve up to 30% performance gain in terms of the total revenue of slices compared to the conventional baseline video popularity/bitrate strategies.

This paper also provides a novel solution for reducing the computational complexity of our delivery algorithm with on-demand and real-time cloud services. We show that our proposed low-complexity RAF (LC-RAF) can be efficiently utilized in dense environments with higher levels of path loss. Last but not least, we propose a DACRS to tackle the dynamic, unpredicted changes of VPD and CDI during the delivery phase. We show that the proposed DACRS can improve the slice revenues compared to our proposed proactive DACPSs, in which the adopted CPS remains fixed through the whole delivery phase.

I-C Paper Organization

The rest of this paper is organized as follows. Section II presents the network architecture and formulates the cache placement and delivery optimization problems. Section III contains the solution of the problems and the proposed LC-RAF. The numerical results are presented in Section IV. Our concluding remarks are provided in Section V. The abbreviations used in the paper are summarized in Table I.

ABR: Adaptive bit rate
BS: Base station
CCNT: Cooperative caching with no transcoding
CDI: Channel distribution information
CPS: Cache placement strategy
CSI: Channel state information
CVCT: Cooperative video caching and transcoding
D.C.: Difference of two concave functions
DACPS: Delivery-aware CPS
DACRS: Delivery-aware cache refreshment strategy
DCP: Disciplined convex programming
FBS: Femto BS
HBV: High-bitrate video
HD: High-diversity
HP-RRS: High-power RRS
HV-MEC: Heterogeneous virtualized MEC
IDCP: Integer disciplined convex programming
INLP: Integer nonlinear programming
InP: Infrastructure provider
IPM: Interior-point method
LC-RAF: Low-complexity RAF
LD: Low-diversity
LP-RRS: Low-power RRS
MBS: Macro BS
MC-NOMA: Multicarrier NOMA
MEC: Mobile edge computing
MINLP: Mixed-integer nonlinear programming
MPV: Most popular video
NC: No caching
NoCoop: Non-cooperative
OMA: Orthogonal multiple access
PSD: Power spectral density
RAF: Resource allocation framework
RAN: Radio access network
RRS: Remote radio system
SCA: Successive convex approximation
SIC: Successive interference cancellation
VPD: Video popularity distribution
Table I: Abbreviations.

II System Model and Problem Formulation

II-A Network Architecture and System Settings

Consider a multiuser HV-MEC network consisting of multiple RRSs, each equipped with one type of access node, e.g., macro BS (MBS) or femto BS (FBS), and a MEC server that enables video caching and transcoding capabilities at the RRS [8, 7]. The set of users and RRSs are denoted by and , respectively. All RRSs are connected by a limited wired fronthaul mesh network, which provides cooperative communication between RRSs [7]. An origin cloud server denoted by is connected to each RRS through a limited wired backhaul link. Here, we assume that a hypervisor enables the virtualization of the network where the radio and physical resources of InP are abstracted into a set of virtual networks, i.e., slices, such that slice owns a subset of users in , i.e., , and is responsible for providing a specific QoS for its own users [20, 18]. We assume that each user is subscribed to only one slice. Fig. 1 shows an illustration of this network.

Figure 1: An example of the multiuser HV-MEC architecture consisting of RRSs, slices, and one InP.

Assume that there exist unique transcodable videos, each having bitrate variants, stored in the origin cloud server with unlimited storage capacity [7, 5, 22]. (Footnote 1: The lowest bitrate of each video type is denoted by and the highest is denoted by .) The video library is denoted by , where belongs to the video type with the bitrate variant and has size . Consider that video can be transcoded to if [8, 7, 6]. Note that if [7].

This network operates in two phases: Phase 1, in which the scheduled video files are proactively stored in the caches of RRSs during off-peak times [7, 6]; and Phase 2, in which the requested videos are sent to the end-users according to the adopted delivery policy [31, 9, 14, 7]. In Phase 1, we aim to design an efficient DACPS based on the available VPD and CDI. Phase 2, which follows Phase 1, is divided into multiple finite time slots; in each time slot, we propose a delivery policy based on the arriving requests of users, the CSI, and the caching status. The proposed RAF is illustrated in Fig. 2.

Figure 2: The proposed two-phase RAF for a parallel CVCT system in HV-MECs.

The main goal of this paper is to propose a CPS that accounts for all the existing delivery opportunities, including the association of users, request scheduling, network channel conditions, user data rates, and the availability and allocation of network transmission and transcoding resources. To design a DACPS for Phase 1 that utilizes the delivery opportunities in the system, we first need to describe the system delivery model and its parameters in Phase 2. Then, we explain Phase 1 and formulate the cache placement optimization problem according to the stochastic model of Phase 2.

II-B Phase 2

In this phase, we assume a time-slotted system in which, at the beginning of each time slot, each user requests one video file. Similar to related works [3, 18, 6, 5, 30, 19, 4, 32, 33, 34], the CSI and requests of users remain fixed through a time slot and are completely independent of other time slots. (Footnote 2: As in previous works, the main motivation for this model is to simplify the video transmission model. Dynamic video requesting and changes of CSI during each data transmission lead to a more complex delivery model, which has not yet been investigated in the joint radio resource allocation and content placement context [3]; we leave this case as future work.) All requests at each time slot should also be served within that time slot [3, 33, 18]. Hence, the adopted delivery policy for each time slot is completely independent of the other time slots, and we therefore focus on a single time slot in Phase 2. The request of user for video is indicated by a binary variable such that if user requests video , and otherwise, . Thus, we have . The binary parameter determines the user association indicator, where if user is associated with RRS , and otherwise, . In this system, we assume that each user can be connected to at most one RRS, which is represented by [3, 18, 30, 6, 7] (Footnote 3: This parallel CVCT system can also be extended to a coordinated multi-point-enabled one in which each user is able to access more than one transmitter. Despite the significant potential, coordinated multi-point increases the complexity of the delivery model; we therefore leave this scheme as future work.)

(1)

The requests of users associated with RRS for video can be served by one of the following binary events denoted by [7]:

  1. represents that video can be sent directly from the cache of RRS .

  2. indicates that video is directly served by RRS after being transcoded from a higher bitrate variant at RRS .

  3. denotes that video is provided from the cache of RRS via a fronthaul link.

  4. represents that video is served by transcoding from at RRS and then sending it to RRS via a fronthaul link.

  5. indicates that video is obtained by sending video from RRS to RRS via a fronthaul link and then transcoding it to at RRS .

  6. represents that the requests of users of RRS for video are served from the origin cloud server via backhaul link .

Fig. 3 shows all possible events that happen to serve requests for each video file at each RRS.

Figure 3: The possible events of serving a request of a user in the CVCT system. We assume that there are two bitrate variants of a video file at 480p and 720p where the user requests the 480p bitrate variant.

To avoid duplicated video provisioning at each RRS, we assume that all the requests for each video file from users associated with RRS can be served by only one type of event [7], i.e.,

(2)

In practical terms, video transcoding can only be performed from a higher bitrate variant of a transcodable video file to a lower one. Accordingly, we have [7, 9]. Note that an event can be chosen only if the required video exists in the target storage [7, 6, 9]. Let be the binary cache placement indicator, where if video is cached by RRS , and otherwise, . Therefore, we have

(3)
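The event-feasibility logic behind (3) can be sketched as a simple cache lookup: a serving event is admissible only if the bitrate variant it needs is present in the target cache, and transcoding is only allowed from a strictly higher cached variant. The cache contents, RRS names, and bitrate indices below are hypothetical illustrations, echoing the 480p/720p example of Fig. 3.

```python
# Hypothetical cache state: RRS name -> set of (video_type, bitrate_index),
# where a larger bitrate_index means a higher-bitrate variant.
cache = {
    "rrs1": {("v1", 2)},   # rrs1 holds the 720p variant of v1
    "rrs2": {("v1", 1)},   # rrs2 holds the 480p variant of v1
}

def can_serve_locally(rrs, video, bitrate):
    """Event-1-style check: the exact requested variant is cached at this RRS."""
    return (video, bitrate) in cache[rrs]

def can_transcode_locally(rrs, video, bitrate):
    """Event-2-style check: a strictly higher variant is cached, so the RRS
    can transcode it down to the requested bitrate."""
    return any(v == video and b > bitrate for (v, b) in cache[rrs])

print(can_serve_locally("rrs1", "v1", 1))      # only 720p cached -> False
print(can_transcode_locally("rrs1", "v1", 1))  # 720p -> 480p transcode -> True
```

The same pattern extends to the cooperative events by checking a neighboring RRS's cache instead of the local one.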

In this parallel CVCT system, the MC-NOMA technology is deployed at each RRS such that the total frequency bandwidth is divided into a set of orthogonal subcarriers, where the frequency band of each subcarrier is . In this scheme, we assume that users aim to download the video files; online video servicing based on the playback rate of videos in HV-MEC networks is left for future work. MC-NOMA allows each orthogonal subcarrier to be shared among multiple users in each RRS by applying superposition coding at the transmitter side (Footnote 4: During the data transmission time of this parallel system, the transmitter is always able to superpose the received bits of multiple video files.) and SIC at the receiver side [35, 36, 17]. The binary subcarrier assignment indicator is denoted by , where if subcarrier is assigned to the channel from RRS to user , and otherwise, . Note that each user can be allocated subcarriers from RRS only if the user is associated with that RRS. Therefore, we should have [3, 18, 5]

(4)

We denote by the transmit power of RRS to user on subcarrier , and by the instantaneous channel power gain between RRS and user on subcarrier . After performing SIC, the instantaneous signal-to-interference-plus-noise ratio (SINR) at user associated with RRS on subcarrier is [35, 17]

(5)

where represents the induced intra-cell interference on user over subcarrier , is the received inter-cell interference at user over subcarrier , and is the additive white Gaussian noise (AWGN) power, in which is the noise power spectral density (PSD). Therefore, the instantaneous data rate at user from RRS on subcarrier is . To apply the SIC technique for MC-NOMA, the following constraint should be satisfied [17]:

(6)

Accordingly, the instantaneous data rate of user assigned to RRS is . Under the assumption that each RRS has a maximum transmit power , we have

(7)

Consequently, the instantaneous latency of user to receive video from RRS can be calculated as .
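The downlink NOMA rate computation described above can be sketched as follows for two users sharing one subcarrier: the user with the stronger channel cancels the weaker user's signal via SIC, while the weaker user treats the stronger user's signal as intra-cell interference, and each rate follows the Shannon formula. All gains, powers, and the noise PSD are hypothetical placeholders, and inter-cell interference is omitted for brevity.

```python
import math

bandwidth_hz = 180e3        # subcarrier bandwidth (hypothetical)
noise_psd = 4e-21           # noise PSD in W/Hz (hypothetical)
noise_power = noise_psd * bandwidth_hz

# Two users on one subcarrier, sorted by channel gain (strongest first),
# with more power allocated to the weaker-channel user, as is usual in NOMA.
users = sorted(
    [{"name": "near", "gain": 1e-6, "power": 0.2},
     {"name": "far",  "gain": 1e-8, "power": 0.8}],
    key=lambda u: u["gain"], reverse=True)

rates = {}
for i, u in enumerate(users):
    # A stronger-channel user decodes and cancels weaker users' signals (SIC),
    # so its residual intra-cell interference comes only from users with
    # stronger channels (none for the strongest user).
    interference = sum(v["power"] for v in users[:i]) * u["gain"]
    sinr = u["power"] * u["gain"] / (interference + noise_power)
    rates[u["name"]] = bandwidth_hz * math.log2(1 + sinr)  # Shannon rate, bit/s
```

The latency of delivering a video over the access link is then simply the video size in bits divided by the achieved rate.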

Transcoding video to , , is performed via the ABR streaming technique, where the transcoding operation is mapped to a -bit computation-intensive task [8, 7]. Let be the number of central processing unit (CPU) cycles required to compute one bit of the computation-intensive task of transcoding video to [10, 11]. (Footnote 5: is referred to as the workload of the task of transcoding video to in the ABR technique.) Each RRS performs all scheduled computation tasks in parallel by efficiently allocating its computation resources [12, 13]. The number of CPU cycles per second allocated to RRS for transcoding video to is denoted by [10, 31, 9]. Let be the maximum processing capacity of RRS . Therefore, the per-RRS maximum processing capacity constraint can be expressed as

(8)

The speed of the transcoding process is given by the video transrating, i.e., the transcoding bit rate: the number of bits transcoded by the processor per second [14]. Therefore, the delay of transcoding video to at RRS can be obtained as [11, 12, 13].
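The transcoding-delay model above reduces to a one-line formula: a video of s bits requires w CPU cycles per bit, so with c allocated cycles per second the delay is w*s/c. The numbers in the sketch are hypothetical.

```python
def transcoding_delay(video_bits, cycles_per_bit, allocated_cycles_per_sec):
    """Delay (seconds) to transcode a video at a MEC server: total required
    CPU cycles divided by the allocated processing rate."""
    return cycles_per_bit * video_bits / allocated_cycles_per_sec

# Hypothetical task: 8e8-bit video, 10 cycles/bit, 4 GHz worth of cycles allocated.
delay = transcoding_delay(video_bits=8e8, cycles_per_bit=10,
                          allocated_cycles_per_sec=4e9)
print(delay)  # 2.0 seconds
```

The per-RRS capacity constraint (8) simply requires that the sum of the allocated cycle rates over all concurrent transcoding tasks not exceed the server's maximum processing capacity.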

For this setup, we consider and as the maximum capacities of backhaul link and of the fronthaul link from RRS to RRS , respectively [32]. We also denote by and the adopted data rates at which RRS receives video from the origin cloud server and from the neighboring RRS , respectively. Hence, the following maximum channel capacity constraints should be satisfied:

(9)

and

(10)

Accordingly, the delays of receiving video at RRS from the origin cloud server and from RRS are represented as and , respectively.

From the isolation perspective in the slicing context, to guarantee the QoS of the users owned by each slice , i.e., , we apply a minimum data rate constraint as (Footnote 6: In this paper, similar to prior works, e.g., [3, 18, 30, 19, 37, 4], it is assumed that the set of active users and their QoS requirements are fixed in Phase 2 and available in Phase 1. Since users subscribe to slices based on their QoS level agreements, the subscribed user set is also fixed in Phase 2 and available in Phase 1.)

(11)

where represents the minimum required access data rate of users in [30]. Satisfying (11) for a non-zero requires allocating at least one subcarrier to each user in . According to (4), this is only possible if each such user is associated with at least one RRS. Therefore, for each non-zero , all users in should be associated with at least one RRS. Accordingly, based on constraints (1), (4), and (11), if , the inequality in (1) becomes an equality for all users in . It is noteworthy that means slice does not guarantee any QoS for its own users; such a slice provides a best-effort service in which the requests of its users are not guaranteed to be served.

The video transcoding process runs in parallel with the video transmission, where the startup delay of each transcoding operation is determined by transcoding the first several segments of a video file [9, 14]; this delay is negligible compared to the corresponding wireless transmission delay. To efficiently allocate physical resources in this parallel system, transmission/transcoding delay constraints must hold for each multi-hop scheduling event. For instance, in Event 2, the delay of transcoding video to at RRS should not be greater than the access latency of user receiving video from RRS [6, 14], i.e.,

(12)

For Event , the delay of fronthaul transmission should not be greater than the access delay [32, 30, 33]. Therefore, we have

(13)

For Event , the delay of transcoding and fronthaul transmission should not be greater than the fronthaul and access delays, respectively. These practical constraints can be represented as

(14)
(15)

For Event , fronthaul and transcoding delays should not be greater than the transcoding and access delays, respectively. Hence, we have

(16)
(17)

Finally, for Event , the backhaul delay should be equal to or less than the access delay for each video transmission. Accordingly, we have

(18)

If all conditions in (12)-(18) hold, the total latency of each user comes from its access delay (wireless transmission delay) [30, 22, 14]. This parallel system prevents the extra fronthaul/backhaul transmission and video transcoding delays in the network.
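The common structure of constraints (12)-(18) can be sketched as a single pipeline rule: in a pipelined chain of stages ordered from the source toward the user, each stage's delay must not exceed the delay of the stage it feeds, so no downstream stage is ever starved and the end-to-end latency equals the final (access) delay. The stage delays below are hypothetical.

```python
def pipeline_feasible(stage_delays):
    """stage_delays: per-stage delays ordered from source toward the user.
    Feasible iff every stage is at least as fast as the stage it feeds,
    i.e., delays are non-decreasing along the chain."""
    return all(d1 <= d2 for d1, d2 in zip(stage_delays, stage_delays[1:]))

# Event-4-style chain: transcode -> fronthaul -> access (hypothetical delays).
print(pipeline_feasible([1.0, 1.5, 2.0]))   # access delay dominates -> feasible
print(pipeline_feasible([2.5, 1.5, 2.0]))   # transcoding starves the fronthaul
```

Under this rule, checking any of the events reduces to comparing adjacent stage delays, which is exactly what (12)-(18) do pairwise.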

Since slices lease the physical resources of the InP, we propose a pricing model to represent this structure. For the access transmission resources, the unit prices of the transmit power and spectrum of RRS are denoted by per Watt and per Hz, respectively [19, 30]. Moreover, the unit prices of the backhaul and fronthaul rates are defined as and per bps [30]. For the storage resources, each slice pays per bit to utilize the memory of RRS [19]. The price of processing resource usage at RRS is likewise defined as per CPU cycle. On the other hand, each slice earns rewards from its own users for providing their access data rates [18, 19, 30]. We define as the reward of slice from each user per unit of received data rate (bit/s), and consider to be an increasing function of , i.e., for , we have . In this scheme, we aim to maximize the revenue of slices, defined as the reward minus the cost of each slice. The reward of each slice is . To define the cost of each slice, we first formulate the cost of provisioning video to RRS caused by one of the scheduling events as

(19)

Furthermore, the cost of the access transmission resource usage for transferring the requested video file to each user follows from the above unit prices. Therefore, the cost of serving a user's request for a video is the sum of the provisioning cost and the access transmission cost, which is paid by the slice serving that user. Hence, the revenue of each slice for serving a video to a user is the corresponding reward minus this cost.
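The per-request revenue decomposition can be illustrated with a toy calculation. All unit prices, the linear reward function, and the argument names below are assumptions for illustration; the paper's actual pricing parameters are symbolic.

```python
# Toy per-request revenue under the pricing model above:
# revenue = reward(rate) - (provisioning cost + access transmission cost).
# All numbers and the linear reward are illustrative assumptions.

def serving_revenue(rate_bps, reward_per_bps,
                    power_w, price_per_watt,
                    bandwidth_hz, price_per_hz,
                    provisioning_cost):
    reward = reward_per_bps * rate_bps                        # slice reward from the user
    access_cost = power_w * price_per_watt + bandwidth_hz * price_per_hz
    return reward - (provisioning_cost + access_cost)         # slice revenue for this request
```

A slice's total revenue is then the sum of such terms over its served users, with provisioning and subcarrier costs counted only once where the model requires it.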

Based on (2), each slice pays the cost of storage, processing, backhaul, and fronthaul resource usage for each video provisioned to each cell only once. Moreover, each slice pays the usage cost of each subcarrier only once in each cell, even when the subcarrier is shared among its users in that cell via the MC-NOMA technology. Accordingly, the revenue of each slice in Phase 2 can be defined as

(20)

In Phase 2, with the objective of maximizing the total delivery revenue of slices under the QoS requirements of users, we jointly optimize the user association, access transmit power and subcarrier allocation, fronthaul and backhaul rate adaptation, processing resource allocation, and request scheduling to achieve an efficient delivery performance. For ease of notation, we collect each set of optimization variables into its corresponding vector. The delivery optimization problem can be formulated as

(21a)
s.t. (1)-(4), (6), (7)-(18),
(21b)
(21c)
(21d)

where (21c) represents that in each cell, each subcarrier can be assigned to at most a predefined number of users [17].

Ii-C Phase 1

In this phase, we aim to design a proactive DACPS by exploiting the delivery opportunities of Phase 2. Note that the users’ requests and the CSI for Phase 2 are not available in this phase. To utilize the delivery model in the DACPS design, we need some stochastic information about video popularity and wireless channel conditions. Similar to [3, 32, 34, 22], we assume that the VPD changes slowly compared to the instantaneous requests of users and remains fixed over a long period of the network’s operation. This parameter can be estimated by the operators by collecting prior sets of user requests [3, 32, 34]. Accordingly, we assume that the VPD is available at the scheduler in Phase 1 and does not change during Phase 2. Similarly, the CDI can be obtained by averaging over the CSIs observed in different time slots of prior Phase 2 periods [3]. Assume that the VPD follows the Zipf distribution with a given Zipf parameter and is the same among all users in the network [3, 22, 27]; hence the popularity of requesting a video is determined by its rank (the videos are randomly sorted by rank). To design a DACPS that covers the whole of Phase 2, we propose an average-based joint cache placement and ergodic resource allocation based on the VPD and CDI. Our proactive DACPS remains valid until the VPD and/or the CDI changes [3].
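The Zipf popularity model used here is standard: the request probability of the video with rank f is proportional to f^(-η), where η is the Zipf parameter. A minimal sketch (function and argument names are ours):

```python
import numpy as np

def zipf_popularity(num_videos, zipf_param):
    """Zipf VPD over videos ranked 1..F: p_f = f^(-eta) / sum_j j^(-eta).
    Larger zipf_param concentrates requests on the top-ranked videos."""
    ranks = np.arange(1, num_videos + 1)
    weights = ranks ** (-float(zipf_param))
    return weights / weights.sum()
```

For example, `zipf_popularity(1000, 0.8)` yields a distribution in which the few top-ranked videos attract most requests, which is exactly what makes proactive edge caching worthwhile.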

In Phase 1, we aim to formulate the stochastic problem of maximizing the total revenue of slices based on the available VPD and CDI so as to achieve an efficient delivery performance in Phase 2. In other words, since the users’ requests and the CSI are not available in Phase 1, (21) should be reformulated based on the VPD and CDI. In this regard, the average (ergodic) data rate of the wireless access link between each user and RRS is obtained by taking the expectation over the channel power gains [3]. This expectation is necessary even in slow-fading scenarios, since the whole of Phase 2 has a much longer time span than Phase 1. In this phase, in contrast to the instantaneous access delays formulated in Subsection II-B, we consider the average access delay for receiving a video from an RRS at a user, obtained from the ergodic access rate. Moreover, in order to apply SIC, the following average SIC constraint should be satisfied (in contrast to Phase 2, where the SIC of MC-NOMA is applied based on the CSI, in Phase 1 we apply SIC based on the available CDI):

(22)
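The ergodic access rate that underlies both the average delay and the average SIC constraint can be estimated by Monte Carlo averaging over the channel distribution. The sketch below assumes a Rayleigh-fading channel (exponentially distributed power gains) and a simple single-user Shannon rate; the fading model and all parameter names are our assumptions, not the paper's.

```python
import numpy as np

def ergodic_rate(bandwidth_hz, tx_power, noise_power,
                 n_samples=100_000, seed=0):
    """Monte Carlo estimate of E[ B * log2(1 + p*g / N0) ], where the
    channel power gain g is drawn from the CDI. Here we assume Rayleigh
    fading, i.e., g ~ Exp(1); any empirical CDI could be substituted."""
    rng = np.random.default_rng(seed)
    gains = rng.exponential(1.0, n_samples)   # sampled channel power gains
    return bandwidth_hz * np.mean(np.log2(1.0 + tx_power * gains / noise_power))
```

Averaging in this way is what lets Phase 1 allocate resources for the whole of Phase 2 without knowing the instantaneous CSI.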

Furthermore, since the requests of users are unknown in Phase 1, constraint (2) should be reformulated based on the VPD, which cannot be done directly. To tackle this challenge and cover all possible situations in Phase 2, we assume that every video must be servable by each RRS through one of the events described in Fig. 3 whenever at least one user is associated with that RRS. Hence, we have

(23)

Although this constraint consumes more physical resources, it covers all possible situations; in other words, it guarantees that the various sets of requests arriving in different time slots of Phase 2 can all be served [7]. With this assumption and the available CDI, constraints (12), (13), (15), (17), and (18) of Phase 2 are reformulated as

(24)

respectively. Let each RRS have a maximum storage capacity. In contrast to Phase 2, in this phase we add a cache size constraint for each RRS as follows:

(25)

In Phase 1, the pricing model presented in Subsection II-B is likewise averaged over both the VPD and the CDI. In this line, the average provisioning cost of a video file at an RRS can be formulated accordingly. Moreover, the average reward of each slice for providing a video file to a user can be obtained from the considered average data rate. Therefore, the average revenue of each slice is

(26)

where the number of distinct videos requested by the users associated with each RRS appears as a function of the user association. The main challenge of (26) is that an exact closed-form representation of this quantity cannot be derived from the available VPD, since the VPD is independent of the user association process. Clearly, it is lower-bounded by the case in which all associated users request the same video and upper-bounded by the case in which every request is distinct. Generally, the diversity of the requested video files determines where it falls between these bounds: if users have more diverse requests, the number of distinct requested videos grows, which degrades the revenue of slices; if users have less diverse requests, i.e., only a few video files are requested among all users, it shrinks. Therefore, to obtain a closed-form representation for the average revenue in (26), we propose two baseline schemes, namely low-diversity (LD) and high-diversity (HD). In the LD scheme, each slice assumes that all of its own users issue the same requests according to the VPD; this scheme thus considers the best requesting situation, which yields the maximum achievable revenue of slices. Conversely, in the HD scheme, each slice assumes that all of its users issue different requests, i.e., the worst requesting situation is considered. In the LD strategy, we accordingly use the upper-bound value of the average revenue of each slice, formulated as

(27)

which is compatible with the LD scheme. For the HD strategy, on the other hand, we consider the lower-bound average revenue of each slice, which is expressed as

(28)

This strategy is also compatible with the HD scheme. In this phase, we design DACPSs under the LD and HD schemes to maximize the total estimated average revenue of slices in each scheme. The cache placement optimization problem in the LD scheme is

(29a)
s.t. (1), (3), (4), (7)-(10), (14), (16), (21b)-(21d), (22)-(25),
(29b)
(29c)

The cache placement optimization problem in the HD scheme is