Cloud-Edge Coordinated Processing: Low-Latency Multicasting Transmission

03/15/2019 ∙ by Shiwen He, et al. ∙ University of Waterloo ∙ Central South University

Recently, edge caching and multicasting have emerged as two promising technologies to support high-data-rate and low-latency delivery in wireless communication networks. In this paper, we design three transmission schemes aiming to minimize the delivery latency for cache-enabled multigroup multicasting networks. In particular, a full caching bulk transmission scheme is first designed as a performance benchmark for the ideal situation where the caching capability of each enhanced remote radio head (eRRH) is sufficiently large to cache all files. For the practical situation where the caching capability of each eRRH is limited, we further design two transmission schemes, namely the partial caching bulk transmission (PCBT) and the partial caching pipelined transmission (PCPT) schemes. In the PCBT scheme, the eRRHs first fetch the uncached requested files from the baseband unit (BBU) and then all requested files are simultaneously transmitted to the users. In the PCPT scheme, the eRRHs first transmit the cached requested files while fetching the uncached requested files from the BBU. Then, the remaining cached requested files and the fetched uncached requested files are simultaneously transmitted to the users. The design goal of the three transmission schemes is to minimize the delivery latency, subject to practical constraints. Efficient algorithms are developed for the low-latency cloud-edge coordinated transmission strategies. Numerical results are provided to evaluate the performance of the proposed transmission schemes and show that the PCPT scheme outperforms the PCBT scheme in terms of the delivery latency criterion.


I. Introduction

Driven by the visions of ultra-high-definition video, intelligent driving, and the Internet of Things, high-data-rate and low-latency delivery have become two key performance indicators of future wireless communication networks [1]. The vast resources available in the cloud radio access server can be leveraged to deliver elastic computing power and storage to support resource-constrained end-user devices [2]. However, this is not suitable for a large set of cloud-based applications, such as delay-sensitive ones, since end devices are in general far away from the central cloud server, i.e., the data center [3, 4]. To overcome these drawbacks, caching popular content at the network edge during off-peak periods has proven to be a powerful technique for realizing low-latency delivery for specific applications, such as real-time online gaming, virtual reality, and ultra-high-definition video streaming, in next-generation communication systems [5, 6, 7]. Consequently, an evolved network architecture, labeled cache-enabled radio access networks (RANs), has emerged to satisfy the demands of ultra-low-latency delivery by migrating computing and caching functions from the cloud to the network edge [8, 7, 9]. In cache-enabled RANs, the cache-enabled radio access nodes, named enhanced remote radio heads (eRRHs), are able to process signals at the network edge and cache files in their local caches [9, 10, 11].

A. Related Works

The main motivation of caching frequently requested content at the network edge is to reduce the burden on the fronthaul links and the delivery latency. Existing studies on cache-enabled RANs are mainly twofold, covering the pre-fetching phase and the delivery phase. The pre-fetching phase studies focus on caching strategies, while accounting for the caching capacity of eRRHs, content popularity, and user distribution [12, 13, 14, 15, 16, 17, 18]. The delivery phase deals with the transmission of requested data under different performance criteria [19, 20, 22, 23, 25, 24, 21].

1) Optimization of content placement: Content placement with a finite cache size is the key issue in caching design, since unplanned caching at the network edge results in more inter-cell interference or higher delivery latency. Therefore, how to effectively cache popular content at the network edge has attracted extensive attention from both academia and industry. The cache placement problem in cache-enabled RANs is investigated in [12], accounting for flexible physical-layer transmission and the diverse content preferences of different users. Hybrid caching together with relay clustering is studied in [13] to strike a balance between the signal cooperation gain achieved by caching the most popular contents and the content diversity gain achieved by caching different contents. In [14, 15, 16, 17, 18], edge caching strategies are investigated for cache-enabled coordinated heterogeneous networks. Probabilistic content placement is designed to maximize the performance of content delivery for a cache-enabled multi-antenna dense small-cell network [14]. Dynamic content caching is studied via stochastic optimization for a hierarchical caching system consisting of original content servers, central cloud units, and base stations (BSs) [15]. The successful transmission probabilities are analyzed for a two-tier large-scale cache-enabled wireless multicasting network [16, 17]. Proactive caching strategies are proposed to reduce the backhaul transmission for large-scale mobile edge networks [18]. Though efficient caching at the network edge can effectively reduce the burden on the fronthaul links, how to effectively design the transmission strategy remains a key problem, especially for content-centric ultra-dense massive-access networks.

2) Optimization of transmission strategy: How to promptly transmit the cached and uncached requested files to users is another key problem for cache-enabled coordinated RANs [19, 20, 21, 22, 23, 24, 25]. Joint optimization of cloud-edge coordinated precoding with different pre-fetching strategies is investigated for cache-enabled sub-6 GHz and millimeter-wave multi-antenna multiuser RANs in [19, 20], respectively. He et al. propose a two-phase transmission scheme to reduce both the burden on the fronthaul links and the delivery latency, with a fixed delay caused by the fronthaul links, for cache-enabled RANs [21]. Tao et al. investigate the joint design of multicast beamforming and eRRH clustering for the delivery phase with fixed pre-fetching to minimize a compound cost including the transmit power and fronthaul cost, subject to predefined delivery rate requirements [22]. Research on the energy efficiency of cache-enabled RANs shows that caching at the BSs can improve the network energy efficiency when power-efficient cache hardware is used [23, 24]. Studies on cache-enabled physical-layer security have shown that caching can reduce the burden on fronthaul links, introduce additional secure degrees of freedom, and enable power-efficient communication [25]. However, maximizing the (minimum) delivery rate or minimizing a compound cost function does not necessarily minimize the delivery latency, especially when some of the requested contents are not cached at the network edge.

B. Contributions and Organization

In general, the delivery latency of a wireless communication system arises at least from the propagation over fronthaul links, the signal processing at the BBU, and the wireless signal transmission. Furthermore, the limited capacity of the fronthaul links is a key factor in determining the delivery latency and is the key motivation for migrating caching and baseband signal processing to the network edge. However, to the best of the authors' knowledge, how to effectively exploit caching and baseband signal processing at the network edge to minimize the delivery latency is an open problem for cache-enabled multi-antenna multigroup multicasting RANs with limited fronthaul capacities. In this paper, we study the minimization of the delivery latency of three different transmission schemes, accounting for the delay caused by fetching the uncached requested files from the BBU and by the signal processing at the BBU, for cache-enabled multi-antenna multigroup multicasting RANs. The main contributions of this paper can be summarized as follows:

  • When the caching capability of each eRRH is sufficiently large to cache all files, a full caching bulk transmission (FCBT) scheme is proposed as a performance benchmark for minimizing the delivery latency;

  • In practice, only a subset of the files can be cached at the network edge due to the limited caching capability of each eRRH. For this case, we first present a partial caching bulk transmission (PCBT) scheme that simultaneously transmits all requested files to the users, and then propose a novel partial caching pipelined transmission (PCPT) scheme to further reduce the delivery latency;

  • Three optimization problems are formulated to minimize the delivery latency, which includes the delay caused by fetching the uncached files from the BBU, the signal processing at the BBU, and the transmission of all requested files to the users, subject to constraints on the fronthaul capacity, the per-eRRH transmit power, and the file size;

  • An efficient algorithm that provably converges to a Karush-Kuhn-Tucker (KKT) solution is designed to solve each of the optimization problems;

  • Numerical results are provided to validate the effectiveness of the proposed methods. Compared to the other non-FCBT transmission schemes, the PCPT scheme achieves a clear performance improvement in terms of delivery latency.

The remainder of this paper is organized as follows. The system model is described in Section II. In Section III, three transmission schemes are formulated. The design of the optimization algorithms is investigated in Section IV. Numerical results are presented in Section V and conclusions are drawn in Section VI.

Notations

Bold lower-case and upper-case letters represent column vectors and matrices, respectively; diag(x) is a diagonal matrix whose diagonal elements are the elements of vector x; 0 and I denote the zero and identity matrices, respectively; tr(·), ‖·‖, and ‖·‖_F denote the trace, the Euclidean norm, and the Frobenius norm, respectively. The circularly symmetric complex Gaussian distribution with mean m and covariance matrix Σ is denoted by CN(m, Σ); X ⪰ 0 indicates that X is a positive semidefinite matrix; [X]_{i,j} represents the element in row i and column j of matrix X, and vec(X) denotes the column vector obtained by stacking the columns of matrix X on top of one another. Superscripts (·)^T, (·)^*, and (·)^H represent the transpose, conjugate, and conjugate transpose operators, respectively. For a set S, |S| denotes its cardinality, while for a complex number x, |x| denotes its magnitude; R and C are the fields of real and complex numbers, respectively. ⌊x⌋ rounds x to the nearest integer not larger than x; the complement of a binary variable a is denoted by 1 − a; and log(·) denotes the logarithm, taken to the natural base consistent with rates measured in nats. The symbols used frequently in this paper are summarized in Table I.


TABLE I: List of important mathematical symbols.
Numbers of users and eRRHs; sets of users and eRRHs
Number of antennas equipped at each eRRH; maximum transmit power of each eRRH
Capacity of the fronthaul link to each eRRH; normalized cache size of each eRRH
Number of multicast groups; the individual multicast groups
Number of files in the library; sets of all files and all requested files
Normalized size of files; index of the file requested by each group
Binary caching variable of each file at each eRRH and its complement
Signal received by each user; channel matrix from each eRRH to each user
Noise variance at each user
Set of beamforming vectors
Quantization noise covariance matrix of each eRRH; set of quantization noise covariance matrices
SINR of each user for the full caching or partial caching case
Achievable rate of each user for the full caching or partial caching case
Beamforming vector for the baseband signal at each eRRH for the full caching case
Beamforming vector for the cached baseband signal at each eRRH for the partial caching case
Beamforming vector for the uncached baseband signal at each eRRH for the partial caching case

II. System Model

Consider the downlink transmission of a cache-enabled multigroup multicasting RAN, as illustrated in Fig. 1, comprising one baseband unit (BBU), multiple eRRHs, and multiple single-antenna users.


Fig. 1: Illustration of downlink transmission of a cache-enabled multigroup multicasting RAN.

In the system, each eRRH is equipped with a cache that can store a given number of nats, determined by its normalized cache size; each eRRH is also equipped with multiple antennas and is connected to the BBU through an error-free fronthaul link with a normalized capacity in nats/Hz/s. The user set is partitioned into multicast groups. Each user independently requests only a single file in a given transmission interval, i.e., each user belongs to at most one multicast group.

Without loss of generality, we assume that all files in the library at the BBU have the same normalized size in nats/Hz. The assumption of equal file sizes is standard and reasonable in that the most frequently requested and cached files are chunks of videos, e.g., fragments of a given duration, which are often segmented with the same length [6]. In general, each eRRH selectively pre-fetches some popular files from the library to its local cache during the off-peak period, according to the content popularity and predefined caching strategies [19, 22]. The cache status of each file at each eRRH is modeled as a binary variable, given by

(1)

The complement of each caching variable is denoted by its binary complement. In this work, we focus on transmission strategies given the cache state information. Let the index set of requested files of all user groups be given, where each index corresponds to the file requested by the users in one group. We denote the respective group index of each user by a positive integer.
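As a concrete illustration of this cache-state bookkeeping, the following sketch tracks the binary caching variables, their complements, and which requested files each eRRH must fetch from the BBU. All array sizes, variable names, and the request pattern are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical cache-state bookkeeping: c[n, f] = 1 if eRRH n caches file f.
N_ERRH, N_FILES = 3, 5
rng = np.random.default_rng(0)
c = rng.integers(0, 2, size=(N_ERRH, N_FILES))    # binary caching variables
c_bar = 1 - c                                     # their complements

# Requested-file indices for the multicast groups (hypothetical request pattern).
requested = [0, 3]                                # group g requests file requested[g]

# For each eRRH, which requested files must be fetched from the BBU?
to_fetch = {n: [f for f in requested if c[n, f] == 0] for n in range(N_ERRH)}
```

Only the uncached requested files (`to_fetch`) ever load the fronthaul links, which is why the caching variables appear in the fronthaul-related terms of the later formulations.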

III. Design of Transmission Schemes

In this section, we consider three transmission schemes according to the caching capabilities of eRRHs and then formulate the corresponding optimization problems for cache-enabled multigroup multicasting RANs. In particular, the design objective is to minimize the delivery latency subject to the constraints on the fronthaul links, eRRH transmit power, and file size.

A. Full Caching Bulk Transmission (FCBT) Scheme

In this subsection, we assume that all files are cached at the local caches of the eRRHs. Consequently, all requested files can be directly retrieved from the local caches and transmitted to the users. (The reason for considering full caching is to provide a benchmark for performance comparison. In practical communication networks, an eRRH cannot cache all requested files even with a sufficiently large caching capacity, due to the diversity of files, the massive number of users, and user mobility.) We refer to this transmission scheme as the full caching bulk transmission (FCBT) scheme, which is similar to the coordinated multipoint (CoMP) mechanism in wireless communication systems [26].

In the FCBT scheme, the users first send their file requests to the eRRHs, and then the eRRHs cooperatively transmit the requested files to the users. Consequently, the received baseband signal at each user can be expressed as

(2)

where , with being the beamforming vector for the users in at eRRH , with denoting the channel coefficient between user and eRRH , represents the baseband signal of requested file for the users in , and denotes the additive white Gaussian noise with . Thus, the signal-to-interference-plus-noise ratio (SINR) of user is calculated as

(3)

Based on the Shannon capacity, the corresponding achievable rate per unit bandwidth of each user is given in nats/Hz/s. Let the delivery rate of each group for the cached-file transmission also be measured in nats/Hz/s. We consider minimizing the delivery latency while providing fairness for all multicast groups. To achieve this twofold goal, the design problem is formulated as

(4a)
(4b)
(4c)

where the optimization variables are the beamforming vectors and the group delivery rates. Constraint (4b) means that the delivery rate of the file requested by each user is constrained by that user's achievable rate. Constraint (4c) is the per-eRRH power constraint. The goal of problem (4) is to minimize the maximum group delivery latency, since the completion of the transmission of the requested files is determined by the worst group.
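The min-max structure of the objective can be made concrete with a small numerical sketch: with equal normalized file size F and per-group delivery rates, the bulk delivery finishes only when the slowest group finishes. The numbers below are illustrative, not from the paper.

```python
# Sketch of the min-max latency objective in problem (4): with equal file size
# F (nats/Hz) and per-group delivery rates r_g (nats/Hz/s), the bulk delivery
# completes when the slowest group completes.
F = 10.0                          # normalized file size (nats/Hz), illustrative
group_rates = [2.0, 5.0, 4.0]     # delivery rates r_g (nats/Hz/s), illustrative

latency = max(F / r for r in group_rates)   # worst-group delivery latency (s)
assert latency == F / min(group_rates)      # dominated by the slowest group
```

This is why the optimization pushes up the minimum group rate: any resource spent on an already-fast group leaves the objective unchanged.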

B. Partial Caching Bulk Transmission Scheme

In practice, each eRRH can only cache a fraction of the files in its local cache due to the limited caching capabilities of the eRRHs. As a result, some requested files may not be cached at the network edge. In this subsection, we explore the problem of minimizing the delivery latency for the case where only a fraction of the requested files is cached at the local caches of the eRRHs. In this case, a common transmission scheme contains three phases [19], as illustrated in Fig. 2, termed the partial caching bulk transmission (PCBT) scheme. In particular, in phase I, the users first send their file requests to the eRRHs. Because only some of the requested files are cached locally, the eRRHs fetch the uncached requested files from the BBU in phase II. After obtaining the uncached requested files, in phase III, the eRRHs cooperatively transmit the cached and uncached requested files to the users.


Fig. 2: Flowchart of the partial caching bulk transmission scheme.

The uncached files in the BBU are quantized and precoded and then delivered to the eRRHs via the fronthaul links. Let denote the precoded signal of the requested files that are not stored at eRRH , which is given by

(5)

where the beamforming vector for each uncached requested file at each eRRH appears. Let the quantized version of the precoded signal at the BBU be formed by adding quantization noise that is independent of the signal and Gaussian distributed. We assume that the quantization noise is independent across the eRRHs, i.e., the signals intended for different eRRHs are quantized independently [27]. The covariance matrix of the quantization noise is defined accordingly.
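A minimal sketch of this fronthaul compression model, assuming (illustratively) a diagonal quantization noise covariance with equal per-antenna power; the dimensions, variances, and variable names are assumptions for illustration only.

```python
import numpy as np

# Sketch of the fronthaul compression model: the BBU quantizes each eRRH's
# precoded signal, modeled as adding independent circularly symmetric Gaussian
# quantization noise with covariance Omega (here a scaled identity).
rng = np.random.default_rng(1)
L = 4                                    # antennas per eRRH (illustrative)
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # precoded signal

omega = 0.05                             # per-antenna quantization noise power
q = np.sqrt(omega / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
x_hat = x + q                            # quantized signal forwarded to the eRRH

Omega = omega * np.eye(L)                # covariance matrix of q (diagonal model)
```

Independent quantization across eRRHs corresponds to drawing each eRRH's noise vector from its own independent distribution, which is what makes the per-eRRH fronthaul rate expression in (11) separable.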

The signal transmitted by eRRH is a superposition of two signals, where one is the locally precoded signal of the cached requested files and the other is the precoded and quantized signal of the uncached requested files stored at the BBU, which is delivered to eRRH via the error-free fronthaul link. Therefore, we have

(6)

where the beamforming vector for each cached requested file at each eRRH appears. The received baseband signal at each user is expressed as

(7)

where , , , and . It is easy to see that . Furthermore, only one of and holds. Thus, the SINR at user is given by

(8)

The corresponding achievable rate on unit bandwidth in one second of user is calculated as in unit of nats/Hz/s.

In general, fetching a requested file from the BBU via the fronthaul link incurs a certain delay due to the propagation over the fronthaul links and the signal processing at the BBU. Such a delay is mainly determined by the worst transfer over the fronthaul links. We define the worst-case delay, in units of seconds, as follows. (If an eRRH has cached all requested files in its local cache, the value in the denominator of the second term in (9) is set to a very large constant.)

(9)

where the first term denotes the constant delay accounting for the routing time and the signal processing at the BBU. The second term in (9) accounts for the worst-case transfer delay over the fronthaul links. Consequently, the delivery latency minimization problem is formulated as follows

(10a)
(10b)
(10c)
(10d)

where the optimization variables are the beamforming vectors, quantization noise covariance matrices, and delivery rates; the delivery rate of each group is in nats/Hz/s, and the rate on the fronthaul link of each eRRH is given by

(11)

Constraint (10b) means that the delivery rate of the file requested by the users in a group is no larger than the achievable rate of each user belonging to that group. Constraint (10c) is the per-eRRH power constraint. Constraint (10d) is the fronthaul capacity constraint ensuring that the signal can be reliably recovered by each eRRH [28, Ch. 3]. Note that when no requested files are cached at the eRRHs, the delivery latency minimization problem can still be formulated as (10). When all requested files are stored at the eRRHs, the PCBT scheme reduces to the FCBT scheme, i.e., problem (10) is equivalent to problem (4).
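The worst-case fetch delay in (9) can be sketched numerically: a constant BBU routing/processing delay plus the slowest fronthaul transfer of the uncached requested data. All names and numbers below are illustrative assumptions, and the amount of uncached data per eRRH is modeled simply as a file count times the file size.

```python
# Sketch of the worst-case fetch delay in (9), under illustrative assumptions.
t0 = 0.01                                  # constant routing/processing delay (s)
F = 10.0                                   # normalized file size (nats/Hz)
fronthaul_capacity = [20.0, 8.0, 15.0]     # per-eRRH fronthaul rate (nats/Hz/s)
n_uncached = [1, 2, 0]                     # uncached requested files per eRRH

transfer = [
    (k * F / C) if k > 0 else 0.0          # fully cached eRRH adds no fetch delay
    for k, C in zip(n_uncached, fronthaul_capacity)
]
D = t0 + max(transfer)                     # worst-case delay over all eRRHs
```

The `max` over eRRHs mirrors the paper's observation that the delay is determined by the worst fronthaul transfer; the zero-transfer branch plays the role of the footnote's "very large constant" in the denominator.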

C. Partial Caching Pipelined Transmission Scheme

In the PCBT scheme, the eRRHs have to wait for the arrival of the uncached requested files before transmitting all requested files to the users, that is, they wait out the fetch delay. In practice, an eRRH is able to receive data from its fronthaul link while transmitting wireless signals. Thus, we design a novel partial caching pipelined transmission (PCPT) scheme that also contains three phases, as shown in Fig. 3. Specifically, in phase I, the users first send their file requests to the eRRHs. After receiving the users' requests, in phase II, according to the caching status of the requested files, the eRRHs transmit the cached requested files to the users while fetching the uncached requested files from the BBU. In phase III, after the arrival of the uncached requested files, the eRRHs transmit the remaining cached requested files and the uncached requested files to the users. Different from the PCBT scheme, the eRRHs do not have to wait for the uncached requested files to arrive before sending the cached requested files.


Fig. 3: Flowchart of partial caching pipelined transmission scheme.

The duration of phase II is the delay given by (9), i.e., the time for fetching the uncached requested files from the BBU, the signal processing at the BBU, etc. In phase II, the eRRHs cooperatively transmit the cached requested files to the users. Thus, the received signal at each user is expressed as in (2). In phase III, after the quantized precoded signals of the uncached requested files arrive at all eRRHs, the remaining cached requested files and the uncached requested files are simultaneously transmitted to all users as in the PCBT scheme. Hence, the received signal at each user is expressed as in (7). Consequently, the delivery latency minimization problem is formulated as follows. (The existing work in [21] maximizes the minimum delivery rate with a fixed delay incurred by the propagation over fronthaul links and the signal processing at the BBU. In contrast, this work aims to minimize the delivery latency and optimizes the delay itself. The problem considered in this paper is therefore more comprehensive than that in [21].)

(12a)
(12b)
(12c)

where the optimization variables are the beamforming vectors, quantization noise covariance matrices, and delivery rates of the two phases. In (12a), the remaining amount of each cached requested file after the phase-II transmission appears. Constraint (12c) imposes that the amount of data transmitted for each file is limited by the file size. Note that in problem (12), when all requested files are locally cached by the eRRHs, the fetch delay vanishes and problem (12) is equivalent to problem (4). When no requested files are stored at the network edge, problem (12) reduces to problem (10). When a requested file is not cached at any eRRH, the constraints corresponding to its group in (4b) and (12c) are redundant.
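A back-of-the-envelope comparison shows why pipelining helps: in the PCBT scheme the whole file transmission starts only after the fetch delay, while in the PCPT scheme the cached data already sent during phase II reduces what remains for phase III. The delay, file size, and rates below are illustrative assumptions for a single group.

```python
# Illustrative single-group latency comparison of PCBT and PCPT.
D, F = 2.0, 10.0        # fetch delay (s) and file size (nats/Hz), illustrative
r2, r3 = 4.0, 4.0       # phase-II and phase-III delivery rates (nats/Hz/s)

pcbt = D + F / r3                         # wait for the fetch, then transmit all
head_start = min(F, r2 * D)               # cached data delivered during phase II
pcpt = D + (F - head_start) / r3          # only the remainder is left for phase III

assert pcpt <= pcbt     # pipelining can only reduce latency for cached files
```

With these numbers the pipelined scheme finishes in 2.5 s versus 4.5 s, consistent with the qualitative claim that PCPT outperforms PCBT when some requested files are cached.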

IV. Design of Optimization Algorithms

In this section, we focus on designing an efficient optimization algorithm to solve the optimization problem of each transmission scheme. The common characteristic of these problems is the non-convex group rate constraints (4b) and (10b), due to the non-convex achievable-rate expressions on their right-hand sides. Problems (10) and (12) are even more challenging due to the non-convex fractional objective function and the fronthaul capacity constraints. Therefore, problems (4), (10), and (12) are non-convex, and it is difficult to obtain their global optima. In what follows, we design heuristic optimization algorithms to solve these problems locally via the penalty dual decomposition (PDD) and successive convex approximation (SCA) methods [30, 31, 32, 33].

A. Solving Problem (4) FCBT Scheme

In this subsection, we focus on solving problem (4). The main obstacles are the non-convexity of the objective function (4a) and of the group rate constraints (4b), which must first be converted into convex forms. By introducing auxiliary variables, problem (4) can be equivalently reformulated as follows

(13a)
(13b)
(13c)
(13d)
(13e)

where the optimization variables include the introduced auxiliary variables. At the optimal point of problem (13), the inequality constraints (13c) and (13d) are active. Although the individual terms in (13c) are convex, the constraint as a whole is still non-convex. To deal with the non-convex constraint, we invoke a result of [34, 35, 36], which shows that if we replace the non-convex term with its lower bound and iteratively solve the resulting problem, judiciously updating the variables until convergence, we obtain a Karush-Kuhn-Tucker (KKT) point of problem (13). To this end, we approximate problem (13) as follows

(14a)
(14b)
(14c)

where the optimization variables again include the auxiliary variables. In (14c), the surrogate is a lower bound of the original function and is defined as

(15)

where denotes the index of iteration, and represent the values of variables and obtained at the -th iteration, respectively.

Next, we turn our attention to the objective function (14a). By introducing an auxiliary variable, we can transform problem (14) equivalently into the following convex form

(16a)
(16b)
(16c)

where the optimization variables again include the auxiliary variables. Note that in constraint (16c), we exploit the positivity of the involved quantities; if they were zero, the delivery latency would be infinite and problem (16) would be meaningless. At each iteration, problem (16) is convex and can be easily solved with a classical optimization solver, such as CVX [29, 37]. The detailed steps are summarized in Algorithm 1, which converges to a KKT solution of problem (13); see Appendix A for the detailed proof.

1:  Set the iteration index and the stopping threshold. Initialize a non-zero beamforming vector such that the per-eRRH power constraint is satisfied;
2:  Compute as follows
(17)
3:  Solve problem (16) to obtain , , , , and , ;
4:  If , stop iteration. Otherwise, set and go to Step 2.
Algorithm 1 Solution of problem (16)
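The iterate-solve-check pattern of Algorithm 1 can be illustrated on a toy scalar problem. This is not the paper's problem: it is a generic SCA/majorization sketch in which the non-convex part (here sin(x)) is replaced at each iterate by a quadratic upper bound, the convex surrogate is solved in closed form, and the loop stops when successive iterates are close, exactly the stopping rule used in Algorithm 1. All functions and constants are illustrative assumptions.

```python
import math

def sca_minimize(x0, tol=1e-8, max_iter=200):
    """Minimize F(x) = x**2/10 + sin(x) by successive convex approximation.

    At iterate xt, sin(x) is majorized by its quadratic upper bound
    sin(xt) + cos(xt)*(x - xt) + (x - xt)**2 / 2 (valid since |sin''| <= 1),
    and the resulting convex surrogate is minimized in closed form:
    d/dx [x**2/10 + cos(xt)*(x - xt) + (x - xt)**2/2] = 0
      =>  x = (xt - cos(xt)) * 5 / 6.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (x - math.cos(x)) * 5.0 / 6.0   # solve the convex surrogate
        if abs(x_new - x) < tol:                # Algorithm 1's stopping rule
            return x_new
        x = x_new
    return x

x_star = sca_minimize(0.0)
# At convergence, x_star is a stationary point of F: x/5 + cos(x) ≈ 0.
```

As in Algorithm 1, each surrogate solve can only decrease the true objective, which together with boundedness yields convergence to a stationary (KKT) point rather than a guaranteed global optimum.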

B. Solving Problem (10) for PCBT Scheme

In this subsection, we investigate the solution of problem (10) and propose an efficient optimization method to solve it. Compared to problem (4), solving problem (10) is more challenging, because there is an additional non-convex fractional term in the objective (10a) as well as the non-convex fronthaul capacity constraints (10d). To overcome these difficulties, we leverage further mathematical tools to transform the non-convex problem (10) into a convex one. Dropping the rank-one constraints on the beamforming matrices, problem (10) can be rewritten as

(18a)
(18b)
(18c)
(18d)
(18e)

where the optimization variables are , , , . In (18b), and are defined respectively by

(19a)
(19b)

In (18c), permutation matrix function is defined as

(20)

Problem (18) is non-convex due to the non-convexity of the objective function (18a) and constraints (18b) and (18e). Consequently, it is difficult to obtain the global optimal solution of problem (18). In what follows, we relax the optimization conditions in order to provide a reasonable design for practical implementation.

The first step in addressing problem (18) is to transform it into a tractable form. By introducing auxiliary variables, problem (18) can be equivalently reformulated as

(21a)
(21b)
(21c)
(21d)

where the optimization variables include the introduced auxiliary variables. Note that the constant in the objective function (21a) is omitted. Problem (21) can be convexified into problem (22) via basic mathematical operations together with the PDD and SCA methods [30, 31, 32, 33]; see Appendix B for the details.

(22a)
(22b)
(22c)
(22d)
(22e)
(22f)

where the optimization variables include the auxiliary variables. In problem (22), a Lagrange multiplier and a scalar penalty parameter appear. This penalty parameter improves the robustness compared to other optimization methods for constrained problems (e.g., the dual ascent method) and, in particular, achieves convergence without requiring specific assumptions on the objective function, such as strict convexity and finiteness [30, 31, 32, 33]. (As in constraint (16c), we exploit the positivity of the involved quantities in constraint (33c); if they were zero, the delivery latency would be infinite and problem (18) would become meaningless.)

When the Lagrange multiplier and penalty parameter are fixed, problem (22) is convex and can be easily solved by a classical optimization solver, such as CVX [29, 37]. Based on this observation, we adopt an alternating optimization method to address problem (22). In particular, we first solve problem (22) with the multiplier and penalty fixed, and then update the Lagrange multiplier and penalty parameter according to the constraint violation [32]. A step-by-step description of solving problem (22) is given in Algorithm 2, where the two loop counters denote iteration indices, a stopping threshold and an approximation stopping threshold control termination, a control parameter governs the updates, and the objective value of problem (22) is tracked across iterations.

1:  Set to be a non-zero value and initialize non-zero beamforming matrix , , , , such that constraints (18c), (18d) and (18e) are satisfied;
2:  Let , initialize and to be a non-zero value;
3:  Let , compute as follows:
(23)
4:  Let . Solve problem (22) to obtain , , , , , , , , ;
5:  If , go to Step 6. Otherwise, compute , , and go to Step 4;
6:  If , stop iteration. Otherwise, go to Step 7;
7:  If , update and as follows
(24a)
(24b)
Otherwise, update and as follows
(25a)
(25b)
8:  Let , , and go to Step 3.
Algorithm 2 Solution of problem (22)

According to [32, Corollary 3.1], Algorithm 2 is guaranteed to converge to a KKT solution of problem (21). In Algorithm 2, Step 4 solves a convex problem, which can be efficiently implemented by the primal-dual interior-point method with known approximate complexity [29]. The overall computational complexity of Algorithm 2 scales with the number of executions of Step 4. Due to the rank relaxation, an optimal solution of problem (22) is not necessarily an optimal solution of problem (18). Therefore, we adopt the method described in Appendix C to obtain a solution of problem (18) from the solution of problem (22). The initialization of Algorithm 2 is performed using the method proposed in Appendix D.
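The dual/penalty update in Steps 6-7 of Algorithm 2 follows the standard PDD pattern: when the constraint violation has shrunk sufficiently, the Lagrange multiplier is updated and the penalty parameter is kept; otherwise the penalty parameter is tightened. The sketch below illustrates that pattern; the function name, the 0.9 violation-reduction test, and the shrink factor are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch of a PDD-style dual/penalty update, under illustrative assumptions.
def pdd_update(lam, rho, violation, prev_violation, shrink=0.8):
    """One outer-loop update: lam is the Lagrange multiplier, rho the penalty
    parameter, and `violation` the current equality-constraint residual."""
    if abs(violation) <= 0.9 * abs(prev_violation):
        lam = lam + violation / rho        # multiplier update, penalty kept
    else:
        rho = shrink * rho                 # violation stagnated: tighten penalty
    return lam, rho

# Violation shrank (0.2 -> 0.05): the multiplier absorbs the residual.
lam1, rho1 = pdd_update(0.0, 1.0, violation=0.05, prev_violation=0.2)
# Violation grew (0.2 -> 0.3): the penalty parameter is tightened instead.
lam2, rho2 = pdd_update(0.0, 1.0, violation=0.3, prev_violation=0.2)
```

Driving the penalty parameter down forces the augmented term to dominate, so feasibility is eventually enforced even when the multiplier updates alone would stall; this is the mechanism behind the convergence guarantee cited from [32].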

C. Solving Problem (12) for PCPT Scheme

In this subsection, we focus on the optimization of problem (12) for the PCPT scheme, under the assumption that some of the requested files are cached at the network edge while the others need to be fetched from the BBU. Compared to problems (4) and (10), solving problem (12) is more challenging due to the pipelined transmission of the requested files. Following a procedure similar to that used for problem (10), problem (12) can be reformulated as

(26a)
(26b)
(26c)
(26d)
(26e)

where the optimization variables are ,