Asynchronous Mobile-Edge Computation Offloading: Energy-Efficient Resource Management

by Changsheng You et al.
The University of Hong Kong

Mobile-edge computation offloading (MECO) is an emerging technology for enhancing mobiles' computation capabilities and prolonging their battery lives, by offloading intensive computation from mobiles to nearby servers such as base stations. In this paper, we study the energy-efficient resource-management policy for the asynchronous MECO system, where the mobiles have heterogeneous input-data arrival time instants and computation deadlines. First, we consider the general case with arbitrary arrival-deadline orders. Based on the monomial energy-consumption model for data transmission, an optimization problem is formulated to minimize the total mobile-energy consumption under the time-sharing and computation-deadline constraints. The optimal resource-management policy for data partitioning (for offloading and local computing) and time division (for transmissions) is shown to be computable by the block coordinate descent method. To gain further insights, we study the optimal resource-management design for two special cases. First, consider the case of identical arrival-deadline orders, i.e., a mobile with input data arriving earlier also needs to complete computation earlier. The optimization problem reduces to two sequential problems corresponding to the optimal scheduling order and the joint data partitioning and time division given the optimal order. It is found that the optimal time-division policy tends to balance the defined effective computing power among offloading mobiles via time sharing. Furthermore, this solution approach is extended to the case of reverse arrival-deadline orders. The corresponding time-division policy is derived by a proposed transformation-and-scheduling approach, which first determines the total offloading duration and data size for each mobile in the transformation phase and then specifies the offloading intervals for each mobile in the scheduling phase.






I Introduction

Realizing the vision of the Internet of Things (IoT) has driven the unprecedented growth of small mobile devices in recent years. This growth stimulates an explosive increase in data/computation traffic generated by a wide range of new applications such as online gaming and video streaming. Such mobiles, however, typically suffer from finite computation capabilities and batteries due to their small form factors and low cost. Tackling these challenges gives rise to an emerging technology, called mobile-edge computation offloading (MECO), which allows computation data to be offloaded from mobiles to proximate servers such as base stations (BSs) and access points (APs), achieving desirable low latency and mobile energy savings [1, 2, 3]. In a typical asynchronous MECO system as shown in Fig. 1, different mobiles generate different amounts of computation data at random time instants and, moreover, have diverse latency requirements depending on their applications. This complicates the multiuser offloading and resource management in MECO systems, which is investigated in this work.

Figure 1: Multiuser asynchronous MECO systems.

I-a Prior Work

Designing efficient MECO systems has attracted extensive attention in recent years. In the pioneering work considering single-user MECO systems [4], the mobile CPU-cycle frequencies and offloading rates were optimized for maximizing the energy savings of local computing and offloading, leading to the optimal binary offloading decision. This work was extended in [5] by powering MECO with wireless energy. In addition, for applications with partitionable data, the performance of energy savings can be further enhanced by partitioning data for local computing and offloading, called partial offloading. A set of partitioning schemes have been proposed, including live prefetching [6], program partitioning [7], and controlling offloading ratio [8].

The offloading design in multiuser MECO systems is more complicated. In particular, one of the main issues is how to jointly allocate radio and computational resources. Most prior work on this topic assumes synchronous MECO, where all the mobiles have identical data-arrival time instants and deadlines. Under this assumption, the resource allocation for minimizing the total mobile-energy consumption was studied in [9] for both time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA) MECO systems, where the derived optimal policy is shown to have a simple threshold-based structure. This framework was extended in [10] to design energy-efficient multiuser MECO accounting for the non-negligible edge-cloud computing latency by using flow-shop scheduling techniques. Further research in this direction considers more complex systems such as multi-cell MECO [11, 12] and wirelessly-powered MECO [13, 14]. On the other hand, another line of research considers partially-synchronous MECO, in which the mobiles share identical data-arrival time instants but may have different computation deadlines. For such systems, a set of offloading scheduling policies have been proposed to minimize the total mobile latency using techniques such as flow-shop queuing theory [15] and joint scheduling and data partitioning [16]. In addition, cooperative computing among mobiles was investigated in the recent work [17, 18, 19, 20] for reducing energy consumption and offloading latency via data partitioning and offloading scheduling techniques. Specifically, peer-to-peer offloading under a given computation deadline was investigated in [19] by using the "string-pulling" approach. (Compared with [19], the current work considers heterogeneous computation deadlines for different mobiles, which is more complex and thus cannot be directly solved using the "string-pulling" approach or traditional data-transmission techniques.)
Note that in the above work, the assumption of synchronous or partially-synchronous MECO is unsuitable for many practical asynchronous MECO systems that consist of mobiles with heterogeneous data-arrival time instants and deadlines. This motivates the current work that studies fully asynchronous MECO systems.

Last, it is worth mentioning that in traditional communication systems without MECO, asynchronous packet transmission with individual latency constraints has been widely studied for designing offline and online scheduling policies [21, 22, 23]. That work, however, focuses only on data transmissions following the first-come-first-serve rule. In contrast, for asynchronous MECO systems, the transmission techniques should be integrated with joint radio-and-computational resource management, local computing, and the intertwined computation and transmission, which is the new theme of this work. (The current work differs from [21, 22, 23] in the problem formulation and transformation, as well as in providing new insights for asynchronous offloading.)

I-B Contributions

To the best of the authors' knowledge, this work is the first attempt at designing the energy-efficient offloading controller for practical asynchronous MECO systems with non-identical task arrivals and deadlines among mobiles. Compared with the synchronous MECO studied in most prior work, the current design eliminates the overhead required for network synchronization and reduces the offloading-and-computation latency. Towards developing a framework for asynchronous offloading design, the main contributions of the work are twofold: 1) characterizing the structure of the optimal policy, which helps simplify offloading-controller design and deepen the understanding of the technology, and 2) proposing an approach for designing practical offloading algorithms by decomposing a complex problem into low-complexity convex sub-problems. The specific technical contributions and findings are summarized as follows.

  • General arrival-deadline orders: Consider the general case with arbitrary orders of data-arrival time instants and deadlines for different mobiles (see Fig. 1). The design of offloading controller is formulated as an optimization problem under the criterion of minimum total mobile-energy consumption and the constraints of time-sharing and deadlines. An iterative solution method is proposed to iteratively optimize data partitioning for individual mobiles and multiuser time divisions. The computation complexity is reduced by analyzing the policy structure for each iteration. The analysis reveals that the optimal data-partitioning policy is characterized by a threshold-based structure. Specifically, each mobile should attempt to increase offloading or reduce it if the computation capacity of the mobile or cloud server becomes a bottleneck as measured using corresponding derived thresholds.

  • Identical arrival-deadline orders: To gain more insights, consider the special case where the data-arrival time instants and deadlines of different mobiles follow the identical orders. The optimization problem is decomposed into two sequential problems, corresponding to optimizing the scheduling order and energy-efficient joint data partitioning and time division given the optimal order. Thereby, we show that without loss of optimality, the mobiles should be scheduled for offloading according to their data-arrival order. Leveraging this result, the original problem is simplified as the problem of joint optimization of data partitioning and time division. Then the simplified problem is solved using the proposed master-and-slave framework, where the slave problem optimizes data partitioning, the master problem corresponds to the energy-efficient time division, and both are convex. Interestingly, it is discovered that the optimal time-division policy attempts to equalize the differences in mobile computation capacities via offloading time allocation to mobiles.

  • Reverse arrival-deadline orders: With the same objective as in the preceding case, we further consider another special case with reverse arrival-deadline orders, where a mobile with a later data arrival must complete its computation earlier. The derived optimal scheduling order suggests two non-overlapping offloading intervals for each mobile. To obtain the optimal offloading durations given the optimal order, we propose a new and simple transformation-and-scheduling approach. Specifically, the transformation phase converts the original problem into the counterpart with identical arrival-deadline orders, allowing the use of the previous solution approach. Then, given the scheduling order, the individual offloading intervals are computed in the scheduling phase.

The differences between this paper and its conference version [24] are as follows. First, this paper considers the finite computation capacities at the mobiles and edge cloud, while infinite computation capacities are assumed in [24]. Second, several useful discussions are added in this paper to demonstrate the versatility of proposed algorithms. Last, the paper studies the resource management for the case of reverse arrival-deadline orders, which is not addressed in [24].

Ii System Model

Consider a multiuser MECO system (see Fig. 1), comprising one single-antenna BS connected to an edge cloud and a set of single-antenna mobiles. Each mobile has a one-shot input-data arrival at a random time instant and is required to complete the computation before a given deadline. We consider asynchronous computation offloading, where the data-arrival time instants and deadlines vary across mobiles. The input data is partitioned into two parts for parallel computation: one at the mobile's local CPU and the other offloaded to the BS. (For tractability, we assume that the input data can be arbitrarily partitioned, following the literature (see, e.g., [5]). This is in fact the case for certain applications such as Gzip compression and feature extraction.)

In the message-passing phase prior to computation offloading, each mobile feeds back its state parameters to the BS, including the estimated channel gain, data-arrival time instant, and deadline (acquired by CPU profiling or CPU-utilization prediction techniques [25, 26]). Using this information, the BS determines the energy-efficient resource-management policy for controlling the mobiles' offloaded bits and durations, and then broadcasts the control policy to the mobiles.

Ii-a Model of Input-Data Arrivals

The asynchronous data arrivals of the mobiles are modeled as follows. As shown in Fig. 1, each mobile, say mobile k, needs to complete a computation task with L_k-bit input data within the time interval [t_k, d_k], where t_k is the data-arrival time instant and d_k is the computation deadline. The required computation latency for mobile k is thus T_k = d_k - t_k, in seconds (s). Without loss of generality, assume that the mobiles are indexed in the order of their data arrivals, with the first arrival at time zero. (We also assume that the computing interval of each mobile overlaps with that of at least one other mobile, as in Fig. 1; otherwise, the total duration can be decoupled into several non-overlapping durations.) To facilitate the exposition in the sequel, we define two useful sets below.

Definition 1 (Epoch-Set, User-Set).

Sort all the data-arrival time instants and computation deadlines of the mobiles into an ordered sequence of time instants, with an associated permutation matrix mapping the original (unordered) instants to the sorted sequence. The time interval between two consecutive ordered instants is called an epoch, with the corresponding epoch length. For each mobile, say mobile k, its epoch-set specifies the indexes of the epochs that constitute the computing interval of mobile k. For each epoch, say epoch n, the user-set is defined as the set of indexes of the mobiles whose computing intervals cover epoch n.

For the example shown in Fig. 1, the epoch-set of each mobile and the user-set of each epoch can be read off directly from the overlapping computing intervals. Given the arrival instants and deadlines, the permutation matrix can be constructed column-by-column from the columns of the identity matrix.

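To make Definition 1 concrete, the following sketch builds the epochs, epoch-sets, and user-sets from given arrival instants and deadlines. All function and variable names are illustrative, not taken from the paper.

```python
def epochs_and_sets(arrivals, deadlines):
    """Sort all arrival/deadline instants into an ordered sequence,
    form the epochs between consecutive instants, and derive each
    mobile's epoch-set and each epoch's user-set (Definition 1)."""
    instants = sorted(set(arrivals) | set(deadlines))
    # Epoch n spans [instants[n], instants[n+1]); record its length.
    lengths = [b - a for a, b in zip(instants, instants[1:])]
    # Epoch-set of mobile k: epochs lying inside [t_k, d_k].
    epoch_sets = [
        [n for n in range(len(lengths))
         if t_k <= instants[n] and instants[n + 1] <= d_k]
        for t_k, d_k in zip(arrivals, deadlines)
    ]
    # User-set of epoch n: mobiles whose computing interval covers it.
    user_sets = [
        [k for k, eps in enumerate(epoch_sets) if n in eps]
        for n in range(len(lengths))
    ]
    return lengths, epoch_sets, user_sets
```

For instance, with arrivals (0, 1) and deadlines (3, 5), the epochs are [0, 1], [1, 3], and [3, 5]; mobile 0 owns the first two epochs, mobile 1 the last two, and the middle epoch is shared by both.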

Ii-B Models of Local Computing and Computation Offloading

For each mobile, say mobile k, let l_{k,n} denote the bits offloaded by mobile k during epoch n. To finish the computation before the deadline, the remaining bits are computed by the mobile's own CPU. The models of local computing and computation offloading are described as follows.

Ii-B1 Local Computing

Based on the model in [27], let C_k denote the number of CPU cycles required for computing one bit of data at mobile k, which may differ across mobiles depending on their specific computing-task complexities. Since operating at a constant CPU-cycle frequency is most energy-efficient for local computing [28], the CPU-cycle frequency of mobile k is chosen as the total number of locally-required CPU cycles divided by the computing duration T_k. Following the model in [29], under the assumption of low CPU voltage, the energy consumption for each CPU cycle can be modeled as gamma * f^2, where gamma is a constant determined by the circuits and f is the CPU-cycle frequency. The local-computing energy consumption for mobile k then follows by multiplying the per-cycle energy by the total number of local CPU cycles.

Let f_k^max denote the maximum CPU frequency of mobile k. Since the local CPU can execute at most f_k^max * T_k cycles within the latency T_k, the offloaded data size of mobile k is lower-bounded accordingly.
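As a concrete illustration of this model, the sketch below evaluates the local-computing energy under the gamma * f^2 per-cycle model and the minimum offload size implied by the maximum CPU frequency. The parameter names are hypothetical.

```python
def local_energy(bits_local, cycles_per_bit, latency, gamma):
    """Energy for computing bits_local bits locally at the constant
    CPU-cycle frequency that just finishes by the deadline."""
    cycles = cycles_per_bit * bits_local   # total required CPU cycles
    freq = cycles / latency                # constant CPU-cycle frequency
    return gamma * freq ** 2 * cycles      # (energy per cycle) x cycles

def min_offload_bits(total_bits, cycles_per_bit, latency, f_max):
    """Lower bound on the offloaded data size: the local CPU can
    execute at most f_max * latency cycles before the deadline."""
    local_cap = f_max * latency / cycles_per_bit
    return max(0.0, total_bits - local_cap)
```

Note that the local energy grows cubically in the locally-computed bits (for a fixed latency), which is why offloading part of the data can pay off.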

Ii-B2 Computation Offloading

For each mobile, computation offloading comprises three sequential phases: 1) offloading data from the mobile to the edge cloud, 2) computation by the edge cloud, and 3) downloading of the computation results from the edge cloud to the mobile. Assume that the edge cloud assigns an individual virtual machine (VM) to each mobile using VM multiplexing and consolidation techniques that allow multi-task parallel computation [30]. Based on the model in [9], the finite VM computation capacity for each mobile is reflected by upper-bounding the number of offloaded CPU cycles for which the required cloud-computation time remains negligible compared with the total computation latency; mathematically, this caps the total offloaded data size of each mobile. Moreover, assuming relatively small computation-result sizes for the considered applications (such as face recognition, object detection in video, and online chess games) and high transmission power at the BS, downloading is much faster than offloading and consumes negligible mobile energy.

(For data-intensive applications such as virtual/augmented reality, the energy consumption and latency of result downloading are non-negligible; in those cases, we expect that the current offloading framework can be modified and applied to designing asynchronous downloading control as well.) Under these conditions, the second and third phases are assumed to have negligible durations compared with the first phase. Assume that the mobiles access the cloud based on TDMA. Specifically, for each epoch, say epoch n, the mobiles belonging to the user-set of epoch n time-share the epoch duration. For these mobiles, let t_{k,n} denote the offloading duration allocated to mobile k, where t_{k,n} = 0 corresponds to no offloading. For the case of offloading (t_{k,n} > 0), let h_k denote the channel power gain between mobile k and the BS, which is assumed to be constant during the computation offloading of each mobile. Based on a widely-used empirical model in [31, 32, 4, 6], the transmission power can be modeled as a monomial function of the achievable transmission rate (in bit/s):


where lambda denotes the energy coefficient incorporating the effects of bandwidth and noise power, and m is the monomial order determined by the adopted coding scheme. Though this assumption may restrict the generality of the problem, it leads to simple solutions in (semi-)closed forms, as shown in the sequel, and provides useful insights for practical implementation. Moreover, it gives a good approximation for the transmission power of practical transmission schemes. For example, considering the coding scheme in [33] for a targeted bit-error probability, Fig. 2 plots the normalized signal power per symbol versus the rate, where a monomial of suitable order can fairly approximate the transmission power. (In practice, the value of m can be determined by curve-fitting using experimental data. Note that it is possible to achieve better curve-fitting performance by using a polynomial function, for which the proposed iterative design in the sequel can be extended to solve the corresponding convex optimization problem with key procedures remaining largely unchanged.) Thus, the offloading energy consumption during an epoch can be modeled as a monomial function of the offloaded bits l_{k,n} and the offloading duration t_{k,n}.

Note that if no bits are offloaded in an epoch, no offloading duration is needed and the corresponding offloading energy is zero. The total energy consumption of mobile k for transmitting the offloaded input data is obtained by summing the per-epoch offloading energy over its epoch-set.
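A sketch of the monomial model, assuming the common form in which the transmit power scales as (rate)^m and inversely with the channel gain; lambda and m are the coefficient and order from the text, with illustrative defaults.

```python
def offload_energy(bits, duration, channel_gain, lam=1.0, m=3):
    """Monomial offloading energy: power = (lam / h) * rate**m, so the
    energy for sending `bits` over `duration` is power * duration."""
    if duration == 0:
        # No duration is needed when nothing is offloaded.
        return 0.0 if bits == 0 else float("inf")
    rate = bits / duration
    return (lam / channel_gain) * rate ** m * duration
```

For m > 1 this expression is jointly convex in (bits, duration), being the perspective of a convex power function; that joint convexity is what underlies the convexity result (Lemma 1) in the sequel.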

Figure 2: The modulation scheme given in the table is considered in [33], where SNR is short for signal-to-noise ratio and the distance parameter represents the minimum distance between signal points. The corresponding plot shows the scaled piecewise-linear power-rate curve.

Iii Problem Formulation

In this section, the energy-efficient asynchronous MECO resource management is formulated as an optimization problem that jointly optimizes the data partitioning and time division for the mobiles. The objective is to minimize the total mobile-energy consumption, i.e., the sum of the local-computing and offloading energy over all mobiles. For each epoch, the multiuser offloading should satisfy the time-sharing constraint that the offloading durations allocated within the epoch sum to no more than the epoch length:


For each user, the total offloaded data size and computation are constrained by:

(Data constraint) (4)
(Local computation capacity constraint) (5)
(VM computation capacity constraint) (6)

Note that the deadline constraint for each mobile is enforced by requiring all of its input data to be either computed locally or offloaded within its computing interval. Under these constraints, the optimization problem is readily formulated as:


One can observe that Problem P1 is feasible if and only if, for each mobile, the minimum offloaded data size imposed by the local computation-capacity constraint does not exceed the maximum imposed by the VM computation-capacity constraint. Next, note that the data-partitioning and time-division variables are coupled in the objective function. To overcome this difficulty, one important property of Problem P1 is provided in the following lemma, which is proved in Appendix A.

Lemma 1.

Problem P1 is a convex optimization problem.

Thus, Problem P1 can be directly solved by the Lagrange method involving the primal and dual problem optimizations [34]. This method, however, provides no useful insights into the structure of the optimal policy, since it requires the joint optimization of the data partitioning and time division, which has no closed form. To address this issue, in the following sections we first study the optimal resource-management policy for the general case where the arrival-deadline orders of the mobiles are arbitrary, by using the block coordinate descent (BCD) optimization method [35]. Subsequently, we derive more insightful structures of the optimal policy for two special cases, namely asynchronous MECO with identical and reverse arrival-deadline orders. Recall that, without loss of generality, the mobiles are indexed in the order of their data arrivals. The so-called identical and reverse arrival-deadline orders refer to the cases where the deadlines follow the same order as the arrivals and the opposite order, respectively, as illustrated in Fig. 3.

Figure 3: Illustration for asynchronous MECO systems with the identical and reverse arrival-deadline orders.

Iv Optimal Resource Management with General Arrival-Deadline Orders

This section considers asynchronous MECO with general arrival-deadline orders and designs the energy-efficient resource-management policy. To characterize the structure of the optimal policy, we propose an iterative algorithm for solving Problem P1 by applying the BCD method. Specifically, given any offloading durations for all the mobiles, we optimize the offloaded data sizes for each mobile, corresponding to energy-efficient data partitioning. Conversely, the offloading durations of the mobiles are optimized given any offloaded data sizes, referred to as energy-efficient time division.

Iv-a Energy-Efficient Data Partitioning

This subsection aims at finding the optimal offloaded data sizes for the mobiles, given any feasible offloading time divisions. For each mobile, define its offloading epoch-set as the set of epoch indexes in which the mobile is allocated a strictly positive offloading duration. Then it can be easily observed that Problem P1 reduces to parallel sub-problems, each corresponding to one mobile, as follows:


Problem P2 can be easily proved to be a convex optimization problem. Applying the Lagrange method leads to the optimal data-partitioning policy as follows, which is proved in Appendix B.

Proposition 1 (Energy-Efficient Data Partitioning).

For each mobile, say mobile k, given the offloading time divisions, the optimal data-partitioning policy across the epochs of its offloading epoch-set solving Problem P2 is determined by a single scalar parameter: within each such epoch, the optimal offloaded data size is proportional to the allocated offloading duration, and the common proportionality factor (the offloading rate) is fixed by the unique root of a monotonically decreasing function, clipped to the range imposed by the local and VM computation-capacity constraints.
Proposition 1 shows that the optimal offloaded data sizes are determined by a single parameter. Since the function defining this parameter is monotonically decreasing, the parameter can be uniquely determined and efficiently computed using the bisection-search algorithm [34]. In addition, it can be proved by contradiction that the optimal offloaded data size in each epoch is monotonically increasing with the channel power gain, the local-computing complexity, and the allocated offloading duration, and monotonically decreasing with the computation latency and the monomial order. This is consistent with the intuition that it is desirable to offload more bits as the channel condition improves, the local-computing complexity increases, the allocated offloading duration increases, or the computation-deadline requirement becomes more stringent. Moreover, when the monomial order increases (e.g., when the offloading transmission targets a lower error probability), it is more energy-efficient to reduce the offloaded data size, since the required transmission power then grows faster with the rate.
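The bisection step referenced above can be sketched generically as follows, for any monotonically decreasing function of the scalar parameter (the specific function from Proposition 1 is not reproduced here):

```python
def bisect_decreasing(g, lo, hi, tol=1e-9):
    """Unique root of a monotonically decreasing function g on [lo, hi],
    assuming g(lo) >= 0 >= g(hi): keep the sign change bracketed and
    halve the interval until it is shorter than tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Each iteration halves the search interval, so the complexity is logarithmic in the required accuracy, matching the low per-iteration cost claimed for the overall algorithm.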

Remark 1 (Identical Offloading Rates).

It can be inferred from Proposition 1 that, given the optimal time divisions, the optimal offloading rates of each mobile in different epochs are identical. This is expected, since for each mobile the channel power gain, bandwidth, and noise power are the same across epochs.

To further characterize the effects of the offloading duration and the computation capacities of the mobile and cloud on the data-partitioning policy, we define an auxiliary threshold function for each mobile as the root of the following equation:

A useful property of this function can be easily derived: it is monotonically decreasing with the total offloading duration. Then the optimal data-partitioning policy in Proposition 1 can be restated as follows, which is proved in Appendix C.

Corollary 1.

For each mobile, say mobile k, given the offloading time divisions, the optimal data-partitioning policy in Proposition 1 can be re-expressed as

for each epoch in its offloading epoch-set, where the threshold function is defined in (8).

Corollary 1 shows that each mobile should perform the mobile-constrained minimum or the cloud-constrained maximum computation offloading if, respectively, the mobile or the VM server becomes a bottleneck with a computation capacity below the corresponding derived threshold. It is worth mentioning that if both the mobile and the VM have insufficient capacities, computing the input data by the deadline is infeasible. Moreover, as the total offloading duration grows, the thresholds relax accordingly, meaning that the mobile tends to offload more data when provisioned with a longer offloading duration.

Iv-B Energy-Efficient Time Division

For given offloaded data sizes, this subsection focuses on optimizing the time-division policy in all epochs to minimize the total mobile-energy consumption. For each epoch, define its offloading user-set as the set of mobile indexes with strictly positive offloaded data sizes in that epoch. Since the time-sharing constraints are decoupled across epochs, Problem P1 reduces to solving the following parallel sub-problems:


Problem P3 is a convex optimization problem and its optimal solution can be easily derived by using the Lagrange method, which is given in the following proposition.

Proposition 2 (Energy-Efficient Time Division).

For each epoch, say epoch n, given any offloaded data sizes, the optimal time-division policy across the mobiles solving Problem P3 is given by

where each mobile's share of the epoch is proportional to a weight determined by its offloaded data size and channel power gain.

Proposition 2 shows that the optimal offloading duration for each mobile is proportional to the epoch duration, with a proportional ratio determined by the offloaded data size and the channel gain. Specifically, to minimize the total mobile-energy consumption in each epoch, a mobile with a larger offloaded data size and a poorer channel should be allocated a longer offloading duration.
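Under the monomial energy model, this proportional rule admits a simple closed form. The sketch below assumes the weight bits_k * gain_k^(-1/m), which follows from equating the marginal energies when minimizing the sum of (lam / h_k) * bits_k^m / t_k^(m-1) subject to the time-sharing constraint; this is a reconstruction, since the exact expression is not shown above.

```python
def time_division(epoch_len, bits, gains, m=3):
    """Split one epoch among its offloading mobiles: first-order
    conditions give durations proportional to bits_k * gains_k**(-1/m),
    normalized so that the durations fill the epoch."""
    weights = [b * g ** (-1.0 / m) for b, g in zip(bits, gains)]
    total = sum(weights)
    return [epoch_len * w / total for w in weights]
```

Mobiles offloading more bits over weaker channels receive longer shares, matching the discussion above.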

Last, based on the results obtained in these two subsections, the optimal solution to Problem P1 can be efficiently computed by the proposed iterative algorithm using the BCD method, summarized in Algorithm 1. Since Problem P1 is jointly convex with respect to the data partitioning and the time divisions, iteratively solving Problems P2 and P3 guarantees convergence to the optimal solution of Problem P1.

Remark 2 (Low-Complexity Algorithm).

Given the offloading time divisions, the optimal data partitioning for each mobile requires only a one-dimensional search. Given the offloaded data sizes, the optimal time-division policy has a closed-form expression and hence negligible complexity. Thus, the total computation complexity of the proposed BCD algorithm scales with the number of iterations, mobiles, and epochs. Simulation results in the sequel show that the proposed method can greatly reduce the computation complexity, especially for larger numbers of mobiles and epochs, compared with general convex-optimization solvers, e.g., CVX, which is based on the standard interior-point method [36].

  • Step 1 [Initialize]: Set the iteration index to zero and choose a feasible initial time-division policy.

  • Step 2 [Block coordinate descent method]: Repeat
    (1) Given the current time division, compute the optimal data-partitioning policy as in Proposition 1.
    (2) Given the current data partitioning, compute the optimal time-division policy as in Proposition 2.
    (3) Update the iteration index.
    Until: The fractional decrease of the objective value of Problem P1 falls below a threshold.

Algorithm 1 The Proposed Block Coordinate Descent Method for Problem P1
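Algorithm 1 can be sketched as a generic block-coordinate-descent loop. The two block updates stand in for Propositions 1 and 2; the toy updates in the usage example below are illustrative only.

```python
def bcd(update_x, update_y, objective, x, y, eps=1e-6, max_iter=1000):
    """Alternate exact minimization over two blocks (Algorithm 1):
    stop once the fractional decrease of the objective is below eps."""
    prev = objective(x, y)
    for _ in range(max_iter):
        x = update_x(y)   # e.g. energy-efficient data partitioning
        y = update_y(x)   # e.g. energy-efficient time division
        cur = objective(x, y)
        if prev - cur <= eps * max(abs(prev), 1e-12):
            break
        prev = cur
    return x, y
```

For instance, on f(x, y) = (x - y)^2 + (x - 1)^2 with exact block minimizers x = (y + 1)/2 and y = x, the loop converges to x = y = 1. For a jointly convex objective, as in Problem P1, such alternating exact minimization converges to the global optimum.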

Iv-C Extension: Asynchronous MECO Based on Exponential Offloading Energy-Consumption Model

In this subsection, the solution approach developed in the preceding subsections is extended to the case with the exponential offloading energy-consumption model. Specifically, based on Shannon's equation, the achievable rate is a logarithmic function of the received signal-to-noise ratio, determined by the transmission power, channel power gain, bandwidth, and noise power. Since constant-rate transmission is the most energy-efficient transmission policy [5], it follows that the energy consumption for offloading a given number of bits within a given duration is


where the function maps the transmission rate to the required transmission power by inverting Shannon's equation. Based on this model, Problem P1 is modified by replacing the objective function with the following, and the resulting new problem is denoted as Problem P4.


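A sketch of this exponential model, assuming the standard inversion of Shannon's formula with hypothetical parameter names (bandwidth B and noise power N0 are not given explicitly above):

```python
def offload_energy_exp(bits, duration, channel_gain, bandwidth, noise=1.0):
    """Constant-rate offloading energy under Shannon's equation:
    r = B * log2(1 + p * h / N0)  =>  p = (N0 / h) * (2**(r / B) - 1),
    and the energy is p * duration."""
    if duration == 0:
        return 0.0 if bits == 0 else float("inf")
    rate = bits / duration
    power = (noise / channel_gain) * (2.0 ** (rate / bandwidth) - 1.0)
    return power * duration
```

The energy again grows convexly in the offloaded bits for a fixed duration, which is consistent with Problem P4 remaining a convex optimization problem.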
Following a procedure similar to that for deriving Lemma 1, it can be shown that Problem P4 is a convex optimization problem. To characterize its optimal policy structure, we apply the BCD method to derive the energy-efficient data-partitioning and time-division policies, as detailed in the following.

Iv-C1 Energy-Efficient Data Partitioning

For any given offloading time division, Problem P4 reduces to parallel sub-problems:


where the offloading epoch-set is defined as in Problem P2. Problem P5 is a convex optimization problem, and directly applying the Lagrange method yields the optimal solution below.

Proposition 3.

Consider asynchronous MECO based on the exponential offloading energy-consumption model. For any given offloading time division, the optimal offloaded data size of each mobile, in each epoch of its offloading epoch-set, is again determined by a single scalar parameter obtained as the solution of a fixed-point equation, together with a threshold deciding between partial offloading and full local computing.

This proposition shows that, given the offloading time division, the optimal offloaded data size has a threshold-based structure. Specifically, a mobile offloads part of its input data or performs full local computing if its offloading gain is above or below the threshold, respectively. This is expected, since offloading reduces energy consumption only under the conditions of a good channel, a stringent latency requirement, or a high local-computing complexity.

Iv-C2 Energy-Efficient Time Division

Similar to Section IV-B, for any given offloaded data sizes, Problem P4 reduces to the following parallel sub-problems:


It can be proved that Problem P6 is a convex optimization problem. Define an auxiliary function that maps each mobile's offloading duration to its marginal energy cost. Following a procedure similar to that for deriving Proposition 2, the optimal time-division policy for this case is characterized below.

Proposition 4.

Consider asynchronous MECO based on the exponential offloading energy-consumption model. For each epoch, say epoch n, given any offloaded data sizes, the optimal offloading time division solving Problem P6 allocates each mobile a duration obtained by inverting its marginal-energy function at a common value, chosen such that the allocated durations exactly share the epoch.

Last, combining the results of the optimal data partitioning and time division, the optimal solution to Problem P4 can be obtained by an iterative algorithm using the BCD method, which is similar to Algorithm 1 and omitted for brevity.
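To make the BCD iteration concrete, the following toy sketch alternates closed-form updates of the offloaded data sizes and the time division on a simplified convex model: monomial offloading energy a_i*l_i^2/t_i plus quadratic local-computing energy b_i*(D_i-l_i)^2 under a time-sharing budget T. This model and its closed-form updates are illustrative assumptions, not the paper's exact formulation.

```python
import math

def bcd_offloading(a, b, D, T, iters=50):
    """Block coordinate descent on a toy model (NOT the paper's exact one):
    minimize sum_i a[i]*l[i]**2/t[i] + b[i]*(D[i]-l[i])**2  s.t. sum_i t[i] = T.
    Alternates closed-form updates of offloaded bits l and time shares t."""
    n = len(D)
    t = [T / n] * n                      # start with equal time sharing
    l = list(D)                          # start by offloading everything
    for _ in range(iters):
        # data-partitioning step: stationarity of l_i given t_i gives
        # l_i = b_i*t_i*D_i / (a_i + b_i*t_i), automatically in [0, D_i]
        l = [b[i] * t[i] * D[i] / (a[i] + b[i] * t[i]) for i in range(n)]
        # time-division step: minimizing sum a_i*l_i^2/t_i under sum t_i = T
        # yields t_i proportional to l_i*sqrt(a_i) (Lagrange multiplier)
        w = [l[i] * math.sqrt(a[i]) for i in range(n)]
        s = sum(w)
        t = [T * w[i] / s for i in range(n)] if s > 0 else [T / n] * n
    energy = sum(a[i] * l[i]**2 / t[i] + b[i] * (D[i] - l[i])**2
                 for i in range(n))
    return l, t, energy
```

Since the toy objective is convex in each block and each block update is globally optimal given the other, the iteration is monotonically non-increasing in the total energy, mirroring the convergence behavior of the BCD method described above.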

IV-D Discussions

Extensions of the proposed BCD solution approach to other, more complicated scenarios are discussed as follows.

  • Robust design: To cope with imperfect mobile prediction and estimation in practice, the current framework can be modified by applying robust optimization techniques. Based on a bounded-uncertainty model (see e.g., [37]), the system-state parameters, including the channel gain, data-arrival time and deadline, can be perturbed by unknown bounded random variables representing estimation or prediction errors. Then, using the worst-case approach [37], Problem P1 can be modified by replacing these parameters with their worst-case values and solved using the same approach, yielding a robust offloading policy.

  • Online design: Similar to the online design approach in [22], upon new input-data arrivals or variations in the mobiles’ information, the proposed control policies can be adjusted by updating the information (e.g., new channel gains) and applying the current offline framework to determine the updated data-partitioning and time-division policies. Note that reusing the former results as the initial policy in the iterative recalculation is expected to reduce the computation complexity in temporally correlated channels. Moreover, disruptions of task computing can be avoided by continuing with the former policy until the updated one is obtained. Last, assuming that instantaneous mobile information is available at the BS, the policy-update approach can also be used to design a greedy online policy. For frequent arrivals, the computation complexity can be reduced by designing a randomized policy-update approach, where the update probability depends on the instantaneous mobile information.

  • Time-varying channels: Assuming block-fading channels, where the channel gain is fixed in each fading block and independent and identically distributed (i.i.d.) over different blocks, the solution approach can be easily modified; the modification essentially involves re-defining the epoch set as the fading-block indexes within the computation duration and the corresponding user set in Definition 1. Then Problem P1 can be extended by replacing the channel gain in the objective function with the channel gain of each mobile in each epoch. This problem can be solved using the same solution approach developed in this paper.

  • Non-negligible cloud-computation time: In the case of non-negligible cloud-computation time, Problem P1 can be modified to include this time in the deadline constraint as a function of the number of offloaded bits. For example, following the model in [9], the cloud-computation time is a linear function of the number of offloaded bits, scaled by a fixed cloud-computation duration per bit. Though this entails more complex problems, the general solution approaches developed in this paper for asynchronous MECO should still be largely applicable, albeit with possible modifications, by leveraging results from existing work that considers cloud-computation time (see e.g., [9]).

  • OFDMA MECO: Consider the asynchronous MECO system based on OFDMA. Similar to [9], the corresponding energy-efficient resource management can be formulated as a mixed-integer optimization problem, where the integer constraints arise from sub-channel assignments. Though the optimal solution is intractable, following a standard approach, sub-optimal algorithms can be developed by relaxing the integer constraints and then rounding the results to obtain the sub-channel assignments.

  • Binary offloading: The current results can be used to design asynchronous MECO based on binary offloading. Note that the corresponding problem is a mixed-integer optimization problem, which is difficult to solve. To address this issue, a greedy, low-complexity algorithm can be designed using probabilistic offloading. Specifically, with the results obtained for partial offloading in this work, the offloading probability for each mobile can be set as the ratio between its offloaded and total data sizes. Then a set of resource-management samples can be generated, each randomly selecting individual mobiles for offloading according to the obtained probabilities. Last, the sample yielding the minimum total mobile-energy consumption gives the greedy policy. It is worth mentioning that the policy can be further improved by using the cross-entropy method, which adjusts the offloading probabilities based on the outcomes of the samples, at the cost of higher computation complexity [38].
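For the OFDMA case above, the standard relax-and-round heuristic can be sketched as follows; the fractional assignment matrix is assumed to come from a relaxed convex solver, and the rounding rule (give each sub-channel to the mobile with the largest fractional share) is one common choice, not the paper's prescription.

```python
def relax_and_round(x_relaxed):
    """Round a relaxed (fractional) mobile-by-sub-channel assignment matrix:
    each sub-channel (column) is assigned to the mobile with the largest
    fractional share, restoring the integer constraint."""
    n_mobiles = len(x_relaxed)
    n_channels = len(x_relaxed[0])
    assign = [[0] * n_channels for _ in range(n_mobiles)]
    for c in range(n_channels):
        best = max(range(n_mobiles), key=lambda m: x_relaxed[m][c])
        assign[best][c] = 1  # integer sub-channel assignment
    return assign
```

The rounded assignment is in general sub-optimal, but it satisfies the integer constraints by construction and can be followed by re-optimizing the continuous variables (time and data partitioning) given the fixed assignment.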
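The probabilistic binary-offloading sampler described in the binary-offloading bullet can be sketched as follows; the energy function and per-mobile offloading probabilities are illustrative inputs, assumed to be supplied by the partial-offloading solution of this work.

```python
import random

def greedy_binary_offloading(p_offload, energy_fn, n_samples=100, seed=0):
    """Sample random binary offloading decisions: mobile i offloads with
    probability p_offload[i] (the ratio of offloaded to total data size from
    the partial-offloading solution); keep the lowest-energy sample."""
    rng = random.Random(seed)
    best_dec, best_energy = None, float("inf")
    for _ in range(n_samples):
        decision = [1 if rng.random() < p else 0 for p in p_offload]
        e = energy_fn(decision)
        if e < best_energy:
            best_dec, best_energy = decision, e
    return best_dec, best_energy
```

A cross-entropy refinement, as mentioned above, would additionally re-estimate `p_offload` from the best-scoring samples at each round, trading extra computation for a better policy.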

V Optimal Resource Management with Identical Arrival-Deadline Orders

To gain further insights into the structure of the optimal resource-management policy, this section considers the special case of asynchronous MECO with identical arrival-deadline orders, i.e., a mobile with an earlier data arrival also needs to complete its computation earlier. This case arises when the mobiles have similar computation tasks (e.g., identical online-gaming applications) but with random arrivals. For this case, the solution to Problem P1 can be further simplified by first determining an optimal scheduling order and then designing the energy-efficient joint data-partitioning and time-division policy given that order. Note that this design approach does not require resource management in each epoch. We consider that the mobiles and VMs have unbounded computation capacities and the monomial order , since it can fairly approximate the transmission-energy consumption in practice. The results can be extended to derive a suboptimal policy for other monomial orders by using approximation techniques, although the corresponding optimal policy has no closed form and must be computed by iterative algorithms. More importantly, this approach leads to useful insights into the structure of the optimal policy: as shown in the sequel, the optimal time-division policy admits a defined effective computing-power balancing structure. Moreover, the optimal policy is simplified for the two-user case.

First, we define the offloading scheduling order as follows.

Definition 2 (Offloading Scheduling Order).

Let denote the offloading scheduling order with for . Under this order, mobile is scheduled for offloading first, followed by mobile , mobile , until mobile . Note that in general, since each mobile can be scheduled more than once.

Figure 4: Mapping between the time-division policy and scheduling order.

Note that given a scheduling order (e.g., ), one specific mobile (e.g., mobile ) can be repeatedly scheduled, corresponding to computation offloading in multiple non-overlapping epochs. Recall that Problem P1 optimizes the offloading time divisions and offloaded data sizes for the mobiles in all epochs. Specifically, for each epoch, the derived time-division policy only determines the offloading durations allocated to different mobiles, without specifying the scheduling order. In other words, when the scheduling order is taken into account, one time-division policy resulting from the solution to Problem P1 can correspond to multiple scheduling orders, as illustrated in Fig. 4. On the other hand, given the scheduling order, the time-division policy for solving Problem P1 is uniquely determined.

Based on the above definition and discussions, in the following subsections, we first derive one optimal scheduling order and then optimize the joint data-partitioning and time-division policy given the optimal order.

V-A Optimal Scheduling Order

Recall that given identical arrival-deadline orders, we have and . This means that mobile has an earlier data arrival than mobile and also requires its computation to be completed earlier. Using this key fact, we characterize one optimal offloading scheduling order as follows; the proof is given in Appendix D.

Lemma 2 (Optimal Scheduling Order).

For the case of identical arrival-deadline order, one optimal scheduling order that can lead to the optimal solution to Problem P1 is .

Lemma 2 shows that, for the case of identical arrival-deadline orders, there exists an optimal, deterministic and simple scheduling order that entails sequential transmission by the mobiles following their data-arrival order. The intuition behind the optimality of this order is that the mobile with an earlier input-data arrival has a more pressing deadline. On the other hand, for the case of general arrival-deadline orders, the optimal scheduling has no clear structure, due to the irregularity of data arrivals and deadlines across mobiles.
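In code, Lemma 2's order amounts to nothing more than sorting the mobiles by their data-arrival times; a minimal sketch:

```python
def optimal_scheduling_order(arrival_times):
    """For the identical arrival-deadline case, one optimal scheduling order
    (Lemma 2) is the mobiles sorted by data-arrival time, earliest first."""
    return sorted(range(len(arrival_times)), key=lambda i: arrival_times[i])
```

Because deadlines follow the same order as arrivals in this special case, scheduling by arrival time simultaneously respects data causality and serves the most pressing deadline first.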

V-B Energy-Efficient Data Partitioning and Time Division Given the Optimal Scheduling Order

Given the optimal scheduling order in Lemma 2, this subsection aims to jointly optimize the offloaded data sizes and offloading durations for the mobiles for achieving the minimum total mobile-energy consumption.

Note that, instead of partitioning each epoch duration among the relevant mobiles, the introduced scheduling order provides an alternative design approach that directly partitions the total time interval among the mobiles. This approach yields new insights into the policy structure, as elaborated in the sequel. Specifically, let , and denote the starting-time instant, total offloading duration and offloaded data size for mobile , respectively. The offloading for the mobiles should satisfy the following constraints. First, under the data causality constraint, which prohibits input data from being offloaded before it arrives, we have


Next, the deadline constraint requires that


In addition, the time-sharing constraint in (3) reduces to the time non-overlapping constraint as:


where is defined as . Based on Lemma 2 and the above constraints, the solution to Problem P1 assuming can be derived by solving the following problem:


Problem P4 can be proved to be a convex optimization problem using a method similar to that for deriving Lemma 1. One important property of Problem P4 is given below; it can be proved by contradiction, and the proof is omitted for brevity.

Lemma 3.

For the case of identical arrival-deadline orders, the optimal offloading starting-time instants and durations for solving Problem P4, denoted by , satisfy the following:


Lemma 3 indicates that the multiuser offloading should fully utilize the whole time duration, which is expected since offloading-energy consumption decreases with the offloading duration. Using Lemma 3, Problem P4 can be rewritten as follows.


Note that given the constraint of , the data causality constraint is always satisfied since . Moreover, indicates the deadline constraint. It can be easily proved that Problem P5 is a convex optimization problem. To characterize the structure of the optimal policy, we decompose Problem P5 into two sub-problems: the slave problem, corresponding to energy-efficient data partitioning given the offloading durations, and the master problem, for the energy-efficient time division.

V-B1 Slave Problem for Energy-Efficient Data Partitioning Given Offloading Durations

For any given offloading durations , Problem P5 reduces to the slave problem that optimizes the offloaded data sizes . It is easy to see that this slave problem can be decomposed into parallel subproblems as