Energy-Efficient Offloading in Mobile Edge Computing with Edge-Cloud Collaboration

11/09/2018 ∙ by Xin Long, et al. ∙ Microsoft, USTC

Multiple-access mobile edge computing is an emerging technique to bring computation resources close to end mobile users. By deploying edge servers at WiFi access points or cellular base stations, the computation capabilities of mobile users can be extended. Existing works mostly assume that the remote cloud server can be viewed as a special edge server or that the edge servers are willing to cooperate, which is not practical. In this work, we propose an edge-cloud cooperative architecture where edge servers can rent remote cloud servers to expedite the computation of tasks from mobile users. With this architecture, the computation offloading problem is modeled as a mixed integer program with delay constraints, which is NP-hard. The objective is to minimize the total energy consumption of mobile devices. We propose a greedy algorithm as well as a simulated annealing algorithm to solve the problem effectively. Extensive simulation results demonstrate that the proposed greedy algorithm and simulated annealing algorithm achieve near-optimal performance. On average, the proposed greedy algorithm matches the application completion time budget performance of the Brute Force optimal algorithm with only 31% extra energy cost. The simulated annealing algorithm achieves performance similar to that of the greedy algorithm.


1 Introduction

The recent tremendous growth of diverse wireless devices and applications has brought new challenges to wireless systems. With the proliferation of smart mobile devices and wearable sensors, mobile traffic and computation tasks have increased dramatically. Therefore, cloud computing [2] as well as 5G communication [5, 10] has been proposed to deal with this challenge in the big data era. Despite its potential in data storage and analysis, cloud computing cannot fulfill growing application requirements such as low latency and context awareness. Multiple-access mobile Edge Computing (MEC) [14], which serves as a complement to cloud computing, can potentially overcome this weakness by offloading computation-intensive tasks to the edge of wireless networks [6].

Task allocation and computation resource assignment are crucial to MEC, especially in the presence of an application with a large number of delay-sensitive subtasks, for example, online gaming for recreation or face recognition for security purposes. Such tasks should be handled in time, taking the finite bandwidth and limited computation resources into consideration. Offloading problems that consider the above factors jointly are usually mixed integer programming problems, which are non-convex and NP-hard [8] [9]. Among task allocation and resource assignment schemes, energy optimization is one of the key factors that affect the performance of computation-resource-limited mobile devices. This is because the energy consumption of mobile devices grows rapidly when there are multiple complex tasks on the devices.

Earlier works on energy optimization for MEC, such as [3, 18], assumed an unlimited energy supply at edge servers. Bi et al. [3] addressed the computation rate maximization problem in wireless powered MEC networks, where mobile devices can harvest energy from a cellular base station equipped with an MEC server. The original problem was non-convex, and a decoupled optimization with a coordinate descent method was proposed to solve it. Lyu et al. [18] studied the total energy consumption of multiple devices with latency constraints. The problem was modeled as a mixed-integer program, followed by a dynamic programming algorithm based on the Bellman equation. More recent research [4, 15] has focused on delay minimization with energy or budget constraints at edge servers. Chen et al. [4] proposed a novel multi-cell MEC architecture where edge devices such as base stations can cooperate with the remote server on task execution. Considering the ON/OFF nature of edge servers, they used the Lyapunov optimization technique to obtain optimal task offloading decisions. Considering task dependency, Kao et al. [15] presented Hermes, aiming to minimize the total execution time of tasks under user budget constraints.

Existing works              | [3]              | [4]   | Hermes [15] | [18]   | This work
Task Dependency             | No               | No    | Yes         | No     | Yes
Edge-Cloud Collaboration    | No               | No    | No          | No     | Yes
Energy Constraint of Users  | No               | Yes   | Yes         | No     | Yes
Server Utility Constraint   | No               | No    | No          | No     | Yes
Objective                   | Computation Rate | Delay | Delay       | Energy | Energy
Table 1: Comparison between existing works and this work.

Based on the literature review, task dependency was not properly investigated by [3, 4, 18], although it is important for real deployment. Task dependency was used in the model of [15], but the authors of [15] neglected the influence of remote cloud servers. Moreover, all the above works assume that the remote cloud server can be viewed as a special edge server or that the edge servers are willing to cooperate. In real scenarios, the remote cloud server has higher computation capability than the edge server, and the transmission delay between the edge cloud and the remote server cannot be neglected when designing proper offloading schemes. Take face recognition as an example. The feature extraction tasks for face images obtained by individual mobile devices can be offloaded to edge servers, while the machine learning and face recognition (i.e., image matching) tasks can be executed on the remote cloud servers. Therefore, with edge-cloud cooperation, the target faces can be detected within a certain bounded delay for distributed mobile devices.

In this work, we investigate the computation offloading decision and resource allocation problem under given delay requirements of mobile applications. The objective is to minimize the sum energy consumption of mobile devices. Different from the above works, we take edge-cloud cooperation into account, which brings new challenges to the energy optimization problem. Since network resources are heterogeneous, it is necessary to determine which computation tasks should be done at remote clouds, which should be processed at edge servers, and which should remain on local mobile devices. From the perspective of edge and remote cloud servers, their service for mobile devices should be compensated for the cost of execution, and their profits should be guaranteed. Since the tasks of one application are delay bounded, edge-cloud cooperation under user budget constraints should be carefully designed.

The main contributions of this paper can be summarized as follows:

  • A novel edge-cloud cooperation architecture is proposed for wireless heterogeneous networks, with edge servers deployed at small-cell base stations and remote cloud servers connected to the macro-cell base station. The edge server can rent remote cloud servers to process some of the tasks originating from mobile devices.

  • The offloading problem is modeled as a mixed integer non-linear program, which is NP-hard. We then propose a greedy algorithm as well as a simulated annealing algorithm to solve the problem effectively.

  • To provide incentives for edge servers, we propose a pricing scheme with virtual currency paid by mobile users to edge servers and remote cloud servers to compensate the servers for serving mobile users.

  • Extensive simulation results demonstrate that the proposed greedy algorithm and simulated annealing algorithm achieve near-optimal performance. On average, the proposed greedy algorithm matches the application completion time budget performance of the Brute Force optimal algorithm with only 31% extra energy cost. The simulated annealing algorithm achieves performance similar to that of the greedy algorithm.

The remainder of this paper is organized as follows. The system model and computation model are presented in Section 2. Section 3 presents the problem formulation. The proposed algorithms are described in Section 4. Section 5 presents the performance evaluation. Section 6 concludes this paper with remarks on future work.

2 System Model and Computation Model

This section first describes the system model and then formulates the offloading problem for energy saving with local computing, edge computing, and collaboration between edge and cloud servers.

2.1 System Model

Figure 1: System Architecture

As shown in Fig. 1, each edge server is located at an access point (AP) [7] to which multiple mobile devices are attached. The edge server is deployed at the AP and is linked to the remote cloud via high-speed fiber links. Let N be the set of mobile devices; we assume that there are n mobile devices, so N = {1, 2, …, n}. Meanwhile, there is a set of subtasks on the i-th mobile device, which can be denoted as S_i = {1, 2, …, M_i}, where M_i is the number of subtasks of device i.

Next, we will introduce the communication and computation models for mobile devices, edge servers and remote cloud in detail.

Notation      Description
n             Number of mobile devices
M_i           Number of subtasks of device i
d_i^j         Data size of subtask j on mobile device i
w_i^j         Workload of subtask j on mobile device i
r_i^j         Uplink data rate for subtask j of mobile device i
t_{i,e}^j     Time spent sending subtask j of device i to the edge server
t_{i,c}^j     Time spent sending subtask j of device i from the edge server to the remote cloud
e_{i,e}^j     Energy cost of transmission between mobile device i and the edge server for subtask j
e_{i,c}^j     Energy cost of transmission between edge and cloud for subtask j of mobile device i
τ_{i,l}^j     Delay when executing subtask j locally
ε_{i,l}^j     Energy consumption when executing subtask j of device i locally
T_{i,l}^j     Completion time of subtask j on mobile device i when executed locally
E_{i,l}^j     Energy cost during the completion time of subtask j on device i with local computing
T_i^b         Budget, i.e., allowed delay threshold for the subtasks of device i
E_i           Total energy cost for all subtasks of device i
T_i           Total time consumed for all subtasks of mobile device i
U             Profit of the edge server
λ_{i,l}^j     Offloading policy for subtask j of device i: local computing
λ_{i,e}^j     Offloading policy for subtask j of device i: edge computing
λ_{i,c}^j     Offloading policy for subtask j of device i: remote execution
Table 2: Basic Notations

2.2 Communication Model

Transmission between mobile devices and edge.

Let λ_i = {λ_{i,l}^j, λ_{i,e}^j, λ_{i,c}^j | j ∈ S_i}, where each entry represents the computation offloading policy made by the i-th mobile device. Particularly, λ_{i,l}^j = 1 denotes that subtask j on mobile device i is executed locally, while λ_{i,e}^j = 1 denotes that subtask j of mobile device i is executed on the edge server. Similarly, λ_{i,c}^j = 1 denotes that subtask j on mobile device i is executed on the remote cloud. We can compute the uplink data rate for wireless transmission between mobile device i and the edge server as [5]:

r_i^j = B log2(1 + p_i h_i^j / σ²)    (1)

where B is the wireless channel bandwidth, p_i is the transmission power of mobile device i to upload the subtask to the edge server via the AP, and h_i^j is the channel gain between the i-th mobile device and the corresponding AP when transmitting subtask j. The channel gain is determined by the Euclidean distance between mobile device i and the edge server and by the corresponding Rayleigh fading channel coefficient [23]. The surrounding noise power at the receiver, i.e., the AP, is σ² [23].

It should be noted that, for ease of presentation, the downlink transmission rate is represented by the corresponding uplink rate. In the following expressions, we likewise use the uplink transmission delay to represent the downlink transmission delay. This is reasonable because the downlink transmission rate is usually several times larger than the uplink transmission rate due to the channel allocation of the network operator. With this simplification, we can reduce the complexity of the delay and energy cost expressions, as described in detail in the following paragraphs.

The transmission delay of subtask j between mobile device i and the corresponding edge server thus is [12]

t_{i,e}^j = d_i^j / r_i^j    (2)

where t_{i,e}^j represents the time spent on sending subtask j of mobile device i to the edge server, and d_i^j is the data size of subtask j of device i. Based on the above equations, we can obtain the energy consumption when transmitting subtask j of mobile device i to the edge server as

e_{i,e}^j = p_i · t_{i,e}^j    (3)

where p_i is the transmission power of mobile device i when sending subtask j.
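As an illustration, the rate-delay-energy chain behind Equations (1)-(3) can be sketched in Python. The function names and all numeric values below are illustrative assumptions, not the paper's parameter settings.

```python
import math

def uplink_rate(bandwidth_hz, tx_power_w, channel_gain, noise_power_w):
    # Shannon-capacity form of the device-to-AP uplink rate, as in Eq. (1)
    return bandwidth_hz * math.log2(1 + tx_power_w * channel_gain / noise_power_w)

def tx_delay(data_bits, rate_bps):
    # Eq. (2): time to push the subtask's input data through the uplink
    return data_bits / rate_bps

def tx_energy(tx_power_w, delay_s):
    # Eq. (3): radio energy is transmission power times airtime
    return tx_power_w * delay_s

# Illustrative numbers (not taken from the paper)
r = uplink_rate(bandwidth_hz=10e6, tx_power_w=0.5, channel_gain=1e-6, noise_power_w=1e-9)
t = tx_delay(data_bits=4e6, rate_bps=r)
e = tx_energy(0.5, t)
```

The chain makes explicit that a poorer channel lowers the rate, which lengthens the airtime and therefore raises the transmission energy.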

Transmission between edge and cloud.

Since the edge server links to the remote cloud via a wired connection, the delay of data transmission from the edge server to the cloud is

t_{i,c}^j = d_i^j / B_ec    (4)

where t_{i,c}^j denotes the transmission delay for subtask j of mobile device i from the edge server to the cloud, and B_ec denotes the upstream bandwidth between the edge server and the cloud. Given the transmission delay between edge and remote cloud and the transmission power p_e of the edge server, the transmission energy can be expressed as

e_{i,c}^j = p_e · t_{i,c}^j    (5)

where e_{i,c}^j is the energy consumed when sending subtask j of mobile device i from the edge to the cloud.

2.3 Computation Model

Computation on local device.

Let f_i be the CPU clock speed of mobile device i and w_i^j be the workload of subtask j of mobile device i. If subtask j of mobile device i is executed locally, then the subtask's execution time is

τ_{i,l}^j = w_i^j / f_i    (6)

Given the computation time τ_{i,l}^j, the energy consumed by subtask j of mobile device i for local computing is

ε_{i,l}^j = κ (f_i)² w_i^j    (7)

where κ is an energy coefficient depending on the chip architecture and is set by default following [13].

Computation on edge.

Let f_e be the CPU frequency of the edge server. If subtask j of mobile device i is executed on the edge server, the computation time on the edge server is

τ_{i,e}^j = w_i^j / f_e    (8)

and the energy cost of the edge server can be expressed as:

ε_{i,e}^j = κ_e (f_e)^ν τ_{i,e}^j    (9)

According to [20], κ_e and ν are positive constants which can be obtained by offline power fitting, and ν ranges from 2.5 to 3. If subtask j of mobile device i is executed on the cloud with CPU frequency f_c, the computation delay and energy cost of the remote cloud are as follows:

τ_{i,c}^j = w_i^j / f_c    (10)

and

ε_{i,c}^j = κ_c (f_c)^ν τ_{i,c}^j    (11)
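The execution-time and power models above can be captured in a short sketch. The `kappa`/`nu` values here are made up for illustration; real values come from offline power fitting as the text notes.

```python
def exec_time(workload_cycles, cpu_freq_hz):
    # Eqs. (6), (8), (10): CPU cycles divided by clock speed
    return workload_cycles / cpu_freq_hz

def exec_energy(kappa, cpu_freq_hz, nu, time_s):
    # Power model P = kappa * f^nu, so energy = P * t (as in Eqs. (9), (11));
    # kappa and nu are fitted offline, with nu in [2.5, 3] per [20]
    return kappa * (cpu_freq_hz ** nu) * time_s

# Illustrative comparison (assumed values, not the paper's settings):
local_t = exec_time(1e9, 1e9)   # 1 Gcycle task on a 1 GHz mobile device
edge_t = exec_time(1e9, 4e9)    # the same task on a faster edge server
```

Note the tension the models expose: a faster clock shortens execution time linearly but raises power super-linearly, which is exactly why the placement decision is non-trivial.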

2.4 Dependency Constraints

Definition 1

Subtask's completion time: subtask j of mobile device i can only start when all its predecessor subtasks have been completed. The completion time of the j-th subtask of mobile device i consists of two parts: the time spent to obtain the results of all its predecessor tasks and the time spent on its own computation.

Definition 2

Energy cost to accomplish one subtask: it also consists of two parts: the energy spent getting the results of predecessor tasks and the energy spent on its own execution.

Based on the above definitions, if subtask j of mobile device i is assigned to be executed locally, its completion time can be expressed as:

T_{i,l}^j = max_{k ∈ pred(j)} { T_i^k + λ_{i,e}^k t_{i,e}^k + λ_{i,c}^k (t_{i,e}^k + t_{i,c}^k) } + τ_{i,l}^j    (12)

and the energy cost for local completion is

E_{i,l}^j = Σ_{k ∈ pred(j)} ( λ_{i,e}^k e_{i,e}^k + λ_{i,c}^k (e_{i,e}^k + e_{i,c}^k) ) + ε_{i,l}^j    (13)

In (12) and (13), T_i^k denotes the completion time of predecessor subtask k under its own assignment, and pred(j) denotes all the predecessor subtasks of the j-th subtask. In (12), the term λ_{i,e}^k t_{i,e}^k is the delay to obtain the predecessor subtask's result if the predecessor subtask k is executed on the edge server. Similarly, the term λ_{i,c}^k (t_{i,e}^k + t_{i,c}^k) is the delay to obtain the result if the predecessor subtask k is accomplished on the cloud server.

If subtask j of mobile device i is assigned to be executed on the edge server, the completion time of subtask j can be defined as:

T_{i,e}^j = max_{k ∈ pred(j)} { T_i^k + λ_{i,l}^k t_{i,e}^k + λ_{i,c}^k t_{i,c}^k } + τ_{i,e}^j    (14)

where λ_{i,l}^k is the predecessor subtask's assignment strategy on the mobile device: λ_{i,l}^k = 1 means the k-th subtask is computed on the local mobile device, and λ_{i,l}^k = 0 otherwise. The term λ_{i,l}^k t_{i,e}^k is the delay to transmit the result of a predecessor task from the mobile device to the edge server, while λ_{i,c}^k t_{i,c}^k is the delay to send the prior result from the remote cloud to the edge server.

Let E_{i,e}^j be the energy cost for subtask j of device i executed on the edge server; similarly to (13), it can be defined as

E_{i,e}^j = Σ_{k ∈ pred(j)} ( λ_{i,l}^k e_{i,e}^k + λ_{i,c}^k e_{i,c}^k ) + ε_{i,e}^j    (15)

Similarly to (12) and (14), if subtask j of mobile device i is assigned to be executed on the remote cloud, its completion time can be expressed as

T_{i,c}^j = max_{k ∈ pred(j)} { T_i^k + λ_{i,l}^k (t_{i,e}^k + t_{i,c}^k) + λ_{i,e}^k t_{i,c}^k } + τ_{i,c}^j    (16)

and the corresponding energy cost to complete the subtask on the remote cloud is

E_{i,c}^j = Σ_{k ∈ pred(j)} ( λ_{i,l}^k (e_{i,e}^k + e_{i,c}^k) + λ_{i,e}^k e_{i,c}^k ) + ε_{i,c}^j    (17)
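Definition 1 can be realized as a small helper: a subtask becomes ready only when every predecessor's result has reached its execution site. This is a sketch of the recurrence pattern, with generic argument names rather than the paper's notation.

```python
def completion_time(pred_finish_times, transfer_delays, own_exec_time):
    """Definition 1 as code: the subtask starts once all predecessor results
    are available at its execution site, then adds its own execution time.

    pred_finish_times: finish time of each predecessor (already computed)
    transfer_delays: per-predecessor delay to move that result to this site
    own_exec_time: the subtask's own computation time at the chosen site
    """
    ready = max((f + d for f, d in zip(pred_finish_times, transfer_delays)),
                default=0.0)  # a subtask with no predecessors is ready at t=0
    return ready + own_exec_time
```

Evaluating subtasks in topological order and feeding each result forward reproduces the max-over-predecessors structure of Equations (12), (14), and (16).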

2.5 Utility constraints

Next, we derive the utility constraint of the edge server and the time budget for the completion time. The utility of the edge server is

(18)

where U is the utility of the edge server and q is the service price of the edge server.

3 Problem Formulation

In this section, we present the problem formulation with the completion time budget constraint and the utility constraint. First, the completion time of all tasks on mobile device i can be defined as

T_i = max_{j ∈ S_i} ( λ_{i,l}^j T_{i,l}^j + λ_{i,e}^j T_{i,e}^j + λ_{i,c}^j T_{i,c}^j )    (19)

where T_{i,l}^j is the completion time of subtask j if it is executed locally, T_{i,e}^j is the completion time of subtask j if it is executed on the edge server, and T_{i,c}^j is the completion time of subtask j if it is executed on the remote cloud.

The total energy consumption of one application, which is denoted as E_i, is

E_i = Σ_{j ∈ S_i} ( λ_{i,l}^j E_{i,l}^j + λ_{i,e}^j E_{i,e}^j + λ_{i,c}^j E_{i,c}^j )    (20)

where E_{i,l}^j is the energy consumption of subtask j if it is executed on the mobile device, E_{i,e}^j is the energy cost of subtask j if it is executed on the edge server, and E_{i,c}^j is the energy cost of subtask j if it is executed on the remote cloud.

In this work, the goal is to minimize the total energy consumption of tasks while meeting the completion time constraint. Meanwhile, the utility of the edge server is guaranteed. The energy consumption minimization problem can thus be defined as:

min  Σ_i E_i
s.t. C1: U ≥ 0
     C2: T_i ≤ T_i^b, ∀ i
     C3: λ_{i,l}^j, λ_{i,e}^j, λ_{i,c}^j ∈ {0, 1}, ∀ i, j
     C4: λ_{i,l}^j + λ_{i,e}^j + λ_{i,c}^j = 1, ∀ i, j

where constraint C1 is the utility constraint, which guarantees a positive utility for the edge server. C2 is the task completion time budget, i.e., the delay constraint. C3 lists the binary constraints, and C4 is the unique solution constraint, which means that one subtask can only be executed at one place.

Theorem 3.1

The sum task completion energy minimization problem for computation offloading in this study is NP-hard.

Proof

We consider a special case of the original problem in which the mobile device, edge server, and remote cloud server have the same configurations, resulting in the same energy costs and execution times when executing tasks. Regarding each subtask as a good with a value and a weight, the value corresponds to the execution time while the weight corresponds to the energy cost. We then ignore the task dependency constraints between subtasks as well as the utility constraint. The completion time budget can then be viewed as the knapsack's value constraint. Therefore, the relaxed problem becomes a knapsack problem [16], which is NP-hard, and hence the original problem is also NP-hard, which concludes the proof.
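To make the size of the search space concrete: each subtask can run locally, on the edge, or in the cloud, so an exhaustive search (the Brute Force baseline evaluated in Section 5) must examine 3^M placements. A minimal sketch, with caller-supplied `cost` and `feasible` functions standing in for Equation (20) and the constraints:

```python
from itertools import product

def brute_force(num_subtasks, cost, feasible):
    """Exhaustive baseline: each subtask runs locally ('l'), on the edge
    ('e'), or in the cloud ('c'), giving 3^M candidate policies. The cost
    and feasible callables over a policy tuple are assumed interfaces."""
    best, best_cost = None, float("inf")
    for policy in product("lec", repeat=num_subtasks):
        if feasible(policy) and cost(policy) < best_cost:
            best, best_cost = policy, cost(policy)
    return best, best_cost
```

The exponential loop is exactly why the paper turns to greedy and simulated annealing heuristics for larger instances.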

4 Algorithms

4.1 Gain method

Based on the above models and analysis, we first design a greedy method named Gain to minimize the energy consumption of the mobile device when executing tasks. To acquire the minimum energy cost of all subtasks of an application on mobile device i, the minimum energy cost of each subtask j is selected from its local, edge, and cloud energy costs. This procedure is shown in Lines 1 to 12 of Algorithm 1.

Then, we iteratively adjust the initial offloading policy to satisfy the utility constraint and the completion time budget. If the offloading policy does not satisfy the utility constraint, the number of subtasks executed on the remote cloud is too large for the edge server to profit from serving mobile users. To satisfy the utility constraint, we must move some subtasks from the remote cloud to the mobile device or to the edge server.

The algorithm then chooses which subtask to move. To obtain the minimum energy cost, we take the change in energy cost as the criterion for setting priority: the smaller the change in energy cost, the higher the priority.

To satisfy the completion time budget, we compute the change in completion time and the change in energy cost for each offloading choice. We choose the offloading strategy that decreases the completion time while guaranteeing the minimum change in energy cost. Due to the utility constraint, the choice of offloading site for each subtask must be made carefully. If a subtask is assigned to the mobile device, the offloading choice must be from the mobile device to the edge server. If a subtask is assigned to the edge server, the offloading choice must be from the edge server to the mobile device. If a subtask is assigned to the remote cloud, the offloading choice can be either from the remote cloud to the edge server or from the remote cloud to the mobile device. The details of the Gain algorithm are depicted in Algorithm 1.

0:  Input: the sequence of subtasks of mobile device i in execution order; the workload sizes of the subtasks; the data sizes of the subtasks; the completion time budget for the subtasks; a 2-D array of each subtask's predecessors;
0:  Output: the policy for each subtask executed locally on the mobile device; the policy for each subtask executed on the edge server; the policy for each subtask executed on the remote cloud;
1:  for  do
2:     compute E_{i,l}^j, E_{i,e}^j, E_{i,c}^j by Equations (13), (15), (17)
3:     if   then
4:         1, 0, 0
5:     end if
6:     if   then
7:         0, 1, 0
8:     end if
9:     if   then
10:         0, 0, 1
11:     end if
12:  end for
13:  compute and
14:  while   do
15:     if  then
16:        choose the subtask that brings about the minimum change in energy consumption when offloading the subtask from the remote cloud to the edge server, or from the remote cloud to the mobile device.
17:     end if
18:     if  then
19:        for  do
20:           if  then
21:              compute the changing energy cost when offloading the subtask from mobile device to the edge server.
22:           end if
23:           if  then
24:              compute the changing energy cost when offloading the subtask from the edge server to mobile device
25:           end if
26:           if  then
27:              compute the changing energy cost when offloading the subtask from remote cloud to mobile device or from remote cloud to edge server.
28:           end if
29:           choose the offloading policy with the minimum changing energy cost and decrease changing completing time
30:        end for
31:     end if
32:  end while
Algorithm 1 Gain method for mobile device
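The initialization phase of Algorithm 1 (lines 1-12) can be sketched as follows. The per-subtask energy dictionary is a stand-in for Equations (13), (15), and (17); the function name is our own, not from the paper.

```python
def gain_init(energies):
    """Greedy seed of the Gain method: for each subtask, pick the execution
    site (local 'l', edge 'e', or cloud 'c') with the smallest completion
    energy, as lines 1-12 of Algorithm 1 do.

    energies: list of dicts like {'l': ..., 'e': ..., 'c': ...}, one per
    subtask, giving the energy cost of each placement (assumed interface).
    """
    policy = []
    for e in energies:
        site = min(e, key=e.get)  # cheapest execution site for this subtask
        policy.append(site)
    return policy
```

The subsequent while loop of Algorithm 1 then repairs this seed against the utility and completion-time-budget constraints by moving subtasks between sites with the smallest energy penalty.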
Theorem 4.1

The time complexity of the Gain algorithm is .

Proof

In Algorithm 1, the time complexity of the subprocess from line 1 to 12 is linear in the number of subtasks, and the subprocess from line 14 to 31 is bounded for the reason that the number of adjustments will not exceed the number of subtasks.

Let E* be the optimal energy cost, with the corresponding optimal strategy denoted as λ*. Let E_g be the energy cost obtained by executing Algorithm 1 and λ_g be the strategy produced by Algorithm 1. Then we have E* ≤ E_g.

Theorem 4.2

Proof

As shown in (21), the energy cost of the obtained strategy can be defined as the sum of the energy costs of the individual subtasks.

(21)

It stands to reason that the optimal strategy always includes, for each subtask, the offloading choice with the minimum energy cost, so the optimal energy cost can be bounded as:

(22)

where the term in (22) denotes the minimum energy cost for the j-th subtask. It also stands to reason that Algorithm 1 does not always pick the offloading choice with the maximum energy cost, so (23) holds.

(23)

where the term in (23) denotes the maximum energy cost for the j-th subtask. If the bound holds for each subtask, it holds for the whole strategy as well. The energy cost of each subtask includes the energy cost of requesting the results from its predecessor subtasks and the energy cost of executing the subtask itself, so the bound can be rewritten as

(24)

Based on Equations (7), (9), and (11), the maximum per-subtask execution energy is obtained from the maximum of the local, edge, and cloud execution energies. Similarly, based on (13), (15), and (17), the minimum per-subtask completion energy is obtained from the minimum of the corresponding completion energies. Based on the above, (24) holds, so (23) and the claimed bound hold, which concludes the proof.

4.2 Simulated annealing

Simulated annealing (SA) [22] is a local search algorithm. Basic SA starts from a randomly selected solution, but in our algorithm we obtain the initial solution from the Gain algorithm. Next, we initialize the temperature of SA. Then, while the temperature is above the termination threshold, we randomly adjust the offloading policy and determine whether the SA algorithm accepts the new policy. The details of the SA algorithm are shown in Algorithm 2.

0:  Input: the sequence of subtasks of mobile device i in execution order; the workload sizes of the subtasks; the data sizes of the subtasks; the completion time budget for the subtasks; a 2-D array of each subtask's predecessors; the initial temperature; the cooling speed;
0:  Output: the policy for each subtask executed locally on the mobile device; the policy for each subtask executed on the edge server; the policy for each subtask executed on the remote cloud;
1:  get the initial offload policy from gain method
2:  
3:  while  do
4:     compute based on the offload policy
5:     randomly choose a subtask
6:     
7:     
8:     if  then
9:         1, 0, 0
10:     end if
11:     if  then
12:         0, 1, 0
13:     end if
14:     if  then
15:         0, 0, 1
16:     end if
17:     compute , and
18:     if  then
19:        accept the new policy with probability
20:        
21:     end if
22:  end while
Algorithm 2 Simulated annealing for mobile device
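The acceptance step of Algorithm 2 follows the standard Metropolis rule, which we sketch below; the function names and the geometric cooling schedule are our illustrative choices for the "speed of cooling" input.

```python
import math
import random

def sa_accept(delta_energy, temperature, rng=random.random):
    """Metropolis rule used in SA: always accept an improvement, and accept
    a worse policy with probability exp(-delta/T) so the search can escape
    a local optimum that the Gain seed may sit in."""
    if delta_energy <= 0:
        return True
    return rng() < math.exp(-delta_energy / temperature)

def cool(temperature, alpha):
    # Geometric cooling schedule; alpha in (0, 1) is the cooling speed,
    # so acceptance of worse moves becomes rarer as the search proceeds.
    return temperature * alpha
```

As the temperature decreases, exp(-delta/T) shrinks toward zero, so the search gradually hardens into pure greedy descent.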

5 Performance Evaluation

To study the performance of the proposed algorithms, we implement them on a high-performance workstation with an Intel i7 processor and 8 GB of RAM. We use Python [1] to simulate the offloading of subtasks and evaluate the algorithms in terms of running time, application completion time, and energy cost over repeated trials.

In order to simulate real-world tasks, we use a typical task graph [21] as shown in Fig. 2. In Fig. 2, dependency constraints exist between subtasks, which determine the execution order. Based on the task graph, one possible execution sequence for the subtasks is given by a topological order of the graph.

Figure 2: The task graph.

5.1 Simulation Setup

We set 8 subtasks with evenly distributed workloads and evenly distributed data sizes. The noise power between the edge server and the mobile device, the wireless upload bandwidth, and the wireless download bandwidth are set following [11], as are the upload and download bandwidths between the edge server and the remote cloud. The CPU frequencies of the mobile device, the edge server, and the remote cloud, together with the system parameters, are set following [19]. The communication chip powers of the mobile device, the edge server, and the remote cloud are set following [17].

5.2 Algorithm Performance

Figure 3: The comparison of three algorithm’s executing time on different average workload size.

Fig. 3 shows the comparison of Gain, Brute Force, and SA in terms of running time under different workload sizes. From Fig. 3, we observe that the running time of Brute Force ranges from 7.54 s to 7.68 s, while the running time of SA is nearly constant and the running time of Gain is less than 0.02 s. This is because Brute Force exhaustively tries all solutions, and the solution space of the problem is 3^M, where M denotes the number of subtasks. From Fig. 3, we can also observe that the running times of the three algorithms show almost no fluctuation, which indicates the robustness of the algorithms: the difference between the maximum and minimum running times of Brute Force is small, and the same holds for Gain and SA.

Figure 4: The energy cost of SA with the change of workload size.
Figure 5: The energy cost of Gain and Brute Force with the change of workload size.

Fig. 4 and Fig. 5 show the comparison of Gain, Brute Force, and SA in terms of energy cost under different workload sizes. In both figures, Brute Force always obtains the minimum energy cost among the three algorithms. On the other hand, because SA uses the result of Gain as its initial solution and cannot find a more effective offloading strategy, SA always obtains the same result as Gain. From the comparison of Brute Force and Gain, we observe that Gain can achieve the same completion time budget performance as the optimal result with only a small amount of extra energy cost. In Fig. 4, the energy cost generally increases as the workload size grows, but it falls at one point due to the task dependency constraint. From Fig. 5, the trend in the energy consumption of Gain is almost the same as that of Brute Force, which indicates that Gain always moves in the direction of the optimal solutions.

Figure 6: The comparison of application completion time of Gain, Brute Force and Budget on different workload size.
Figure 7: The comparison of application completion time as a percentage of Budget.

Fig. 6 shows the comparison of the application completion times of Gain and Brute Force. The completion time budget can be represented as in (25), where W denotes the workload matrix and M_i denotes the number of subtasks of mobile device i.

(25)

From Fig. 6, we observe that Gain and Brute Force always obtain a feasible solution which satisfies the constraints. In Fig. 7, the completion time of Gain stays within the completion time budget across all workload sizes.

From Fig. 8, along with the growth of the completion time budget, the number of tasks executed on the edge server decreases from 40 to 34 and the number of tasks executed on the mobile device increases from 0 to 6, until they reach a balance. This is due to the utility constraint and the design of the algorithm. In Fig. 9, as the number of tasks assigned to the edge server decreases from 40 to 34, the profit of the edge server becomes smaller.

Figure 8: The change of the number of task on mobile device and edge server along with the increase of budget.
Figure 9: The change of with the increase of budget.

6 Conclusions

This paper has addressed novel computation offloading schemes with device, edge, and remote cloud collaboration. We formulated the offloading problem as an energy cost minimization problem with an application completion time budget and an edge server profit constraint; the problem is NP-hard. Focusing on this problem, we designed a greedy algorithm that aims to minimize the energy cost while satisfying the completion time, utility, and task dependency constraints. From the experimental results, the following points are obtained. First, the implementation shows that in a three-tier structure of mobile device, edge server, and remote cloud, the edge server plays a very important role in effectively reducing the energy consumption of task execution. Second, the proposed greedy algorithm matches the application completion time budget performance of the Brute Force optimal algorithm with only 31% extra energy cost. The simulated annealing algorithm achieves performance similar to that of the greedy algorithm.

In the future, we will devise online algorithms by modifying the initialization process of each algorithm and study the minimum energy cost problem with a completion time budget for each individual subtask.

7 Acknowledgment

This work was supported by the National Key R&D Program of China under Grant No. 2018YFB1003201. It was also supported in part by the National Natural Science Foundation of China under Grant Nos. 61702115 and 61672171, by the Major Research Project of Educational Commission of Guangdong Province under Grant No. 2016KZDXM052, and the China Postdoctoral Science Foundation Fund under Grant No. 2017M622632.

References