BS-assisted Task Offloading for D2D Networks with Presence of User Mobility

01/09/2019 · Ghafour Ahani, et al. · Uppsala universitet

Task offloading is a key component in mobile edge computing. Offloading a task to a remote server consumes communication and networking resources. An alternative is device-to-device (D2D) offloading, where the task of a device is offloaded to some device with computational resource available. The latter requires that the devices are within range of each other, first for task collection, and later for result gathering. Hence, in mobility scenarios, the performance of D2D offloading suffers if the contact rates between the devices are low. We enhance the setup to base station (BS) assisted D2D offloading, namely, a BS can act as a relay for task distribution or result collection. However, this implies additional consumption of wireless resources. The associated cost and the improvement in task completion time compose a fundamental trade-off. For the resulting optimization problem, we mathematically prove the complexity, and propose an algorithm using Lagrangian duality. The simulation results not only demonstrate that the algorithm has close-to-optimal performance, but also provide structural insights into the optimal trade-off.


I Introduction

Task offloading is a key component in mobile edge computing. Typically, tasks are offloaded to remote servers [1, 2, 3] or to computing resources near the users, e.g., base stations (BSs) [4, 5]. However, these incur significant overhead in communications and networking [6]. An attractive alternative is to offload tasks to nearby users [7, 8]. For example, a user that currently runs on low energy can offload its task to an idle user with energy available for computation.

In mobility scenarios, the data of a task can be delivered via Device-to-Device (D2D) communications as the users move and meet each other [9, 10]. However, this is not a system-wide optimal strategy, especially when some users have low contact rates with others. In such a situation, the system can be enhanced by letting the BSs act as relays for task distribution and result collection. In fact, this approach enables better utilization of the energy capacity of the users. On the other hand, not all tasks should be relayed via the BSs, as this requires a large number of communications with the BSs. Thus, which device to offload to and how long one should wait before the BS is called for are both key aspects in optimal task offloading.

Looking into the literature, there are relatively few works [7, 8, 11, 12] that consider task offloading in mobility scenarios. The works in [7, 8] assumed that the connection between two users is stable during the entire offloading process. The authors of [11] considered offloading one task to nearby users with maximization of the success ratio of obtaining the result. However, none of these studies utilized BSs as relays. The investigation in [12] considered a hybrid method where a task can be offloaded to a remote server, a BS, or a nearby mobile user. BSs are utilized as relays for delivering the results; however, the trade-off between completion time and the cost of using the BS is not accounted for.

In this paper, we study task offloading where users can offload their tasks to either remote servers or peer devices, possibly using the BSs, in a mobility scenario. For each task, we define a cost related to the completion time and processing. For offloading, a user can wait longer to increase the opportunity of contact and collect the result via D2D, but the completion time could then be quite long. The completion time can be made shorter if a BS assists with the offloading, but this involves additional communication costs. Therefore, we optimize the time before the BS is involved in task offloading. In addition, each task has a completion time deadline before which the result of the task must be obtained. Our aim is to minimize the total cost of the system. Moreover, the available energy of the users for processing is taken into consideration. The contributions of this study are as follows. We formulate the task offloading problem and show how it can be effectively linearized. We also prove mathematically the complexity of the problem. Next, an algorithm based on Lagrangian duality is provided for problem solving. Our algorithm is compared to other algorithms. Simulation results not only demonstrate that the algorithm has close-to-optimal performance, but also provide structural insights into the optimal trade-off.

II System Model and Problem Formulation

II-A System Model

In our system scenario, a set of users, referred to as requesters, need to offload their tasks; their index set is denoted by $\mathcal{R}$. A second set of users, referred to as helpers, have energy available for task processing; their index set is denoted by $\mathcal{H}$. We assume that all users are within network coverage, such that BSs can be used as relays for task distribution or result collection. Merely to simplify the presentation, we assume there is one BS. The system scenario is shown in Figure 1.

Figure 1: System scenario of D2D task offloading with possible BS assistance and presence of user mobility.

For the sake of presentation, we assume each requester has only one task. However, our formulations and algorithms can be generalized easily to the scenario where each requester has multiple tasks. Hereafter we use task and requester interchangeably. The amounts of energy required for processing task $i$ and for one communication with the BS are denoted by $E_i$ and $e_{\mathrm{bs}}$, respectively. Each helper $h \in \mathcal{H}$ can provide at most $B_h$ units of energy to process tasks. Processing task $i$ by a helper and by the remote server incurs costs $c^{\mathrm{h}}_i$ and $c^{\mathrm{s}}_i$, respectively; these costs typically relate to the amount of energy required for computation. Each communication with the BS and with the remote server incurs a cost, denoted by $c_{\mathrm{bs}}$ and $c_{\mathrm{sv}}$, respectively. The cost of D2D communications is negligible.

The inter-contact model is widely used to characterize the mobility pattern of mobile users [13, 14]. Hereafter, the term contact refers to the event that two users come into the communication range of each other. The inter-contact time, i.e., the time between two consecutive contacts of any two users, follows an exponential distribution [15]. Hence, the number of contacts between any two mobile users follows a Poisson distribution [16]. Moreover, it is assumed that the contacts of different user pairs are independent.

We consider a time-slotted system consisting of $T$ time slots, $1, \dots, T$, each with duration $\delta$. The deadline of task $i$ is denoted by $d_i$. As mentioned earlier, we optimize the time before which a requester uses the BS for task distribution and/or result collection. Thus, for requester $i$ and helper $h$ there is a timer whose value is denoted by $t_{ih}$. The tasks are assumed to be delay tolerant, hence the duration of a time slot (which is on the order of an hour) is considerably larger than the task processing time. Therefore, we do not account for the processing time of the tasks. Moreover, as the contacts between helpers and requesters are stochastic, we consider the expected value of the total system cost. The following five events may occur once helper $h$ is designated to task $i$:

  1. They meet at least twice before $t_{ih}$; then the task is collected and the result is obtained, both via D2D.

  2. They meet exactly once before $t_{ih}$, so the task is collected, and they meet at least once again between $t_{ih}$ and $d_i$, so the result is obtained. This case also uses D2D communications twice.

  3. They do not meet before $t_{ih}$, but they meet at least once between $t_{ih}$ and $d_i$. Then the BS is involved to deliver the task to the helper (with two communications: requester → BS, and BS → helper), and the result is obtained via D2D communication.

  4. They meet exactly once before $t_{ih}$, but they do not meet again after this time point until $d_i$. Then the task is given to the helper via D2D communication and the result is obtained via the BS (with two communications: helper → BS, and BS → requester).

  5. They do not meet at all within $d_i$. In this case, the task is sent to the server for processing. (Here, the BS could also be used for both task distribution and result collection, but we do not account for this option because it involves four communications with the BS.)

There is a cost associated with the task completion time, defined as the duration from the starting time point until the requester obtains the task's result. We introduce a cost function $g(k)$, which is the cost for a completion time of $k$ slots. For the events above, we derive the total expected task completion cost, including the cost of the task completion time and of the communications where applicable.

II-B Cost Model

Denote by $x_{ih}$ the binary variable representing whether requester $i$ offloads its task to helper $h$. The corresponding variable for offloading the task to the server is denoted by $y_i$. Denote by $P_{ih}(m, t_1, t_2)$ the probability that requester $i$ meets helper $h$ exactly $m$ times during time slots $t_1$ to $t_2$. Note that when $t_1 = t_2 = t$, it is the probability of having $m$ contacts within time slot $t$. For the special case of an empty interval ($t_1 > t_2$), there are two cases, i.e., $m \ge 1$ and $m = 0$; intuitively, their corresponding probabilities are defined to be zero and one, respectively. Other probabilities used later are defined in a similar way. Here, the number of contacts follows a Poisson distribution with mean $\lambda_{ih}\,\delta\,(t_2 - t_1 + 1)$, where $\lambda_{ih}$ represents the average number of contacts per unit time between requester $i$ and helper $h$. Denote by $P^{(e)}_{ih}$ the probability that event $e$ occurs, and by $c^{(e)}_{ih}$ the expected cost of event $e$, $e = 1, \dots, 5$.
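For reference, under the Poisson contact model just described, this probability has the standard closed form (simply the Poisson probability mass function, with its mean expressed through $\lambda_{ih}$, $\delta$, and the interval length):

$$P_{ih}(m, t_1, t_2) = \frac{\big(\lambda_{ih}\,\delta\,(t_2 - t_1 + 1)\big)^m}{m!}\, e^{-\lambda_{ih}\,\delta\,(t_2 - t_1 + 1)}, \qquad m = 0, 1, 2, \dots$$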

The cost of assigning the task to helper $h$ originates from the waiting time before task completion, the communications with the BS (if applicable), and the task processing. The associated expected cost of each event is derived and shown in Table I. For events 1 and 2, the first and second terms are the expected costs related to the task completion time and processing, respectively. For events 3 and 4, the first, second, and third terms are the expected costs related to the task completion time, processing, and communications with the BS. For event 5, we have the costs of processing and communications with the server.

Event | Expected cost | Probability
1 | $\bar{g}^{(1)}_{ih} + c^{\mathrm{h}}_i$ | $1 - P_{ih}(0, 1, t_{ih}) - P_{ih}(1, 1, t_{ih})$
2 | $\bar{g}^{(2)}_{ih} + c^{\mathrm{h}}_i$ | $P_{ih}(1, 1, t_{ih})\,\big[1 - P_{ih}(0, t_{ih}+1, d_i)\big]$
3 | $\bar{g}^{(3)}_{ih} + c^{\mathrm{h}}_i + 2c_{\mathrm{bs}}$ | $P_{ih}(0, 1, t_{ih})\,\big[1 - P_{ih}(0, t_{ih}+1, d_i)\big]$
4 | $\bar{g}^{(4)}_{ih} + c^{\mathrm{h}}_i + 2c_{\mathrm{bs}}$ | $P_{ih}(1, 1, t_{ih})\,P_{ih}(0, t_{ih}+1, d_i)$
5 | $c^{\mathrm{s}}_i + 2c_{\mathrm{sv}}$ | $P_{ih}(0, 1, t_{ih})\,P_{ih}(0, t_{ih}+1, d_i)$
Table I: Expected costs and probabilities of events; $\bar{g}^{(e)}_{ih}$ denotes the expected completion-time cost under event $e$.
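To make the event model concrete, the following Python sketch evaluates the five event probabilities of Table I for a single requester–helper pair under the Poisson contact model; the function and parameter names are illustrative, not taken from the paper.

```python
import math

def poisson_pmf(m, mean):
    """Probability of exactly m contacts when the count is Poisson(mean)."""
    return math.exp(-mean) * mean**m / math.factorial(m)

def event_probabilities(lam, delta, timer, deadline):
    """Probabilities of the five offloading events for one requester-helper pair.

    lam      -- average number of contacts per unit time (lambda_ih)
    delta    -- slot duration
    timer    -- number of slots before the BS is involved (t_ih)
    deadline -- task deadline in slots (d_i), with 0 <= timer <= deadline
    """
    mean_before = lam * delta * timer               # contacts expected in slots 1..t_ih
    mean_after = lam * delta * (deadline - timer)   # contacts expected in slots t_ih+1..d_i
    p0_before = poisson_pmf(0, mean_before)
    p1_before = poisson_pmf(1, mean_before)
    p0_after = poisson_pmf(0, mean_after)
    return {
        1: 1.0 - p0_before - p1_before,             # at least 2 contacts before t_ih
        2: p1_before * (1.0 - p0_after),            # 1 before, at least 1 after
        3: p0_before * (1.0 - p0_after),            # 0 before, at least 1 after (BS delivers task)
        4: p1_before * p0_after,                    # 1 before, 0 after (BS returns result)
        5: p0_before * p0_after,                    # no contact within the deadline (server)
    }

if __name__ == "__main__":
    probs = event_probabilities(lam=0.2, delta=1.0, timer=4, deadline=10)
    print(probs, "sum =", sum(probs.values()))      # the probabilities sum to 1
```

Since the five events partition all possible contact patterns, the returned probabilities always sum to one, which the final print statement checks.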

Thus, the total expected cost for using helper $h$ for task $i$ is:

$$C_{ih}(t_{ih}) = \sum_{e=1}^{5} P^{(e)}_{ih}(t_{ih})\, c^{(e)}_{ih}(t_{ih}). \qquad \text{(1)}$$

Hence, the overall cost for all helpers and requesters is:

$$C(\mathbf{X}, \mathbf{T}) = \sum_{i \in \mathcal{R}} \Big( \sum_{h \in \mathcal{H}} x_{ih}\, C_{ih}(t_{ih}) + y_i\, C^{\mathrm{sv}}_i \Big), \qquad \text{(2)}$$

where $\mathbf{X}$ and $\mathbf{T}$ are two matrices representing the offloading and timer variables, respectively, and $C^{\mathrm{sv}}_i$ is the cost of offloading task $i$ directly to the server. Note that the cost function is highly nonlinear, but we prove in Section IV that this function can be linearized and the optimal values of the timers $t_{ih}$ can be preprocessed.

II-C Energy Consumption on Helpers

The energy consumed on a helper consists of the energy for processing and for communications with the BS, whereas the energy for D2D communications is negligible in comparison. As the contacts are stochastic, we consider the expected consumed energy. Hence, for requester $i$ and helper $h$ we have:

$$\bar{e}_{ih}(t_{ih}) = \big(P^{(1)}_{ih} + P^{(2)}_{ih}\big)\, E_i + \big(P^{(3)}_{ih} + P^{(4)}_{ih}\big)\, \big(E_i + e_{\mathrm{bs}}\big). \qquad \text{(3)}$$
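As a small companion sketch, the expected helper-side energy of equation (3) can be assembled from the event probabilities computed above; the argument names are again illustrative.

```python
def expected_helper_energy(probs, task_energy, bs_comm_energy):
    """Expected energy a helper spends on a task, per equation (3).

    probs          -- dict of event probabilities {1..5} for the pair
    task_energy    -- energy to process the task (E_i)
    bs_comm_energy -- energy for one communication with the BS (e_bs)
    """
    # Events 1-4: the helper processes the task; events 3 and 4 additionally
    # require one communication with the BS on the helper side.
    return ((probs[1] + probs[2]) * task_energy
            + (probs[3] + probs[4]) * (task_energy + bs_comm_energy))
```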

II-D Problem Formulation

The problem is formulated as follows:

$$\begin{aligned}
\min_{\mathbf{X},\, \mathbf{T}} \quad & C(\mathbf{X}, \mathbf{T}) && \text{(4a)}\\
\text{s.t.} \quad & \textstyle\sum_{h \in \mathcal{H}} x_{ih} + y_i = 1, \quad i \in \mathcal{R}, && \text{(4b)}\\
& \textstyle\sum_{i \in \mathcal{R}} x_{ih}\, \bar{e}_{ih}(t_{ih}) \le B_h, \quad h \in \mathcal{H}, && \text{(4c)}
\end{aligned}$$

with $x_{ih}, y_i \in \{0, 1\}$ and $t_{ih} \in \{0, 1, \dots, d_i\}$. Constraints (4b) state that a requester must offload its task to either a helper or the server. Constraints (4c) respect the available energy of the helpers.

III Complexity Analysis

Theorem 1.

The task offloading problem is NP-hard.

Proof.

We adopt a polynomial-time reduction from the Knapsack problem with $n$ items having weights $a_j$, values $v_j$, $j = 1, \dots, n$, and capacity $b$. Our reduction is as follows. We have one helper, i.e., $|\mathcal{H}| = 1$, with total available energy $b$. There are $n$ requesters, i.e., $|\mathcal{R}| = n$. The expected amount of energy for processing task $j$ is $a_j$. The contact rates are set such that each requester and the helper meet at least once with probability arbitrarily close to one. Also, we set the same deadline for all requesters, and the number of time slots is set to this deadline; by construction, the optimal timer value is then the same for all requesters. The processing costs at the helper and at the server, as well as the communication costs with the BS and the server, are set such that the overall completion cost of task $j$ is lower by the helper than by the server, and the gain of offloading task $j$ to the helper is $v_j$. By construction, the optimum of our problem solves the Knapsack problem instance. As the Knapsack problem is NP-hard, the conclusion follows. ∎

IV Algorithm Design

The cost in equation (2) has a rather complicated structure because of the nonlinearity. However, in the following we provide a structural insight stating that for each task $i$ and candidate helper $h$, the optimal value of $t_{ih}$ can be preprocessed. This enables us to reformulate the cost function as a linear function without loss of optimality.

Lemma 2.

For any pair $i$ and $h$, the optimal value of the timer $t_{ih}$ can be obtained with linear complexity.

Proof.

For each possible value of $t_{ih}$ from $0$ to $d_i$, the value of $C_{ih}(t_{ih})$ can be computed efficiently, because $C_{ih}$ is the sum of the costs of the five possible events, and the cost of each event involves calculating the event probability and the expected completion-time cost. The probabilities can be obtained via the closed-form Poisson formula, and the expected completion-time cost can be evaluated over at most $d_i$ time slots. Furthermore, the value of $t_{ih}$ is independent of the other pairs. These observations together enable us to obtain the optimal value of $t_{ih}$ by taking the minimum over all candidate values, i.e., $t^*_{ih} = \arg\min_{t \in \{0, \dots, d_i\}} C_{ih}(t)$. ∎
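A minimal sketch of the timer preprocessing implied by Lemma 2, assuming a callable `cost_of_pair(t)` that returns $C_{ih}(t)$ for a candidate timer value (for instance assembled from the event probabilities and Table I); the function name is hypothetical.

```python
def preprocess_timer(cost_of_pair, deadline):
    """Return (t*, C_ih(t*)) by enumerating all candidate timer values 0..deadline."""
    best_t, best_cost = 0, float("inf")
    for t in range(deadline + 1):
        cost = cost_of_pair(t)          # expected cost C_ih(t) for this timer value
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```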

By Lemma 2, the objective function is linearized as follows:

$$\min_{\mathbf{X}} \; \sum_{i \in \mathcal{R}} \Big( \sum_{h \in \mathcal{H}} x_{ih}\, C_{ih}(t^*_{ih}) + y_i\, C^{\mathrm{sv}}_i \Big) \qquad \text{(5)}$$

subject to (4b) and (4c).

IV-A Lagrangian Relaxation

We apply Lagrangian relaxation to (4c). Denote by $\mu_h \ge 0$, $h \in \mathcal{H}$, the corresponding Lagrange multipliers. We have the following Lagrangian relaxation:

$$L(\boldsymbol{\mu}) = \min_{\mathbf{X}} \; \sum_{i \in \mathcal{R}} \Big( \sum_{h \in \mathcal{H}} x_{ih} \big( C_{ih}(t^*_{ih}) + \mu_h\, \bar{e}_{ih}(t^*_{ih}) \big) + y_i\, C^{\mathrm{sv}}_i \Big) - \sum_{h \in \mathcal{H}} \mu_h B_h \qquad \text{(6)}$$

subject to (4b).

The above problem is polynomial-time solvable, as the only remaining constraint (4b) states that each task has to be assigned to either a helper or the server. Therefore, the optimal solution is, for each task, to pick the helper or the server that minimizes the multiplier-adjusted expected cost.
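For concreteness, the following sketch shows how the relaxed problem (6) decomposes per task: each requester picks the helper with the smallest multiplier-adjusted cost, or the server if that is cheaper. The array-based data layout (`pair_cost`, `pair_energy`, `server_cost`) is an illustrative assumption.

```python
def solve_relaxation(pair_cost, pair_energy, server_cost, mu):
    """Solve the relaxed problem (6) for fixed multipliers mu.

    pair_cost[i][h]   -- C_ih(t*_ih) for task i and helper h
    pair_energy[i][h] -- expected helper energy e_ih(t*_ih)
    server_cost[i]    -- cost of sending task i directly to the remote server
    mu[h]             -- Lagrange multiplier of helper h's energy constraint
    Returns the per-task assignment (helper index or 'server') and the summed cost term.
    """
    assignment, total = [], 0.0
    for i, costs in enumerate(pair_cost):
        adjusted = [c + mu[h] * pair_energy[i][h] for h, c in enumerate(costs)]
        best_h = min(range(len(adjusted)), key=adjusted.__getitem__)
        if adjusted[best_h] <= server_cost[i]:
            assignment.append(best_h)
            total += adjusted[best_h]
        else:
            assignment.append("server")
            total += server_cost[i]
    return assignment, total
```

The dual value $L(\boldsymbol{\mu})$ is then obtained by subtracting $\sum_{h} \mu_h B_h$ from the returned sum.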

IV-B Subgradient Optimization

The Lagrangian dual problem is $\max_{\boldsymbol{\mu} \ge \mathbf{0}} L(\boldsymbol{\mu})$. A subgradient $\mathbf{s}(\boldsymbol{\mu})$ of the concave function $L$ can be obtained as:

$$s_h(\boldsymbol{\mu}) = \sum_{i \in \mathcal{R}} x^*_{ih}\, \bar{e}_{ih}(t^*_{ih}) - B_h, \quad h \in \mathcal{H}, \qquad \text{(7)}$$

where $x^*_{ih}$ is obtained from the optimal solution to (6) for the given $\boldsymbol{\mu}$. The dual problem can be solved with subgradient optimization, described in Algorithm 1. In the algorithm, $K_{\max}$ is the maximal allowed number of iterations, and $L_{\mathrm{LB}}$ and $L_{\mathrm{UB}}$ denote the best known lower and upper bounds on the optimum of (5), respectively. Any feasible solution yields an upper bound, and $L_{\mathrm{UB}}$ is initialized accordingly. We use the following formula to calculate the step length [17]:

$$\theta = \beta\, \frac{L_{\mathrm{UB}} - L(\boldsymbol{\mu})}{\|\mathbf{s}(\boldsymbol{\mu})\|^2}, \qquad \text{(8)}$$

where $\beta \in (0, 2]$ is a step-size parameter.
1:  Choose a starting point $\boldsymbol{\mu}^{(0)} \ge \mathbf{0}$, choose $\beta$ and $K_{\max}$, set $k \leftarrow 0$, $L_{\mathrm{LB}} \leftarrow -\infty$
2:  repeat
3:     Solve (6) for $\boldsymbol{\mu}^{(k)}$, yielding $\mathbf{X}^{(k)}$ and $L(\boldsymbol{\mu}^{(k)})$; if $L(\boldsymbol{\mu}^{(k)}) > L_{\mathrm{LB}}$ then $L_{\mathrm{LB}} \leftarrow L(\boldsymbol{\mu}^{(k)})$
4:     Make an attempt to modify $\mathbf{X}^{(k)}$ to a feasible solution, and possibly update $L_{\mathrm{UB}}$
5:     Calculate the search direction $\mathbf{s}^{(k)}$ and step length $\theta^{(k)}$ using formulas (7) and (8), respectively
6:     Update $\boldsymbol{\mu}^{(k+1)} \leftarrow \max\{\mathbf{0},\, \boldsymbol{\mu}^{(k)} + \theta^{(k)} \mathbf{s}^{(k)}\}$
7:     $k \leftarrow k + 1$
8:  until $k = K_{\max}$
Algorithm 1: Lagrangian-based Algorithm

We carry out Step 4 as follows. For each helper whose energy constraint is violated, we reassign some of the allocated tasks to other helpers in ascending order of cost.
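Putting the pieces together, here is a sketch of Algorithm 1 that reuses `solve_relaxation()` from the earlier snippet. It uses a simplified repair that pushes tasks of overloaded helpers back to the server, whereas Step 4 as described above reassigns them to other helpers first; `beta` and `max_iter` correspond to $\beta$ and $K_{\max}$.

```python
def lagrangian_algorithm(pair_cost, pair_energy, server_cost, budget,
                         beta=1.0, max_iter=200):
    """Sketch of Algorithm 1: subgradient optimization of the Lagrangian dual."""
    n_tasks, n_helpers = len(pair_cost), len(budget)
    mu = [0.0] * n_helpers
    best_lb = float("-inf")
    best_ub = sum(server_cost)                    # trivial feasible solution: all tasks to the server
    best_assignment = ["server"] * n_tasks

    for _ in range(max_iter):
        assignment, value = solve_relaxation(pair_cost, pair_energy, server_cost, mu)
        dual_value = value - sum(m * b for m, b in zip(mu, budget))
        best_lb = max(best_lb, dual_value)        # L_LB update (Step 3)

        # Expected load of each helper under the relaxed solution.
        load = [0.0] * n_helpers
        for i, a in enumerate(assignment):
            if a != "server":
                load[a] += pair_energy[i][a]

        # Step 4 (simplified repair): push tasks of overloaded helpers to the server.
        repaired, residual = list(assignment), list(load)
        for h in range(n_helpers):
            tasks = sorted((i for i in range(n_tasks) if repaired[i] == h),
                           key=lambda i: pair_cost[i][h], reverse=True)
            for i in tasks:
                if residual[h] <= budget[h]:
                    break
                repaired[i] = "server"
                residual[h] -= pair_energy[i][h]
        ub = sum(server_cost[i] if a == "server" else pair_cost[i][a]
                 for i, a in enumerate(repaired))
        if ub < best_ub:                          # L_UB update
            best_ub, best_assignment = ub, repaired

        # Subgradient (7), Polyak-type step (8), and projected multiplier update (Step 6).
        subgrad = [load[h] - budget[h] for h in range(n_helpers)]
        norm_sq = sum(s * s for s in subgrad) or 1.0
        step = beta * (best_ub - dual_value) / norm_sq
        mu = [max(0.0, m + step * s) for m, s in zip(mu, subgrad)]

    return best_assignment, best_lb, best_ub
```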

V Performance Evaluation

For performance comparison, we consider two intuitive task offloading strategies, based on the expected cost and on the contact rates, respectively. For these two strategies, the tasks are allocated to helpers in descending order of expected completion cost and contact rate, respectively. After allocating a task, the residual energy of the helpers is updated. This process is repeated until each task is assigned to either a helper or the server.
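One possible reading of these two baselines as code, using the same illustrative data layout as before; smaller key means a more preferred helper, so the contact-based variant would pass negated contact rates.

```python
def greedy_baseline(pair_key, pair_energy, budget):
    """Greedy baseline: allocate each task to the best admissible helper by pair_key.

    pair_key[i][h] can be the expected completion cost (cost-based strategy) or the
    negated contact rate (contact-based strategy); smaller key means more preferred.
    """
    n_tasks, n_helpers = len(pair_key), len(budget)
    residual = list(budget)                       # residual energy of each helper
    assignment = []
    for i in range(n_tasks):
        candidates = [h for h in range(n_helpers) if residual[h] >= pair_energy[i][h]]
        if candidates:
            h = min(candidates, key=lambda h: pair_key[i][h])
            residual[h] -= pair_energy[i][h]
            assignment.append(h)
        else:
            assignment.append("server")           # no helper can accommodate the task
    return assignment
```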

The available energy of each helper is generated randomly within an interval (in Joule, J). The experiments in [18] have shown that the pairwise contact rates follow a Gamma distribution; we therefore draw the contact rates from a Gamma distribution. The energy required to process a task depends on two factors: the size of the data and the type of workload [19], and the number of CPU cycles needed to process one bit varies with the workload type [19, 20]. We generate tasks with data sizes within an interval (in MB) and assign them workloads such that the number of CPU cycles per bit lies in a bounded range. The energy consumption of one CPU cycle and of transmitting one bit of data are set according to [20]. The processing cost of a task on a helper and on the remote server, the communication costs, and the weighting factor $w$ of the completion-time cost are set to fixed values, and the slot duration is on the order of one hour. The deadlines of the tasks are generated randomly within a range of time slots, such that a task with more required energy has a longer deadline. All simulation results are obtained by averaging over a number of instances.

Figures 2 and 3 show the impact of the number of helpers and of the weighting factor $w$ on the expected total cost. In Figure 2, as expected, the cost decreases with the number of helpers. For few helpers, the performance gaps of the cost-based and contact-based strategies with respect to the Lagrangian-based algorithm are small, and the gaps grow as the number of helpers increases. The reason for the increase is that the available energy is limited when there are few helpers, thus most of the tasks are offloaded to the server, no matter which strategy or algorithm is used. When the number of helpers increases, the Lagrangian-based algorithm manages to utilize the energy of the helpers to accommodate more tasks, whereas the two other strategies are less effective in this regard. In addition, the solution from the Lagrangian-based algorithm stays close to the lower bound of the global optimum. This manifests that our algorithm produces close-to-optimal solutions.

In Figure 3, we observe that the overall expected cost increases with $w$. This is expected, as a higher $w$ means a larger coefficient on the completion-time cost in the objective function, whereas the solution space remains unchanged. The contact-based strategy performs better than the cost-based one for large values of $w$. The reason is that the expected cost of each event basically consists of two main parts: the cost related to the expected completion time and the cost related to processing and communications. The former depends on the weighting factor and on the contact rates between the requesters and helpers. Thus, a larger $w$ puts more emphasis on the contact rates, and consequently the contact-based strategy shows better performance when $w$ increases. Furthermore, the Lagrangian-based algorithm consistently and significantly outperforms the cost-based and contact-based task allocation strategies.

In Figure 4, the x-axis is the timer value relative to the deadline, before the BS is used, and the curves show the percentage of requesters whose timers are at most the value on the x-axis, for various values of $w$. We can see that when $w$ increases, more requesters use shorter timers. For example, for a large $w$, a notable share of requesters use the BS already at time point zero, while for a small $w$ this percentage decreases to almost zero. These results provide structural insights on using D2D communications versus the BS, as well as the resulting cost trade-off, in relation to the amount of emphasis put on the task completion time.

Figure 2: Impact of the number of helpers on the expected total cost.
Figure 3: Impact of the weighting factor on the expected total cost.
Figure 4: Impact of the weighting factor on the timers, i.e., the amount of time a user waits before the BS is called for assistance, relative to the deadlines.

VI Conclusions

We have studied a task offloading problem in the presence of user mobility and with possible assistance of the BS as a relay. For this optimization problem, we have provided structural insights, a complexity analysis, and a solution algorithm. Simulation results show that our algorithm has a small gap to the optimal solutions and outperforms the two other strategies, i.e., the cost-based and contact-based strategies. Our future work is to investigate a more hierarchical task offloading architecture for mobility scenarios.

References

  • [1] M. V. Barbera, S. Kosta, A. Mei, and J. Stefa, “To offload or not to offload? the bandwidth and energy costs of mobile cloud computing,” in Proc. IEEE Conference on Information Communications (INFOCOM), 2013, pp. 1285–1293.
  • [2] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, “A survey of mobile cloud computing: architecture, applications, and approaches,” IEEE Wireless Communications and Mobile Computing, vol. 13, no. 18, pp. 1587–1611, 2013.
  • [3] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, 2016.
  • [4] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “Mobile edge computing: survey and research outlook,” arXiv, preprint arXiv:1701.01090, 2017.
  • [5] S. Wang, X. Zhang, Y. Zhang, L. Wang, J. Yang, and W. Wang, “A survey on mobile edge networks: convergence of computing, caching and communications,” IEEE Access, vol. 5, pp. 6757–6779, 2017.
  • [6] M. H. Chen, B. Liang, and M. Dong, “Multi-user multi-task offloading and resource allocation in mobile cloud systems,” arXiv, preprint arXiv:1803.06577, 2018.
  • [7] A. Mtibaa, K. A. Harras, and A. Fahim, “Towards computational offloading in mobile device clouds,” in Proc. IEEE International Conference on Cloud Computing Technology and Science, 2013, pp. 331–338.
  • [8] A. Mtibaa, K. A. Harras, K. Habak, M. Ammar, and E. W. Zegura, “Towards mobile opportunistic computing,” in Proc. IEEE International Conference on Cloud Computing, 2015, pp. 1111–1114.
  • [9] G. Ahani and D. Yuan, “On optimal proactive and retention-aware caching with user mobility,” in Proc. IEEE Vehicular Technology Conference, Fall, 2018, pp. 1–5.
  • [10] T. Deng, G. Ahani, P. Fan, and D. Yuan, “Cost-optimal caching for D2D networks with user mobility: modeling, analysis, and computational approaches,” IEEE Transactions on Wireless Communication, vol. 17, no. 5, pp. 3082–3094, 2018.
  • [11] C. Wang, Y. Li, and J. Depeng, “Mobility-assisted opportunistic computation offloading,” IEEE Communications Letters, vol. 18, no. 10, pp. 1779–1782, 2014.
  • [12] M. Chen, Y. Hao, M. Qiu, J. Song, D. Wu, and I. Humar, “Mobility-aware caching and computation offloading in 5G ultra-dense cellular networks,” Sensors, vol. 16, no. 7, pp. 1–13, 2016.
  • [13] V. Conan, J. Leguay, and T. Friedman, “Fixed point opportunistic routing in delay tolerant networks,” IEEE Journal on Selected Areas in Communications, vol. 26, no. 5, pp. 773–782, 2008.
  • [14] T. Deng, G. Ahani, P. Fan, and D. Yuan, “Cost-optimal caching for D2D networks with presence of user mobility,” in Proc. IEEE Global Communications Conference (GLOBECOM), 2017, pp. 1–6.
  • [15] H. Zhu, L. Fu, G. Xue, Y. Zhu, M. Li, and L. Ni, “Recognizing exponential inter-contact time in vanets,” in Proc. IEEE Conference on Information Communications (INFOCOM), 2010, pp. 101–105.
  • [16] P. Sermpezis and T. Spyropoulos, “Modeling and analysis of communication traffic heterogeneity in opportunistic networks,” IEEE Transactions on Mobile Computing, vol. 14, no. 11, pp. 2316–2331, 2015.
  • [17] B. Polyak, “Minimization of unsmooth functionals,” USSR Computational Mathematics and Mathematical Physics, vol. 9, no. 3, pp. 14–29, 1969.
  • [18] A. Passarella and M. Conti, “Analysis of individual pair and aggregate intercontact times in heterogeneous opportunistic networks,” IEEE Transactions on Mobile Computing, vol. 12, no. 12, pp. 2483–2495, 2013.
  • [19] J. Kwak, Y. Kim, J. Lee, and S. Chong, “DREAM: dynamic resource and task allocation for energy minimization in mobile cloud systems,” IEEE Journal on Selected Areas in Communications, vol. 33, no. 12, pp. 2510–2523, 2015.
  • [20] A. P. Miettinen and J. K. Nurminen, “Energy efficiency of mobile clients in cloud computing,” in Proc. USENIX Conference on Hot Topics in Cloud Computing, 2010, pp. 4–11.