Price-Based Distributed Offloading for Mobile-Edge Computing with Computation Capacity Constraints

Mobile-edge computing (MEC) is a promising technology that enables real-time information transmission and computing by offloading computation tasks from wireless devices to the network edge.


I Introduction

With the increasing popularity of new mobile applications of the Internet of Things (IoT), future wireless networks with billions of IoT devices are required to support ultra-low-latency communication and computing. However, IoT devices usually have small physical sizes and limited battery capacity, and thus suffer from intensive computation burdens and high resource consumption for real-time information processing [1, 2, 3].

Mobile-edge computing (MEC) is a promising technology to address this problem. Unlike conventional cloud computing, which relies on remote central clouds and thus suffers from long latency and fragile wireless connections, MEC migrates intensive computation tasks from IoT devices to the physically proximal network edge, and provides low-latency as well as flexible computing and communication services for IoT devices. As a result, MEC is widely regarded as a key technology for next-generation wireless networks [4].

Joint radio and computation resource allocation for MEC has recently been investigated in the literature, e.g., [5, 6, 7, 8, 9, 10, 11, 12]. In general, the MEC paradigm can be divided into two categories: binary offloading [5, 6, 7, 8, 9] and partial offloading [10, 11, 12]. With binary offloading, the computation tasks of the users cannot be partitioned and must be executed as a whole, either at the users or at the edge cloud. With partial offloading, the computation tasks of the users can be partitioned into different parts for local computing and offloading at the same time.

Among the mentioned literature, a handful of works [8, 9] adopted game theory to design distributed mechanisms for efficient resource allocation in MEC systems. For example, multiuser binary offloading was considered in [8], which used a Nash game to maximize the offloaded tasks subject to total time and energy constraints. The work [9] considered competition among multiple heterogeneous clouds. Most of these prior works, however, assumed that the computation capacity of the edge cloud is infinite. Since the MEC server is located at the network edge, its computation capacity is in fact finite, especially in networks with intensive workloads. In this case, a mechanism is needed to control the users' offloaded tasks so that the network remains feasible. Moreover, the mechanism should provide reasonable incentives for both the edge cloud and the users to allocate network resources efficiently in a distributed fashion.

Motivated by the above issues, we consider an edge cloud with finite computation capacity, which is treated as a divisible resource to be sold among the users. The interaction between the edge cloud and the users is modeled as a Stackelberg game, where the edge cloud sets prices to maximize its revenue and each user individually designs its offloading decision to minimize its cost, defined as latency plus payment. The main contributions are two-fold: 1) we propose a new game-based distributed MEC scheme, in which the users compete for the edge cloud's finite computation resources via a pricing approach modeled as a Stackelberg game; 2) we derive optimal uniform and differentiated pricing algorithms, both of which can be implemented in a distributed manner.

II System Model and Problem Formulation

II-A System Model

We consider a MEC system with $K$ users and one base station (BS) that is integrated with an MEC server to execute the offloaded data of the users. All nodes have a single antenna. The users' computation data can be arbitrarily partitioned at the bit level for partial local computing and partial offloading. We assume that the total bandwidth $B$ is divided equally among the users, so that each user occupies a non-overlapping frequency band to offload its data to the edge cloud simultaneously. A quasi-static channel model is considered, where channels remain unchanged during each offloading period but can vary across different offloading periods. We also assume that the computation offloading can be completed within one period.

Let $C_k$ denote the number of CPU cycles for computing 1 bit of input data at user $k$, $k = 1, \ldots, K$. It is assumed that user $k$ has to execute $L_k$ bits of input data in total, where $l_k$ bits are offloaded to the edge cloud while the remaining $L_k - l_k$ bits are computed by its local CPU. The local CPU frequency of user $k$ is denoted by $f_k$, measured in CPU cycles per second. The time for local computing at user $k$ is then $t_k^{\rm loc} = C_k (L_k - l_k)/f_k$. The offloading time of user $k$ comprises three parts: the uplink transmission time $t_k^{\rm u}$, the execution time at the cloud $t_k^{\rm c}$, and the downlink feedback time $t_k^{\rm d}$. Thus, the offloading time is

$$t_k^{\rm off} = t_k^{\rm u} + t_k^{\rm c} + t_k^{\rm d}. \qquad (1)$$

Since local computing and offloading can be performed concurrently, the required time of user $k$ for executing the total $L_k$ bits of data can be expressed as $T_k = \max\{t_k^{\rm loc}, t_k^{\rm off}\}$.

More specifically, the data size of the computed result fed back to user $k$ is $\beta_k l_k$, where $\beta_k$ accounts for the ratio of output to input bits offloaded to the cloud [13] and depends on the user's application. Then we have $t_k^{\rm u} = l_k / r_k^{\rm u}$ and $t_k^{\rm d} = \beta_k l_k / r_k^{\rm d}$, where $r_k^{\rm u} = \frac{B}{K}\log_2\big(1 + \frac{p_k^{\rm u} h_k}{N_0 B/K}\big)$ and $r_k^{\rm d} = \frac{B}{K}\log_2\big(1 + \frac{p_k^{\rm d} h_k}{N_0 B/K}\big)$ denote the uplink and downlink transmission rates for user $k$, respectively. Here $N_0$ is the noise power spectrum density, $h_k$ is the channel gain between the BS and user $k$, and $p_k^{\rm d}$ and $p_k^{\rm u}$ are the downlink and uplink transmit powers for user $k$, respectively. Moreover, let $f_{c,k}$ denote the computational speed of the edge cloud assigned to user $k$; then $t_k^{\rm c} = C_k l_k / f_{c,k}$. Here we consider equal allocation for simplicity, i.e., $f_{c,k} = F/K$, where $F$ is the total computational speed of the cloud.
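The latency model above can be sketched in code. This is a minimal illustration under the reconstructed notation (per-user bandwidth split, Shannon-rate links, equal cloud-speed allocation); all parameter names and the numeric values are illustrative assumptions, not the paper's settings.

```python
import math

def latencies(C_k, L_k, l_k, f_k, p_u, p_d, h_k, N0, B, K, F, beta_k):
    """Return (local computing time, offloading time) for one user.
    All symbol names are illustrative reconstructions, not the paper's exact notation."""
    W = B / K                                        # per-user bandwidth (equal split)
    r_u = W * math.log2(1 + p_u * h_k / (N0 * W))    # uplink rate (bit/s)
    r_d = W * math.log2(1 + p_d * h_k / (N0 * W))    # downlink rate (bit/s)
    t_loc = C_k * (L_k - l_k) / f_k                  # local computing time
    # eq. (1): uplink + cloud execution (speed F/K) + downlink feedback
    t_off = l_k / r_u + C_k * l_k / (F / K) + beta_k * l_k / r_d
    return t_loc, t_off

# Total latency is the slower of the two concurrent branches.
t_loc, t_off = latencies(C_k=1000, L_k=8e5, l_k=4e5, f_k=1e9,
                         p_u=0.1, p_d=1.0, h_k=1e-6, N0=1e-17,
                         B=1e6, K=4, F=1e10, beta_k=0.1)
T = max(t_loc, t_off)
```

Note that offloading more bits shrinks the local term while growing all three offloading terms, which is exactly the tradeoff the pricing game exploits.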

We consider the practical constraint that the edge cloud has finite computation capacity, so that the CPU cycles it spends computing the sum of the received data in each offloading period are upper bounded by $\Lambda$. The constraint can be expressed as

$$\sum_{k=1}^{K} C_k l_k \le \Lambda. \qquad (2)$$

Note that $\Lambda$ and $F$ represent the MEC server's computational quantity and speed for the offloaded CPU cycles, respectively.

II-B Stackelberg Game Formulation

In this paper, the users consume the edge cloud's resources to execute their computation tasks, while the edge cloud has to keep the CPU cycles required for computing the total offloaded data below its computation capacity. Hence, to balance the demand and supply of computation resources, the edge cloud prices the CPU cycles of the offloaded data for each user $k$ at $\mu_k$ per cycle. The Stackelberg game can thus be applied to model the interaction between the edge cloud and the users, where the edge cloud is the leader and the users are the followers. The edge cloud (leader) first imposes the prices for the CPU cycles of the users. Then, the users (followers) individually split their input data between local computing and offloading based on the prices announced by the edge cloud.

Denote the CPU cycle prices for the users as the set $\boldsymbol{\mu} = \{\mu_1, \ldots, \mu_K\}$. The objective of the edge cloud is to maximize the revenue obtained from selling its finite computation resources to the users. Mathematically, the optimization problem at the edge cloud's side can be expressed as (leader problem)

$$\text{(P1):}\quad \max_{\boldsymbol{\mu} \ge 0} \; \sum_{k=1}^{K} \mu_k C_k l_k \quad \text{s.t.} \;\; \sum_{k=1}^{K} C_k l_k \le \Lambda.$$

Note that the offloaded data size $l_k$ of user $k$ is in fact a function of $\mu_k$, since the amount of data each user is willing to offload depends on its assigned price.

At the users' side, each user's cost is defined as its latency plus the payment charged by the edge cloud, i.e.,

$$U_k(l_k) = \max\{t_k^{\rm loc}, t_k^{\rm off}\} + \mu_k C_k l_k, \qquad (3)$$

which is equivalent to

$$U_k(l_k) = \max\Big\{\frac{C_k (L_k - l_k)}{f_k},\; a_k l_k\Big\} + \mu_k C_k l_k, \qquad (4)$$

where $t_k^{\rm off} = a_k l_k$ and $a_k$ is defined as $a_k = \frac{1}{r_k^{\rm u}} + \frac{C_k K}{F} + \frac{\beta_k}{r_k^{\rm d}}$.
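The piecewise-linear structure of the cost can be made concrete with a short sketch. The function and parameter names below are illustrative assumptions following the reconstructed notation in (4), not values from the paper.

```python
def cost(l, mu, C_k, L_k, f_k, a_k):
    """Cost (4): latency of the slower concurrent branch plus the payment."""
    latency = max(C_k * (L_k - l) / f_k, a_k * l)
    return latency + mu * C_k * l

# Piecewise linear in l: evaluate a few points for one illustrative user.
# The two linear pieces intersect at l = C_k*L_k/(a_k*f_k + C_k) = 5e5 here.
params = dict(C_k=1e3, L_k=1e6, f_k=1e9, a_k=1e-6)
costs = [cost(l, mu=5e-10, **params) for l in (0.0, 2.5e5, 5e5, 1e6)]
```

For this price (below $1/f_k$) the minimum sits at the intersection of the two pieces, which previews the threshold policy derived next.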

The goal of each user is to minimize its own cost by choosing the optimal offloaded data size for the given price set by the edge cloud. Mathematically, this problem can be expressed as (follower problem)

$$\text{(P2):}\quad \min_{0 \le l_k \le L_k} \; U_k(l_k).$$

It is worth noting that the payment terms in Problem P1 and Problem P2 cancel each other from a net-utility perspective. Problems P1 and P2 in the Stackelberg game are coupled in a complicated way: the pricing strategies of the edge cloud influence the offloaded data sizes of the users, which in turn affect the edge cloud's revenue.

III Optimal Algorithm

To analyze the considered Stackelberg game, each user independently decides its offloading strategy by solving Problem P2 for a given price $\mu_k$. Knowing each user's offloading decision $l_k$, the edge cloud sets its optimal price by solving Problem P1. This process is known as backward induction. In this paper, two optimal pricing strategies are considered, termed uniform pricing and differentiated pricing [14]. In the following, we investigate the two pricing schemes in turn.

III-A Uniform Pricing

For the uniform pricing scheme, the edge cloud sets and broadcasts a uniform price $\mu$ to all users, i.e., $\mu_k = \mu$ for all $k$. For a given uniform price $\mu$, the objective function $U_k(l_k)$ is a piecewise function of $l_k$ that is linear in each interval from (4). By exploiting the structure of $U_k(l_k)$, we obtain the optimal solution of Problem P2 in the following proposition.

Proposition 1.

The optimal offloading decision of each user $k$ in Problem P2 follows a threshold-based policy, i.e.,

$$l_k^{*} = \alpha_k \, \frac{C_k L_k}{a_k f_k + C_k}, \qquad (5)$$

where the binary variable $\alpha_k$ is defined as

$$\alpha_k = \begin{cases} 1, & \mu \le 1/f_k, \\ 0, & \mu > 1/f_k. \end{cases} \qquad (6)$$
Proof.

Please refer to Appendix A. ∎

From Proposition 1, the offloading threshold is $1/\mu$: user $k$ prefers to offload $l_k^{*}$ bits to the edge cloud if its CPU frequency $f_k$ is smaller than or equal to the threshold, and leaves all bits for local computing otherwise. In other words, computation offloading is beneficial for a user with low local computational speed, while a user with a fast local CPU is likely to compute locally.
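The threshold policy in (5)-(6) can be sketched as a tiny decision function. This is a minimal illustration under the reconstructed notation ($a_k$ being the per-bit offloading latency); the argument names and numbers are assumptions for illustration only.

```python
def offload_decision(mu, C_k, L_k, f_k, a_k):
    """Optimal offloaded data size per the threshold policy (5)-(6).
    a_k: per-bit offloading latency (uplink + cloud execution + downlink)."""
    if mu <= 1.0 / f_k:                       # alpha_k = 1: offloading pays off
        return C_k * L_k / (a_k * f_k + C_k)  # bits sent to the edge cloud
    return 0.0                                # alpha_k = 0: compute everything locally

# A slower local CPU (smaller f_k) tolerates a higher price before opting out.
bits_cheap = offload_decision(mu=1e-9, C_k=1e3, L_k=1e6, f_k=1e9, a_k=1e-6)
bits_pricey = offload_decision(mu=2e-9, C_k=1e3, L_k=1e6, f_k=1e9, a_k=1e-6)
```

The comparison `mu <= 1/f_k` is exactly the statement that the user's CPU frequency falls below the threshold $1/\mu$.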

We now turn our attention to Problem P1. By substituting (5) into Problem P1, the optimization problem at the edge cloud for the uniform pricing scheme can be rewritten as

$$\text{(P3):}\quad \max_{\mu \ge 0} \; \mu \sum_{k=1}^{K} \alpha_k C_k l_k^{*} \qquad (7)$$
$$\text{s.t.} \;\; \sum_{k=1}^{K} \alpha_k C_k l_k^{*} \le \Lambda. \qquad (8)$$
Proposition 2.

Without loss of generality, sort the users such that $f_1 \le f_2 \le \cdots \le f_K$. The optimal uniform price must belong to the set $\mathcal{F} = \{1/f_1, 1/f_2, \ldots, 1/f_K\}$.

Proof.

Please refer to Appendix B. ∎

According to Proposition 2, the revenue maximization problem P3 reduces to a one-dimensional search over the $K$ values in $\mathcal{F}$, and the whole method is summarized formally in Algorithm 1. Specifically, the edge cloud bargains with the users by announcing the candidate prices in decreasing order. Since the required sum of CPU cycles decreases as the price $\mu$ increases, the price bargaining ends as soon as the computation capacity constraint (8) becomes active, and there is no need to bargain over the remaining price candidates.

The total complexity of Algorithm 1 is $O(K)$ price evaluations. For the uniform pricing scheme, the edge cloud needs only limited network information, namely $\{f_k\}$ and $\{C_k\}$, which are collected by the cloud before the algorithm runs. In each iteration, after learning the price broadcast by the edge cloud, each user independently makes its offloading decision and reports it to the edge cloud for the price update. Hence, the only information exchanged between the edge cloud and the users in each iteration is the broadcast price $\mu$ and each user's reported decision $l_k^{*}$, and Algorithm 1 is a fully distributed algorithm.

1:  The edge cloud sorts the users such that $f_1 \le \cdots \le f_K$, and initializes $k = 1$ and $\mu = 1/f_1$.
2:  repeat
3:     Every user decides its optimal offloaded data size $l_k^{*}$ according to (5).
4:     The edge cloud computes its revenue from (7).
5:     if the capacity constraint (8) is satisfied then
6:        Update the price $\mu = 1/f_{k+1}$, and $k = k + 1$;
7:     else
8:        Set $\mu^{*} = 1/f_{k-1}$; break;
9:     end if
10:  until $k > K$, in which case $\mu^{*} = 1/f_K$.
11:  Output the optimal uniform price $\mu^{*}$.
Algorithm 1 Optimal Uniform Pricing Policy for Problem P3
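A runnable sketch of the one-dimensional search behind Algorithm 1 follows. It is an assumed reconstruction under the notation above: it evaluates the candidate prices $1/f_k$ in decreasing order, stops once the capacity constraint would be violated, and keeps the best feasible revenue seen along the way. All names and test values are illustrative.

```python
def uniform_pricing(C, L, f, a, Lambda):
    """C, L, f, a: per-user parameter lists; Lambda: cloud capacity in CPU cycles.
    Returns (best uniform price, best revenue)."""
    K = len(f)
    users = sorted(range(K), key=lambda k: f[k])     # f_1 <= ... <= f_K
    best_mu, best_rev = None, 0.0
    for k in users:                                  # candidate prices 1/f_k, decreasing
        mu = 1.0 / f[k]
        # offloaded bits under price mu, per the threshold policy (5)-(6)
        l = [C[j] * L[j] / (a[j] * f[j] + C[j]) if mu <= 1.0 / f[j] else 0.0
             for j in range(K)]
        cycles = sum(C[j] * l[j] for j in range(K))  # left side of constraint (8)
        if cycles > Lambda:                          # constraint violated: stop bargaining
            break
        rev = mu * cycles                            # revenue (7)
        if rev > best_rev:
            best_mu, best_rev = mu, rev
    return best_mu, best_rev
```

Only the price broadcast (inside the loop) and the users' offloading responses would cross the air interface in a distributed deployment; everything else is local bookkeeping at the cloud.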

III-B Differentiated Pricing

Here, we consider the general case where the edge cloud charges different users different prices. Similar to the uniform pricing case, the optimal solution of Problem P2 is again given by (5), except that $\mu$ in $\alpha_k$ is replaced by $\mu_k$. Problem P1 can then be rewritten as

$$\text{(P4):}\quad \max_{\boldsymbol{\mu} \ge 0} \; \sum_{k=1}^{K} \mu_k \alpha_k C_k l_k^{*} \quad \text{s.t.} \;\; \sum_{k=1}^{K} \alpha_k C_k l_k^{*} \le \Lambda.$$

It is worth noting that the price $\mu_k$ is effectively a function of $\alpha_k$ for user $k$. Specifically, $\alpha_k = 1$ if and only if $\mu_k \le 1/f_k$, so the optimal price for an offloading user $k$ is $\mu_k^{*} = 1/f_k$, since the objective function of Problem P4 is an increasing function of $\mu_k$. When $\alpha_k = 0$, the edge cloud sets a price $\mu_k > 1/f_k$ for user $k$ and earns no revenue from it. Based on the above analysis, Problem P4 is thus equivalent to

$$\max_{\alpha_k \in \{0,1\}} \; \sum_{k=1}^{K} \alpha_k \frac{C_k l_k^{*}}{f_k} \quad \text{s.t.} \;\; \sum_{k=1}^{K} \alpha_k C_k l_k^{*} \le \Lambda,$$

where $l_k^{*}$ is evaluated at $\mu_k = 1/f_k$.

Problem P4 is thus a binary (0-1) knapsack problem with weight $w_k = C_k l_k^{*}$ and value $v_k = C_k l_k^{*}/f_k$ for user $k$. Since the knapsack problem is NP-hard, no polynomial-time algorithm is known to solve it optimally. However, we can apply dynamic programming [15] to solve the binary knapsack problem in pseudo-polynomial time.¹

¹ We adopt the kp01 software package in MATLAB.
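The dynamic program mentioned above can be sketched as the standard 0-1 knapsack recursion (a sketch assuming integer cycle weights; the paper itself uses a MATLAB kp01 package, and the function name here is illustrative):

```python
def knapsack(weights, values, capacity):
    """Max total value with total weight <= capacity (integer weights).
    Runs in O(K * capacity) time: pseudo-polynomial in the capacity."""
    dp = [0.0] * (capacity + 1)          # dp[c] = best value achievable with capacity c
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # iterate downward so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

In the differentiated-pricing context, `weights` would hold the offloaded CPU cycles $w_k$, `values` the revenues $v_k = w_k/f_k$, and `capacity` the cloud budget $\Lambda$; the selected items are the users admitted at price $1/f_k$.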

Each user $k$ only needs to report $f_k$ and $C_k l_k^{*}$ to the edge cloud for solving Problem P4, and no iteration between the edge cloud and the users is required. Upon receiving the optimal price $\mu_k^{*}$, user $k$ decides its optimal strategy based on (5). Therefore, the differentiated pricing scheme is also a distributed algorithm, but it requires more information and higher complexity than the uniform pricing scheme.

IV Numerical Results

In the simulation setup, we assume that the total channel bandwidth is 1 MHz and the noise power spectrum density is given in dBm/Hz. Each channel gain is assumed to be uniformly distributed (in dBm). The local CPU frequency for each user is uniformly selected from a set of values in GHz, and the required number of CPU cycles per bit and the data size for each user are uniformly distributed (in cycles/bit and KB, respectively). Unless otherwise noted, the remaining parameters are fixed: the uplink and downlink transmit powers in W, the computation capacity in cycles/slot, the number of users, and the total cloud speed in GHz.

The average performance of the two proposed pricing schemes is evaluated and compared in terms of average latency and revenue. For comparison, we also consider the scheme in which all input data is computed locally at the users. In Fig. 1(a), both the latency and the revenue improve as the computation capacity increases, while the local-computing-only scheme has the worst latency and does not depend on the computation capacity. In addition, the differentiated pricing scheme performs better in both latency and revenue, showing that it allocates resources more precisely. There is thus a tradeoff between performance and complexity for the two pricing schemes.

Fig. 1(b) illustrates the effect of the number of users on the average latency and revenue, with observations similar to Fig. 1(a) for the three schemes. Moreover, as the number of users increases, the spectrum allocated to each user shrinks, resulting in lower transmission rates and thus higher latency. It is also expected that competition among more users drives up the prices and the revenue of the edge cloud.

Fig. 1: Performance comparison. (a) Average latency and revenue versus the computation capacity. (b) Average latency and revenue versus the number of users, for a fixed computation capacity (in cycles/slot).

V Conclusion

In this work, we investigated price-based computation offloading for a multiuser MEC system. The finite computation capacity of the edge cloud was taken into account to manage the tasks offloaded from the users. A Stackelberg game was applied to model the interaction between the edge cloud and the users. Based on the edge cloud's knowledge of the network information, we proposed uniform and differentiated pricing schemes, both of which can be implemented in a distributed manner.

Appendix A: Proof of Proposition 1

From (4), $U_k(l_k)$ is piecewise linear in $l_k$: on the segment where local computing dominates the latency, its slope is $C_k(\mu - 1/f_k)$, while on the segment where offloading dominates, its slope is $a_k + \mu C_k > 0$. Hence, for a given $\mu$, the optimal solution of Problem P2 is

$$l_k^{*} = \begin{cases} \dfrac{C_k L_k}{a_k f_k + C_k}, & \mu \le 1/f_k, \\ 0, & \mu > 1/f_k, \end{cases} \qquad (9)$$

i.e., the minimum is attained at the intersection of the two segments when $\mu \le 1/f_k$, and at $l_k = 0$ otherwise. The boundary case $\mu = 1/f_k$, in which every point of the first segment is optimal, occurs with probability zero, and we let $l_k^{*} = \frac{C_k L_k}{a_k f_k + C_k}$ in this case, which completes the proof.

Appendix B: Proof of Proposition 2

The claim can be proved by contradiction. Suppose the optimal price lies strictly inside an interval between two consecutive candidates, i.e., $\mu^{*} \in (1/f_{k+1}, 1/f_k)$ for some $k$, and consider the candidate price $1/f_k$. From (6), $\alpha_j = 1$ for every user $j \le k$ and $\alpha_j = 0$ for every user $j > k$, under both $\mu^{*}$ and $1/f_k$. Therefore, the sum of offloaded CPU cycles under $\mu^{*}$ is equal to that under $1/f_k$. Since the objective function in (7) is an increasing linear function of the price $\mu$ over this interval, the price $1/f_k$ always achieves a higher revenue than $\mu^{*}$, contradicting the assumption that $\mu^{*}$ is optimal for Problem P3. Therefore, the optimal price must lie in the set $\mathcal{F}$.

References

  • [1] A. u. R. Khan, M. Othman, S. A. Madani, and S. U. Khan, “A survey of mobile cloud computing application models,” IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 393–413, February 2014.
  • [2] K. Kumar and Y. H. Lu, “Cloud computing for mobile users: Can offloading computation save energy?” Computer, vol. 43, no. 4, pp. 51–56, April 2010.
  • [3] Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: The communication perspective,” Available: https://arxiv.org/abs/1701.01090.
  • [4] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and challenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, October 2016.
  • [5] S. Sardellitti, G. Scutari, and S. Barbarossa, “Joint optimization of radio and computational resources for multicell mobile-edge computing,” IEEE Transactions on Signal and Information Processing over Networks, vol. 1, no. 2, pp. 89–103, June 2015.
  • [6] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang, “Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks,” IEEE Access, vol. 4, pp. 5896–5907, 2016.
  • [7] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 12, pp. 3590–3605, December 2016.
  • [8] X. Chen, L. Jiao, W. Li, and X. Fu, “Efficient multi-user computation offloading for mobile-edge cloud computing,” IEEE/ACM Transactions on Networking, vol. 24, no. 5, pp. 2795–2808, October 2016.
  • [9] T. Zhao, S. Zhou, X. Guo, Y. Zhao, and Z. Niu, “Pricing policy and computational resource provisioning for delay-aware mobile edge computing,” in 2016 IEEE/CIC International Conference on Communications in China (ICCC), July 2016, pp. 1–6.
  • [10] Y. Mao, J. Zhang, S. H. Song, and K. B. Letaief, “Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems,” IEEE Transactions on Wireless Communications, vol. 16, no. 9, pp. 5994–6009, September 2017.
  • [11] C. You, K. Huang, H. Chae, and B. H. Kim, “Energy-efficient resource allocation for mobile-edge computation offloading,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1397–1411, March 2017.
  • [12] J. Xu, L. Chen, and S. Ren, “Online learning for offloading and autoscaling in energy harvesting mobile edge computing,” IEEE Transactions on Cognitive Communications and Networking, vol. PP, no. 99, pp. 1–1, 2017.
  • [13] Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, “Mobile-edge computing: Partial computation offloading using dynamic voltage scaling,” IEEE Transactions on Communications, vol. 64, no. 10, pp. 4268–4282, October 2016.
  • [14] Y. Liu, R. Wang, and Z. Han, “Interference-constrained pricing for D2D networks,” IEEE Transactions on Wireless Communications, vol. 16, no. 1, pp. 475–486, January 2017.
  • [15] S. Martello, D. Pisinger, and P. Toth, “Dynamic programming and strong bounds for the 0-1 knapsack problem,” Management Science, vol. 45, no. 3, pp. 414–424, 1999.