I Introduction
Due to the ever-increasing popularity of computationally intensive applications, computation offloading capability has become a prerequisite for next-generation wireless networks. Since the energy, storage, and computing capacity of small mobile devices are limited, mobile users need to transfer computationally expensive tasks to powerful computing servers. Despite its higher computational capability, the remote cloud may not be the ideal option, as the long distance between the cloud and the user device yields substantial latency and energy cost. In contrast, small-scale computing servers located at the network edge can provide services at reduced latency and energy cost compared to the remote cloud. This is referred to as Mobile Edge Computing (MEC) [1]. However, efficient utilization of MEC servers is vital, since they have limited computational resources and power. To this end, one solution is to activate only a specific number of servers, while keeping the rest in the energy-saving mode. At the same time, users’ latency requirements should be taken into account, as overloading the servers with computational tasks can result in unacceptable delay. Therefore, addressing this tradeoff is a major issue in developing efficient MEC systems. This becomes challenging in the presence of uncertainty in task arrival and/or in the absence of any central controller. Other challenges include minimizing users’ energy consumption and efficient radio resource management.
In [1], the authors develop a distributed algorithm in a game-theoretic framework to address the users’ computation offloading decision-making problem, so that the MEC cloud and radio resources are efficiently utilized. In [2], the authors investigate a computational offloading problem in which mobile users offload to a variety of edge nodes, such as small base stations, macro base stations, and access points, in order to utilize their computational resources. Reference [3] provides centralized resource allocation algorithms that minimize the weighted sum energy consumption under delay constraints for both TDMA and OFDMA protocols in a mobile edge computation offloading system. A multi-objective offloading problem is formulated and analyzed using queuing theory in [4]. A comprehensive survey on the state of the art of computation offloading in mobile edge networks can be found in [5].
The majority of the existing literature focuses on user-centric objectives such as meeting users’ delay constraints and minimizing users’ energy consumption. In contrast, our work presents a hybrid view in which both the servers’ and the users’ standpoints are considered. In doing so, we address the uncertainty caused by the randomness in channel quality and users’ requests. We first analyze the statistical characteristics of the offloading delay. Based on this, we model the computational offloading problem as a planned market, where the price of computational services is determined by an authority. Afterward, by using the theory of minority games [6], we develop a novel approach for efficient mode selection (or activation) at the servers’ side. The designed mode selection mechanism guarantees a minimal server activation to ensure energy efficiency, while meeting the users’ delay constraints. Moreover, this scheme is distributed and does not require any prior information at the servers’ side. We numerically investigate the performance of the proposed method.
II System Model and Problem Formulation
We consider an MEC system consisting of a virtual pool of computational servers (e.g., small base stations), denoted by a set $\mathcal{N}$, and a set $\mathcal{M}$ of users (e.g., mobile devices). Each user has delay-sensitive computational tasks to be completed in consecutive offloading periods, where each offloading period is referred to as one time slot. In every time slot $t$, the users offload a total number $J$ of computational jobs to the pool. Prior to task arrival, every server independently decides whether to

accept computation jobs (active mode); or

not to accept any computation job (inactive mode).
To become active, each server incurs a fixed energy cost represented by $c_0$ (a dimensionless value). In addition, performing each task incurs an extra $c$ units of energy cost. By processing each job, a server receives a reimbursement (benefit) equal to $r$. Let $a_t$ be the number of servers that decide to become active in time slot $t$. The $J$ jobs are equally divided among the active servers, so that the number of jobs per active server is given by

$K_t = \frac{J}{a_t}$.  (1)
Hence, each active server processes $K_t$ jobs, and thus earns a total reward given by

$u_t = K_t (r - c) - c_0$.  (2)
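As a quick sanity check, the load and reward relations (1)–(2) can be sketched in a few lines of Python; the names `total_jobs`, `active_servers`, `r`, `c`, and `c0` are stand-ins for the symbols lost in extraction and are illustrative only:

```python
def jobs_per_server(total_jobs, active_servers):
    # Eq. (1): the offloaded jobs are split evenly among the active servers.
    return total_jobs / active_servers

def server_reward(k, r, c, c0):
    # Eq. (2): per-job margin (r - c) earned on k jobs, minus the activation cost c0.
    return k * (r - c) - c0
```

For instance, 200 jobs split over 8 active servers gives a load of 25 jobs per server.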
For each server, being in active mode is attractive only if a minimum desired reward, denoted by $u_{\min}$, can be obtained. Then each active server has to receive at least

$K_{\min} = \frac{u_{\min} + c_0}{r - c}$  (3)
jobs to achieve the minimum desired reward. Each computational job requires a random time $T$ to be processed by a server. We assume that $T$ lies within the interval $[T_l, T_u]$ and follows a truncated normal distribution with parameters $\mu$ and $\sigma^2$. Moreover, considering Rayleigh fading, the channel gain $h$ is exponentially distributed with parameter $\lambda$. We model the round-trip transmission delay (from the user to the server pool) as a linear function of the channel gain, with the channel gains in the two directions assumed equal. Thus, formally,^1

^1 Assuming a normal distribution for the time required to perform each task does not limit the applicability; a similar analysis can be performed with any other distribution. The same holds for the linear model of the transmission delay: any other model can be used at the expense of additional calculus steps. Also, note that since the energy required to perform a task is proportional to the time required to perform it, one might consider an energy cost $E = \kappa T$ with $\kappa > 0$.

$D_{\mathrm{tr}} = \nu h + \omega$,  (4)
where $\nu$ and $\omega$ are constants. The total offloading delay $D$ is the sum of the processing delay $T$ at the server and the round-trip transmission delay $D_{\mathrm{tr}}$. Thus,

$D = T + D_{\mathrm{tr}}$.  (5)
The following proposition characterizes $D$ statistically.^2

^2 The proof follows from simple probability rules, given the independence of $T$ and $h$; we omit it due to space limitations.

Proposition 1. The expected value and variance of $D$ are given by

$\mathbb{E}[D] = \mu + \sigma\,\frac{\phi(\alpha) - \phi(\beta)}{Z} + \frac{\nu}{\lambda} + \omega$  (6)

and

$\mathrm{Var}[D] = \sigma^2 \left[ 1 + \frac{\alpha\,\phi(\alpha) - \beta\,\phi(\beta)}{Z} - \left( \frac{\phi(\alpha) - \phi(\beta)}{Z} \right)^{\!2} \right] + \frac{\nu^2}{\lambda^2}$,  (7)

respectively, where $\alpha = (T_l - \mu)/\sigma$, $\beta = (T_u - \mu)/\sigma$, and $Z = \Phi(\beta) - \Phi(\alpha)$. Moreover, $\phi$ and $\Phi$ are the probability density function and the cumulative distribution function of a standard normal distribution, respectively.
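As a sketch under the reconstructed notation (the parameter names are hypothetical, since the original symbols were stripped in extraction), the moments in Proposition 1 combine the standard truncated-normal formulas with the moments of the exponential channel gain:

```python
import math

def _phi(x):  # standard normal pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _Phi(x):  # standard normal cdf
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def delay_moments(mu, sigma, t_lo, t_hi, lam, nu, omega):
    """Mean and variance of D = T + D_tr, with T truncated normal on
    [t_lo, t_hi] and D_tr = nu * h + omega, h ~ Exp(lam)."""
    a, b = (t_lo - mu) / sigma, (t_hi - mu) / sigma
    Z = _Phi(b) - _Phi(a)
    # Truncated-normal mean and variance on [t_lo, t_hi].
    mean_T = mu + sigma * (_phi(a) - _phi(b)) / Z
    var_T = sigma ** 2 * (1 + (a * _phi(a) - b * _phi(b)) / Z
                          - ((_phi(a) - _phi(b)) / Z) ** 2)
    # Exponential channel gain: E[h] = 1/lam, Var[h] = 1/lam^2.
    mean_tr = nu / lam + omega
    var_tr = (nu / lam) ** 2
    return mean_T + mean_tr, var_T + var_tr
```

With a truncation interval symmetric around the mean, the truncated-normal mean reduces to $\mu$, which gives a convenient check of the implementation.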
Every user requires its offloaded job(s) to be completed by a deadline $\tau$. Therefore, in every round and for every server, the total completion time of all assigned tasks, i.e.,

$W_t = \sum_{k=1}^{K_t} D_k$,  (8)

should be less than $\tau$, so that the delay experienced by the last user in the queue does not exceed the deadline either. In other words, the condition $W_t \le \tau$ ensures that all users receive their completed jobs before the deadline. Since $D_1, \dots, D_{K_t}$ are independent and identically distributed (i.i.d.), $W_t$ is the sum of $K_t$ i.i.d. random variables. Therefore, the expected value and variance of $W_t$ are given by $K_t\,\mathbb{E}[D]$ and $K_t\,\mathrm{Var}[D]$, respectively. For large enough $K_t$, by the central limit theorem, the distribution of $W_t$ can be approximated as

$W_t \sim \mathcal{N}\!\left( K_t\,\mathbb{E}[D],\; K_t\,\mathrm{Var}[D] \right)$.  (9)
Due to the uncertainty caused by this randomness, a deterministic delay guarantee is not feasible. Thus, we resort to a probabilistic guarantee of the users’ QoE requirement. Formally, let $P_t$ be the probability that $W_t$ exceeds $\tau$, i.e., the likelihood that the delay requirement of some offloading user(s) is not satisfied. We require that $P_t$ remain below a predefined threshold $\epsilon$. That is,

$P_t = \Pr[W_t > \tau] \le \epsilon$.  (10)
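The probabilistic guarantee (10) under the normal approximation (9) is easy to evaluate and to check numerically. The following sketch compares the closed-form tail against a Monte Carlo estimate using a stand-in delay distribution (Uniform(0.5, 1.5), chosen only for illustration, not taken from the paper):

```python
import math
import random

def violation_prob(k, mean_d, var_d, tau):
    # Normal approximation of Pr[W_t > tau] for W_t = sum of k i.i.d. delays,
    # following (9)-(10).
    z = (tau - k * mean_d) / math.sqrt(k * var_d)
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Sanity check: Uniform(0.5, 1.5) delays have mean 1 and variance 1/12.
random.seed(0)
k, tau = 40, 42.0
approx = violation_prob(k, 1.0, 1.0 / 12.0, tau)
mc = sum(sum(random.uniform(0.5, 1.5) for _ in range(k)) > tau
         for _ in range(20000)) / 20000
```

The two estimates agree to within Monte Carlo noise, which supports using the CLT approximation for moderate per-server loads.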
Considering both the servers’ and the users’ perspectives, the trade-off in the system can be seen as follows. On one hand, for each server it is beneficial to be active only if the number of active servers does not exceed a certain threshold $a_{\max}$, so that every active server receives the minimum number of jobs required to achieve the threshold reward (as stated by (3)). On the other hand, the users prefer the number of jobs per server to be small enough so that their desired QoE is fulfilled with high probability; i.e., the number of active servers shall be no smaller than a certain threshold $a_{\min}$. In what follows, we derive the values of $a_{\max}$ and $a_{\min}$ analytically. For the offloading system to perform efficiently, the number of active servers at any offloading round $t$, i.e., $a_t$, should therefore be determined in a way that both servers and users are satisfied. We denote this value by $a^*$.
II-A Condition for Servers
Recalling (3), to achieve the minimum desired reward $u_{\min}$, each active server has to receive at least $K_{\min}$ jobs. Consequently, at most

$a_{\max} = \left\lfloor \frac{J}{K_{\min}} \right\rfloor$  (11)

servers can be in the active mode so that every active server receives the threshold reward $u_{\min}$, while inactive servers receive no reward. Thus, the condition below should be satisfied when selecting the cutoff $\eta$:

$\eta \le a_{\max}$.  (12)
II-B Condition for Users
Recall the users’ QoE requirement given by (10). Then, from (9) and (10), we have

$P_t = 1 - \Phi\!\left( \frac{\tau - K_t\,\mathbb{E}[D]}{\sqrt{K_t\,\mathrm{Var}[D]}} \right) \le \epsilon$,  (13)

which, by definition, is equivalent to

$\frac{\tau - K_t\,\mathbb{E}[D]}{\sqrt{K_t\,\mathrm{Var}[D]}} \ge \Phi^{-1}(1 - \epsilon)$.  (14)

Since $\Phi$ is an increasing function, $\Phi^{-1}$ is also an increasing function. Therefore, (14) results in

$\mathbb{E}[D]\, K_t + \Phi^{-1}(1 - \epsilon)\, \sqrt{\mathrm{Var}[D]}\, \sqrt{K_t} - \tau \le 0$.  (15)
Solving this quadratic inequality in $\sqrt{K_t}$, we obtain

$\frac{-\rho\sqrt{\mathrm{Var}[D]} - \sqrt{\rho^2\,\mathrm{Var}[D] + 4\tau\,\mathbb{E}[D]}}{2\,\mathbb{E}[D]} \le \sqrt{K_t} \le \frac{-\rho\sqrt{\mathrm{Var}[D]} + \sqrt{\rho^2\,\mathrm{Var}[D] + 4\tau\,\mathbb{E}[D]}}{2\,\mathbb{E}[D]}$,  (16)

where $\rho = \Phi^{-1}(1 - \epsilon)$. Since $\sqrt{K_t} \ge 0$, considering only the right-hand side of the inequality (16), we have

$\sqrt{K_t} \le \frac{-\rho\sqrt{\mathrm{Var}[D]} + \sqrt{\rho^2\,\mathrm{Var}[D] + 4\tau\,\mathbb{E}[D]}}{2\,\mathbb{E}[D]}$.  (17)

Therefore,

$K_{\max} = \left\lfloor \left( \frac{-\rho\sqrt{\mathrm{Var}[D]} + \sqrt{\rho^2\,\mathrm{Var}[D] + 4\tau\,\mathbb{E}[D]}}{2\,\mathbb{E}[D]} \right)^{\!2} \right\rfloor$  (18)
is the maximum allowable number of tasks per active server so that the users’ QoE (i.e., latency) requirement is satisfied with probability $1 - \epsilon$. Thus, by (1), the minimum number of active servers that guarantees the users’ QoE satisfaction is

$a_{\min} = \left\lceil \frac{J}{K_{\max}} \right\rceil$.  (19)

Therefore, the condition below should be satisfied when selecting the threshold $\eta$:

$\eta \ge a_{\min}$.  (20)
By conditions (12) and (20), the optimal number of active servers, $a^*$, is determined by solving the following equation:

$\frac{J}{a^*} = K_{\max}$.  (21)

Or, equivalently, the system performs optimally in terms of the servers’ energy and the users’ delay when $a^*$ servers are active so that

$a_{\min} \le a^* \le a_{\max}$.  (22)

Thus, we obtain the threshold $\eta$ using (11), (19), and (22) as

$\eta = a^* = \left\lceil \frac{J}{K_{\max}} \right\rceil$.  (23)
To ensure that the entire system works efficiently, in addition to the optimal number of active servers ($a^* = \eta$), the price of receiving computing services (i.e., $r$) must be determined by an authority (for instance, a macro base station or a network planner). By using (3), we have

$r = \frac{u_{\min} + c_0}{K_{\max}} + c$,  (24)

with $K_{\max}$ given by (18). In fact, in a distributed system, if a price larger than (24) is charged, more than $a^*$ servers become active, since every server then achieves $u_{\min}$ with fewer tasks than $K_{\max}$. In contrast, for a price lower than (24), achieving $u_{\min}$ requires more than $K_{\max}$ tasks per server, so that the users’ QoE might not be satisfied.
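Equation (24) pins the price so that the break-even load from (3) coincides with the maximum admissible load; a small check with hypothetical values (the names and numbers are illustrative stand-ins, not the paper’s):

```python
def min_jobs_for_reward(u_min, c0, c, r):
    # Eq. (3): load at which an active server just reaches the reward u_min.
    return (u_min + c0) / (r - c)

def service_price(u_min, c0, c, k_max):
    # Eq. (24): per-job price chosen so that exactly k_max jobs yield u_min.
    return (u_min + c0) / k_max + c

# With the price from (24), the break-even load equals k_max by construction.
price = service_price(6.0, 3.0, 0.5, 25)
```

Charging above this price would make the break-even load smaller than $K_{\max}$ (attracting too many servers), and charging below it would make it larger (risking QoE violations), which is exactly the argument in the text.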
Now the challenge is to activate $\eta$ servers in a self-organized manner, which is addressed in the following section.
III Modeling the Problem as a Minority Game
A minority game (MG) can model the interaction among a large number of players competing for limited shared resources. In a basic MG, the players select between two alternatives, and the players belonging to the minority group win; the minority is typically defined using some cutoff value. The collective sum of the actions selected by all players is referred to as the attendance. The advantages of MGs include simple implementation, low overhead, and scalability to a large set of players, which are of vital importance in a dense wireless network. Details can be found in [6, 7].
We model the formulated server mode selection problem as an MG, where the servers represent the players, with a cutoff value $\eta$ on the number of active servers. In each offloading period, every server decides between the two actions, i.e., being active or inactive, denoted by $1$ and $0$, respectively. We denote the action of a given player $i$ in time slot $t$ by $s_i(t)$. The number of active servers $a_t$ maps to the attendance. Each player has $S$ strategies. According to our formulated servers’ mode selection problem and the analysis in Section II,

If $a_t \le \eta$, each of the active servers (the minority) earns a reward higher than or equal to the minimum desired reward $u_{\min}$.

If $a_t > \eta$, the active servers cannot achieve $u_{\min}$. In this case, inactivity (i.e., the action of the minority) is considered the winning choice, since inactive servers spend no cost without being properly reimbursed.
III-A Control Information
After each round of play, a central unit (e.g., a macro base station) broadcasts the winning choice to all servers by sending one bit of control information:

$w_t = \begin{cases} 1, & a_t \le \eta, \\ 0, & \text{otherwise}. \end{cases}$  (25)

Note that neither the actual attendance value $a_t$ nor the system cutoff $\eta$ is known to the players.
III-B Utility
Let $u_i^{\mathrm{A}}(t)$ and $u_i^{\mathrm{I}}(t)$ denote the utility that server $i$ receives for being active and for being inactive, respectively. Based on the discussion above, we define

$u_i^{\mathrm{A}}(t) = K_t (r - c) - c_0$  (26)

and

$u_i^{\mathrm{I}}(t) = 0$.  (27)
III-C Distributed Learning Algorithm
Every player applies a basic strategy-reinforcement technique to solve the formulated MG, summarized in Algorithm 1 for some player $i$; details can be found in [6]. Each player keeps a virtual score $U_{i,s}(t)$ for every strategy $s$ in its strategy set. In every round, the player acts according to its currently best-scoring strategy,

$s_i^*(t) = \arg\max_{s}\, U_{i,s}(t)$,  (28)

and, once the winning bit $w_t$ is broadcast, reinforces every strategy whose prescribed action matches the winner:

$U_{i,s}(t+1) = U_{i,s}(t) + \mathbb{1}\{ s \text{ prescribes } w_t \}$.  (29)
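A minimal simulation of such strategy-reinforcement dynamics is sketched below. This is an illustrative sketch under assumed parameters (50 servers, cutoff 20, memory-3 strategy tables), not the paper’s exact Algorithm 1:

```python
import random

def simulate_mg(n_players=50, n_strategies=2, memory=3, cutoff=20,
                rounds=2000, seed=0):
    """Strategy-reinforcement minority game: each strategy maps the last
    `memory` winning bits to an action (1 = active, 0 = inactive); each
    player keeps virtual scores and plays its best-scoring strategy."""
    rng = random.Random(seed)
    n_hist = 2 ** memory
    # strategies[i][s][h] is player i's action under strategy s for history h.
    strategies = [[[rng.randint(0, 1) for _ in range(n_hist)]
                   for _ in range(n_strategies)] for _ in range(n_players)]
    scores = [[0] * n_strategies for _ in range(n_players)]
    history = rng.randrange(n_hist)
    attendance = []
    for _ in range(rounds):
        best = [max(range(n_strategies), key=lambda s: scores[i][s])
                for i in range(n_players)]
        actions = [strategies[i][best[i]][history] for i in range(n_players)]
        a_t = sum(actions)                  # number of active servers
        winner = 1 if a_t <= cutoff else 0  # one-bit broadcast, as in (25)
        for i in range(n_players):          # reinforce predictors, as in (29)
            for s in range(n_strategies):
                if strategies[i][s][history] == winner:
                    scores[i][s] += 1
        history = ((history << 1) | winner) % n_hist
        attendance.append(a_t)
    return attendance
```

In runs of this kind the attendance typically fluctuates around the cutoff, which is the self-organizing behavior the paper exploits for server activation.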
IV Numerical Results
For the numerical analysis, we fix the number of jobs, the deadline, the QoE threshold, the delay-distribution parameters, and the cost and reward values. The simulation is carried out over multiple independent runs; in each run, the servers randomly draw a set of $S$ strategies and repeatedly execute the MG over consecutive offloading periods. For the given parameter values, (23) yields the cutoff $\eta$. The optimal scheme (central activation) and a random-choice game (each server selects its action uniformly at random) are also simulated for comparison.
In Fig. 1, we present the variation of important system parameters as a function of the users’ QoE index $\epsilon$ (see Section II). From the figure, the following can be concluded: as $\epsilon$ increases, the number of required active servers ($a_{\min}$) decreases, thereby allowing a larger number of offloading tasks to be processed per server. Similarly, the maximum allowable number of tasks per active server ($K_{\max}$) increases with increasing $\epsilon$. In fact, with larger $\epsilon$, a larger delay is tolerable, or in other words, a longer task queue is allowed. Naturally, in this case the price per task, $r$, decreases, as intuitively expected for a weaker service.
Fig. 2 shows the changes in the users’ probability measure, i.e., $\Pr[W_t \le \tau]$. The users meet their QoE certainty requirement whenever this probability is at least $1 - \epsilon$. As the attendance fluctuates near the cutoff $\eta$, the probability value also remains near the desired certainty level.
The average utility per server is depicted in Fig. 3. It can be seen that the utility of the MG-based strategy is higher than that of random selection, yet below the average utility of the optimal scenario. This is because, in the MG-based method, servers make decisions with minimal external information and without any coordination with other servers.
References
 [1] S. Josilo and G. Dan, “A game theoretic analysis of selfish mobile computation offloading,” in IEEE INFOCOM 2017 – IEEE Conference on Computer Communications, May 2017, pp. 1–9.
 [2] W. Wang and W. Zhou, “Computational offloading with delay and capacity constraints in mobile edge,” in 2017 IEEE International Conference on Communications (ICC), May 2017, pp. 1–6.
 [3] C. You, K. Huang, H. Chae, and B. H. Kim, “Energy-efficient resource allocation for mobile-edge computation offloading,” IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1397–1411, March 2017.
 [4] L. Liu, Z. Chang, X. Guo, and T. Ristaniemi, “Multi-objective optimization for computation offloading in mobile-edge computing,” in 2017 IEEE Symposium on Computers and Communications (ISCC), July 2017, pp. 832–837.
 [5] P. Mach and Z. Becvar, “Mobile edge computing: A survey on architecture and computation offloading,” IEEE Communications Surveys & Tutorials, vol. 19, no. 3, pp. 1628–1656, Third Quarter 2017.
 [6] D. Challet, M. Marsili, and Y. C. Zhang, Minority Games: Interacting Agents in Financial Markets. Oxford, UK: Oxford University Press, 2014.
 [7] S. Ranadheera, S. Maghsudi, and E. Hossain, “Minority games with applications to distributed decision making and control in wireless networks,” IEEE Wireless Communications, vol. PP, no. 99, pp. 2–10, 2017.