Finite battery lifetime and low computing capability of size-constrained wireless devices (WDs) have been longstanding performance limitations of many low-power wireless networks, e.g., wireless sensor networks (WSNs) and Internet of Things (IoT), especially for supporting many emerging applications that require sustainable and high-performance computations, e.g., autonomous driving and augmented reality.
Radio frequency (RF) based wireless power transfer (WPT) has recently been identified as an effective solution to the finite battery capacity problem [1, 2, 3, 4]. Specifically, WPT uses a dedicated RF energy transmitter that can continuously charge the batteries of remote energy-harvesting devices. Thanks to the broadcast nature of RF signals, WPT is particularly suitable for powering a large number of closely located WDs, like those deployed in WSNs and IoT. Meanwhile, a recent technology innovation named mobile edge computing (MEC) has attracted massive industrial investment and has been identified as a key technology for future 5G networks [5, 6, 7]. As its name suggests, MEC allows the WDs to offload intensive computations to nearby servers located at the edge of the radio access network, e.g., at a cellular base station or WiFi access point (AP), to reduce computation latency and energy consumption. In general, there are two basic computation task offloading models in MEC: binary and partial computation offloading. Binary offloading requires a task to be executed as a whole, either locally at the WD or remotely at the MEC server. Partial offloading, on the other hand, allows a task to be partitioned into two parts, with one executed locally and the other offloaded for edge execution. In practice, binary offloading is suitable for simple tasks that are not partitionable, while partial offloading is favorable for complex tasks composed of multiple parallel segments. A key research problem is the joint design of task offloading and system resource allocation to optimize the computing performance, which has been extensively studied under both binary and partial computation offloading policies [8, 9, 10, 11].
The integration of WPT and MEC technologies introduces a new paradigm named wireless powered MEC, where the distributed wireless devices are powered by means of WPT. The deployment of wireless powered MEC systems can potentially tackle the two aforementioned performance limitations of low-power wireless networks like IoT. Compared to conventional battery-powered MEC, the optimal design of a wireless powered MEC network is more challenging. On one hand, the task offloading and resource allocation decisions now depend on the distinct amount of energy harvested by individual WDs from WPT. On the other hand, WPT and task offloading need to share the limited wireless resource, e.g., time or frequency. There are a few existing studies on wireless powered MEC systems [12, 13, 14].
Reference [12] considers a single-user wireless powered MEC system with binary offloading, where the user maximizes its probability of successful computation under a latency constraint. In a multi-user scenario, [13] considers using a multi-antenna AP to power the users and minimizes the AP's total energy consumption. [14] also considers maximizing the weighted sum computation rate of a multi-user wireless powered MEC network. However, both [13] and [14] assume the partial computation offloading policy. Mathematically speaking, partial offloading is a convex-relaxed version of the binary offloading policy. In a multi-user environment, the optimal design under the binary offloading policy often involves non-convex combinatorial optimization problems, which are much more challenging and currently lack systematic study.
In this paper, we consider a wireless powered MEC network as shown in Fig. 1, where the AP is reused as both energy transmitter and MEC server, which transfers RF power to, and receives computation offloading from, the WDs. Each device follows the binary offloading policy. In particular, we are interested in maximizing the weighted sum computation rate, i.e., the number of processed bits per second, of all the WDs in the network. Our contributions are detailed below.
We formulate a joint optimization of user computing mode selection and the system transmission time allocation. The combinatorial nature of multi-user computing mode selection makes the optimal solution hard to obtain in general. As a performance benchmark, an enumeration-based optimal method is presented for evaluating the proposed reduced-complexity algorithm.
We devise an ADMM-based technique that tackles the hard combinatorial mode selection by decomposing the original problem into parallel small-scale integer programming subproblems, one for each WD. We further show that the computational complexity of the proposed method increases slowly, at a rate linear in the network size.
Extensive simulations show that the proposed algorithm can achieve near-optimal performance under various network setups and significantly outperforms the other benchmark algorithms. Because of its low computational complexity, the proposed method is especially applicable to large-size IoT networks.
II System Model
II-A Network Model
As shown in Fig. 1, we consider a wireless powered MEC network consisting of an AP and WDs, where the AP and the WDs each have a single antenna. In particular, an RF energy transmitter and an MEC server are integrated at the AP. The AP is assumed to be connected to a stable power supply and broadcasts RF energy to the distributed WDs, while each WD has an energy harvesting circuit and a rechargeable battery that stores the harvested energy to power its operations. Each device, including the AP and the WDs, has a communication circuit. Specifically, we assume that WPT and communication are performed in the same frequency band. To avoid mutual interference, the communication and energy harvesting circuits of each WD operate in a time-division duplexing (TDD) manner. A similar TDD circuit structure is also applied at the AP to separate energy transmission and communication with the WDs. Within each system time frame of duration , the wireless channel gain between the AP and the -th WD is denoted by , which is assumed reciprocal for the downlink and uplink,^1 and static within each time frame but possibly varying across different time frames.
^1 The channel reciprocity assumption is made to obtain more design insights on the impact of the wireless channel. The proposed algorithm in this paper, however, can be easily extended to the case with unequal uplink and downlink channels.
Within each time frame, we assume that each WD needs to accomplish a certain computing task based on its local data. For instance, a WD acting as a wireless sensor needs to regularly generate an estimate, e.g., the pollution level of the monitored area, based on the raw data samples measured from the environment. In particular, the computing task of a WD can be performed locally by the on-chip micro-processor, which has low computing capability due to energy and size constraints. Alternatively, the WD can offload the data to the MEC server, which has much more powerful processing capability and will compute the task and send the result back to the WD.
In this paper, we assume that the WDs adopt a binary computation offloading rule. That is, a WD must choose to operate in either the local computing mode (mode , like WD in Fig. 1) or the offloading mode (mode , like WD and WD) in each time frame. In practice, this corresponds to a wide variety of applications. For instance, the measurement samples of a sensor are correlated in time, and thus need to be jointly processed to enhance the estimation accuracy.
II-B Computation Model
We consider an example transmission time allocation in Fig. 2. We use two non-overlapping sets and to denote the indices of WDs that operate in mode and , respectively. As such, is the set of all the WDs. In the first part of a tagged time frame, the AP broadcasts wireless energy to the WDs for an amount of time , where , and all the WDs harvest the energy. Specifically, the energy harvested by the -th WD is
where denotes the RF energy transmit power of the AP and denotes the energy harvesting efficiency. In the second part of the time frame , the WDs in (e.g., WD and WD in Fig. 1) offload their data to the AP. To avoid co-channel interference, we assume that the WDs take turns to transmit in the uplink, and the time that a WD transmits is denoted by , . Depending on the selected computing mode, the detailed operation of each WD is illustrated as follows.
II-B1 Local Computing Mode
Notice that the energy harvesting circuit and the computing unit are separate. Thus, a mode- WD can harvest energy and compute its task simultaneously. Let denote the number of computation cycles needed to process one bit of raw data, which is assumed equal for all the WDs. Let denote the processor's chosen computing speed (in cycles per second) and denote the computation time of the WD. The power consumption of the processor is modeled as (in joules per second), where denotes the computation energy efficiency coefficient of the processor's chip. Then, the total energy consumption is constrained by
to ensure sustainable operation of the WD (we assume each WD has sufficient initial energy at the very beginning and a sufficiently large battery capacity, such that battery overcharging is negligible; besides, for simplicity, we do not assume a maximum computing speed for the WDs, considering their low harvested energy). With the above computation model, the computation rate of the WD (in bits per second), denoted by , can be calculated as
where the inequality is obtained from (2). Therefore, the maximum is achieved by setting , i.e., the WD computes for a maximal allowable time throughout the time frame and at a minimal possible computing speed. By substituting and into (3), the maximum local computation rate of a mode- WD is
where is a fixed parameter.
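The local-mode rate above can be sketched numerically. In this hedged sketch, the symbol names and default values (`phi` cycles per bit, chip coefficient `k`) are assumptions, since the original expressions are elided from the extracted text; only the structure (cubic energy model, computing for the whole frame at the minimal sustainable speed) follows the discussion above.

```python
import math

def max_local_rate(E, T, phi=100.0, k=1e-26):
    """Maximum local computation rate (bits/s) of a local-computing WD.

    Assumed notation: E = harvested energy (J), T = frame duration (s),
    phi = CPU cycles needed per bit, k = computation energy efficiency
    coefficient of the chip.

    With processor power ~ k * f^3 and compute time t, the energy budget
    k * f^3 * t <= E is loosest at t = T, so the WD computes for the whole
    frame at the minimal sustainable speed f = (E / (k * T))**(1/3).
    The rate is (f * t / phi) / T = f / phi bits per second.
    """
    f = (E / (k * T)) ** (1.0 / 3.0)  # optimal CPU speed (cycles/s)
    return f / phi                    # bits processed per second
```

Note the cube-root dependence: doubling the harvested energy increases the local rate by only about 26%, which is why local computing remains attractive even for energy-poor WDs.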
II-B2 Offloading Mode
Due to the TDD circuit constraint, a mode- WD can only offload its task to the AP after harvesting energy. We denote the number of bits to be offloaded to the AP as , where denotes the amount of raw data and indicates the communication overhead of task offloading, such as the packet header and encryption. Let denote the transmit power of the -th WD. Then, the maximum equals the data transmission capacity, i.e.,
where denotes the communication bandwidth and denotes the receiver noise power.
After receiving the raw data of all the WDs, the AP computes and sends the output result of length bits back to the corresponding WD. Here, indicates the output/input ratio, including the overhead of downlink transmission. In practice, the computing capability and the transmit power of the AP are much stronger than those of the energy-harvesting WDs, e.g., by more than three orders of magnitude. Besides, is a very small value, e.g., one output temperature estimate computed from tens of input sensing samples. Accordingly, we neglect the time spent on task computation and feedback by the AP, as in [8, 12, 13]. In this case, task offloading can occupy the rest of the time frame after WPT, i.e., . From the above discussion, we also neglect the energy consumed by a WD on receiving the result from the AP and consider only the energy consumption of data transmission to the AP. In this case, the WD should exhaust its harvested energy on task offloading, i.e., , to maximize its computation rate. From (5), the maximum computation rate of a mode- WD is
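The offloading-mode rate can likewise be sketched. All symbol names and default values here are assumptions standing in for the elided expressions; the structure (exhaust harvested energy over the offload time, Shannon-capacity transmission, overhead factor on raw data) follows the text above.

```python
import math

def max_offload_rate(E, h, tau, T, B=2e6, N0=1e-10, v=1.1):
    """Maximum computation rate (bits/s) of an offloading WD; a sketch.

    Assumed notation: E = harvested energy (J), h = uplink channel gain,
    tau = offloading time (s), T = frame duration (s), B = bandwidth (Hz),
    N0 = receiver noise power (W), v >= 1 = communication overhead factor
    (packet header, encryption, etc.).
    """
    P = E / tau  # WD exhausts its harvested energy on transmission (W)
    offloaded_bits = B * tau * math.log2(1 + P * h / N0)  # capacity limit
    return offloaded_bits / (v * T)  # raw-data bits computed per second
```

Unlike the local mode's cube-root scaling, the offloading rate grows logarithmically in the harvested energy but linearly in bandwidth and offloading time, which is why the time shared between WPT and offloading must be optimized jointly.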
II-C Problem Formulation
In this paper, we maximize the weighted sum computation rate of all the WDs in each time frame. From (4) and (6), the computation rates of the WDs are related to their computing mode selection and the system resource allocation on WPT and task offloading. Mathematically, the computation rate maximization problem is formulated as follows.
Here, and . denotes the weight of the -th WD. denotes the offloading time of the mode- WDs. The two terms of the objective function correspond to the computation rates of mode- and mode- WDs, respectively. (7c) is the time allocation constraint.
Due to the stringent energy and computation limitations of the WDs, we adopt a centralized control scheme where the AP is responsible for all the computations and coordination, including selecting the computing mode for each WD. Among all the parameters in (P1), the AP only needs to estimate the wireless channel gains 's, which vary in each time frame. The others are static parameters that remain constant for a sufficiently long period of time, such as 's and 's. Then, the AP solves (P1) and broadcasts the solution to the WDs, which react by operating in their designated computing modes.^3
^3 The energy and time consumed on channel estimation and coordination can be modeled as two constant terms that do not affect the validity of the proposed algorithm; they are neglected in this paper for simplicity.
Problem (P1) is a hard non-convex problem due to the combinatorial computing mode selection. However, we observe that the second term in the objective is jointly concave in . Once is given, (P1) reduces to a convex problem, where the optimal time allocation can be efficiently found using off-the-shelf optimization algorithms, e.g., the interior point method. Accordingly, a straightforward method is to enumerate all possible mode combinations and output the one that yields the highest objective value. The enumeration-based method may be applicable for a small number of WDs, e.g., , but quickly becomes computationally infeasible as the network size increases. Therefore, it is mainly used as a benchmark to evaluate the performance of the proposed reduced-complexity algorithm. Before entering formal discussion of the algorithm design, it is worth mentioning that a closely related max-min rate optimization problem, which maximizes the minimum computation rate among the WDs, has a dual problem in the weighted-sum-rate-maximization form of (P1). In this sense, the proposed method can also be extended to enhance user fairness.
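The enumeration benchmark described above can be sketched as follows. The inner convex time-allocation solver is a placeholder (the hypothetical `solve_time_allocation`), since in the paper it is handled by an off-the-shelf convex method; only the exhaustive search over mode vectors is shown.

```python
from itertools import product

def enumerate_optimal(N, solve_time_allocation):
    """Benchmark: enumerate all 2^N computing-mode vectors.

    `solve_time_allocation(modes)` is an assumed callback that, for a
    fixed mode vector (0 = local computing, 1 = offloading), solves the
    resulting convex problem and returns the optimal weighted sum rate.
    """
    best_rate, best_modes = float("-inf"), None
    for modes in product((0, 1), repeat=N):
        rate = solve_time_allocation(modes)
        if rate > best_rate:  # keep the best mode combination seen so far
            best_rate, best_modes = rate, modes
    return best_modes, best_rate
```

With 2^N inner convex problems to solve, this is tractable for roughly N <= 20 but infeasible beyond that, motivating the ADMM-based decomposition in the next section.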
III An ADMM-Based Joint Optimization Method
III-A Reformulation of (P1)
In this section, we propose an ADMM-based method to solve (P1). The main idea is to decompose the hard combinatorial optimization (P1) into parallel smaller integer programming problems, one for each WD. Conventional decomposition techniques, such as dual decomposition, cannot be directly applied to (P1) due to the coupling factors in both objective and constraint. We first reformulate (P1) as an equivalent integer programming problem by introducing binary decision variables ’s and additional artificial variables ’s and ’s as follows
Here, for all and for all . and . With a slight abuse of notation, we denote . Notice that variables and are immaterial to the objective if . Then, (8) can be equivalently written as
Problem (9) can be effectively decomposed using the ADMM technique, which solves for the optimal dual solution. By introducing multipliers for the constraints in (9b), we can write a partial augmented Lagrangian of (9) as
where , , and . is a fixed step size. The corresponding dual function is
where denotes a binary vector. Furthermore, the dual problem is
III-B Proposed ADMM Iterations
The ADMM technique solves the dual problem (11) by iteratively updating , , and . We denote the values in the -th iteration as . Then, in the -th iteration, the update of the variables is performed sequentially as follows:
III-B1 Step 1
Given , we first maximize with respect to , where
Notice that (12) can be decomposed into parallel subproblems. Each subproblem solves
We can equivalently express (13) as
For both and , (14) is a strictly convex problem, and thus the optimal solution can be easily obtained, e.g., using the projected Newton's method. Accordingly, we simply select whichever of and yields the larger objective value in (14) as , and the corresponding optimal solution as and . After solving the parallel subproblems, the optimal solution to (12) is given by . Notice that the complexity of solving each subproblem does not scale with the network size (i.e., complexity), thus the overall computational complexity of Step 1 is .
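The per-WD decomposition of Step 1 can be sketched as below. The two per-mode convex solvers are placeholders (hypothetical callables, e.g., projected Newton iterations); only the solve-both-modes-and-keep-the-larger-objective selection follows the text.

```python
def admm_step1(subproblems):
    """Step 1 sketch: one independent subproblem per WD.

    `subproblems` is a list of (solve_mode0, solve_mode1) pairs, one per
    WD; each assumed solver returns (objective_value, solution) for the
    strictly convex problem (14) with the binary variable fixed to that
    mode. The loop is embarrassingly parallel, so total work is O(N).
    """
    selections = []
    for solve_mode0, solve_mode1 in subproblems:
        obj0, sol0 = solve_mode0()  # candidate: local computing mode
        obj1, sol1 = solve_mode1()  # candidate: offloading mode
        selections.append((0, sol0) if obj0 >= obj1 else (1, sol1))
    return selections
```

Because each WD compares only its own two candidate modes, the combinatorial coupling of the original problem is pushed entirely into the multiplier updates.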
III-B2 Step 2
Given , we then maximize with respect to . By the definition of in (10), must hold at the optimum. Accordingly, the maximization problem can be equivalently written as the following convex problem
Instead of using standard convex optimization algorithms, e.g., interior point method, here we devise an alternative low-complexity algorithm. By introducing a multiplier to the constraint , it holds at the optimum that
where . As and are non-increasing in , the optimal solution can be obtained by a bisection search over , where is a sufficiently large value, until is satisfied (if possible), and then comparing the result with the case of (i.e., the case that ). The details are omitted due to the page limit. Overall, the computational complexity of the bisection search method for solving (15) is .
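The bisection procedure can be sketched as follows. The `residual` function is a hypothetical stand-in for the elided constraint slack (assumed non-increasing in the multiplier, as stated above); only the search logic, including the multiplier-equals-zero corner case, follows the text.

```python
def bisect_multiplier(residual, nu_max, tol=1e-9):
    """Bisection search for the multiplier of the coupling constraint.

    `residual(nu)` (an assumed callback) returns the slack of the time
    allocation constraint and is non-increasing in nu. We return the
    smallest nu in [0, nu_max] with residual(nu) <= 0; if the constraint
    is already satisfied at nu = 0, it is inactive and nu = 0 is optimal.
    """
    if residual(0.0) <= 0:
        return 0.0  # constraint inactive at the optimum
    lo, hi = 0.0, nu_max  # invariant: residual(lo) > 0 >= residual(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return hi
```

Each evaluation of the residual is O(N) work over the WDs and the search takes O(log(nu_max / tol)) iterations, which is how Step 2 attains its stated linear-in-N complexity.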
III-B3 Step 3
Finally, given and , we minimize with respect to , which is achieved by updating the multipliers as
Evidently, the computational complexity of Step 3 is .
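Step 3 is the standard ADMM dual update; a minimal sketch with assumed variable names (the extracted text elides the symbols):

```python
def admm_step3(multipliers, local_vars, global_vars, rho=1.0):
    """Multiplier update: dual ascent with step size rho on the consensus
    residuals between the per-WD local copies and the global variables.
    The update form y <- y + rho * (local - global) is the standard ADMM
    rule; the variable names are assumptions.
    """
    return [y + rho * (x_local - x_global)
            for y, x_local, x_global in zip(multipliers, local_vars, global_vars)]
```

One pass over the multipliers suffices, which gives the O(N) complexity claimed above.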
The above Steps 1 to 3 repeat until a specified stopping criterion is met. In general, the stopping criterion is specified by two thresholds: an absolute tolerance (e.g., ) and a relative tolerance (e.g., ). The pseudo-code of the ADMM method for solving (P1) is illustrated in Algorithm 1. As the dual problem (11) is convex in , the convergence of the proposed method is guaranteed. Meanwhile, the convergence of the ADMM method is insensitive to the choice of step size ; thus, we set without loss of generality. Besides, the computational complexity of one ADMM iteration (including the three steps) is , because each step has complexity. Notice that the ADMM algorithm may not exactly converge to the primal optimal solution of (8) due to the potential duality gap of the non-convex problem. Therefore, upon termination of the algorithm, the dual optimal solution is an approximate solution to (8), whose performance gap is evaluated through simulations.
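The two-tolerance stopping rule mentioned above is the standard one of Boyd et al.; a sketch with assumed residual and norm inputs:

```python
import math

def admm_converged(primal_res, dual_res, x_norm, z_norm, y_norm,
                   n, eps_abs=1e-4, eps_rel=1e-3):
    """Standard ADMM stopping criterion (Boyd et al.).

    Stop when the primal residual norm and the dual residual norm both
    fall below tolerances combining an absolute term (scaled by the
    problem dimension n) and a relative term (scaled by the iterate
    norms). The argument names are assumptions for this sketch.
    """
    eps_pri = math.sqrt(n) * eps_abs + eps_rel * max(x_norm, z_norm)
    eps_dual = math.sqrt(n) * eps_abs + eps_rel * y_norm
    return primal_res <= eps_pri and dual_res <= eps_dual
```

Checking both residuals prevents stopping while the consensus constraints are still violated (large primal residual) or while the multipliers are still moving (large dual residual).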
IV Simulation Results
In this section, we present simulations to evaluate the performance of the proposed algorithm. In all simulations, we use the parameters of the Powercast TX91501-3W transmitter with W (Watt) as the energy transmitter at the AP, and those of the P2110 Powerharvester as the energy receiver at each WD, with energy harvesting efficiency.^4 Without loss of generality, we set . The wireless channel gain follows the free-space path loss model , where denotes the antenna gain, MHz denotes the carrier frequency, in meters denotes the distance between the WD and the AP, and denotes the path loss exponent. Unless otherwise stated, . Likewise, we set equal computing efficiency parameters , , and for all the WDs. For the data offloading mode, the bandwidth is MHz, and the noise power is watts.
^4 See the detailed product specifications on the website of Powercast Co. (http://www.powercastco.com).
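The channel and energy harvesting models used in the simulations can be sketched as follows. All default parameter values here are illustrative assumptions (the extracted text elides the actual numbers); only the model forms, free-space path loss and linear energy harvesting, follow the setup above.

```python
import math

def channel_gain(d, G=4.11, fc=915e6, de=2.8):
    """Free-space path loss channel gain between the AP and a WD.

    Assumed parameters: G = antenna gain, fc = carrier frequency (Hz),
    de = path loss exponent, d = AP-WD distance (m). The model is
    G * (c / (4 * pi * fc * d))**de with c the speed of light.
    """
    return G * (3e8 / (4 * math.pi * fc * d)) ** de

def harvested_energy(d, P=3.0, mu=0.51, wpt_time=1.0):
    """Energy (J) harvested by a WD at distance d over wpt_time seconds.

    P = AP transmit power (W) and mu = harvesting efficiency are
    illustrative values for a Powercast TX91501-3W / P2110 pair, treated
    here as assumptions. Linear harvesting model: E = mu * P * h * time.
    """
    return mu * P * channel_gain(d) * wpt_time
```

Because the gain decays as d^(-de), doubling the AP-WD distance cuts the harvested energy by a factor of 2^de, which drives the mode-selection trends reported below.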
IV-A Computation Rate Performance Comparisons
We first evaluate the computation rate performance of the proposed ADMM-based algorithm. For performance comparisons, we consider the following three representative benchmark methods:
Optimal: exhaustively enumerates all the combinations of WDs’ computing modes;
Offloading only: all the WDs offload their tasks to the AP, ;
Local computing only: all the WDs perform computations locally, .
In Fig. 3(a), we compare the weighted sum computation rate achieved by different schemes when the path loss exponent increases from to . For simplicity of illustration, we consider and set meters, . In this case, the WDs are equally spaced by meter, where WD () has the strongest wireless channel and WD () has the weakest wireless channel. Besides, we set if is an odd number and otherwise. We see that when is small and the wireless channels are strong, e.g., , the offloading-only scheme achieves near-optimal performance. However, as we increase , the performance of the offloading-only scheme quickly degrades, e.g., achieving only around of the optimal rate when , because the offloading rates severely suffer from the weak channels in both the uplink and downlink. In contrast, the local-computing-only scheme achieves the worst performance when is small (only around of the maximum when ) but near-optimal performance when . The proposed ADMM method, on the other hand, achieves near-optimal performance for all values of (at most performance gap compared to the optimal value).
In Fig. 3(b), we fix and compare the computation rate performance when the average distance between the AP and the WDs varies. For simplicity of illustration, we consider WDs uniformly placed within the range , with a meter spacing between every two adjacent WDs. In this sense, the placement of the WDs in Fig. 3(a) corresponds to . The weight assignment follows that in Fig. 3(a). We observe that the proposed ADMM method achieves near-optimal performance for all values of . The offloading-only scheme achieves relatively good performance when is small, e.g., , but poor performance when is large ( of the optimal value when ). The local-computing-only scheme, conversely, performs poorly when is small ( of the optimal value when ) but achieves near-optimal performance when is large. The results show that it is preferable for a WD to offload computation when its wireless channel is strong and to perform local computing otherwise.
In Fig. 4, we compare the performance of different algorithms when the number of WDs varies from to . For each WD, its distance to the AP is uniformly generated as , and its weight is randomly assigned as either or with equal probability. Besides, each point in the figure is the average performance over independent random placements. Unlike in Fig. 3, the optimal performance is not plotted, because the mode-enumeration based optimal method is computationally infeasible for most values of within the considered range; for example, requires enumerating computing mode combinations. Instead, we only compare the performance of the sub-optimal methods. We see that the proposed ADMM method significantly outperforms the two benchmark methods, achieving around and higher average computation rate than the offloading-only and local-computing-only schemes, respectively. In particular, the offloading-only scheme performs relatively well when , but its rate increase becomes slower than that of the other methods as grows larger.
To sum up, Figs. 3 and 4 show that the performance of the offloading-only and local-computing-only methods is very sensitive to the network parameters and placement, e.g., the path loss exponent, distance, and network size, and may be very poor in some practical setups. In contrast, even with a fixed initial point, the proposed ADMM method achieves near-optimal computation rate performance under different network setups.
IV-B Computational Complexity Evaluation
In Fig. 5, we characterize the computational complexity of the proposed ADMM-based algorithm. Here, we use the same network setup as in Fig. 4 and plot the average number of iterations consumed by Algorithm 1 before convergence as the number of WDs varies. Interestingly, we observe that the ADMM-based method consumes an almost constant number of iterations for different within the considered range, i.e., . As the computational complexity of one ADMM iteration is , the overall computational complexity of the ADMM-based method is as well. The result indicates that the complexity of the proposed ADMM-based method increases slowly as the network size increases. Therefore, it is feasible to apply the ADMM-based method in a large-size IoT network, where the network size dominates the overall complexity.
V Conclusions
In this paper, we studied the weighted sum computation rate maximization problem in a multi-user wireless powered edge computing network under the binary computation offloading policy. We formulated the problem as a joint optimization of individual computing mode selection and system transmission time allocation. In particular, we proposed an efficient ADMM-based method to tackle the hard combinatorial computing mode selection problem. Extensive simulation results showed that, with time complexity, the proposed ADMM-based method achieves near-optimal computation rate performance under different network setups and significantly outperforms the other representative benchmark methods.
-  S. Bi, C. K. Ho, and R. Zhang, “Wireless powered communication: opportunities and challenges,” IEEE Commun. Mag., vol. 53, no. 4, pp. 117-125, Apr. 2015.
-  S. Bi, Y. Zeng, and R. Zhang, “Wireless powered communication networks: an overview,” IEEE Wireless Commun., vol. 23, no. 2, pp. 10-15, Apr. 2016.
-  S. Bi and R. Zhang, “Placement optimization of energy and information access points in wireless powered communication networks,” IEEE Trans. Wireless Commun., vol. 15, no. 3, pp. 2351-2364, Mar. 2016.
-  S. Bi and R. Zhang, “Distributed charging control in broadband wireless power transfer networks,” IEEE J. Sel. Areas in Commun., vol. 34, no. 12, pp. 3380-3393, Dec. 2016.
-  M. Chiang and T. Zhang, “Fog and IoT: An overview of research opportunities,” IEEE Internet Things J., vol. 3, no. 6, pp. 854-864, Jun. 2016.
-  Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, “A survey on mobile edge computing: the communication perspective,” IEEE Commun. Surveys Tuts, vol. 19, no. 4, pp. 2322-2358, Aug. 2017.
-  ETSI white paper No. 11, “Mobile edge computing: A key technology towards 5G,” Sep. 2015. Available online at http://www.etsi.org/images/files/ETSIWhitePapers/etsi_wp11_mec_a
-  W. Zhang, Y. Wen, K. Guan, D. Kilper, H. Luo, and D. O. Wu, “Energy-optimal mobile cloud computing under stochastic wireless channel,” IEEE Trans. Wireless Commun., vol. 12, no. 9, pp. 4569-4581, Sep. 2013.
-  Y. Wang, M. Sheng, X. Wang, L. Wang, and J. Li, “Mobile-edge computing: partial computation offloading using dynamic voltage scaling,” IEEE Trans. Commun., vol. 64, no. 10, pp. 4268-4282, Oct. 2016.
-  C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-efficient resource allocation for mobile-edge computation offloading,” IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1397-1411, Mar. 2017.
-  M.-H. Chen, B. Liang, and M. Dong, “Joint offloading decision and resource allocation for multi-user multi-task mobile cloud,” in Proc. IEEE Int. Conf. Commun. (ICC), Kuala Lumpur, Malaysia, May 2016, pp. 1-6.
-  C. You, K. Huang, and H. Chae, “Energy efficient mobile cloud computing powered by wireless energy transfer,” IEEE J. Sel. Areas Commun., vol. 34, no. 5, pp. 1757-1771, May 2016.
-  F. Wang, J. Xu, X. Wang, and S. Cui, “Joint offloading and computing optimization in wireless powered mobile-edge computing systems,” to appear in IEEE Trans. Wireless Commun., available on-line at arxiv.org/abs/1702.00606.
-  F. Wang, “Computation rate maximization for wireless powered mobile edge computing,” submitted for publication, available on-line at arxiv.org/abs/1707.05276.
-  S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
-  S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1-122, Jan. 2011.
-  E. Ghadimi, A. Teixeira, I. Shames, and M. Johansson, “Optimal parameter selection for the alternating direction method of multipliers (ADMM): quadratic problems,” IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 644-658, Mar. 2015.