Mobile Edge Computing Network Control: Tradeoff Between Delay and Cost

As mobile edge computing (MEC) finds widespread use for relieving the computational burden of compute- and interaction-intensive applications on end user devices, understanding the resulting delay and cost performance is drawing significant attention. While most existing works focus on single-task offloading in single-hop MEC networks, next generation applications (e.g., industrial automation, augmented/virtual reality) require advanced models and algorithms for dynamic configuration of multi-task services over multi-hop MEC networks. In this work, we leverage recent advances in dynamic cloud network control to provide a comprehensive study of the performance of multi-hop MEC networks, addressing the key problems of multi-task offloading, timely packet scheduling, and joint computation and communication resource allocation. We present a fully distributed algorithm based on Lyapunov control theory that achieves throughput-optimal performance with delay and cost guarantees. Simulation results validate our theoretical analysis and provide insightful guidelines on the interplay between communication and computation resources in MEC networks.





I Introduction

Resource- and interaction-intensive applications such as real-time computer vision and augmented reality will increasingly dominate our daily lives [13]. Due to the limited computation capabilities and restricted energy supply of end user equipment (UEs), many resource-demanding tasks that cannot be executed locally end up being offloaded to centralized cloud data centers. However, the additional delays incurred in routing data streams from UEs to distant clouds significantly degrade the performance of real-time interactive applications. To address this challenge, mobile edge computing (MEC) emerges as an attractive alternative by bringing computation resources to edge servers deployed close to the end users (e.g., at base stations), striking a good balance between cost efficiency and low latency access.

Fig. 1: An illustrative multi-hop MEC network consisting of UEs, edge cloud servers, and the core cloud. Edge servers communicate with UEs via wireless links, and among each other and the core cloud via wired connections.
Fig. 2: A function chain representation of an augmented reality service, composed of Tracker, Object Recognizer, and Renderer functions [1].

Delay and cost are hence two crucial criteria when evaluating the performance of MEC networks. Offloading intensive tasks to the cloud reduces overall resource cost (e.g., energy consumption) by taking advantage of more efficient and less energy-constrained cloud servers, at the expense of increasing end-to-end delay. In order to optimize such cost-delay tradeoff, MEC operators have to make critical decisions:

  • Task offloading: decide which service tasks should be processed at the UEs (locally) and which at the edge cloud, and in which servers;

  • Packet routing and scheduling: decide how to route incoming packets to the appropriate servers assigned to execute the corresponding tasks;

  • Resource allocation: determine the amount of computation resources to allocate for task execution (at both UEs and edge servers) and the amount of communication resources (e.g., transmission power) to allocate for the transmission of data streams through the MEC network.

Some of these problems have been addressed in existing literature. We refer interested readers to [13, 1] and references therein for an overview of recent MEC studies. For example, the task offloading problem for minimizing average delay under UE battery constraints is addressed in [5]; the dual problem of minimizing energy consumption subject to worst-case delay constraints is studied in [6]; [11] investigates the same problem under an objective function that trades off the two criteria. However, most existing works on MEC focus on simplified versions of a subset of the problems listed above: they consider single-task offloading, assignment of task requests to edge servers without explicit multi-hop routing, or computation resource allocation without joint optimization of computation and communication resources.

On the other hand, recent works in the cloud networking literature have addressed the optimization of more complex services composed of multiple tasks/functions (e.g., service function chains) that can be executed at multiple cloud locations [12, 2, 4]. However, this line of work has focused on static wireline networks, without taking into account aspects such as uncertain channel conditions, time-varying service demands, and delay optimization, critical to MEC networks.

Driven by the advent of increasingly complex services and heterogeneous MEC networks, and leveraging recent advances in the use of Lyapunov control theory for distributed computing networks [8, 9], in this work we focus on the design of dynamic control policies for multi-hop MEC networks (see Fig. 1) hosting multi-task services (see Fig. 2). Our contributions can be summarized as follows:

  • We develop an efficient algorithm for dynamic MEC network control that jointly solves the problems of multi-task offloading, multi-hop routing, and joint computation-communication resource allocation.

  • We show how the algorithm allows tuning the cost-delay tradeoff while guaranteeing throughput-optimality.

  • We provide numerical simulations that validate our theoretical claims in practical MEC settings.

II System Model

Consider a MEC network as shown in Fig. 1. Let and denote the set of UEs and edge servers, respectively, with . The UEs can communicate with the edge cloud via wireless channels, while wired connections exist between nearby edge servers and between the edge servers and the core cloud. (In line with most works on MEC, we do not consider cooperation between UEs, i.e., we assume UEs do not compute/transmit/receive packets irrelevant to their own; it is straightforward to extend the proposed model to include such cooperation.) Wireless and wireline links are collected in and , respectively. A communication link with node as the transmitter and node as the receiver is denoted by . The incoming and outgoing neighbors of node are collected in the sets and , respectively; in particular, denotes the set of wireless outgoing neighbors of node .

Time is divided into slots of appropriate length , chosen such that the uncontrollable processes (e.g., channel state information (CSI) and packet arrivals) are independent and identically distributed (i.i.d.) across time slots. Each time slot is divided into three phases. In the sensing phase, neighboring nodes exchange local information (e.g., queue backlogs) and collect CSI. Then, in the outgoing phase, decisions on task offloading, resource allocation, and packet scheduling are made and executed by each node. Finally, during the incoming phase, each node receives incoming packets from neighbor nodes, its local processing unit, and (possibly) its sensing equipment.

The following parameters characterize the available computation resources in the MEC network:

  • : the possible levels of computational resource that can be allocated at node ;

  • : the compute capability (e.g., computing cycles) when resources are allocated at node ;

  • : the setup cost to allocate computational resources at node ;

  • : the computation unit operational cost (e.g., cost per computing cycle) at node .

Similarly, for the wireline transmission resources on links :

  • : the possible levels of transmission resources that can be allocated on link ;

  • : the transmission capability (e.g., bits per second) of transmission resources on link ;

  • : the setup cost of allocating transmission resources on link ;

  • : the transmission unit operational cost (e.g., cost per packet) on link .

We denote by

the resource allocation vector at time


Finally, we assume that each node has a maximum power budget for wireless transmission, and each unit of energy consumption leads to a cost of at node . More details on the wireless transmission are presented in the next subsection.

II-A Wireless Transmission Model

This section focuses on the wireless transmissions between the UEs and the edge cloud. We employ the channel model proposed in [3], as depicted in Fig. 1. A massive antenna array is deployed at each edge server, which, with the aid of beamforming techniques, can transmit/receive data to/from multiple UEs simultaneously in the same band. The interference between different links can be neglected, given that the UEs are spatially well separated. Different edge servers are separated via a frequency division scheme, in which a bandwidth of is allocated to each edge server. Each UE is assumed to associate with only one edge server at a time.

The transmission power of each wireless link is assumed to be constant during a time slot, and we define as the power allocation decision of node . The packet rate of link then follows


where denotes the channel gain, which is assumed to be i.i.d. over time; is the noise power. Recall that each node has a maximum transmission power of , and it follows


For each UE , a binary vector is defined to indicate its task offloading decision, where link is activated if the corresponding element . It then follows that


Finally, we define the aggregated vectors as and , respectively.
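The packet-rate relation above can be sketched as follows, assuming the standard Shannon capacity formula for an interference-free link (per the channel model of Sec. II-A); the function name and signature are illustrative, not from the paper:

```python
import math

def link_packet_rate(bandwidth_hz, channel_gain, tx_power, noise_power, packet_bits):
    """Packets per second deliverable over an interference-free wireless link,
    assuming the Shannon rate B * log2(1 + SNR) (a modeling assumption)."""
    if tx_power <= 0:
        return 0.0  # no power allocated -> no transmission
    snr = channel_gain * tx_power / noise_power
    return bandwidth_hz * math.log2(1.0 + snr) / packet_bits
```

Dividing the bit rate by the packet size converts the link capability into the packet units used by the flow variables.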

Remark 1

The length of each time slot is set to match the channel coherence time, which is on the order of milliseconds for millimeter wave communication (e.g., for a slowly moving pedestrian using a sub-GHz band). On this basis, the change in network topology (and hence path loss) is assumed to be negligible between adjacent time slots.

Remark 2

While out of the scope of this paper, the use of wireless transmissions via broadcast can potentially increase route diversity, thus enhancing network performance when aided by techniques such as superposition coding [9].

II-B Service Model

Let denote the set of available services, where each service is completed by sequentially performing tasks (functions) on the input data stream. All data streams are composed of packets of equal size , which can be processed individually. We denote by the number of packets of service exogenously arriving at node at time , assumed to be i.i.d. across time and with mean value .

We denote by the scaling factor of the -th function of service , i.e., the ratio of the output stream size to the input stream size; and by its workload, i.e., the required computation resource (number of computing cycles) to process a unit of input data stream.

For a given service , a stage packet refers to a packet that is part of the output stream of function , or the input stream of function . Hence, a service input packet (exogenously arriving to the network) is a stage packet, and a service output packet (fully processed by the sequence of functions) is a stage packet.

II-C Queueing System

Based on the above service model, the state of a packet is completely described by a -tuple , where is the destination of the packet (i.e., the user requesting the service), is the requested service, and is the current service stage. We refer to a packet in state as a -commodity packet. A distinct queue is created for each commodity at every node , with queue length denoted by , and the overall set of MEC queue sizes denoted by .

To indicate the number of packets that each node plans to compute and transmit, corresponding flow variables are defined. (The planned flow does not take the number of available packets into account, i.e., a node may plan to compute/transmit more packets than it actually has; the flows are defined this way for mathematical convenience.) For commodity , we denote by the number of packets node plans to send to its central processing unit (CPU), the number of packets node expects to collect from its CPU, and the number of packets node plans to transmit to node . Flow variables, collected in , must satisfy: 1) non-negativity: ;

2) service chaining constraints:


3) capacity constraints:


4) boundary conditions: , , ,


The queueing dynamics are then given by (the inequality is due to the definition of planned flow, i.e., the last line in (II-C) can be larger than the number of packets that node actually receives)


where .

Finally, we set and to indicate that only stage packets can arrive exogenously to the network and only stage packets can exit the network.
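The queueing dynamics can be sketched as the familiar max-plus update (a sketch with illustrative names; the clamp is exactly why the dynamics hold with inequality when flows are only planned):

```python
def queue_update(backlog, planned_outflow, actual_inflow):
    """One-slot queue evolution: drain at most the current backlog, then add
    incoming packets. Because flows are *planned*, a node may schedule more
    departures than it holds, hence the max(...) clamp."""
    return max(backlog - planned_outflow, 0) + actual_inflow
```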

II-D Problem Formulation

Two metrics will be considered to evaluate the performance of the MEC network: resource cost and average delay.

The instantaneous resource cost at time is given by


which includes computation, wired transmission, and wireless transmission costs.

On the other hand, the average delay is derived according to Little’s theorem [10] as


where , and denotes the long-term average of the random process . (Note that our average delay metric is computed by normalizing queue sizes by the corresponding scaling factors, for fairness; additional priority weights may be used to indicate service-specific latency requirements, independent of stream sizes.)
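The Little's-theorem computation can be sketched as follows (names are illustrative; per the text, average backlogs are normalized by the service scaling factors before dividing by the total arrival rate):

```python
def average_delay(avg_backlogs, scaling_factors, total_arrival_rate):
    """Average delay via Little's law: scaled total average backlog
    divided by the total exogenous arrival rate."""
    normalized = sum(q / s for q, s in zip(avg_backlogs, scaling_factors))
    return normalized / total_arrival_rate
```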

We then formulate the dynamic MEC network control problem as that of making operation decisions over time to:


where (10b) is a necessary condition for network stability.

Bearing this goal in mind, in the next section we propose a parameter-dependent tunable control policy that allows not only finding the minimum average resource cost that guarantees MEC stability, but also computing alternative operating points that trade increased resource cost for reduced average delay .

III MEC Network Control

In this section, we derive our proposed MEC network control (MECNC) policy by applying Lyapunov optimization theory to problem (10).

III-A Lyapunov Drift-Plus-Penalty (LDP)

Let the Lyapunov function of the MEC queuing system be defined as , where denotes a diagonal matrix with as its elements. The standard Lyapunov optimization procedure is to 1) observe the current queue status , as well as the CSI , and then 2) minimize an upper bound of the drift-plus-penalty function


where parameter controls the tradeoff between the drift and the penalty , and the upper bound is given by the following (the inequality holds under the mild assumption that there exists a constant that bounds the arrival process, i.e., for )


where , and the weights are given by


and is a constant independent of the queue status and decision variables (see [8] for the details of the derivation).
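For reference, the drift-plus-penalty expression being bounded takes the standard Lyapunov-optimization form (a sketch; the symbols $V$ for the control parameter and $h(t)$ for the instantaneous resource cost were elided in the extraction above and are assumptions here):

\Delta(\mathbf{Q}(t)) + V\,\mathbb{E}\{h(t) \mid \mathbf{Q}(t)\}, \qquad \Delta(\mathbf{Q}(t)) \triangleq \mathbb{E}\{L(\mathbf{Q}(t+1)) - L(\mathbf{Q}(t)) \mid \mathbf{Q}(t)\}.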

The control algorithm developed in this paper aims to minimize the upper bound (III-A), which includes three parts (last three lines), relating to computation, wired transmission, and wireless transmission, respectively. Each part can be optimized separately, with the solutions to the first two parts provided in [8] and the third problem addressed in the following.

The goal is to minimize the last term in (III-A) subject to (2), (3), and (5b). Note that the objective function is linear in , which leads to a max-weight-type solution, i.e., finding the commodity with the largest weight:


If , no packets will be transmitted, and thus no power will be allocated; otherwise, the optimal flow assignment is


when , and otherwise. Substituting the above result into the objective function leads to a reduced problem with respect to (w.r.t.) and , i.e.,


subject to (2) and (3). Note that we can rearrange the above objective function according to the transmitting node as


which enables separate decision making at different nodes. The optimal solution for the UEs and edge servers is presented in the next section, based on the following proposition.

Proposition 1

The solution to the following problem


is , where is the minimum positive value that satisfies (2).

TABLE I: Available resources and costs of the MEC network (per second). Columns: User, Edge Server; rows: Wired Links (no wired transmission between users) and Wireless Links.

III-B MEC Network Control (MECNC) Algorithm

In this section, we present the MECNC algorithm, which optimizes (III-A) in a fully distributed manner.

III-B1 Computing Decision

For each node :

  • Calculate the weight for each commodity:

  • Find the commodity with the largest weight:

  • The optimal choice for computing resource allocation is

  • The optimal flow assignment is


    when , and otherwise, with denoting the indicator function.
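The computing decision above (and the analogous wired-transmission decision) reduces to a generic max-weight step, sketched below under the assumption that the per-commodity weights have already been computed; the names and the weight-times-capacity-minus-V-times-cost objective form are illustrative:

```python
def max_weight_allocation(weights, levels, V):
    """Pick the commodity with the largest weight, then the resource level
    maximizing (weight * capacity - V * cost); allocate nothing if no level
    yields a positive value.

    weights: {commodity: weight}; levels: list of (capacity, cost) pairs.
    Returns (best_commodity, level_index or None)."""
    c_star = max(weights, key=weights.get)
    w_star = weights[c_star]
    best_val, best_level = 0.0, None
    for k, (capacity, cost) in enumerate(levels):
        value = w_star * capacity - V * cost
        if value > best_val:
            best_val, best_level = value, k
    return c_star, best_level
```

Larger V makes resource setup and operation costlier relative to backlog relief, so fewer levels pass the positivity check.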

III-B2 Wired Transmission Decision

For each link :

  • Calculate the weight for each commodity:

  • Find the commodity with the largest weight:

  • The optimal choice for transmission resource allocation is

  • The optimal flow assignment is


    when , and otherwise.

III-B3 Wireless Transmission Decision

For each wireless link , find the optimal commodity and the corresponding weight by (14); then,

  • for each edge server : determine the transmission power by Proposition 1;

  • for each UE : since each UE associates with only one edge server, and since each user can generally access only a limited number of edge servers, we can determine the optimal edge server by brute-force search. More concretely, for each candidate edge server , assume that UE associates with it, i.e., let if , and otherwise in (17), and solve the one-variable sub-problem by Proposition 1, which gives the optimal transmission power and the corresponding objective value . By comparing these values, the optimal edge server to associate with is .
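The brute-force association step for a UE can be sketched as follows; `evaluate_server` is a hypothetical callback that applies Proposition 1 to one candidate server and returns the optimal power and resulting objective value, and the minimization direction follows from minimizing the LDP upper bound:

```python
def best_association(candidate_servers, evaluate_server):
    """Try each reachable edge server, solve its one-variable power
    sub-problem, and keep the server with the smallest objective value."""
    best = None  # (server, power, objective)
    for server in candidate_servers:
        power, objective = evaluate_server(server)
        if best is None or objective < best[2]:
            best = (server, power, objective)
    return best
```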

III-C Performance Analysis

We first define the stability region as the collection of all admissible arrival vectors such that there exists some control policy to stabilize the network (i.e., satisfying (10b)). For any admissible , denote by the minimum (or infimum) cost that can be achieved, which serves as a benchmark to evaluate the proposed algorithm.

Theorem 1

For any arrival vector lying in the interior of , the MECNC algorithm can stabilize the queueing system, with the average cost and delay satisfying


where denotes a vector with all elements equal to that satisfies (since is in the interior of ).


Proof: See Appendix B in [7].

Since the queueing system is stabilized, the proposed algorithm is throughput-optimal. In addition, there exists a tradeoff between the upper bounds of the average delay and the incurred cost: by increasing the value of , the average cost can be reduced at the expense of an increased (but still bounded) average delay, in accordance with the motivation behind the definition of the LDP function (III-A).

IV Numerical Results

Consider the grid area shown in Fig. 3, which includes UEs and edge servers. Each edge server covers the UEs within its surrounding grid. We set the length of each time slot to . The UEs' mobility is modeled by an i.i.d. random walk (reflecting at the boundary), with the one-slot displacement following a Gaussian distribution (the resulting average speed of the UEs is under this setting).

Each user can request two services (each composed of two functions), with the following parameters

where is the supportable input size (in one slot) given CPU resource . The size of each packet is , and the number of packets arriving at each UE per slot is modeled by i.i.d. Poisson processes with parameter .

The available resources and corresponding costs are summarized in Table I. For wireless transmission, millimeter wave communication is employed, operating in the band of , and a bandwidth of is allocated to each edge server; the 3GPP path-loss model for urban microcells is adopted, with a shadow-fading standard deviation of ; the antenna gain is . The noise has a power spectral density of .

Fig. 3: Network setting with edge servers (and UEs).
Fig. 4: Stability region achieved by using and .
Fig. 5: Delay (blue curve) and cost (red curve) performance under various .

IV-A Stability Region

First, we simulate the stability region of the described MEC network, using different values of in the MECNC algorithm. As varies, the stable average delay (if it exists) is recorded based on a long-term ( time slots) observation of the queueing system. If the average delay keeps growing even at the end of the time window, the average delay is defined as , which implies that the network is not stable under the given arrival rate.
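The stability check used in this experiment can be sketched with a simple heuristic (the window size and tolerance are illustrative choices, not from the paper): the network is deemed unstable if the time-averaged delay is still growing at the end of the observation window.

```python
def is_stable(delay_trace, window=100, tol=1.05):
    """Compare the mean delay over the last window against the preceding
    window; a persistent upward trend signals an unstable queueing system."""
    tail = sum(delay_trace[-window:]) / window
    prev = sum(delay_trace[-2 * window:-window]) / window
    return tail <= tol * prev
```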

As depicted in Fig. 4, the average delay rises as the arrival rate increases, blowing up when approaching . This critical point can be interpreted as the boundary of the stability region of the MEC network. On the other hand, if all computations are constrained to be executed at the UEs, the stability region shrinks to . That is, a gain of is achieved with the aid of edge servers. Finally, note that different values lead to identical critical points, although they result in different average delay performance, which validates the throughput-optimality of the MECNC algorithm.

IV-B Cost-Delay Tradeoff

Next, we study the delay and cost performance of the MEC network, when tuning the parameter . The arrival rate is set as (i.e., packets per slot).

The results are shown in Fig. 5. Evidently, the average delay grows almost linearly with , while the cost decreases as grows (with a vanishing rate at the end), supporting the tradeoff between the delay and cost bounds. In addition, we observe two regions of significant cost reduction, i.e., and . The first reduction occurs when UEs start offloading local tasks to the edge cloud; the second occurs when edge servers stop cooperating and most tasks are assigned to the edge server closest to the respective UEs, which significantly reduces the transmission cost within the edge cloud while increasing the average delay (since load is no longer balanced between edge servers in favor of delay performance). A more detailed cost breakdown is presented in [7].

Based on this tradeoff relationship, we can tune the value of to optimize the performance of practical MEC networks. For example, the value leads to an average delay of , which is acceptable for real-time applications, at a cost of , which reduces the gap to the optimal cost by (compared with the case ).

TABLE II: Offloading ratio under different values of . Columns list the two functions of each of the two services.

Finally, we report the offloading ratios for different computation tasks and various values in Table II. As expected, a growing value of places more weight on the average cost, motivating the UEs to offload service tasks to the cloud. In addition, we find that for all listed values of , Service tasks tend to have a higher offloading ratio. An intuitive explanation is that Service is more compute-intensive while incurring lower communication overhead than Service , and is thus preferable for offloading.

V Concluding Remarks

In this paper, we leveraged recent advances in the use of Lyapunov optimization theory for the stability of distributed computing networks to address key open problems in MEC network control. We designed a dynamic control algorithm, MECNC, that makes joint decisions on multi-task offloading, packet routing/scheduling, and computation/communication resource allocation. Numerical experiments evaluated the stability region, cost-delay tradeoff, and task assignment performance of the proposed solution, demonstrating it to be a promising paradigm for managing next generation MEC networks.


  • [1] Z. Becvar (2017) Mobile edge computing: A survey on architecture and computation offloading. IEEE Communications Surveys & Tutorials 19(3), pp. 1628–1656.
  • [2] R. Boutaba (2015) On orchestrating virtual network functions in NFV. In 11th Int. Conf. on Network and Service Management (CNSM), Barcelona, Spain, pp. 50–56.
  • [3] G. Caire (2013) Joint spatial division and multiplexing – The large-scale array regime. IEEE Trans. Inf. Theory 59(10), pp. 6441–6463.
  • [4] A. Erbad (2016) A survey on service function chaining. Journal of Network and Computer Applications 75(1), pp. 138–155.
  • [5] Y. Hao (2018) Task offloading for mobile edge computing in software defined ultra-dense network. IEEE J. Sel. Areas Commun. 36(3), pp. 587–597.
  • [6] R. P. Liu (2018) Energy-efficient admission of delay-sensitive tasks for mobile edge computing. IEEE Trans. Commun. 66(6), pp. 2603–2616.
  • [7] A. F. Molisch. Mobile edge computing network control: Tradeoff between delay and cost. arXiv preprint.
  • [8] A. F. Molisch (2018) Optimal dynamic cloud network control. IEEE/ACM Trans. Netw. 26(5), pp. 2118–2131.
  • [9] A. F. Molisch (2018) Optimal control of wireless computing networks. IEEE Trans. Wireless Commun. 17(12), pp. 8283–8298.
  • [10] M. J. Neely (2010) Stochastic network optimization with application to communication and queueing systems. Morgan & Claypool, San Rafael, CA, USA.
  • [11] D. Pompili (2019) Joint task offloading and resource allocation for multi-server mobile-edge computing networks. IEEE Trans. Veh. Technol. 68(1), pp. 856–868.
  • [12] N. Raman (2015) The cloud service distribution problem in distributed cloud networks. In Proc. IEEE Int. Conf. Commun., London, UK, pp. 344–350.
  • [13] P. Wang (2013) A survey of mobile cloud computing: architecture, applications, and approaches. Wireless Communications and Mobile Computing 13(18), pp. 1587–1611.