# Joint Task Assignment and Resource Allocation for D2D-Enabled Mobile-Edge Computing


## I Introduction

It is envisioned that by the year 2020, billions of interconnected Internet-of-things (IoT) devices will surge into wireless networks, featuring new applications such as video stream analysis, augmented reality, and autonomous driving. The unprecedented growth of these latency-critical services requires extensive real-time computation, which, however, is hardly affordable by conventional mobile-cloud computing (MCC) systems that usually deploy cloud servers far away from end users [2]. Compared with MCC, mobile-edge computing (MEC) endows cloud-computing capabilities within the radio access network (RAN), such that users can offload computation tasks to edge servers in their proximity for remote execution and then collect the results from them with enhanced energy efficiency and reduced latency (see [3] and the references therein). Meanwhile, in industry, technical specifications and standard regulations are also being developed by, e.g., the European Telecommunications Standards Institute (ETSI) [4], Cisco [5], and the 3rd Generation Partnership Project (3GPP) [6].

In the above works, the edge servers are mostly assumed to form one integrated server. However, in multi-user multi-server systems where more than one computing access point (CAP) is distributed over the network, it becomes non-trivial to answer fundamental questions such as how to distribute the tasks among multiple servers, and how to schedule multiple tasks on a single server [19, 20, 21, 22, 23]. Computation resource sharing among wireless devices (WDs) with intermittent connectivity was considered as early as in [19], in which a greedy task dissemination algorithm was developed to minimize the task completion time. A polynomial-time task assignment scheme for tasks with inter-dependency was developed in [21] to achieve guaranteed latency-energy trade-offs. However, this line of work often assumed the communication conditions (e.g., transmission rate and multiple access schemes) and/or computation capacities (e.g., execution rate) to be fixed or estimable by service profiling, while ignoring the potential performance improvement brought by dynamic management of such resources (e.g., transmit power, bandwidth, and computation frequency).

The remainder of this paper is organized as follows. The system model is presented in Section II. The joint task assignment and wireless resource allocation problem is formulated in Section III. The convex-relaxation-based joint task assignment and wireless resource allocation algorithm is proposed in Section IV, while two low-complexity benchmark schemes are proposed in Section V. Numerical results are provided in Section VI, with concluding remarks drawn in Section VII.

*Notation* — We use upper-case boldface letters for matrices and lower-case boldface ones for vectors. "Independent and identically distributed" is abbreviated as i.i.d., and $\triangleq$ means "denoted by". A circularly symmetric complex Gaussian (CSCG) distributed random variable (RV) $x$ with mean $\mu$ and variance $\sigma^2$ is denoted by $x\sim\mathcal{CN}(\mu,\sigma^2)$. A continuous RV $x$ uniformly distributed over $[a,b]$ is denoted by $x\sim\mathcal{U}(a,b)$. $\mathbb{R}^{M\times N}$ and $\mathbb{R}^{M\times 1}$ stand for the sets of real matrices of dimension $M\times N$ and real vectors of dimension $M$, respectively. The cardinality of a set $\mathcal{S}$ is represented by $|\mathcal{S}|$. In addition, $\mathcal{O}(x^n)$ denotes an $n$-degree polynomial.

## II System Model

We consider a multi-user cooperative MEC system that consists of one local user and $K$ nearby helpers denoted by the set $\mathcal{K}\triangleq\{1,\ldots,K\}$, all equipped with a single antenna. For convenience, we define the local user as the $(K+1)$-th WD. Suppose that the local user has $L$ independent tasks to be executed, denoted by the set $\mathcal{L}\triangleq\{1,\ldots,L\}$, and the input/output data length of each task $l$ is denoted by $T_l$/$R_l$ in bits. (In this paper, we do not consider inter-dependency among tasks that enables data transmission from one helper to another as in [19, 21], since, as becomes clear later, even under this simple task model, task assignment among multiple D2D helpers over pre-scheduled TDMA slots is already very demanding to solve.) In the considered MEC system, each task can be either computed locally or offloaded to one of the helpers for remote execution. Let $\boldsymbol{\Pi}$ denote the task assignment matrix, whose $(l,k)$-th entry $\pi(l,k)$, $l\in\mathcal{L}$, $k\in\mathcal{K}\cup\{K+1\}$, is given by

$$\pi(l,k)=\begin{cases}1,&\text{if the }l\text{th task is assigned to the }k\text{th WD},\\0,&\text{otherwise}.\end{cases}$$

Also, define $\mathcal{L}_k$ as the set of tasks that are assigned to WD $k$, $k\in\mathcal{K}\cup\{K+1\}$. It is worth noting that we assume $\mathcal{L}_k\neq\emptyset$, $\forall k$; that is, each WD, including the local user, is assigned at least one task. (In practice, when $L<K+1$, a group of helpers needs to be selected a priori such that each serving WD can be assigned a task; the detailed design of such a selection mechanism is beyond the scope of this paper and is left as future work.) Define by $C_l$ in cycles the number of CPU cycles required for computing the $l$th task, $l\in\mathcal{L}$ [20, 22]. Also, denote the maximum CPU frequency in cycles per second (Hz) at the $k$th helper as $f_k^{\max}$, $k\in\mathcal{K}$, and that of the local user as $f_0^{\max}$.

### II-A Local Computing

The tasks in the set $\mathcal{L}_{K+1}$ are executed locally with the local computation frequency in cycles per second given as [17]

$$f_0=\frac{\sum_{l=1}^{L}\pi(l,K+1)C_l}{t_0^{\mathrm{c}}},\tag{1}$$

where $t_0^{\mathrm{c}}$ denotes the associated local computation time, and $f_0$ is subject to the maximum frequency constraint, i.e., $f_0\leq f_0^{\max}$. The corresponding computation energy consumed by the local user is given by [25]

$$E_0^{\mathrm{c}}=\kappa_0\sum_{l=1}^{L}\pi(l,K+1)C_l f_0^2,\tag{2}$$

where $\kappa_0$ is a constant denoting the effective capacitance coefficient that is decided by the chip architecture of the local user. Replacing $f_0$ in (2) with (1), $E_0^{\mathrm{c}}$ can thus be expressed in terms of $t_0^{\mathrm{c}}$ as follows:

$$E_0^{\mathrm{c}}=\frac{\kappa_0\left(\sum_{l=1}^{L}\pi(l,K+1)C_l\right)^3}{(t_0^{\mathrm{c}})^2}.\tag{3}$$
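As a quick numerical illustration of (1)-(3), the following Python sketch evaluates the local frequency and energy for an assumed pair of locally executed tasks; the capacitance coefficient, cycle counts, and computation time are illustrative assumptions, not values from the paper.

```python
# Illustrative check of the local-computing model (1)-(3); all numeric
# values below are assumed for demonstration only.
kappa0 = 1e-28           # effective capacitance coefficient kappa_0
C_local = [6e8, 8e8]     # CPU cycles C_l of the tasks kept at the local user
t_c0 = 0.5               # local computation time t_0^c in seconds

cycles = sum(C_local)                            # sum_l pi(l, K+1) * C_l
f0 = cycles / t_c0                               # computation frequency, eq. (1)
E_c0 = kappa0 * cycles * f0 ** 2                 # computation energy, eq. (2)
E_c0_direct = kappa0 * cycles ** 3 / t_c0 ** 2   # eq. (3)

# (2) and (3) must agree once f_0 is eliminated via (1)
assert abs(E_c0 - E_c0_direct) <= 1e-9 * E_c0
```

The cubic dependence in (3) is the reason a longer computation time $t_0^{\mathrm{c}}$ saves energy: stretching the deadline lowers the required frequency, and energy scales with the square of that frequency.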

### II-B Remote Computing at Helpers

#### II-B1 Phase I: Task Offloading

First, the tasks are offloaded to the helpers via TDMA. For simplicity, in this paper we assume that the local user offloads the tasks to the helpers in the fixed order $1,2,\ldots,K$, as shown in Fig. 1. In other words, the local user first offloads the tasks in $\mathcal{L}_1$ to the 1st helper, then those in $\mathcal{L}_2$ to the 2nd helper, and so on, until those in $\mathcal{L}_K$ to the $K$th helper.

Let $h_k$ denote the channel power gain from the local user to the $k$th helper for offloading, $k\in\mathcal{K}$. The achievable offloading rate (in bits per second) to the $k$th helper is given by

$$r_k^{\mathrm{off}}=B\log_2\left(1+\frac{p_k^{\mathrm{off}}h_k}{\sigma_k^2}\right),\tag{4}$$

where $B$ in Hz denotes the available transmission bandwidth, $p_k^{\mathrm{off}}$ is the transmit power for offloading tasks to the $k$th helper, and $\sigma_k^2$ is the power of the additive white Gaussian noise (AWGN) at the $k$th helper. Then, the time spent offloading tasks to the $k$th helper is given by

$$t_k^{\mathrm{off}}=\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{r_k^{\mathrm{off}}}.\tag{5}$$

According to (4) and (5), $p_k^{\mathrm{off}}$ is expressed in terms of $t_k^{\mathrm{off}}$ as

$$p_k^{\mathrm{off}}=\frac{1}{\bar h_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{t_k^{\mathrm{off}}}\right),\tag{6}$$

where $\bar h_k\triangleq h_k/\sigma_k^2$ is the normalized channel power gain, and $f(x)$ is the function defined as $f(x)\triangleq 2^{x/B}-1$. The total energy consumed by the local user for offloading all the tasks in $\cup_{k\in\mathcal{K}}\mathcal{L}_k$ is thus expressed as

$$E_0^{\mathrm{off}}=\sum_{k=1}^{K}\frac{1}{\bar h_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{t_k^{\mathrm{off}}}\right)t_k^{\mathrm{off}}.\tag{7}$$
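To make the rate inversion behind (6)-(7) concrete, the sketch below evaluates the offloading energy for assumed bandwidth, channel gains, and time allocations; $f(x)=2^{x/B}-1$ simply inverts the rate expression (4), and every numeric value here is an illustrative assumption.

```python
# Offloading-energy sketch for (4)-(7): invert the rate in (4) to get the
# power in (6), then accumulate the energy in (7). All values are assumed.
B = 1e6                       # bandwidth in Hz
h_bar = [1e7, 5e6]            # normalized channel gains h_k / sigma_k^2
bits = [2e5, 1e5]             # offloaded input bits sum_l pi(l,k) * T_l
t_off = [0.4, 0.3]            # offloading times t_k^off in seconds

def f(x):
    # f(x) = 2^(x/B) - 1, so p_k^off = f(r_k^off) / h_bar_k solves (4)
    return 2.0 ** (x / B) - 1.0

p_off = [f(b / t) / h for b, t, h in zip(bits, t_off, h_bar)]   # eq. (6)
E_off0 = sum(p * t for p, t in zip(p_off, t_off))               # eq. (7)
```

Because $f(\cdot)$ is exponential in the rate, shrinking any $t_k^{\mathrm{off}}$ raises the required rate and increases the energy sharply, which is the latency-energy trade-off the optimization in Section III exploits.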

#### II-B2 Phase II: Task Execution

After receiving the assigned tasks $\mathcal{L}_k$, $k\in\mathcal{K}$, the $k$th helper proceeds with the computation frequency given by

$$f_k=\frac{\sum_{l=1}^{L}\pi(l,k)C_l}{t_k^{\mathrm{c}}},\tag{8}$$

where $t_k^{\mathrm{c}}$ is the remote computation time spent by the $k$th helper. Similarly, helper $k$'s remote computing frequency given by (8) is also constrained by its maximum frequency, i.e., $f_k\leq f_k^{\max}$. In addition, its computation energy is expressed as

$$E_k^{\mathrm{c}}=\frac{\kappa_k\left(\sum_{l=1}^{L}\pi(l,k)C_l\right)^3}{(t_k^{\mathrm{c}})^2},\tag{9}$$

where $\kappa_k$ is the corresponding effective capacitance constant of the $k$th helper.

#### II-B3 Phase III: Results Downloading

After computing all the assigned tasks, the helpers begin transmitting the computation results back to the local user via TDMA. Similar to the task offloading phase, we assume that the helpers transmit their respective results in the fixed order $1,2,\ldots,K$. Let $g_k$ denote the channel power gain from helper $k$ to the local user for downloading. The achievable downloading rate from the $k$th helper is then given by

$$r_k^{\mathrm{dl}}=B\log_2\left(1+\frac{p_k^{\mathrm{dl}}g_k}{\sigma_0^2}\right),\tag{10}$$

where $p_k^{\mathrm{dl}}$ denotes the transmit power of the $k$th helper, and $\sigma_0^2$ denotes the power of the AWGN at the local user. The corresponding downloading time is thus given by

$$t_k^{\mathrm{dl}}=\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{r_k^{\mathrm{dl}}}.\tag{11}$$

Combining (10) and (11), the transmit power of the $k$th helper is expressed as

$$p_k^{\mathrm{dl}}=\frac{1}{\bar g_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{t_k^{\mathrm{dl}}}\right),\tag{12}$$

where $\bar g_k\triangleq g_k/\sigma_0^2$ denotes the normalized channel power gain from the $k$th helper to the local user. The communication energy consumed by the $k$th helper is thus given by

$$E_k^{\mathrm{dl}}=\frac{1}{\bar g_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{t_k^{\mathrm{dl}}}\right)t_k^{\mathrm{dl}}.\tag{13}$$

### II-C Total Latency

Since TDMA is used in both Phase I and Phase III, each helper has to wait until it is scheduled. Specifically, the first scheduled helper, i.e., helper 1, can transmit its task results to the local user only when the following two conditions are satisfied: first, its computation has been completed; and second, task offloading from the local user to all of the helpers has been completed, such that the wireless channel becomes available for data downloading. As a result, helper 1 starts transmitting its results after a waiting time of

$$I_1=\max\left\{t_1^{\mathrm{off}}+t_1^{\mathrm{c}},\ \sum_{k=1}^{K}t_k^{\mathrm{off}}\right\},\tag{14}$$

where $t_1^{\mathrm{c}}$ is the task execution time at helper 1.

Moreover, each of the other helpers can transmit its results to the local user only when: first, its computation has been completed; and second, the $(k-1)$th helper scheduled preceding it has finished transmitting. Consequently, denoting the waiting time for helper $k$ ($k\in\mathcal{K}\setminus\{1\}$) to transmit its results as $I_k$, $I_k$ is expressed as

$$I_k=\max\left\{\sum_{j=1}^{k}t_j^{\mathrm{off}}+t_k^{\mathrm{c}},\ I_{k-1}+t_{k-1}^{\mathrm{dl}}\right\}.\tag{15}$$

The overall latency of the three-phase remote computing is thus given by

$$T=I_K+t_K^{\mathrm{dl}}.\tag{16}$$

To sum up, taking local computing into account as well, the total latency for executing all of the tasks is given by

$$T_{\mathrm{total}}=\max\{t_0^{\mathrm{c}},\,T\}.\tag{17}$$
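The recursion (14)-(17) is easy to trace numerically. The sketch below, with purely illustrative timing values, computes the waiting times $I_k$ and the resulting total latency.

```python
# Trace of the latency recursion (14)-(17); all timings are assumed values.
t_off = [0.2, 0.3, 0.1]   # offloading times t_k^off
t_c   = [0.5, 0.4, 0.6]   # execution times t_k^c
t_dl  = [0.1, 0.2, 0.1]   # downloading times t_k^dl
t_c0  = 0.9               # local computation time t_0^c

I = [max(t_off[0] + t_c[0], sum(t_off))]            # eq. (14)
for k in range(1, len(t_off)):
    I.append(max(sum(t_off[:k + 1]) + t_c[k],       # eq. (15): wait for own
                 I[k - 1] + t_dl[k - 1]))           # computation or channel
T = I[-1] + t_dl[-1]                                # eq. (16)
T_total = max(t_c0, T)                              # eq. (17)
```

Note how helper 3's long execution time (0.6 s) dominates its waiting time here, illustrating why the $\max$ in (15) couples the three phases.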

## III Problem Formulation

In this paper, we aim at minimizing the total latency for local/remote computing of all the tasks by optimizing the task assignment strategy $\boldsymbol{\Pi}$, the task offloading time $\{t_k^{\mathrm{off}}\}$, the task execution time $\{t_k^{\mathrm{c}}\}$ and $t_0^{\mathrm{c}}$, and the results downloading time $\{t_k^{\mathrm{dl}}\}$, subject to the individual energy and computation frequency constraints at the local user as well as the helpers. Specifically, we are interested in the following problem:

$$\begin{aligned}(\mathrm{P0}):\ &\underset{\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}}}{\text{Minimize}}\ \ T_{\mathrm{total}}\\ \text{Subject to}\ \ &\frac{\kappa_0\left(\sum_{l=1}^{L}\pi(l,K+1)C_l\right)^3}{(t_0^{\mathrm{c}})^2}+\sum_{k=1}^{K}\frac{1}{\bar h_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{t_k^{\mathrm{off}}}\right)t_k^{\mathrm{off}}\leq E_0,&\text{(18a)}\\ &\frac{\kappa_k\left(\sum_{l=1}^{L}\pi(l,k)C_l\right)^3}{(t_k^{\mathrm{c}})^2}+\frac{1}{\bar g_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{t_k^{\mathrm{dl}}}\right)t_k^{\mathrm{dl}}\leq E_k,\ \forall k\in\mathcal{K},&\text{(18b)}\\ &\frac{\sum_{l=1}^{L}\pi(l,K+1)C_l}{f_0^{\max}}\leq t_0^{\mathrm{c}},&\text{(18c)}\\ &\frac{\sum_{l=1}^{L}\pi(l,k)C_l}{f_k^{\max}}\leq t_k^{\mathrm{c}},\ \forall k\in\mathcal{K},&\text{(18d)}\\ &\sum_{k=1}^{K+1}\pi(l,k)=1,\ \forall l\in\mathcal{L},&\text{(18e)}\\ &\sum_{l=1}^{L}\pi(l,k)\geq 1,\ \forall k\in\mathcal{K}\cup\{K+1\},&\text{(18f)}\\ &\pi(l,k)\in\{0,1\},\ \forall l\in\mathcal{L},\ k\in\mathcal{K}\cup\{K+1\},&\text{(18g)}\\ &t_k^{\mathrm{off}}\geq 0,\ t_k^{\mathrm{dl}}\geq 0,\ \forall k\in\mathcal{K}.&\text{(18h)}\end{aligned}$$

In the above problem, the objective function is given by (17). The constraints (18a) and (18b) state that the total energy consumption for computation and transmission at the local user and at the $k$th helper cannot exceed $E_0$ and $E_k$, respectively; in (18a), $E_0^{\mathrm{c}}$ and $E_0^{\mathrm{off}}$ are replaced with (3) and (7), respectively, while (18b) is obtained by substituting (9) and (13) for $E_k^{\mathrm{c}}$ and $E_k^{\mathrm{dl}}$, respectively. (18c) and (18d) guarantee that the computation frequencies of the local user (cf. (1)) and the helpers (cf. (8)) stay below their respective limits. (18e) guarantees that each task is assigned to one and only one WD, and (18f) ensures that each of the local user and the helpers is assigned at least one task. Finally, (18g) imposes the binary offloading constraints.

### III-A Problem Reformulation

Note that $T_{\mathrm{total}}$ (cf. (17)) is a complicated function involving nested $\max$ operations, mainly due to the recursive expression of $I_k$ (cf. (15)). Hence, to obtain an explicit objective function in terms of the optimization variables, we simplify $T_{\mathrm{total}}$ by exploiting the following proposition.

###### Proposition III.1

Problem (P0) can be recast into an equivalent problem as follows:

$$\begin{aligned}(\mathrm{P0\text{-}Eqv}):\ &\underset{\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}}}{\text{Minimize}}\ \ t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{k=1}^{K}t_k^{\mathrm{dl}}\\ \text{Subject to}\ \ &\text{(18a)--(18h)},\\ &\sum_{k=1}^{K}t_k^{\mathrm{off}}\leq t_1^{\mathrm{off}}+t_1^{\mathrm{c}},&\text{(19a)}\\ &t_k^{\mathrm{c}}\leq t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{j=1}^{k-1}t_j^{\mathrm{dl}}-\sum_{j=1}^{k}t_j^{\mathrm{off}},\ \forall k\in\mathcal{K}\setminus\{1\},&\text{(19b)}\\ &t_0^{\mathrm{c}}\leq t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{k=1}^{K}t_k^{\mathrm{dl}}.&\text{(19c)}\end{aligned}$$
###### Proof:

The idea of the proof is as follows. To remove the $\max$ operators in the $I_k$'s, we first narrow down the possible cases by leveraging the properties of the optimal solution. Then, based on the simplified case, we recursively derive $I_k$ for $k\in\mathcal{K}$. Finally, we arrive at an explicit objective function of (P0-Eqv) subject to all the optimality conditions given by (19a)-(19c). Please refer to Appendix A for the detailed proof.
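The equivalence can be sanity-checked numerically: pick time allocations satisfying (19a)-(19b), run the recursion (14)-(16), and confirm that the latency collapses to the new objective. The timing values below are illustrative assumptions chosen to satisfy those constraints.

```python
# Numerical sanity check of the reformulation: with assumed timings that
# satisfy (19a) and (19b), the recursion (14)-(16) returns exactly
# t_1^off + t_1^c + sum_k t_k^dl, i.e., the objective of (P0-Eqv).
t_off = [0.3, 0.1, 0.1]
t_c   = [0.6, 0.2, 0.1]
t_dl  = [0.2, 0.2, 0.2]

assert sum(t_off) <= t_off[0] + t_c[0]              # constraint (19a)
I = [max(t_off[0] + t_c[0], sum(t_off))]            # eq. (14)
for k in range(1, len(t_off)):
    I.append(max(sum(t_off[:k + 1]) + t_c[k],       # eq. (15)
                 I[k - 1] + t_dl[k - 1]))
T = I[-1] + t_dl[-1]                                # eq. (16)
objective = t_off[0] + t_c[0] + sum(t_dl)
assert abs(T - objective) < 1e-12                   # downloads back-to-back
```

Intuitively, (19a)-(19b) force every helper to be ready before its downloading slot, so the downloads run back-to-back after helper 1 finishes, which is exactly what the objective of (P0-Eqv) expresses.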

### III-B Suboptimal Design

The transformed problem (P0-Eqv) is a mixed-integer nonlinear program (MINLP) due to the integer constraints given by (18g), and is thus NP-hard in general. Although the optimal solution to (P0-Eqv) can be obtained by exhaustive search, doing so is computationally too expensive (approximately $(K+1)^L$ candidate assignments to search over) to implement in practice. Therefore, we propose two approaches for obtaining suboptimal solutions to (P0-Eqv) in the following sections. The first approach relaxes the binary variables into continuous ones, while the second decouples the task assignment from the wireless resource allocation.
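For intuition on the size of the search space, the sketch below enumerates all assignments of a toy instance and filters those satisfying the coverage constraint (18f); the values of $K$ and $L$ are illustrative assumptions.

```python
from itertools import product

# Exhaustive enumeration of task assignments for a toy instance: each of
# L tasks goes to one of K+1 WDs, giving (K+1)^L candidates; keeping only
# assignments that cover every WD enforces constraint (18f).
K, L = 2, 4                                    # 2 helpers + local user, 4 tasks
candidates = list(product(range(K + 1), repeat=L))
feasible = [a for a in candidates if set(a) == set(range(K + 1))]

assert len(candidates) == (K + 1) ** L         # 3^4 = 81 candidates
assert len(feasible) == 36                     # only surjective assignments
```

Even this toy case shows the exponential blow-up: moving to, say, $K=9$ helpers and $L=20$ tasks already yields $10^{20}$ candidates, which motivates the two suboptimal approaches.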

For the first approach, we begin by relaxing (18g) into the continuous constraints

$$\pi(l,k)\in[0,1],\ \forall l\in\mathcal{L},\ k\in\mathcal{K}\cup\{K+1\}.\tag{20}$$

The relaxed problem is therefore expressed as:

$$(\mathrm{P1}):\ \underset{\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}}}{\text{Minimize}}\ t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{k=1}^{K}t_k^{\mathrm{dl}}\quad\text{Subject to (18a)--(18f), (18h), (19a)--(19c), (20)}.$$

It is worth noting that, since $E_0^{\mathrm{c}}$ (cf. (3)) and $E_0^{\mathrm{off}}$ (cf. (7)) are obtained by convex operations on perspectives of convex functions with respect to (w.r.t.) the variables $t_0^{\mathrm{c}}$ and $t_k^{\mathrm{off}}$'s, respectively, they are also convex functions. So are $E_k^{\mathrm{c}}$ and $E_k^{\mathrm{dl}}$, $k\in\mathcal{K}$. Therefore, (P1) is a convex problem. Next, we need to round the continuous $\pi(l,k)$'s into binary ones such that (18e) and (18f) are satisfied. The details of the proposed joint task assignment and wireless resource allocation scheme will be discussed in Section IV. In addition, we also provide a brief discussion of one special case of this approach in Section V-A, in which the computation frequencies of all the WDs are fixed at their maximum, thus serving as a benchmark scheme without computation resource allocation.
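As a toy illustration of the rounding step (not the specific rule proposed in Section IV), the sketch below maps a hypothetical relaxed solution $\boldsymbol{\Pi}$ to binary values satisfying (18e) and (18f): each task goes to its largest-weight WD, and any WD left empty is repaired greedily. Both the matrix and the repair rule are assumptions for demonstration.

```python
# Toy rounding of a relaxed assignment matrix (values assumed): pick the
# largest pi(l,k) per task for (18e), then repair WDs left empty so that
# (18f) holds. This illustrates the idea, not the Section-IV scheme.
Pi = [[0.7, 0.2, 0.1],
      [0.1, 0.6, 0.3],
      [0.5, 0.4, 0.1],
      [0.3, 0.3, 0.4]]              # L = 4 tasks, K + 1 = 3 WDs
L, W = len(Pi), len(Pi[0])

assign = [max(range(W), key=lambda k: Pi[l][k]) for l in range(L)]  # (18e)
for k in range(W):                  # repair any WD without a task, (18f)
    if k not in assign:
        movable = [l for l in range(L) if assign.count(assign[l]) > 1]
        l_star = max(movable, key=lambda l: Pi[l][k])
        assign[l_star] = k
```

After rounding, the time variables can be re-optimized for the fixed binary assignment, since the remaining problem is convex.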

For the second approach, it is easy to verify that, with $\boldsymbol{\Pi}$ fixed, (P0-Eqv) reduces to the following convex problem:

$$(\mathrm{P2}):\ \underset{\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}}}{\text{Minimize}}\ t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{k=1}^{K}t_k^{\mathrm{dl}}\quad\text{Subject to (18a)--(18d), (18h), (19a)--(19c)}.$$

Then, we decouple the design of task assignment and wireless resource allocation by employing a greedy task-assignment-based heuristic algorithm, which will be elaborated in Section V-B.

## IV Joint Task Assignment and Wireless Resource Allocation

The main thrust of the proposed scheme in this section is to relax the binary task-assignment variables into continuous ones and to solve the relaxed convex problem in semi-closed form, followed by obtaining a suboptimal task assignment based on the optimal solution to the relaxed problem.

Problem (P1) is convex and can thus be solved efficiently by off-the-shelf convex optimization tools such as CVX [26]. To gain more insight into the optimal rate and computation frequency allocation, in this section we propose to solve (P1) by leveraging the technique of Lagrangian dual decomposition. The (partial) Lagrangian of (P1) is expressed as

$$\begin{aligned}&L_1\big(\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}};\eta,\beta_0,\lambda_0,\zeta_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}\big)\\ &=t_1^{\mathrm{off}}+t_1^{\mathrm{c}}+\sum_{k=1}^{K}t_k^{\mathrm{dl}}+\eta\left(\sum_{k=1}^{K}t_k^{\mathrm{off}}-t_1^{\mathrm{off}}-t_1^{\mathrm{c}}\right)+\beta_0\left(t_0^{\mathrm{c}}-t_1^{\mathrm{off}}-t_1^{\mathrm{c}}-\sum_{k=1}^{K}t_k^{\mathrm{dl}}\right)\\ &\quad+\lambda_0\left(\frac{\kappa_0\left(\sum_{l=1}^{L}\pi(l,K+1)C_l\right)^3}{(t_0^{\mathrm{c}})^2}+\sum_{k=1}^{K}\frac{1}{\bar h_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{t_k^{\mathrm{off}}}\right)t_k^{\mathrm{off}}-E_0\right)-\zeta_0\left(t_0^{\mathrm{c}}-\frac{\sum_{l=1}^{L}\pi(l,K+1)C_l}{f_0^{\max}}\right)\\ &\quad+\sum_{k=1}^{K}\lambda_k\left(\frac{\kappa_k\left(\sum_{l=1}^{L}\pi(l,k)C_l\right)^3}{(t_k^{\mathrm{c}})^2}+\frac{1}{\bar g_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{t_k^{\mathrm{dl}}}\right)t_k^{\mathrm{dl}}-E_k\right)\\ &\quad+\sum_{k=2}^{K}\beta_k\left(t_k^{\mathrm{c}}-t_1^{\mathrm{off}}-t_1^{\mathrm{c}}-\sum_{j=1}^{k-1}t_j^{\mathrm{dl}}+\sum_{j=1}^{k}t_j^{\mathrm{off}}\right)-\sum_{k=1}^{K}\zeta_k\left(t_k^{\mathrm{c}}-\frac{\sum_{l=1}^{L}\pi(l,k)C_l}{f_k^{\max}}\right),\end{aligned}\tag{21}$$

where $\eta$, $\beta_0$, $\lambda_0$, and $\zeta_0$ denote the dual variables associated with the constraints (19a), (19c), (18a), and (18c), respectively; $\boldsymbol{\lambda}\triangleq[\lambda_1,\ldots,\lambda_K]^T$ represents the dual variables associated with the total energy constraints (18b), one for each helper; $\boldsymbol{\beta}\triangleq[\beta_2,\ldots,\beta_K]^T$ collects the dual variables for the constraints given by (19b); and the multipliers $\boldsymbol{\zeta}\triangleq[\zeta_1,\ldots,\zeta_K]^T$ are assigned to the constraints given by (18d). After some manipulations, (21) can be equivalently expressed as

$$L_1=\bar L_0\big(\boldsymbol{\Pi},t_0^{\mathrm{c}};\beta_0,\lambda_0,\zeta_0\big)+\zeta_0\frac{\sum_{l=1}^{L}\pi(l,K+1)C_l}{f_0^{\max}}+\sum_{k=1}^{K}\left(\bar L_k\big(\boldsymbol{\Pi},t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}};\eta,\beta_0,\lambda_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}\big)+\zeta_k\frac{\sum_{l=1}^{L}\pi(l,k)C_l}{f_k^{\max}}\right)-\lambda_0E_0-\sum_{k=1}^{K}\lambda_kE_k,\tag{22}$$

where

$$\bar L_0\big(\boldsymbol{\Pi},t_0^{\mathrm{c}};\beta_0,\lambda_0,\zeta_0\big)=(\beta_0-\zeta_0)t_0^{\mathrm{c}}+\frac{\lambda_0\kappa_0\left(\sum_{l=1}^{L}\pi(l,K+1)C_l\right)^3}{(t_0^{\mathrm{c}})^2},\tag{23}$$

and

$$\bar L_k\big(\boldsymbol{\Pi},t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}};\eta,\beta_0,\lambda_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}\big)=A_kt_k^{\mathrm{dl}}+B_kt_k^{\mathrm{c}}+D_kt_k^{\mathrm{off}}+\frac{\lambda_k\kappa_k\left(\sum_{l=1}^{L}\pi(l,k)C_l\right)^3}{(t_k^{\mathrm{c}})^2}+\frac{\lambda_0}{\bar h_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)T_l}{t_k^{\mathrm{off}}}\right)t_k^{\mathrm{off}}+\frac{\lambda_k}{\bar g_k}f\!\left(\frac{\sum_{l=1}^{L}\pi(l,k)R_l}{t_k^{\mathrm{dl}}}\right)t_k^{\mathrm{dl}},\tag{24}$$

with $A_k$, $B_k$, and $D_k$, $k\in\mathcal{K}$, given by collecting the coefficients of $t_k^{\mathrm{dl}}$, $t_k^{\mathrm{c}}$, and $t_k^{\mathrm{off}}$ in (21):

$$A_k=\begin{cases}1-\beta_0-\sum_{j=k+1}^{K}\beta_j,&k<K,\\ 1-\beta_0,&k=K,\end{cases}\tag{25}$$

$$B_k=\begin{cases}1-\eta-\beta_0,&k=1,\\ \beta_k,&k>1,\end{cases}$$

and

$$D_k=\begin{cases}1-\beta_0,&k=1,\\ \eta+\sum_{j=k}^{K}\beta_j,&k>1,\end{cases}\tag{26}$$

respectively. The dual function corresponding to (22) can be expressed as

$$g(\eta,\beta_0,\lambda_0,\zeta_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta})=\underset{\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}}}{\min}\ L_1\big(\boldsymbol{\Pi},\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\},t_0^{\mathrm{c}};\eta,\beta_0,\lambda_0,\zeta_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}\big)\ \ \text{s.t. (18e), (18f), (20), and }t_k^{\mathrm{off}}\geq0,\ t_k^{\mathrm{dl}}\geq0,\ \forall k\in\mathcal{K}.\tag{27}$$

As a result, the dual problem of (P1) is formulated as

$$(\mathrm{P1\text{-}dual}):\ \underset{\eta,\beta_0,\lambda_0,\zeta_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}}{\text{Maximize}}\ g(\eta,\beta_0,\lambda_0,\zeta_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta})\quad\text{Subject to}\ \eta\geq0,\ \beta_0\geq0,\ \lambda_0\geq0,\ \zeta_0\geq0,\ \boldsymbol{\lambda}\succeq\boldsymbol{0},\ \boldsymbol{\beta}\succeq\boldsymbol{0},\ \boldsymbol{\zeta}\succeq\boldsymbol{0}.\tag{28}$$

### IV-A Dual-Optimal Solution to (P1)

In this subsection, we aim to solve problem (P1-dual). To facilitate finding the optimal $\{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}\}$ and $t_0^{\mathrm{c}}$ in (27), provided that $\boldsymbol{\Pi}$ and a set of dual variables are given, we decompose the above problem into $K+1$ subproblems: $K$ instances of (P1-sub1), one for each helper, and one instance of (P1-sub2) for the local user, as follows.

$$(\mathrm{P1\text{-}sub1}):\ \underset{t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}}}{\text{Minimize}}\ \bar L_k\big(\bar{\boldsymbol{\Pi}},t_k^{\mathrm{off}},t_k^{\mathrm{dl}},t_k^{\mathrm{c}};\eta,\beta_0,\lambda_0,\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\zeta}\big)\quad\text{Subject to}\ t_k^{\mathrm{off}}\geq0,\ t_k^{\mathrm{dl}}\geq0.$$

$$(\mathrm{P1\text{-}sub2}):\ \underset{t_0^{\mathrm{c}}}{\text{Minimize}}\ \bar L_0\big(\bar{\boldsymbol{\Pi}},t_0^{\mathrm{c}};\beta_0,\lambda_0,\zeta_0\big).$$

Since these subproblems are independent of one another, they can be solved in parallel, one for each $k\in\mathcal{K}$ plus one for the local user.

Next, define $\tilde f(x)\triangleq\frac{B}{\ln 2}\left[W_0\!\left(\frac{x-1}{e}\right)+1\right]$ for $x>0$, in which $W_0(\cdot)$ is the principal branch of the Lambert W function, defined as the inverse function of $g(z)=ze^z$ [27]. Then, in accordance with the optimal solution to the above subproblems, the optimal time and power allocation together with the optimal task assignment for (27) are given in the following proposition.

###### Proposition IV.1

Given a set of dual variables, the optimal solution to (27) is given by

$$\hat t_k^{\mathrm{off}}=\begin{cases}\dfrac{\sum_{l=1}^{L}\hat\pi(l,k)T_l}{\tilde f(D_k\bar h_k/\lambda_0)},&\text{if }D_k>0,\\ \infty,&\text{otherwise};\end{cases}\qquad \hat t_k^{\mathrm{dl}}=\begin{cases}\dfrac{\sum_{l=1}^{L}\hat\pi(l,k)R_l}{\tilde f(A_k\bar g_k/\lambda_k)},&\text{if }A_k>0,\\ \infty,&\text{otherwise};\end{cases}\tag{29}$$

$$\hat t_k^{\mathrm{c}}=\begin{cases}\dfrac{\sum_{l=1}^{L}\hat\pi(l,k)C_l}{\sqrt[3]{B_k/(2\lambda_k\kappa_k)}},&\text{if }B_k>0,\\ \infty,&\text{otherwise};\end{cases}\quad\text{and}\quad \hat t_0^{\mathrm{c}}=\begin{cases}\dfrac{\sum_{l=1}^{L}\hat\pi(l,K+1)C_l}{\sqrt[3]{(\beta_0-\zeta_0)/(2\lambda_0\kappa_0)}},&\text{if }\beta_0-\zeta_0>0,\\ \infty,&\text{otherwise}.\end{cases}\tag{30}$$
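Evaluating the semi-closed forms in (29)-(30) only requires the principal branch $W_0$ of the Lambert W function (the inverse of $w\mapsto we^w$). A minimal pure-Python Newton iteration for $W_0$ on nonnegative arguments is sketched below; it is an illustrative stand-in for a library routine such as SciPy's `lambertw`, not production code.

```python
import math

# Minimal Newton iteration for the principal branch W_0 of the Lambert W
# function on x >= 0, as needed to evaluate the semi-closed forms in
# (29)-(30). Illustrative sketch only.
def lambert_w0(x, tol=1e-12):
    """Solve w * exp(w) = x for w >= 0 by Newton's method."""
    w = math.log1p(x)                    # rough starting point for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

w = lambert_w0(2.5)
assert abs(w * math.exp(w) - 2.5) < 1e-9   # defining identity of W_0
```

Since the arguments of $W_0$ arising from (29) stay in the region where $W_0$ is smooth and nonnegative, a few Newton steps suffice in practice.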

In addition, the $\hat\pi(l,k)$'s shown in (29) and (30) denote the optimal solution to the following linear programming (LP) problem:

$$(\mathrm{LP1}):\ \underset{\boldsymbol{\Pi}}{\text{Minimize}}\ \sum_{l=1}^{L}\left(\sum_{k=1}^{K}\phi_{l,k}\,\pi(l,k)+\phi_{l,K+1}\,\pi(l,K+1)\right)\quad\text{Subject to (18e), (18f), (20)},$$

where $\phi_{l,k}$, $l\in\mathcal{L}$, $k\in\mathcal{K}$, is given by

$$\phi_{l,k}=\left(D_k+\frac{\lambda_0}{\bar h_k}f\big(\tilde f(D_k\bar h_k/\lambda_0)\big)\right)\frac{T_l}{\tilde f(D_k\bar h_k/\lambda_0)}+\left(A_k+\frac{\lambda_k}{\bar g_k}f\big(\tilde f(A_k\bar g_k/\lambda_k)\big)\right)\frac{R_l}{\tilde f(A_k\bar g_k/\lambda_k)}+\frac{B_kC_l}{\sqrt[3]{B_k/(2\lambda_k\kappa_k)}}+\lambda_k\kappa_kC_l\left(\frac{B_k}{2\lambda_k\kappa_k}\right)^{\!\frac{2}{3}}+\frac{\zeta_kC_l}{f_k^{\max}},\tag{31}$$

and $\phi_{l,K+1}$, $l\in\mathcal{L}$, is expressed as

$$\phi_{l,K+1}=\frac{(\beta_0-\zeta_0)C_l}{\sqrt[3]{(\beta_0-\zeta_0)/(2\lambda_0\kappa_0)}}+\lambda_0\kappa_0C_l\left(\frac{\beta_0-\zeta_0}{2\lambda_0\kappa_0}\right)^{\!\frac{2}{3}}+\frac{\zeta_0C_l}{f_0^{\max}}.\tag{32}$$