Closed-Form Delay-Optimal Computation Offloading in Mobile Edge Computing Systems

06/24/2019 · by Xianling Meng, et al.

Mobile edge computing (MEC) has recently emerged as a promising technology to relieve the tension between computation-intensive applications and resource-limited mobile terminals (MTs). In this paper, we study delay-optimal computation offloading in computation-constrained MEC systems. We explicitly model the computation task queue at the MEC server due to its constrained computation capability. In this case, the task queue at the MT and that at the MEC server are strongly coupled in a cascade manner, which creates complex interdependencies and brings new technical challenges. We model the computation offloading problem as an infinite horizon average cost Markov decision process (MDP), and approximate it by a virtual continuous time system (VCTS) with reflections. Different from most existing works, we develop a dynamic instantaneous rate estimation for deriving closed-form approximate priority functions in different scenarios. Based on the approximate priority functions, we propose a closed-form multi-level water-filling computation offloading solution to characterize the influence of not only the local queue state information (LQSI) but also the remote queue state information (RQSI). An extension is provided from the single-MT, single-MEC-server scenario to the multi-MT, multi-MEC-server scenario, and several insights are derived. Finally, simulation results show that the proposed scheme outperforms conventional schemes.


I Introduction

Smart mobile terminals (MTs) with advanced communication and computation capabilities provide a pervasive and powerful platform to realize many emerging computation-intensive mobile applications, e.g., interactive gaming, character recognition, and natural language processing [2], [3]. These applications pose stringent requirements on the quality of the computation experience, especially for delay-sensitive applications.

Computation offloading [4], which offloads computation tasks to an offloading destination, is one of the fundamental services for improving the computation performance, in particular the delay performance. In computation offloading services, both the communication capability of the MT and the computation capability of the offloading destination influence the delay performance. Specifically,

  • The communication capability of the MT: The offloading rate varies according to the time-varying wireless channel quality between the MT and the offloading destination. A poor communication capability starves the computation resources at the offloading destination and induces a large queuing delay at the MT.

  • The computation capability of the offloading destination: In practical scenarios, the offloaded tasks cannot be executed immediately, because the computation capability of the offloading destination is finite. Both the computation time and the waiting time at the offloading destination influence the delay performance.

In [5], a basic two-party communication complexity model is studied for networked computation problems, with a particular emphasis on the communication aspect of computation. In [6], the communication and computation capabilities are jointly optimized to minimize the delay under an energy consumption constraint. Since cloud computing servers are usually computationally powerful, it is reasonable to neglect the execution time at the server. However, remote cloud computing servers are usually far away from the MTs and the resulting large communication delay cannot be reduced, so cloud computation offloading is not suitable for delay-sensitive applications.

Mobile edge computing (MEC) [7] has emerged as a promising technology to handle the explosive computation demands and the ever-increasing computation quality requirements. Different from conventional cloud computing systems, MEC offers computation capability in close proximity to the MT. Therefore, by offloading the computation tasks from the MTs to the MEC servers, the delay performance can be greatly improved [8], [9], [10].

Most of the aforementioned works focus on optimizing the local computation delay or the communication delay and neglect the computation delay at the MEC server. However, in computation-constrained MEC systems, the computation capability of the MEC server is limited, and neglecting the computation delay at the MEC server leads to a deviation from optimality. A computation offloading policy is therefore strongly desired for computation-constrained MEC systems to achieve superior delay performance.

In this paper, we aim to obtain a delay-optimal computation offloading policy for computation-constrained MEC systems. Specifically, the computation offloading policy considers not only the current computation task delay, but also the future delay performance of the MEC system. To investigate the optimal delay performance, a systematic approach to the delay-aware optimization problem is the Markov decision process (MDP), but there are a couple of technical challenges involved, as follows:

  • Challenges due to the Cascade Queue Coupling: Because of the cascade coupling between the local task queue and the remote task queue, the offloading policy should adapt not only to the channel state information (CSI) and the local queue state information (LQSI), but also to the remote queue state information (RQSI), with the practical consideration of the limited computation capability of the MEC server. Specifically, to achieve delay-optimal computation offloading, we need to jointly consider the LQSI and the RQSI, and choose efficient transmission opportunities for offloading based on the time-varying CSI. Moreover, to fully utilize the computation capability of both the MT and the MEC server, we need to maintain the balance between the two cascaded queues by adjusting the transmission rate (power), because the departure of the local task queue is the arrival of the remote task queue.

  • Challenges due to the Closed-Form MDP Solution: To obtain the optimal solution of the MDP optimization problem, a Bellman equation needs to be solved, which is computationally intractable in general, and it is nontrivial to obtain an optimal solution in closed form with low computational complexity. Also, to maintain the cascade queue balance, the time-varying system, which consists of the random task arrivals, the local computation, the transmission and the remote task computation, cannot be captured by a simple long-term average state formulation. The optimal computation offloading policy should adapt to the random task arrivals and ensure that the cascaded queue system converges to the delay-optimal steady state by adjusting the local computation rate (power) and the transmission rate (power). These system dynamics increase the difficulty of solving the formulated Bellman equation.

To overcome the aforementioned challenges, we develop an analytical framework for delay-optimal computation offloading in computation-constrained MEC systems, and derive a closed-form offloading policy. Our key contributions are summarized as follows:

  • We consider a computation-constrained MEC server in the delay-optimal computation offloading problem. In this system, the computation delay of the MEC server cannot be neglected, and the cascade queue balance should be maintained. To achieve good delay performance, the delay-optimal computation offloading policy should jointly consider the CSI, the LQSI and the RQSI.

  • We formulate the delay-optimal computation offloading problem as an infinite horizon average cost MDP, and adopt a virtual continuous time system (VCTS) with reflections to overcome the curse of dimensionality. Next, we develop a multi-level water-filling computation offloading policy that jointly considers the CSI, the LQSI and the RQSI. Then, we derive the dynamic instantaneous rate estimation for maintaining the cascade queue balance by estimating the in-out rate difference of the queue system. Finally, we obtain approximate priority functions in both the computation sufficient scenario and the computation constrained scenario.

  • We extend our policy to the multi-MT multi-server scenario by adopting a learning approach. Specifically, we compare the main differences between the two scenarios, and derive a computation offloading policy by learning the access ratios from the historical access records.

The rest of this paper is organized as follows. Section II discusses the related works. Section III presents the system model and formulates the computation offloading problem. Section IV provides the optimality conditions via establishing the VCTS. Section V proposes the delay-optimal computation offloading policy and the dynamic instantaneous rate estimation. Section VI extends the computation offloading policy to the multi-MT multi-server scenarios and derives some brief insights. The performance of the proposed policy is evaluated by simulation in Section VII. Finally, this paper is concluded in Section VIII.

II Related Works

Since this paper studies the delay-optimal computation offloading in MEC systems, in this section, we briefly review the existing works on computation offloading and delay-aware considerations.

II-A Computation Offloading in MEC Systems

Computation offloading in MEC systems has attracted significant attention recently. In [13], the computation tasks to be offloaded are chosen to minimize the average power consumption. In [14], the energy-delay tradeoff is analyzed for single-user MEC systems. Then, the results are extended to multi-user systems in [15]. In [16], a distributed computation offloading algorithm is proposed using game theory. In [17], both the radio and computational resources are optimized for computation offloading in multi-cell MEC systems.

For delay-sensitive applications, it is necessary to consider the delay performance of computation offloading [18]. Significant theoretical and experimental research in various areas has shown that computation offloading can significantly enhance the delay performance. In [19], a one-dimensional search algorithm is proposed to minimize the total delay. In [2], an offloading strategy based on Lyapunov optimization is adopted to minimize the total cost, which consists of delay and energy consumption. In [20], two offline strategies based on constrained MDPs are proposed to minimize the energy consumption under a delay constraint. In [21], a distributed computation offloading algorithm is proposed to achieve a Nash equilibrium between delay and energy consumption. In [22], joint communication-computation optimization is studied to minimize the delay and energy consumption.

However, the above existing works assume that the MEC server is computationally powerful enough that the offloaded computation tasks are executed immediately upon arriving at the server. In this paper, we consider the limited computation capability of the MEC server, and include the queuing time at the MEC server in the delay performance of computation. In this case, we handle the coupling between the computation capability of the MEC server and the communication capability of the MT, and propose a computation offloading policy to balance the communication-computation tradeoff.

II-B Delay-Aware Considerations

To optimize the delay performance, there are several common approaches to delay-aware resource allocation [23]. Large deviation theory [24] is an approach that converts the delay constraint into an equivalent rate constraint. However, this method achieves good delay performance only in the large-delay regime. Stochastic majorization [25] provides a way to minimize the delay for cases with symmetric arrivals. Lyapunov optimization [26] is an effective approach for queue stability, but it is effective only when the queue backlog is large.

MDP [12] is a systematic approach to minimizing the delay. In general, the optimal control policy can be obtained by solving the well-known Bellman equation. Conventional solutions to the Bellman equation, such as brute-force value iteration or policy iteration [12], have huge complexity (i.e., the curse of dimensionality), because solving the Bellman equation involves solving an exponentially large system of non-linear equations. Some existing works use the stochastic approximation approach with distributed online learning algorithms [27], which has linear complexity. However, the stochastic learning approach can only give a numerical solution to the Bellman equation and may suffer from slow convergence and lack of insight [28].

In this paper, we address this issue head-on by transforming the discrete time MDP to a continuous time VCTS with reflections, such that it is possible to derive a closed-form computation offloading policy by solving the stochastic differential equations.

Fig. 1: Cascade queue system

III System Model and Problem Formulation

In this section, we introduce an MEC system with bursty task arrivals. First, we elaborate on the system model and introduce the queue dynamics at both the MT and the MEC server. Then, we define the computation offloading policy and formulate the delay-optimal optimization problem.

III-A MEC System Model

Consider an MEC system with one MT and one MEC server, as shown in Fig. 1. The MT executes its computation tasks through two approaches: local computation at the MT and computation offloading from the MT to the MEC server. In our system, time is slotted with duration $\tau$, and the slots are indexed by $t \in \{0, 1, 2, \dots\}$.

First, we consider the approach in which the computation tasks are computed at the MT. With dynamic voltage and frequency scaling (DVFS) techniques, the local computation rate can be adjusted by changing the CPU-cycle frequency [29]. Denote $f_l(t)$ as the CPU-cycle frequency of the MT. The local computation rate at the $t$-th time slot can then be expressed as

$R_l(t) = \frac{f_l(t)}{L(t)}, \quad (1)$

where $L(t)$ is the scale factor between the packet size and the amount of floating point operations of the computation task, with mean $\bar{L}$. (By this scale factor, we unify the transmission rate and the computation rate of the MT.)

A high CPU-cycle frequency increases the power consumption. The power consumption for the local computation at the MT is

$p_l(t) = \kappa f_l(t)^3, \quad (2)$

where $\kappa$ is the effective switched capacitance that depends on the CPU architecture. Based on (1) and (2), $p_l(t)$ can be calculated as

$p_l(t) = \kappa \big(L(t)\, R_l(t)\big)^3. \quad (3)$
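To make the DVFS relations above concrete, the following minimal Python sketch maps a target local computation rate to the required CPU-cycle frequency and power in the spirit of (1)–(3); the constants and symbol names are illustrative assumptions rather than values from this paper.

```python
# Minimal sketch of the DVFS model assumed above: computation rate R_l = f / L
# (packets/s) and power p_l = kappa * f^3. All constants are illustrative only.

KAPPA = 1e-27      # effective switched capacitance (illustrative)
L_SCALE = 1e6      # CPU cycles required per packet (illustrative scale factor)

def local_power_for_rate(target_rate_pkts_per_s: float) -> float:
    """Power needed to sustain a target local computation rate."""
    cpu_freq_hz = target_rate_pkts_per_s * L_SCALE   # invert R_l = f / L
    return KAPPA * cpu_freq_hz ** 3                  # p_l = kappa * f^3

if __name__ == "__main__":
    for rate in (10.0, 50.0, 100.0):                 # packets per second
        print(f"rate = {rate:5.1f} pkt/s -> local power = {local_power_for_rate(rate):.6f} W")
```

The cubic dependence of power on frequency is what makes aggressive local computation expensive and motivates offloading when the channel is good.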

Next, we consider the approach that the computation tasks are offloaded to the MEC server. This approach contains the transmission phase at the MT and the computation phase at the MEC server.

Denote $h(t)$ as the CSI, i.e., the instantaneous channel path gain from the MT to the MEC server at the $t$-th time slot, with mean $\bar{h}$. Denote $N_0$ as the noise power of the additive complex Gaussian channel and $W$ as the bandwidth. For a given CSI $h(t)$ and transmission power $p_o(t)$, the transmission rate of the MT is calculated as

$R_o(t) = W \log_2\!\left(1 + \frac{h(t)\, p_o(t)}{N_0}\right). \quad (4)$

Denote $f_s(t)$ as the CPU-cycle frequency of the MEC server. The computation rate of the MEC server at the $t$-th time slot can be expressed as

$R_s(t) = \frac{f_s(t)}{L(t)}, \quad (5)$

with mean $\bar{R}_s$.
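For the offloading path, a similar sketch under the standard assumptions used above (a Shannon-type transmission rate over the fading channel and a frequency-over-scale-factor computation rate at the server); all parameter names and values are illustrative assumptions.

```python
import math

# Illustrative sketch of the offloading path: a Shannon-type transmission rate
# as in (4) and a DVFS-style computation rate at the MEC server as in (5).
# Parameter names and values are assumptions, not taken from the paper.

BANDWIDTH_HZ = 1e6      # W
NOISE_POWER_W = 1e-9    # N0
L_SCALE = 1e6           # CPU cycles required per packet
PACKET_BITS = 1e4       # bits per packet (illustrative, converts bit-rate to packet-rate)

def transmission_rate_pkts(channel_gain: float, tx_power_w: float) -> float:
    """Offloading rate in packets/s for a given channel gain and transmit power."""
    bits_per_s = BANDWIDTH_HZ * math.log2(1.0 + channel_gain * tx_power_w / NOISE_POWER_W)
    return bits_per_s / PACKET_BITS

def server_rate_pkts(server_cpu_freq_hz: float) -> float:
    """MEC server computation rate in packets/s."""
    return server_cpu_freq_hz / L_SCALE

print(f"{transmission_rate_pkts(channel_gain=1e-6, tx_power_w=0.1):.1f} pkt/s offloaded")
print(f"{server_rate_pkts(server_cpu_freq_hz=2e9):.1f} pkt/s computed at the server")
```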

III-B Queue Dynamics

To analyze the delay performance, we first discuss the local and remote task queues. Let $Q_l(t)$ and $Q_r(t)$ denote the LQSI (packets) at the MT and the RQSI (packets) at the MEC server at the beginning of the $t$-th time slot, respectively. Let $A(t)$ be the random arrival of computation tasks (packets) at the end of the $t$-th time slot at the MT. Assume that $A(t)$ is i.i.d. over time slots, with $\mathbb{E}[A(t)] = \lambda$, where $\lambda$ is the average task arrival rate. Hence, the dynamics of the local task queue at the MT is given by

$Q_l(t+1) = \big[Q_l(t) - \big(R_l(t) + R_o(t)\big)\tau\big]^+ + A(t), \quad (6)$

and that of the remote task queue at the MEC server is given by

$Q_r(t+1) = \big[Q_r(t) + \min\{R_o(t)\tau,\, Q_l(t)\} - R_s(t)\tau\big]^+, \quad (7)$

where $[x]^+ \triangleq \max\{x, 0\}$.

Fig. 1 illustrates the queue system for computation offloading, where the CSI, the LQSI and the RQSI are jointly considered to make an appropriate computation offloading decision.

Remark 1 (Cascade Coupling of Local and Remote Queues).

The local queue dynamics in (6) and the remote queue dynamics in (7) are coupled together by a cascade control, because the departure of the former is the arrival of the latter. This cascade coupling creates complex interdependence and makes the computation offloading problem an involved stochastic optimization problem. ∎
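The cascade coupling can be visualized with a toy discrete-time simulation that follows the update rules in (6) and (7): the amount offloaded from the local queue becomes the arrival of the remote queue. The fixed rates, slot length and arrival statistics below are illustrative assumptions, not the optimized policy of this paper.

```python
import random

# Toy simulation of the cascaded local/remote task queues in (6)-(7): the
# departures of the local queue feed the remote queue. All rates, the slot
# duration and the arrival process are illustrative assumptions.

random.seed(0)
TAU = 0.01              # slot duration (s)
LOCAL_RATE = 300.0      # local computation rate (pkt/s)
OFFLOAD_RATE = 500.0    # transmission rate (pkt/s)
SERVER_RATE = 600.0     # MEC server computation rate (pkt/s)
MEAN_ARRIVALS = 7.0     # mean task arrivals per slot (pkt)

q_local, q_remote = 0.0, 0.0
for _ in range(10_000):
    offloaded = min(OFFLOAD_RATE * TAU, q_local)          # cannot offload more than the backlog
    q_local = max(q_local - (LOCAL_RATE + OFFLOAD_RATE) * TAU, 0.0)
    q_local += random.expovariate(1.0 / MEAN_ARRIVALS)    # new arrivals at the end of the slot
    q_remote = max(q_remote + offloaded - SERVER_RATE * TAU, 0.0)

print(f"final backlogs: local = {q_local:.2f} pkt, remote = {q_remote:.2f} pkt")
```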

III-C Computation Offloading Policy

Next, we define the computation offloading policy for the above MEC system. For notational convenience, denote $S(t) = \big(h(t), Q_l(t), Q_r(t)\big)$ as the system state. The action consists of the local computation power $p_l(t)$ and the transmission power $p_o(t)$. At the beginning of the $t$-th time slot, the MT determines the computation offloading actions based on the following policy.

Definition 1 (Computation Offloading Policy).

A computation offloading policy $\Omega$ specifies the offloading actions $\big(p_l(t), p_o(t)\big) = \Omega\big(S(t)\big)$ that the MT chooses when in state $S(t)$, where the actions are adaptive to all the information available up to the $t$-th time slot. ∎

Given an offloading policy $\Omega$, the random process $\{S(t)\}$ is a controlled Markov chain with the following transition probability:

$\Pr\big[S(t+1) \mid S(t), \Omega(S(t))\big] = \Pr\big[h(t+1)\big]\, \Pr\big[Q_l(t+1) \mid S(t), \Omega(S(t))\big]\, \Pr\big[Q_r(t+1) \mid S(t), \Omega(S(t))\big], \quad (8)$

where the transition probability of the CSI is independent of the queue states. The transition probability of the LQSI depends on the previous LQSI and CSI. The transition probability of the RQSI depends not only on the previous RQSI and CSI, but also on the previous LQSI, because the actual transmission amount cannot exceed $Q_l(t)$. Specifically, the transition probabilities of the LQSI and the RQSI follow directly from the queue dynamics in (6) and (7), respectively.

Furthermore, we have the following definition of the admissible offloading policy, which guarantees that the system will converge to a unique steady state.

Definition 2 (Admissible Offloading Policy).

A policy $\Omega$ is admissible if the following requirements are satisfied:

  • $\Omega$ is a unichain policy, i.e., the controlled Markov chain under $\Omega$ has a single recurrent class (and possibly some transient states).

  • The queues in the MEC system under $\Omega$ are stable in the sense that $\limsup_{T \to \infty} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}^{\Omega}\big[Q_l(t)\big] < \infty$ and $\limsup_{T \to \infty} \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}^{\Omega}\big[Q_r(t)\big] < \infty$, where $\mathbb{E}^{\Omega}$ denotes the expectation w.r.t. the probability measure induced by the offloading policy $\Omega$. ∎
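The second requirement can be checked empirically for a candidate policy by simulating the system for many slots and verifying that the time-averaged queue lengths stay bounded. The sketch below does this for a naive hand-crafted policy; the policy, rates and statistics are illustrative assumptions, not the policy derived later in this paper.

```python
import random

# Empirical check of the queue-stability requirement in Definition 2: run a
# candidate policy for many slots and inspect the time-averaged backlogs.
# The candidate policy, rates and statistics below are illustrative only.

def time_averaged_backlogs(policy, num_slots=50_000, tau=0.01,
                           mean_arrivals=7.0, server_rate=600.0, seed=0):
    rng = random.Random(seed)
    q_l = q_r = 0.0
    sum_l = sum_r = 0.0
    for _ in range(num_slots):
        h = rng.expovariate(1.0)                       # toy fading state (mean 1)
        local_rate, offload_rate = policy(h, q_l, q_r)
        offloaded = min(offload_rate * tau, q_l)
        q_l = max(q_l - (local_rate + offload_rate) * tau, 0.0)
        q_l += rng.expovariate(1.0 / mean_arrivals)
        q_r = max(q_r + offloaded - server_rate * tau, 0.0)
        sum_l += q_l
        sum_r += q_r
    return sum_l / num_slots, sum_r / num_slots

# A naive candidate: compute locally at a fixed rate, offload faster on good
# channels and back off when the remote backlog grows (purely illustrative).
naive = lambda h, q_l, q_r: (400.0, 500.0 * h / (1.0 + 0.1 * q_r))
print("time-averaged backlogs (local, remote):", time_averaged_backlogs(naive))
```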

III-D Problem Formulation

Under an admissible offloading policy $\Omega$, the average delay and the average power cost starting from a given initial state are given by

$\bar{D}(\Omega) = \limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}^{\Omega}\big[D(t)\big], \qquad \bar{P}(\Omega) = \limsup_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}^{\Omega}\big[p_l(t) + p_o(t)\big], \quad (9)$

where $D(t)$ denotes the average queuing delay in slot $t$. For the cascade queue system, the queuing delays of both the local task queue and the remote task queue should be considered. Since the task proportions of local computation and computation offloading are $\rho_l$ and $\rho_o$ with $\rho_l + \rho_o = 1$, the arrival rates of the local task queue and the remote task queue follow from these proportions and the average task arrival rate $\lambda$. Then the queuing delay $D(t)$ can be expressed as the sum of the local and the remote queuing delays,

(10)

where the two terms correspond to the local and the remote task queues, respectively. (We adopt the "packet-level" delay in this work, aiming to develop a delay-optimal computation offloading policy that promotes the network performance.)

Based on the expressions above, we define the average cost for the delay optimization under given weights $\beta$ and $\gamma$ as

$\bar{C}(\Omega) = \beta\, \bar{D}(\Omega) + \gamma\, \bar{P}(\Omega), \quad (11)$

where $\beta$ and $\gamma$ weight the relative importance of the average delay and the average power consumption, respectively.

Based on the above cost function, we can adjust the weights to satisfy different requirements on average delay or average power. We can achieve the delay-optimal computation offloading policy by solving the following problem:

Problem 1 (Delay-Optimal Computation Offloading Problem).

$\min_{\Omega}\ \bar{C}(\Omega), \quad (12)$

where $\Omega$ should satisfy the conditions in Definition 1 and Definition 2. ∎
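For a given candidate policy, the objective of Problem 1 can be evaluated empirically by averaging a weighted per-slot cost of queuing delay and power over a long simulation. The per-slot cost below mirrors the structure of (9)–(11); the weights, arrival split and symbol names are illustrative assumptions.

```python
# Illustrative per-slot cost for the weighted delay-power objective in (11):
# the delay term adds the backlog of each queue divided by its arrival rate,
# and the power term adds the local and transmit powers. All weights, rates
# and names are assumptions used only for illustration.

def per_slot_cost(q_local, q_remote, p_local, p_tx,
                  arrival_rate=700.0, offload_fraction=0.6,
                  weight_delay=1.0, weight_power=0.5):
    local_arrivals = (1.0 - offload_fraction) * arrival_rate
    remote_arrivals = offload_fraction * arrival_rate
    delay = q_local / local_arrivals + q_remote / remote_arrivals
    power = p_local + p_tx
    return weight_delay * delay + weight_power * power

print(f"per-slot cost = {per_slot_cost(q_local=12.0, q_remote=4.0, p_local=0.2, p_tx=0.1):.4f}")
```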

IV Optimality Conditions via Virtual Continuous Time System

In this section, we first discuss the sufficient optimality condition for Problem 1. As discussed before, one of the major technical challenges is the huge complexity of solving the multi-dimensional MDP. To overcome this challenge, we approximate the problem by a virtual continuous time system (VCTS) with reflections. Based on that, we derive a two-dimensional partial differential equation (PDE) to characterize the priority function.

IV-A Optimality Conditions for Problem 1

Exploiting the i.i.d. property of the CSI, we derive an equivalent optimality condition of Problem 1 according to Proposition 4.6.1 in [12] as follows:

Theorem 1 (Optimality Condition).

For any given weights $\beta$ and $\gamma$, assume there exist an optimal average cost $\theta$ and a priority function $V(\cdot)$ that solve the following Bellman equation:

(13)

Furthermore, for all admissible offloading policies and initial queue states, $V(\cdot)$ satisfies the following transversality condition:

(14)

We have the following results:

  • $\theta$ is the optimal average cost for any initial state, and $V(\cdot)$ is the priority function.

  • Suppose there exists an admissible stationary offloading policy $\Omega^*$ that attains the minimum of the R.H.S. of (13) for every system state. Then, $\Omega^*$ is the optimal offloading policy of Problem 1.

Proof:

Please refer to Appendix A. ∎

The solution $V(\cdot)$ captures the dynamic priority of the task queues in different states. However, obtaining the priority function is highly non-trivial, because achieving the optimality of the multi-dimensional MDP requires solving a system of nonlinear fixed-point equations. To derive a closed-form expression, we construct a VCTS with reflections in the following subsection.
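To see why a brute-force numerical solution is unattractive, the sketch below runs relative value iteration on a heavily truncated and discretized toy version of the problem (small queue grids, a two-state channel, unit-packet actions). Every modeling choice here — grid sizes, costs, dynamics — is an illustrative assumption; even this tiny instance already has hundreds of states, and the state space grows multiplicatively with the queue lengths and channel levels.

```python
import itertools

# Brute-force relative value iteration on a toy, truncated delay-power MDP,
# illustrating the curse of dimensionality that motivates the closed-form
# approach. All sizes, costs and dynamics are illustrative assumptions.

Q_MAX = 8                                            # truncated queue lengths
CHANNELS = (0, 1)                                    # bad / good channel, i.i.d., P(good) = 0.5
ACTIONS = [(l, o) for l in (0, 1) for o in (0, 1)]   # (local pkts, offloaded pkts) per slot
ARRIVAL_P = 0.6                                      # P(one packet arrives in a slot)
SERVER_PKTS = 1                                      # server drains one packet per slot
POWER_COST = {(0, 0): 0.0, (1, 0): 0.5, (0, 1): 0.8, (1, 1): 1.3}

STATES = list(itertools.product(CHANNELS, range(Q_MAX + 1), range(Q_MAX + 1)))

def step(state, action, arrival):
    h, ql, qr = state
    local, offload = action
    offload = min(offload, ql) if h == 1 else 0      # offloading needs a good channel
    local = min(local, ql - offload)
    ql_next = min(max(ql - local - offload, 0) + arrival, Q_MAX)
    qr_next = min(max(qr + offload - SERVER_PKTS, 0), Q_MAX)
    return ql_next, qr_next

def cost(state, action):
    _, ql, qr = state
    return ql + qr + POWER_COST[action]              # backlog (delay proxy) + intended power

V = {s: 0.0 for s in STATES}
ref = STATES[0]
for _ in range(500):                                 # relative value iteration sweeps
    V_new = {}
    for s in STATES:
        best = float("inf")
        for a in ACTIONS:
            expected = 0.0
            for h_next in CHANNELS:                  # i.i.d. channel, probability 0.5 each
                for arrival, p_arr in ((0, 1 - ARRIVAL_P), (1, ARRIVAL_P)):
                    expected += 0.5 * p_arr * V[(h_next, *step(s, a, arrival))]
            best = min(best, cost(s, a) + expected)
        V_new[s] = best
    offset = V_new[ref]                              # converges to the optimal average cost
    V = {s: v - offset for s, v in V_new.items()}

print(f"{len(STATES)} states; approximate optimal average cost = {offset:.3f}")
```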

IV-B Virtual Continuous Time System

We first define the VCTS, which is a fictitious system with a continuous virtual queue state $(q_l(t), q_r(t))$, where $q_l(t)$ and $q_r(t)$ are the virtual local queue length and the virtual remote queue length at (continuous) time $t \geq 0$.

Let $\Omega_v$ be the virtual computation offloading policy in the VCTS. Similarly, the virtual offloading policy should be admissible, i.e., it should satisfy the conditions in Definition 2.

Given an initial virtual system state and a virtual policy $\Omega_v$, the trajectory of the virtual queue system is described by the following coupled differential equations with reflections:

(15)

where the first reflection process is induced by the local computation and is associated with the lower boundary of the local task queue, and the second reflection process is induced by the transmission and is associated with the same lower boundary; the two are determined by

(16)
(17)

The third reflection process is induced by the computation at the MEC server and is associated with the lower boundary of the remote task queue, i.e.,

(18)

where the reflection processes are non-decreasing and increase only when the corresponding virtual queue length is at its lower boundary.
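The reflected VCTS trajectory can be visualized with a simple Euler discretization of the fluid dynamics, where projecting the virtual queue lengths back onto the non-negative orthant plays the role of the reflection processes. The rates, initial state and step size below are illustrative assumptions, and the crude on/off treatment of an empty local queue only approximates the exact reflected dynamics.

```python
# Euler-discretized sketch of the reflected fluid (VCTS) dynamics in (15)-(18):
# the virtual queues drift with (arrival minus service) rates, and projection
# onto [0, inf) mimics the reflection at the lower boundaries. All rates and
# the step size are illustrative assumptions.

DT = 1e-3
ARRIVAL_RATE = 700.0    # pkt/s into the virtual local queue
LOCAL_RATE = 300.0      # local computation drain (pkt/s)
OFFLOAD_RATE = 450.0    # transmission drain of the local queue = feed of the remote queue
SERVER_RATE = 500.0     # remote computation drain (pkt/s)

q_l, q_r = 40.0, 10.0   # initial virtual queue lengths
trajectory = []
for step in range(20_000):
    feed = OFFLOAD_RATE if q_l > 0.0 else 0.0   # crude: no offloading from an empty virtual queue
    q_l += (ARRIVAL_RATE - LOCAL_RATE - OFFLOAD_RATE) * DT
    q_r += (feed - SERVER_RATE) * DT
    q_l, q_r = max(q_l, 0.0), max(q_r, 0.0)     # reflection at the lower boundaries
    if step % 4_000 == 0:
        trajectory.append((round(q_l, 2), round(q_r, 2)))

print(trajectory)
```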

IV-C Average Cost Problem Under the VCTS

For a given admissible virtual offloading policy $\Omega_v$, we define the average cost of the VCTS from a given initial virtual queue state as

(19)

then Problem 1 can be reformulated as the following infinite horizon average cost problem in the VCTS:

Problem 2 (Infinite Horizon Average Cost Problem in the VCTS).
(20)

for any given initial virtual queue state, where the average cost is given in (19). ∎

This average cost problem has been well-studied in the continuous time optimal control theory [24]. The solution can be obtained by solving the following Hamilton-Jacobi-Bellman (HJB) equation.

Theorem 2 (Sufficient Optimality Conditions under VCTS).

Assume there exist a constant and a sufficiently smooth function that satisfy the following HJB equation:

(21)

Furthermore, for all admissible virtual control policies and initial virtual queue states, the following boundary conditions should be satisfied:

(22)

Then we have the following results:

  • The constant is the optimal average cost, and the function is called the virtual priority function.

  • Suppose there exists an admissible virtual stationary offloading policy that attains the minimum of the L.H.S. of (21) for every virtual system state. Then, it is the optimal offloading policy of Problem 2.

Proof:

Please refer to Appendix B. ∎

Similar to [30], the virtual priority function in Theorem 2 can serve as an approximation to the optimal priority function in Theorem 1 with a bounded approximation error. As a result, solving the Bellman equation (13) is transformed into a calculus problem of solving the two-dimensional PDE in (21).

V Delay-Optimal Computation Offloading Policy

In this section, we solve the two-dimensional HJB equation in Theorem 2. Through the steady-state analysis and the dynamic instantaneous rate estimation of the virtual local and remote queues, we obtain closed-form solutions to the two-dimensional PDE and extract insights in different scenarios, including the computation sufficient scenario and the computation constrained scenario. Dynamic instantaneous rate estimation is an important technique for reaching the optimal operating point and for solving the cascade-coupled MDP framework. For simplicity of expression, we use shorthand notation in the remaining parts of this paper.

V-A Optimal Computation Offloading Structure

Taking the derivatives w.r.t. the offloading actions on the L.H.S. of the HJB equation in (21), we obtain the optimal computation offloading actions in the following theorem:

Theorem 3 (Optimal Computation Offloading).

For a given virtual priority function, the optimal computation offloading actions obtained by solving the HJB equation in Theorem 2 are given by

(23)
(24)

Remark 2 (Structure of the Optimal Computation Offloading Policy).

The optimal computation offloading policy in (24) depends on the instantaneous CSI, LQSI and RQSI. Furthermore, the optimal offloading transmit power has a multi-level water-filling structure, where the water level is adapted to the LQSI and the RQSI indirectly via the priority function. ∎
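The structure in Remark 2 can be illustrated with a generic multi-level water-filling rule: the transmit power is the positive part of a state-dependent water level minus the inverse channel quality, where the water level grows with the local backlog and shrinks with the remote backlog. The specific water-level function below is a hypothetical stand-in for the priority-function terms in (24), not the paper's exact expression.

```python
# Illustrative multi-level water-filling rule in the spirit of (24): transmit
# power is the positive part of a state-dependent water level minus the
# noise-to-gain ratio. The water-level function is a hypothetical stand-in
# for the priority-function terms, not the formula derived in the paper.

NOISE_POWER = 1e-6
P_MAX = 2.0

def water_level(q_local: float, q_remote: float) -> float:
    """Grows with the local backlog, shrinks with the remote backlog (assumed form)."""
    return 0.05 * q_local / (1.0 + 0.02 * q_remote)

def offload_power(channel_gain: float, q_local: float, q_remote: float) -> float:
    level = water_level(q_local, q_remote)
    power = max(level - NOISE_POWER / channel_gain, 0.0)   # water-filling positive part
    return min(power, P_MAX)

for h, ql, qr in [(1e-5, 30, 2), (1e-5, 30, 50), (1e-7, 30, 2), (1e-5, 2, 2)]:
    print(f"h = {h:.0e}, Q_l = {ql:2d}, Q_r = {qr:2d} -> p_o = {offload_power(h, ql, qr):.3f} W")
```

The printed cases show the qualitative behavior: more power for a longer local queue, less power for a longer remote queue or a weaker channel, and zero power when the water level drops below the channel threshold.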

We then establish the following theorem, which substitutes the optimal computation offloading policy into the PDE in Theorem 2 and discusses the sufficient conditions for the existence of a solution to the PDE.

Theorem 4 (PDE with Optimal Computation Offloading Policy).

With the optimal computation offloading policy in Theorem 3, the PDE in (21) is equivalent to the following PDE:

(25)

where , , and . After that, there exists a that satisfies (22) and (25) if and only if , and .

Proof:

Please refer to Appendix C. ∎

From now on, the main challenge is to find a priority function that satisfies the PDE in (25) and the corresponding boundary conditions in (22).

V-B Asymptotic Closed-Form Priority Function

The PDE in (25) is a two-dimensional PDE, which has no general closed-form solution for the priority function. In this subsection, we conduct an asymptotic analysis under sufficient conditions to obtain the closed-form solution of the priority function.

We first analyze the steady states in different cases in the following theorem:

Theorem 5 (Steady State Analysis).

Let the equilibrium point denote the state at which the virtual queue system stops evolving. There exist two possible steady states as follows:

  1. If , the steady state should satisfy and with .

  2. If , the steady state should satisfy and with .

Proof:

Please refer to Appendix D. ∎

Next, we consider the two scenarios in Theorem 5 and obtain the corresponding closed-form solutions, respectively. We use a triple to denote the steady state, where the first two components are derived from Theorem 5 and the third is calculated from (25).

1) Computation Sufficient Scenario

In this scenario, we consider the case in which the local task arrival rate is low and the remote computation capability is sufficient. Based on Theorem 5, the steady state is

(26)

Because the queue lengths are 0 in the steady state, based on (19) and Theorem 2, the optimal average cost can be denoted as

(27)

In the steady state, the arrival and departure rates are the same in a long-term sense. However, both the arrival and departure are time-varying, and the instantaneous arrival and departure rates are usually different. We define the difference between the instantaneous arrival and departure rates as follows:

Definition 3 (Dynamic Instantaneous Rate Estimation for Virtual Local Queue).

Denote the instantaneous task rate difference of the virtual local queue as the difference between its input rate and its output rate, where the output includes both the transmission and the local computation, i.e.,

(28)

where is the optimal value of under the instantaneous rate difference. ∎

According to Definition 3, the corresponding average cost is

(29)

Note that the instantaneous rate difference can be estimated by short-term statistics.
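A simple way to obtain such a short-term estimate online is a sliding-window average of the observed arrivals minus departures of the local queue over the most recent slots; the window length, slot duration and interface below are illustrative assumptions.

```python
import random
from collections import deque

# Sliding-window estimator of the instantaneous in-out rate difference of the
# local queue (Definition 3): average (arrivals - departures) over a short
# window of recent slots. Window length and units are illustrative only.

class RateDifferenceEstimator:
    def __init__(self, window_slots: int = 50, slot_duration_s: float = 0.01):
        self.window = deque(maxlen=window_slots)
        self.tau = slot_duration_s

    def update(self, arrivals_pkts: float, departures_pkts: float) -> float:
        """Record one slot and return the current rate-difference estimate (pkt/s)."""
        self.window.append(arrivals_pkts - departures_pkts)
        return sum(self.window) / (len(self.window) * self.tau)

random.seed(1)
estimator = RateDifferenceEstimator()
for _ in range(200):
    arrivals = random.expovariate(1.0 / 7.0)   # mean 7 pkt per slot (illustrative)
    departures = 6.5                           # fixed drain of 6.5 pkt per slot
    estimate = estimator.update(arrivals, departures)
print(f"estimated in-out rate difference: {estimate:.1f} pkt/s")
```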

With the instantaneous state, we can solve the PDE in (25) with a more accurate approximation and obtain the closed-form solution of the priority function.

Theorem 6 (Asymptotic Closed-Form Priority Function in Computation Sufficient Scenario).

For a given instantaneous rate difference of the virtual local queue, the priority function is expressed as

(30)
Proof:

Please refer to Appendix E. ∎

The above theorem considers the solution in one regime of the instantaneous rate difference. In the other regime, we cannot solve the PDE in (25) directly, because the corresponding coefficient in the solution becomes negative, which contradicts the physical meaning of the priority function. Instead, we seek an approximation for this regime.

To find an appropriate approximation, we first need to consider the influence on the weight in (30). In its valid regime, this weight is a decreasing function of the rate difference, and from Theorem 6 it tends to its limiting value as the rate difference tends to 0. Based on the above analysis, the weight in the remaining regime should be larger than this limiting value. For a finite-length queue, a sufficiently large value of the weight is enough to indicate the importance of the corresponding queue length. Thus, in this regime, we approximate the rate difference by a sufficiently small positive constant.

Theorem 7 (Approximation Error for ).

The approximation error between the steady state and the optimal state is .

Proof:

Please refer to Appendix F. ∎

We summarize some insights from the optimal computation offloading with the closed-form virtual priority function as follows:

Remark 3 (Insights in Computation Sufficient Scenario).

From the closed-form priority function in (30), we have

(31)
(32)

From these expressions, we can extract the following insights:

  • The weight of is a non-increasing function of . With the same , if the task rate difference of the local queue is small, our computation offloading policy will give a high power gain to reduce the local queue length.

  • The local computation power is an increasing function of , which is reasonable because a high task rate is required to reduce the local queue length when is large.

  • If , the transmission power is an increasing function of and a decreasing function of . Otherwise, . It is not necessary to push the computation tasks to the MEC server when the remote backlog is already too large. With our policy, the local queue and the remote queue will remain in balance until both of them reach the steady state. ∎

2) Computation Constrained Scenario

In this scenario, we consider the case in which the local task arrival rate is high and the remote computation capability is constrained. We obtain the steady state as follows:

(33)
(34)
(35)

Similar to Definition 3, we have the following definition for the remote task queue.

Definition 4 (Dynamic Instantaneous Rate Estimation for Virtual Remote Queue).

Denote the instantaneous task rate difference of the virtual remote queue as the difference between its input rate and its output rate, i.e.,

(36)

where the optimal value is taken under the instantaneous rate difference. Combining this with Definition 3, the remote rate difference is determined by

(37)

According to Definition 4, the corresponding average cost is

(38)
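In the computation constrained scenario, the same short-term statistics can be collected for the remote queue as well, so that both in-out rate differences in Definitions 3 and 4 are tracked jointly; as before, the window length, interface and numbers are illustrative assumptions.

```python
from collections import deque

# Joint short-term estimation of the in-out rate differences of the local and
# the remote queues (Definitions 3 and 4): the remote queue's input is the
# amount actually offloaded, and its output is the server's service. The
# window length and the numbers below are illustrative assumptions.

TAU, WINDOW = 0.01, 50
local_diffs = deque(maxlen=WINDOW)
remote_diffs = deque(maxlen=WINDOW)

def windowed_rate(window: deque) -> float:
    return sum(window) / (len(window) * TAU) if window else 0.0

def record_slot(arrivals, local_served, offloaded, server_served):
    """Update both estimators with one slot of observed packet counts."""
    local_diffs.append(arrivals - local_served - offloaded)
    remote_diffs.append(offloaded - server_served)
    return windowed_rate(local_diffs), windowed_rate(remote_diffs)

# Example: the server drains slightly less than what is offloaded, so the
# remote rate difference stays positive (a computation constrained situation).
for _ in range(100):
    local_est, remote_est = record_slot(arrivals=7.0, local_served=2.5,
                                        offloaded=4.0, server_served=3.8)
print(f"local diff ~ {local_est:.1f} pkt/s, remote diff ~ {remote_est:.1f} pkt/s")
```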

With the instantaneous state, we can solve the PDE in (25) with a more accurate approximation and obtain the closed-form solution of the priority function.

Theorem 8 (Asymptotic Closed-Form Priority Function in Computation Constrained Scenario).

For given instantaneous rate differences of the virtual local and remote queues, the priority function is expressed as

(39)

where , and denotes the projection onto .

Proof:

Please refer to Appendix G. ∎

For the other cases, similar to the computation sufficient scenario, we approximate the corresponding rate differences by sufficiently small positive constants. Using a similar approach to the proof of Theorem 7, we obtain the following theorem:

Theorem 9 (Approximation Error for or ).

The approximation error between the steady state and the optimal state is . Also, the approximation error between the steady state and the optimal state is . ∎

Based on the closed-form solution in Theorem 8, we summarize the optimal computation offloading structure as follows:

Remark 4 (Insights in Computation Constrained Scenario).

From the closed-form priority function in (39), we have

(40)
(41)

From these expressions, we can extract the following insights:

  • The weight of is a non-increasing function of , which gives similar insights to those in the computation sufficient scenario.

  • The weight of is a non-increasing function of . If the rate difference of the remote queue is large, our policy reduces the influence of the RQSI on the water level and increases the offloading rate of the local queue, which maintains the length of the remote queue and prevents wasting computation resources. If the task rate difference is small, the policy reduces the offloading to keep the remote queue stable.

  • The local computation power is an increasing function of , which gives similar insights to those in the computation sufficient scenario.

  • If , the transmission power is an increasing function of and a decreasing function of . Otherwise, . ∎

V-C Stability Conditions in Discrete Time System

In this subsection, we show that the proposed offloading policy in Theorem 3, derived from the VCTS analysis, is also admissible in the original discrete-time system. Specifically, we derive the following theorem, which guarantees the system stability when the computation offloading policy in Theorem 3 is applied in the original discrete-time system:

Theorem 10 (Stability in the Original Discrete-Time System).

Using the policy in Theorem 3 with the priority functions in Theorems