Dynamic Offloading Design in Time-Varying Mobile Edge Networks with Deep Reinforcement Learning Approach

03/03/2021
by Liang Yu, et al.

Mobile edge computing (MEC) is regarded as a promising wireless access architecture for alleviating the intensive computation burden at resource-limited mobile terminals (MTs). Allowing MTs to offload part of their tasks to MEC servers can significantly decrease task processing delay. In this study, to minimize the processing delay in a multi-user MEC system, we jointly optimize the local content splitting ratio, the transmission/computation power allocation, and the MEC server selection in a dynamic environment with time-varying task arrivals and wireless channels. Reinforcement learning (RL) is used to solve the considered problem. Two deep RL strategies, the deep Q-learning network (DQN) and deep deterministic policy gradient (DDPG), are proposed to learn the offloading policies adaptively and efficiently. The proposed DQN strategy treats the MEC server selection as the only action, obtaining the remaining variables via a convex optimization approach, whereas the DDPG strategy treats all dynamic variables as actions. Numerical results demonstrate that both proposed strategies outperform existing schemes, and that the DDPG strategy is superior to the DQN strategy because it can learn all variables online, albeit at the cost of higher computational complexity.
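The two strategies differ mainly in how the action space is defined, and a minimal sketch of the two policy networks illustrates the contrast (PyTorch assumed; the state layout, network sizes, and sigmoid squashing of the continuous actions are illustrative assumptions, not details from the paper): the DQN agent outputs Q-values only over the candidate MEC servers, leaving the splitting ratio and power allocations to a convex solver, while the DDPG actor outputs all decision variables directly.

```python
import torch
import torch.nn as nn

STATE_DIM = 8    # e.g., task-arrival rates and channel gains (assumed layout)
NUM_SERVERS = 4  # number of candidate MEC servers (illustrative value)

class DQNNet(nn.Module):
    """State -> one Q-value per MEC server; the splitting ratio and powers
    are then recovered by the convex optimization step, not learned."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SERVERS),
        )

    def forward(self, s):
        return self.net(s)

class DDPGActor(nn.Module):
    """Deterministic actor: state -> server scores plus the continuous
    splitting ratio, transmit power, and computation power, learned jointly."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SERVERS + 3),
        )

    def forward(self, s):
        out = self.body(s)
        server_scores = out[..., :NUM_SERVERS]         # argmax -> selection
        knobs = torch.sigmoid(out[..., NUM_SERVERS:])  # squashed into (0, 1)
        return server_scores, knobs

state = torch.randn(1, STATE_DIM)            # one observed system state
server = DQNNet()(state).argmax(dim=-1)      # DQN: discrete action only
scores, knobs = DDPGActor()(state)           # DDPG: all variables as actions
print(server.item(), knobs.squeeze().tolist())
```

This also makes the complexity trade-off concrete: the DDPG actor must learn a higher-dimensional continuous mapping online, while the DQN agent learns only the low-dimensional discrete selection and delegates the rest to per-step convex optimization.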
