Joint Speed Control and Energy Replenishment Optimization for UAV-assisted IoT Data Collection with Deep Reinforcement Transfer Learning

06/19/2021
by   Nam H. Chu, et al.

Unmanned aerial vehicle (UAV)-assisted data collection has been emerging as a prominent application due to its flexibility, mobility, and low operational cost. However, given the dynamics and uncertainty of IoT data collection and energy replenishment processes, optimizing the performance of UAV collectors is a very challenging task. Thus, this paper introduces a novel framework that jointly optimizes the flying speed and energy replenishment for each UAV to significantly improve the data collection performance. Specifically, we first develop a Markov decision process to help the UAV automatically and dynamically make optimal decisions under the dynamics and uncertainties of the environment. We then propose a highly effective reinforcement learning algorithm leveraging deep Q-learning, double deep Q-learning, and a deep dueling neural network architecture to quickly obtain the UAV's optimal policy. The core ideas of this algorithm are to estimate the state values and action advantages separately and simultaneously, and to employ double estimators for the action values. These techniques stabilize the learning process and effectively address the overestimation problem of conventional Q-learning algorithms. To further reduce the learning time and significantly improve learning quality, we develop advanced transfer learning techniques that allow UAVs to "share" and "transfer" learning knowledge. Extensive simulations demonstrate that our proposed solution can improve the average data collection performance of the system by up to 200% compared with other current methods.
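The two core ideas in the abstract, separating state-value and action-advantage estimates (dueling architecture) and using double estimators for action values, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the aggregation rule and the double Q-learning target are standard, while the reward, discount factor, and the three hypothetical UAV actions (e.g., speed levels) are made-up toy values.

```python
import numpy as np

def dueling_q(value, advantages):
    # Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    # Subtracting the mean advantage keeps the V and A streams identifiable.
    return value + (advantages - advantages.mean())

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    # Double estimator: the online network *selects* the next action,
    # the target network *evaluates* it, reducing overestimation bias.
    best_action = int(np.argmax(q_online_next))
    return reward + (0.0 if done else gamma * q_target_next[best_action])

# Toy example with 3 hypothetical actions (assumed values, not from the paper).
v = 1.0                              # state value V(s)
a = np.array([0.5, -0.5, 0.0])       # advantages A(s, a)
q = dueling_q(v, a)                  # -> [1.5, 0.5, 1.0]
target = double_q_target(2.0, 0.9, q, np.array([0.8, 1.2, 0.4]), False)
# target = 2.0 + 0.9 * 0.8 = 2.72
```

In a full agent these two functions would sit inside the training loop: the dueling aggregation defines the network's output head, and the double-Q target is the regression label for the temporal-difference update.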

Related research

11/12/2020 | Fast or Slow: An Autonomous Speed Control Approach for UAV-assisted IoT Data Collection Networks
Unmanned Aerial Vehicles (UAVs) have been emerging as an effective solut...

03/01/2020 | Deep Reinforcement Learning for Fresh Data Collection in UAV-assisted IoT Networks
Due to the flexibility and low operational cost, dispatching unmanned ae...

06/02/2023 | Energy-Efficient UAV-Assisted IoT Data Collection via TSP-Based Solution Space Reduction
This paper presents a wireless data collection framework that employs an...

06/04/2019 | On-board Deep Q-Network for UAV-assisted Online Power Transfer and Data Collection
Unmanned Aerial Vehicles (UAVs) with Microwave Power Transfer (MPT) capa...

10/18/2018 | Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of...

06/28/2018 | Robust Fuzzy-Learning For Partially Overlapping Channels Allocation In UAV Communication Networks
In this paper, we consider a mesh-structured unmanned aerial vehicle (UA...

01/18/2023 | Workload-Aware Scheduling using Markov Decision Process for Infrastructure-Assisted Learning-Based Multi-UAV Surveillance Networks
In modern networking research, infrastructure-assisted unmanned autonomo...
