Workload-Aware Scheduling using Markov Decision Process for Infrastructure-Assisted Learning-Based Multi-UAV Surveillance Networks

01/18/2023
by Soohyun Park, et al.

In modern networking research, infrastructure-assisted unmanned aerial vehicles (UAVs) are actively considered for real-time learning-based surveillance and aerial data delivery under unexpected 3D free mobility and coordination. In such a system model, it is essential to account for both the power limitations of UAVs and the performance of the autonomous object-recognition (abnormal-behavior-detection) deep learning models trained in the infrastructure/towers. To overcome the power limitation of UAVs, this paper proposes a novel aerial scheduling algorithm between multiple UAVs and multiple towers in which the towers perform wireless power transfer toward the UAVs. In addition, to support high-performance learning-model training in the towers, we also propose a data-delivery scheme in which UAVs deliver training data to the towers fairly, preventing problems caused by data imbalance (e.g., heavy computation overhead from excessive data delivery, or overfitting from insufficient data delivery). Accordingly, this paper proposes a novel workload-aware scheduling algorithm between multiple towers and multiple UAVs for joint power charging from towers to their associated UAVs and training-data delivery from UAVs to their associated towers. To compute workload-aware optimal scheduling decisions in each unit time, our solution approach to the given scheduling problem is designed based on a Markov decision process (MDP), which provides (i) time-varying low-complexity computation and (ii) pseudo-polynomial optimality. As shown in the performance evaluation results, our proposed algorithm ensures (i) sufficient time for resource exchanges between towers and UAVs, (ii) the most even and uniform data collection among the compared algorithms, and (iii) convergence of all towers' performance to optimal levels.
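To make the MDP-based scheduling idea concrete, the following is a minimal sketch, not the paper's exact formulation: a toy finite MDP solved by value iteration, where the state tracks a discretized UAV battery level and the amount of training data a tower has collected, and each unit-time action is either "charge" (tower-to-UAV wireless power transfer) or "deliver" (UAV-to-tower training-data delivery). All state spaces, rewards, and transition dynamics here are illustrative assumptions.

```python
# Toy workload-aware scheduling MDP solved by value iteration.
# State = (uav_battery, tower_data); action = "charge" or "deliver".
# Dynamics and rewards are hypothetical, for illustration only.

B_MAX, D_MAX = 3, 3          # discretized battery / data levels
GAMMA = 0.9                  # discount factor
ACTIONS = ("charge", "deliver")

def step(state, action):
    """Deterministic toy dynamics: return (next_state, reward)."""
    b, d = state
    if action == "charge":
        return (min(b + 1, B_MAX), d), 0.0        # tower tops up UAV battery
    if b == 0:
        return (b, d), -1.0                       # no energy left to deliver
    reward = 1.0 if d < D_MAX else 0.0            # reward delivery only while
    return (b - 1, min(d + 1, D_MAX)), reward     # the tower still needs data

def value_iteration(tol=1e-6):
    """In-place value iteration over the full (battery, data) state space."""
    V = {(b, d): 0.0 for b in range(B_MAX + 1) for d in range(D_MAX + 1)}
    while True:
        delta = 0.0
        for s in V:
            best = max(step(s, a)[1] + GAMMA * V[step(s, a)[0]]
                       for a in ACTIONS)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration()
# Greedy policy with respect to the converged value function.
policy = {s: max(ACTIONS,
                 key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
          for s in V}
print(policy[(0, 0)])  # a fully drained UAV should charge first
```

Because the toy state space has only (B_MAX+1)·(D_MAX+1) states, value iteration converges in a handful of sweeps, which mirrors the low-complexity, per-unit-time decision computation the abstract attributes to the MDP approach.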


research
06/12/2019

Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks

Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G comm...
research
11/12/2020

Fast or Slow: An Autonomous Speed Control Approach for UAV-assisted IoT Data Collection Networks

Unmanned Aerial Vehicles (UAVs) have been emerging as an effective solut...
research
12/18/2019

Real-time data muling using a team of heterogeneous unmanned aerial vehicles

The use of Unmanned Aerial Vehicles (UAVs) in Data transport has attract...
research
06/12/2019

Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks in Smart Cities

Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G comm...
research
06/04/2019

On-board Deep Q-Network for UAV-assisted Online Power Transfer and Data Collection

Unmanned Aerial Vehicles (UAVs) with Microwave Power Transfer (MPT) capa...
research
06/19/2021

Joint Speed Control and Energy Replenishment Optimization for UAV-assisted IoT Data Collection with Deep Reinforcement Transfer Learning

Unmanned aerial vehicle (UAV)-assisted data collection has been emerging...
research
11/30/2021

Solving reward-collecting problems with UAVs: a comparison of online optimization and Q-learning

Uncrewed autonomous vehicles (UAVs) have made significant contributions ...
