
Communication-Enabled Multi-Agent Decentralised Deep Reinforcement Learning to Optimise Energy-Efficiency in UAV-Assisted Networks

by Babatunji Omoniwa et al.

Unmanned Aerial Vehicles (UAVs) are increasingly deployed to provide wireless connectivity to static and mobile ground users in situations of increased network demand or points-of-failure in existing terrestrial cellular infrastructure. However, UAVs are energy-constrained and may experience interference from nearby UAV cells sharing the same frequency spectrum, thereby impacting the system's energy efficiency (EE). We address gaps in prior research, which has focused on optimising the system's EE via 2D trajectory optimisation of UAVs serving only static ground users while neglecting the impact of interference from nearby UAV cells. Unlike previous work that assumes global spatial knowledge of ground users' locations via a central controller that periodically scans the network perimeter and provides real-time updates to the UAVs for decision making, we focus on a realistic decentralised approach suitable for emergencies. Thus, we apply a decentralised Multi-Agent Reinforcement Learning (MARL) approach that maximises the system's EE by jointly optimising each UAV's 3D trajectory, the number of connected static and mobile users, and the energy consumed, while taking into account the impact of interference and the UAVs' coordination on the system's EE in a dynamic network environment. To this end, we propose a direct collaborative Communication-enabled Multi-Agent Decentralised Double Deep Q-Network (CMAD-DDQN) approach. CMAD-DDQN is a collaborative algorithm that allows UAVs to explicitly share knowledge by communicating with their nearest neighbours based on existing 3GPP guidelines. Our approach maximises the system's EE without degrading coverage performance in the network. Simulation results show that the proposed approach outperforms existing baselines in terms of maximising the system's EE by about 15%.
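At the core of CMAD-DDQN is the Double Deep Q-Network update, in which each UAV agent learns independently while exchanging knowledge with its nearest neighbours. The following sketch illustrates the two building blocks the abstract names: the double-Q target (the online network selects the next action, the target network evaluates it) and a knowledge-sharing step between neighbouring agents. It is a minimal tabular toy, not the paper's implementation; the state/action sizes, learning rate, and the averaging rule in `share_with_neighbour` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 6, 4   # toy discretised state space and 3D-move action set
GAMMA, ALPHA = 0.9, 0.1      # assumed discount factor and learning rate

# Each UAV agent keeps its own online and target Q-tables (decentralised setting).
q_online = rng.normal(size=(N_STATES, N_ACTIONS))
q_target = q_online.copy()

def ddqn_update(s, a, reward, s_next):
    """Double DQN update: the online table selects the greedy next action,
    the target table evaluates it, decoupling selection from evaluation."""
    a_star = int(np.argmax(q_online[s_next]))              # action selection
    td_target = reward + GAMMA * q_target[s_next, a_star]  # action evaluation
    q_online[s, a] += ALPHA * (td_target - q_online[s, a])

def share_with_neighbour(q_self, q_neighbour, weight=0.5):
    """Hypothetical knowledge-sharing step: blend local Q-values with a
    nearest neighbour's. This stands in for CMAD-DDQN's explicit
    communication; the paper's exact exchange rule is not reproduced here."""
    return (1 - weight) * q_self + weight * q_neighbour
```

In a full MARL loop, each UAV would call `ddqn_update` on its own transitions (reward reflecting connected users, interference, and energy consumed) and periodically invoke the sharing step with neighbours discovered over 3GPP-style inter-UAV links.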



