Experienced Deep Reinforcement Learning with Generative Adversarial Networks (GANs) for Model-Free Ultra Reliable Low Latency Communication
In this paper, a novel experienced deep reinforcement learning (deep-RL) framework is proposed to provide model-free resource allocation for ultra-reliable low-latency communication (URLLC). The proposed experienced deep-RL framework can guarantee high end-to-end reliability and low end-to-end latency, under explicit data rate constraints, for each wireless user, without any models of or assumptions on the users' traffic. In particular, in order to enable the deep-RL framework to account for extreme network conditions and operate in highly reliable systems, a new approach based on generative adversarial networks (GANs) is proposed. This GAN approach is used to pre-train the deep-RL framework on a mix of real and synthetic data, thus creating an experienced deep-RL framework that has been exposed to a broad range of network conditions. Formally, the URLLC resource allocation problem is posed as a power minimization problem under reliability, latency, and rate constraints. To solve this problem with experienced deep-RL, first, the rate of each user is determined. Then, these rates are mapped to the resource block and power allocation vectors of the studied wireless system. Finally, the end-to-end reliability and latency of each user are fed back to the deep-RL framework. It is then shown that, at the fixed point of the deep-RL algorithm, the reliability and latency of the users are near-optimal. Moreover, for the proposed GAN approach, a theoretical limit on the generator output is derived analytically. Simulation results show that the proposed approach achieves near-optimal performance within the rate-reliability-latency region, depending on the network and service requirements. The results also show that the proposed experienced deep-RL framework removes the transient training time that makes conventional deep-RL methods unsuitable for URLLC.
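As a worked illustration of the problem statement above: the abstract does not give symbols, so under assumed notation, with p_{u,b} the transmit power of user u on resource block b, rho_{u,b} in {0,1} the block assignment, D_u the end-to-end delay, and r_u the achieved rate, the power minimization can be sketched as

\begin{aligned}
\min_{\mathbf{p},\,\boldsymbol{\rho}} \quad & \sum_{u=1}^{U}\sum_{b=1}^{B} \rho_{u,b}\, p_{u,b} \\
\text{s.t.} \quad & \Pr\!\left[D_u > D_{\max}\right] \le \epsilon_u, \quad \forall u \quad \text{(reliability and latency)}, \\
& r_u \ge r_u^{\min}, \quad \forall u \quad \text{(rate)}, \\
& \sum_{u=1}^{U} \rho_{u,b} \le 1, \;\; \rho_{u,b}\in\{0,1\}, \quad \forall b \quad \text{(orthogonal blocks)},
\end{aligned}

where the exact constraint forms in the paper may differ; this is only one plausible reading.

The GAN pre-training step can likewise be sketched in code. The snippet below trains a small GAN on logged network-state samples and then builds the mixed real-plus-synthetic pre-training set that makes the deep-RL agent "experienced" before deployment. All names, dimensions, architectures, and the choice of PyTorch are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch: GAN pre-training data for an "experienced" deep-RL
# agent. Dimensions, networks, and hyperparameters are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed per-user network state (e.g., channel, queue, delay)
NOISE_DIM = 16   # assumed latent noise dimension

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),  # raw logit: real vs. synthetic
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Placeholder for logged network states; a real system would use measurements.
real_states = torch.randn(1024, STATE_DIM)

for step in range(200):
    real = real_states[torch.randint(0, 1024, (64,))]
    fake = gen(torch.randn(64, NOISE_DIM))

    # Discriminator step: push real samples toward 1, synthetic toward 0.
    d_loss = bce(disc(real), torch.ones(64, 1)) + \
             bce(disc(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make synthetic samples look real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Pre-training set for the deep-RL agent: a mix of real and synthetic states,
# exposing the agent to a broad range of conditions before going online.
with torch.no_grad():
    synthetic = gen(torch.randn(1024, NOISE_DIM))
pretrain_states = torch.cat([real_states, synthetic], dim=0)

An RL agent (for example, a deep Q-network mapping states to per-user rate decisions) would then be trained offline on pretrain_states, so that by the time it runs online it has already seen the rare, extreme conditions on which URLLC reliability targets hinge.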