Application of Reinforcement Learning for 5G Scheduling Parameter Optimization

10/21/2019
by Ali Asgher Mansoor Habiby, et al.

RF network parameter optimization requires a wealth of experience and knowledge to achieve the optimal balance between coverage, capacity, system efficiency, and customer experience at the telecom sites serving users. With 5G, air-interface scheduling has become more complex due to the use of massive MIMO, beamforming, and the introduction of higher modulation schemes with varying numerologies. In this work, we train a machine learning model to "learn" the best combination of scheduling parameters for a given traffic profile using Cross-Entropy Method (CEM) reinforcement learning, and compare its output with recommendations from RF Subject Matter Experts (SMEs). This work is aimed at automatic parameter tuning and feature optimization, acting as a Self-Organizing Network (SON) module.
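The Cross-Entropy Method named in the abstract can be sketched as a simple derivative-free optimizer: sample candidate parameter vectors from a Gaussian, score each with a reward function, refit the Gaussian to the top-scoring "elite" samples, and repeat. The reward function below is a toy stand-in, not the paper's actual scheduler-KPI model, and all variable names are illustrative assumptions:

```python
import numpy as np

def cem_optimize(reward_fn, dim, iterations=50, pop_size=50,
                 elite_frac=0.2, seed=0):
    """Cross-Entropy Method: iteratively refit a Gaussian over
    parameter vectors to the top-performing (elite) samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(dim), np.ones(dim)
    n_elite = int(pop_size * elite_frac)
    for _ in range(iterations):
        # Sample a population of candidate parameter settings.
        samples = rng.normal(mean, std, size=(pop_size, dim))
        rewards = np.array([reward_fn(s) for s in samples])
        # Keep the elite fraction and refit the sampling distribution.
        elite = samples[np.argsort(rewards)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Toy reward peaking at a known "best" parameter vector; in the paper's
# setting the reward would instead come from simulated network KPIs.
target = np.array([0.7, -0.3, 1.2])
best = cem_optimize(lambda p: -np.sum((p - target) ** 2), dim=3)
```

In the paper's setting, each candidate vector would encode scheduler settings for a traffic profile, and the reward would aggregate KPIs such as throughput and latency; the loop structure stays the same.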

