Faster Activity and Data Detection in Massive Random Access: A Multi-armed Bandit Approach

01/28/2020
by Jialin Dong, et al.

This paper investigates grant-free random access with massive IoT devices. By embedding data symbols in the signature sequences, joint device activity detection and data decoding can be achieved, which, however, significantly increases the computational complexity. Coordinate descent algorithms that enjoy a low per-iteration complexity have been employed to solve the detection problem, but previous works typically adopt a random coordinate selection policy, which leads to slow convergence. In this paper, we develop multi-armed bandit approaches for more efficient detection via coordinate descent, which strike a delicate trade-off between exploration and exploitation in coordinate selection. Specifically, we first propose a bandit-based strategy, i.e., Bernoulli sampling, to speed up the convergence of coordinate descent by learning which coordinates yield a more aggressive descent of the objective function. To further improve the convergence rate, an inner multi-armed bandit problem is established to learn the exploration policy of Bernoulli sampling. Both convergence rate analysis and simulation results show that the proposed bandit-based algorithms enjoy faster convergence with lower time complexity than the state-of-the-art algorithm. Furthermore, the proposed algorithms are applicable to different scenarios, e.g., massive random access with low-precision analog-to-digital converters (ADCs).
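To illustrate the general idea of bandit-guided coordinate selection, the minimal Python sketch below runs coordinate descent on a generic least-squares surrogate, f(x) = 0.5 * ||Ax - b||^2, and picks coordinates with an epsilon-greedy exploration/exploitation rule driven by the observed per-coordinate decrease of the objective. The surrogate objective, the exploration rate eps, and the reward-smoothing factor are illustrative assumptions only; the sketch does not reproduce the paper's covariance-based detection objective, its Bernoulli sampling policy, or the inner bandit that learns the exploration rate.

import numpy as np

# Illustrative sketch: bandit-guided coordinate descent on a stand-in
# least-squares objective f(x) = 0.5 * ||A x - b||^2 (NOT the paper's
# covariance-based ML detection problem). `eps` and the reward smoothing
# factor are hypothetical choices for demonstration purposes.

rng = np.random.default_rng(0)
n, d = 200, 50
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:5] = rng.standard_normal(5)         # a few "active" coordinates
b = A @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(d)
r = A @ x - b                               # residual, updated incrementally
col_sq = np.sum(A ** 2, axis=0)             # per-coordinate curvature
gain = np.full(d, 1e-3)                     # learned per-coordinate reward estimates
eps = 0.2                                   # exploration probability (assumed)

for t in range(3000):
    # Exploration vs. exploitation in coordinate selection.
    if rng.random() < eps:
        j = rng.integers(d)                 # explore: uniform random coordinate
    else:
        j = rng.choice(d, p=gain / gain.sum())  # exploit: sample by past gains

    # Exact minimization along coordinate j for this quadratic objective.
    step = (A[:, j] @ r) / col_sq[j]
    x[j] -= step
    r -= step * A[:, j]

    # Bandit feedback: reward = decrease of f caused by this update,
    # which equals 0.5 * col_sq[j] * step**2 for the quadratic surrogate.
    gain[j] = 0.9 * gain[j] + 0.1 * (0.5 * col_sq[j] * step ** 2)

print("final objective:", 0.5 * r @ r)

Coordinates that recently produced larger decreases are sampled more often (exploitation), while the occasional uniform draw keeps rarely updated coordinates from being starved (exploration), which is the trade-off the bandit-based coordinate selection is designed to balance.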


Related research

05/03/2018 · An Asymptotically Optimal Strategy for Constrained Multi-armed Bandit Problems
12/08/2017 · Stochastic Dual Coordinate Descent with Bandit Sampling
03/23/2020 · Contextual Bandit-Based Channel Selection for Wireless LANs with Interference-Driven Feature Extraction
06/01/2015 · Coordinate Descent Converges Faster with the Gauss-Southwell Rule Than Random Selection
10/11/2018 · Regularized Contextual Bandits
10/24/2020 · Adam with Bandit Sampling for Deep Learning
07/05/2022 · Linear Jamming Bandits: Sample-Efficient Learning for Non-Coherent Digital Jamming
