Linear Jamming Bandits: Sample-Efficient Learning for Non-Coherent Digital Jamming

07/05/2022
by Charles E. Thornton, et al.

It has been shown (Amuru et al. 2015) that online learning algorithms can be used effectively to select optimal physical-layer parameters for jamming digital modulation schemes without a priori knowledge of the victim's transmission strategy. However, this learning problem requires solving a multi-armed bandit problem over a mixed action space that can grow very large. As a result, convergence to the optimal jamming strategy can be slow, especially when the victim's and jammer's symbols are not perfectly synchronized. In this work, we remedy these sample-efficiency issues by introducing a linear bandit algorithm that accounts for inherent similarities between actions. Further, we propose contextual features well-suited to the statistics of the non-coherent jamming problem and demonstrate significantly improved convergence behavior compared to the prior art. Additionally, we show how prior knowledge about the victim's transmissions can be seamlessly integrated into the learning framework. Finally, we discuss limitations of the approach in the asymptotic regime.
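To make the linear-bandit idea concrete, the following is a minimal LinUCB-style sketch in Python. It is not the authors' algorithm: the feature map, action set, and reward function here are hypothetical placeholders standing in for the paper's jamming-specific context features and the victim/jammer interaction. It only illustrates how modeling reward as linear in action features shares information across similar actions.

```python
import numpy as np

# Minimal LinUCB-style sketch (hypothetical setup; the paper's actual
# context features and reward model differ). Each jamming action, e.g. a
# (modulation, power) pair, maps to a feature vector x, and expected
# reward is modeled as linear in x, so feedback on one action informs
# estimates for similar actions instead of learning each arm in isolation.

rng = np.random.default_rng(0)

d = 4              # feature dimension (assumed)
n_actions = 20     # size of the discretized jamming action set (assumed)
alpha = 1.0        # exploration weight

# Hypothetical feature map: random features standing in for the
# jamming-specific contextual features proposed in the paper.
features = rng.normal(size=(n_actions, d))

A = np.eye(d)      # ridge-regularized Gram matrix
b = np.zeros(d)    # accumulated feature-weighted rewards


def environment_reward(action_idx: int) -> float:
    """Placeholder victim/jammer interaction; returns a noisy reward."""
    true_theta = np.array([0.5, -0.2, 0.8, 0.1])
    return float(features[action_idx] @ true_theta + 0.1 * rng.normal())


for t in range(1000):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b  # ridge-regression estimate of reward weights

    # Upper confidence bound per action: linear estimate + exploration bonus
    # proportional to the model's uncertainty along that action's features.
    ucb = features @ theta_hat + alpha * np.sqrt(
        np.einsum("ij,jk,ik->i", features, A_inv, features)
    )
    a = int(np.argmax(ucb))

    r = environment_reward(a)

    # Rank-one update of the sufficient statistics for the chosen action.
    A += np.outer(features[a], features[a])
    b += r * features[a]
```

Because the estimate of the weight vector is shared across all actions, observing the reward of one jamming configuration tightens the estimates for nearby configurations, which is the source of the improved sample efficiency described in the abstract. In this framing, prior knowledge about the victim's transmissions could enter through the initialization of A and b (an informative prior on the weights), though the paper's actual mechanism may differ.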


