Periodic Bandits and Wireless Network Selection

04/28/2019
by   Shunhao Oh, et al.

Bandit-style algorithms have been studied extensively in stochastic and adversarial settings. Such algorithms have proven useful in multiplayer settings, e.g. for the wireless network selection problem, which can be formulated as an adversarial bandit problem. A leading bandit algorithm for the adversarial setting is EXP3. However, network behavior is often repetitive: user density and network load follow regular patterns. Bandit algorithms like EXP3 fail to provide good guarantees for such periodic behavior. A major reason is that these algorithms compete against fixed-action policies, which is ineffective in a periodic setting. In this paper, we define a periodic bandit setting, and periodic regret as a better performance measure for this type of setting. Instead of comparing an algorithm's performance to fixed-action policies, we aim to be competitive with policies that play arms according to some set F of possible periodic patterns (for example, all possible periodic functions with periods 1, 2, ..., P). We propose Periodic EXP4, a computationally efficient variant of the EXP4 algorithm for periodic settings. With K arms, T time steps, and where each periodic pattern in F is of length at most P, we show that the periodic regret obtained by Periodic EXP4 is at most O(√(PKT log K + KT log |F|)). We also prove a lower bound of Ω(√(PKT + KT log |F| / log K)) for the periodic setting, showing that this is optimal within log-factors. As an example, we focus on the wireless network selection problem. Through simulation, we show that Periodic EXP4 learns the periodic pattern over time, adapts to changes in a dynamic environment, and far outperforms EXP3.
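A tiny numerical sketch (not the paper's algorithm, and with hypothetical reward values) of why fixed-action comparators are weak in periodic environments: with rewards for K = 2 arms that repeat with period 3, the best periodic policy of period 3 strictly beats every fixed arm, so regret measured against fixed actions understates what a learner should achieve.

```python
import itertools

K, PERIOD, T = 2, 3, 300
# Hypothetical periodic mean rewards: reward[t % PERIOD][arm].
reward = [[1.0, 0.0],
          [0.0, 1.0],
          [1.0, 0.0]]

def total_reward(policy):
    """Cumulative reward of a policy mapping phase (t % PERIOD) -> arm."""
    return sum(reward[t % PERIOD][policy[t % PERIOD]] for t in range(T))

# Best fixed-action policy: the same arm at every step.
best_fixed = max(total_reward([a] * PERIOD) for a in range(K))

# Best periodic policy: any arm pattern of length PERIOD
# (a stand-in for the comparator set F in the abstract).
best_periodic = max(total_reward(list(p))
                    for p in itertools.product(range(K), repeat=PERIOD))

print(best_fixed, best_periodic)  # prints: 200.0 300.0
```

The gap (200 vs. 300 over T = 300 steps) grows linearly in T, which is why the paper measures periodic regret against the best pattern in F rather than against the best fixed arm.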


