Bridging Adversarial and Nonstationary Multi-armed Bandit

01/05/2022
by Ningyuan Chen, et al.

In the multi-armed bandit framework, two formulations are commonly employed to handle time-varying reward distributions: the adversarial bandit and the nonstationary bandit. Although their oracles, algorithms, and regret analyses differ significantly, this paper provides a unified formulation that smoothly bridges the two as special cases. The formulation uses an oracle that plays the best fixed arm within each time window. Depending on the window size, it reduces to the best-fixed-arm oracle in hindsight of the adversarial bandit or to the dynamic oracle of the nonstationary bandit. We provide algorithms that attain the optimal regret, together with a matching lower bound.
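To make the windowed benchmark concrete, here is a minimal sketch reconstructed from the abstract; the notation ($\tau$, $W_j$, $\mu_t$) is ours, not necessarily the paper's. Partition the horizon $1, \dots, T$ into consecutive windows $W_1, \dots, W_{\lceil T/\tau \rceil}$ of length $\tau$, let $\mu_t(a)$ be the mean reward of arm $a$ at time $t$, and let $a_t$ be the arm the learner plays. The regret against the windowed oracle is then

\[
R_\tau(T) \;=\; \sum_{j=1}^{\lceil T/\tau \rceil} \max_{a} \sum_{t \in W_j} \mu_t(a) \;-\; \mathbb{E}\!\left[\, \sum_{t=1}^{T} \mu_t(a_t) \right].
\]

Setting $\tau = T$ gives a single window, so the benchmark is the best fixed arm in hindsight (the adversarial oracle); setting $\tau = 1$ lets the benchmark change arms every round, recovering the dynamic oracle of the nonstationary bandit. Intermediate window sizes interpolate between the two.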


