Bounded regret in stochastic multi-armed bandits

02/06/2013
by Sébastien Bubeck, et al.

We study the stochastic multi-armed bandit problem when one knows the value μ^* of an optimal arm, as well as a positive lower bound on the smallest positive gap Δ. We propose a new randomized policy that attains a regret uniformly bounded over time in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows Δ, and that bounded regret of order 1/Δ is not possible if one only knows μ^*.
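The abstract does not spell out the policy, but the core idea of exploiting a known μ^* can be illustrated with a simple elimination-style sketch: sample the active arms round-robin and permanently drop an arm once its upper confidence bound falls below the known optimal value. Everything here (the function name, Bernoulli rewards, the specific confidence radius) is an illustrative assumption, not the authors' algorithm.

```python
import math
import random


def known_mu_star_sketch(means, mu_star, horizon, seed=0):
    """Illustrative elimination policy assuming the optimal mean mu_star
    is known. Bernoulli arms with success probabilities `means`.
    NOT the paper's policy; a hypothetical sketch of the idea."""
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k       # number of pulls per arm
    sums = [0.0] * k       # cumulative reward per arm
    active = list(range(k))
    for t in range(horizon):
        arm = active[t % len(active)]
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        if len(active) > 1:
            # Drop an arm once even its optimistic estimate is below mu_star;
            # an optimal arm's UCB stays >= mu_star with high probability.
            active = [
                i for i in active
                if counts[i] == 0
                or sums[i] / counts[i]
                + math.sqrt(2 * math.log(horizon) / counts[i]) >= mu_star
            ]
            if not active:  # safety net: keep the empirically best arm
                active = [max(range(k), key=lambda i: sums[i] / max(counts[i], 1))]
    return counts
```

Because suboptimal arms are eliminated after a number of pulls depending on their gap (roughly log(horizon)/Δ² here), almost all pulls eventually go to an optimal arm, which is the intuition behind regret that stays bounded over time.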


Related research

10/11/2019 · Batched Multi-Armed Bandits with Optimal Regret
We present a simple and efficient algorithm for the batched stochastic m...

07/10/2018 · Bandits with Side Observations: Bounded vs. Logarithmic Regret
We consider the classical stochastic multi-armed bandit but where, from ...

04/21/2013 · Prior-free and prior-dependent regret bounds for Thompson Sampling
We consider the stochastic multi-armed bandit problem with a prior distr...

06/13/2023 · Multi-Fidelity Multi-Armed Bandits Revisited
We study the multi-fidelity multi-armed bandit (MF-MAB), an extension of...

02/17/2017 · Beyond the Hazard Rate: More Perturbation Algorithms for Adversarial Multi-armed Bandits
Recent work on follow the perturbed leader (FTPL) algorithms for the adv...

12/16/2011 · Regret lower bounds and extended Upper Confidence Bounds policies in stochastic multi-armed bandit problem
This paper is devoted to regret lower bounds in the classical model of s...

12/25/2017 · Stochastic Multi-armed Bandits in Constant Space
We consider the stochastic bandit problem in the sublinear space setting...
