Simple regret for infinitely many armed bandits

05/18/2015
by Alexandra Carpentier, et al.

We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a polylog(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.
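
The abstract describes a two-phase recipe: since the learner cannot try every arm, it first draws a finite pool of candidate arms from the infinite reservoir and then spends its sample budget identifying the best candidate, with the pool size tuned to β (where β characterizes how much mass the reservoir puts on near-optimal arms, roughly P(μ ≥ μ* − ε) ≈ ε^β). The Python sketch below only illustrates this general idea; the pool size K ≈ n^(β/2), the uniform allocation, and the helper names reservoir_sample and pull_arm are illustrative assumptions, not the paper's algorithm or its constants.

import numpy as np

def simple_regret_bandit(reservoir_sample, pull_arm, n, beta):
    """Illustrative two-phase strategy for simple-regret minimization
    with infinitely many arms.

    reservoir_sample() -> returns a fresh arm (an opaque handle)
    pull_arm(arm)      -> returns one stochastic reward in [0, 1]
    n                  -> total sample budget
    beta               -> assumed exponent of the near-optimal-arm
                          distribution: P(mu >= mu* - eps) ~ eps^beta

    NOTE: the choice K ~ n^(beta/2) and the uniform allocation are
    illustrative assumptions, not the exact algorithm from the paper.
    """
    # Phase 1: draw finitely many candidate arms from the infinite reservoir.
    K = max(1, int(np.ceil(n ** (beta / 2.0))))
    K = min(K, n)                      # never draw more arms than samples
    arms = [reservoir_sample() for _ in range(K)]

    # Phase 2: spend the budget on the candidates (uniform allocation here;
    # a refined allocation would adapt to empirical confidence) and recommend
    # the arm with the highest empirical mean -- simple regret scores only
    # this final recommendation, not the rewards collected along the way.
    pulls_per_arm = n // K
    means = np.empty(K)
    for i, arm in enumerate(arms):
        rewards = [pull_arm(arm) for _ in range(pulls_per_arm)]
        means[i] = np.mean(rewards) if rewards else -np.inf
    return arms[int(np.argmax(means))]


# Toy usage: arm means drawn uniformly on [0, 1], so P(mu >= 1 - eps) = eps,
# i.e. beta = 1; each pull returns a Bernoulli reward with that mean.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reservoir_sample = lambda: rng.uniform(0.0, 1.0)   # an arm is its mean
    pull_arm = lambda mu: rng.binomial(1, mu)          # Bernoulli reward
    best = simple_regret_bandit(reservoir_sample, pull_arm, n=10_000, beta=1.0)
    print("recommended arm mean:", best)

The trade-off the sketch exposes is the one the abstract points to: a smaller β means near-optimal arms are more common in the reservoir, so fewer candidates need to be drawn and each one can be sampled more often; a larger β forces a larger pool and thinner per-arm budgets, which is why the achievable simple-regret rate depends on β.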
