Optimal Simple Regret in Bayesian Best Arm Identification

11/18/2021
by Junpei Komiyama, et al.

We consider Bayesian best arm identification in the multi-armed bandit problem. Assuming certain continuity conditions on the prior, we characterize the rate of the Bayesian simple regret. Unlike Bayesian regret minimization (Lai, 1987), the leading factor of the Bayesian simple regret derives from the region where the gap between the optimal and sub-optimal arms is smaller than √(log T/T). We propose a simple and easy-to-compute algorithm whose leading factor matches the lower bound up to a constant factor; simulation results support our theoretical findings.
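As a point of reference, here is a minimal LaTeX sketch of the quantity being characterized, written in the standard bandit notation (arm means μ drawn from a prior Π, best arm i*, and a recommended arm î_T after T rounds); these symbols are assumed from the usual setup rather than quoted from the paper.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Minimal sketch (assumed standard definitions, not quoted from the paper):
% \mu = (\mu_1, \dots, \mu_K) are the arm means, drawn from a prior \Pi,
% i^* = \arg\max_i \mu_i is the best arm, and \hat{i}_T is the arm the
% algorithm recommends after T rounds of sampling.
\[
  r_T(\mu) \;=\; \mu_{i^*} - \mathbb{E}\!\left[\mu_{\hat{i}_T} \mid \mu\right],
  \qquad
  \mathrm{BSR}_T \;=\; \mathbb{E}_{\mu \sim \Pi}\!\left[ r_T(\mu) \right].
\]
% The abstract concerns the rate at which \mathrm{BSR}_T decays in T, with the
% dominant contribution coming from instances whose gaps \mu_{i^*} - \mu_i
% fall below \sqrt{\log T / T}.
\end{document}

The outer expectation over the prior Π is what distinguishes the Bayesian objective from the per-instance (frequentist) simple regret r_T(μ).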


Related research

03/23/2021 · Bandits with many optimal arms
We consider a stochastic bandit problem with a possibly infinite number ...

02/10/2022 · Bayes Optimal Algorithm is Suboptimal in Frequentist Best Arm Identification
We consider the fixed-budget best arm identification problem with Normal...

09/28/2021 · The Fragility of Optimized Bandit Algorithms
Much of the literature on optimal design of bandit algorithms is based o...

10/11/2022 · The Typical Behavior of Bandit Algorithms
We establish strong laws of large numbers and central limit theorems for...

10/09/2018 · Bridging the gap between regret minimization and best arm identification, with application to A/B tests
State of the art online learning procedures focus either on selecting th...

09/05/2019 · Optimal UCB Adjustments for Large Arm Sizes
The regret lower bound of Lai and Robbins (1985), the gold standard for ...

11/17/2016 · Unimodal Thompson Sampling for Graph-Structured Arms
We study, to the best of our knowledge, the first Bayesian algorithm for...
