Fast and Regret Optimal Best Arm Identification: Fundamental Limits and Low-Complexity Algorithms

by Qining Zhang et al.

This paper considers a stochastic multi-armed bandit (MAB) problem with dual objectives: (i) quick identification of and commitment to the optimal arm, and (ii) reward maximization throughout a sequence of T consecutive rounds. Though each objective has been individually well-studied, i.e., best arm identification for (i) and regret minimization for (ii), the simultaneous realization of both objectives remains an open problem, despite its practical importance. This paper introduces Regret Optimal Best Arm Identification (ROBAI), which aims to achieve these dual objectives. To solve ROBAI under both pre-determined and adaptive stopping time requirements, we present the EOCP algorithm and its variants, respectively, which not only achieve asymptotically optimal regret in both Gaussian and general bandits, but also commit to the optimal arm in O(log T) rounds with pre-determined stopping time and O(log^2 T) rounds with adaptive stopping time. We further characterize lower bounds on the commitment time (equivalent to sample complexity) of ROBAI, showing that EOCP and its variants are sample optimal with pre-determined stopping time, and almost sample optimal with adaptive stopping time. Numerical results confirm our theoretical analysis and reveal an interesting "over-exploration" phenomenon exhibited by classic UCB algorithms: EOCP achieves smaller regret even though it stops exploration much earlier than UCB (O(log T) versus O(T) rounds), which suggests that over-exploration is unnecessary and potentially harmful to system performance.
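The commit-after-O(log T) idea behind the results above can be illustrated with a minimal explore-then-commit simulation. This is not the paper's EOCP algorithm, only a hedged sketch under simple assumptions: Gaussian arms with unit variance, a per-arm exploration budget of roughly c·log T pulls (the constant `c` is an illustrative choice, not taken from the paper), and a single irreversible commitment to the empirical best arm afterward.

```python
import math
import random

def explore_then_commit(means, T, c=4.0, seed=0):
    """Illustrative explore-then-commit on unit-variance Gaussian arms.

    Not the paper's EOCP algorithm -- just a sketch of the idea that
    exploration can stop after O(log T) rounds: pull each arm about
    c * log(T) times, then commit to the empirical best arm for the
    remaining rounds. Returns the committed arm and the realized regret.
    """
    rng = random.Random(seed)
    K = len(means)
    n_explore = max(1, int(c * math.log(T)))  # per-arm exploration budget
    sums = [0.0] * K
    counts = [0] * K
    total_reward = 0.0
    t = 0
    # Exploration phase: round-robin pulls, O(log T) rounds in total.
    for _ in range(n_explore):
        for k in range(K):
            if t >= T:
                break
            r = rng.gauss(means[k], 1.0)
            sums[k] += r
            counts[k] += 1
            total_reward += r
            t += 1
    # Commitment phase: play the empirical best arm for all remaining rounds.
    best = max(range(K), key=lambda k: sums[k] / max(counts[k], 1))
    while t < T:
        total_reward += rng.gauss(means[best], 1.0)
        t += 1
    regret = T * max(means) - total_reward
    return best, regret

best, regret = explore_then_commit([0.2, 0.5], T=10_000)
```

With T = 10,000 the sketch spends only ~37 pulls per arm exploring before committing, in contrast to a UCB-style policy, which keeps sampling suboptimal arms throughout the horizon.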


โˆ™ 12/09/2020

Streaming Algorithms for Stochastic Multi-armed Bandits

We study the Stochastic Multi-armed Bandit problem under bounded arm-mem...
โˆ™ 09/13/2022

Sample Complexity of an Adversarial Attack on UCB-based Best-arm Identification Policy

In this work I study the problem of adversarial perturbations to rewards...
โˆ™ 10/09/2018

Bridging the gap between regret minimization and best arm identification, with application to A/B tests

State of the art online learning procedures focus either on selecting th...
โˆ™ 10/28/2021

Selective Sampling for Online Best-arm Identification

This work considers the problem of selective-sampling for best-arm ident...
โˆ™ 09/21/2023

Optimal Conditional Inference in Adaptive Experiments

We study batched bandit experiments and consider the problem of inferenc...
โˆ™ 05/27/2019

The bias of the sample mean in multi-armed bandits can be positive or negative

It is well known that in stochastic multi-armed bandits (MAB), the sampl...
โˆ™ 10/16/2021

On the Pareto Frontier of Regret Minimization and Best Arm Identification in Stochastic Bandits

We study the Pareto frontier of two archetypal objectives in stochastic ...
