On the Complexity of Best Arm Identification in Multi-Armed Bandit Models

07/16/2014
by Emilie Kaufmann, et al.

The stochastic multi-armed bandit model is a simple abstraction that has proven useful in many different contexts in statistics and machine learning. Whereas the achievable limit in terms of regret minimization is now well known, our aim is to contribute to a better understanding of the performance in terms of identifying the m best arms. We introduce generic notions of complexity for the two dominant frameworks considered in the literature: the fixed-budget and fixed-confidence settings. In the fixed-confidence setting, we provide the first known distribution-dependent lower bound on the complexity that involves information-theoretic quantities and holds when m is larger than 1 under general assumptions. In the specific case of two-armed bandits, we derive refined lower bounds in both the fixed-confidence and fixed-budget settings, along with matching algorithms for Gaussian and Bernoulli bandit models. These results show in particular that the complexity of the fixed-budget setting may be smaller than the complexity of the fixed-confidence setting, contradicting the familiar behavior observed when testing fully specified alternatives. In addition, we provide improved sequential stopping rules that have guaranteed error probabilities and shorter average running times. The proofs rely on two technical results that are of independent interest: a deviation lemma for self-normalized sums (Lemma 19) and a novel change of measure inequality for bandit models (Lemma 1).
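To make the fixed-confidence setting concrete, the sketch below simulates best-arm identification for a two-armed Bernoulli bandit: sample both arms uniformly and stop once the empirical means are separated by more than a confidence width, then recommend the leading arm. This is only an illustration of the framework under a crude Hoeffding-style deviation bound with a union bound over time; it is not the paper's refined KL-based stopping rules, and the function name and parameters are ours.

```python
import math
import random


def best_arm_fixed_confidence(means, delta=0.05, seed=0, max_samples=200000):
    """Identify the better of two Bernoulli arms with confidence 1 - delta.

    Illustrative sketch only: uniform sampling with a Hoeffding-style
    stopping rule, not the refined rules from the paper.
    """
    rng = random.Random(seed)
    counts = [0, 0]   # pulls per arm
    sums = [0.0, 0.0] # accumulated rewards per arm
    t = 0
    while t < max_samples:
        for a in (0, 1):  # sample both arms uniformly
            sums[a] += 1.0 if rng.random() < means[a] else 0.0
            counts[a] += 1
            t += 1
        n = counts[0]  # both arms have the same count here
        # Hoeffding deviation width with a crude union bound over rounds
        width = math.sqrt(math.log(4.0 * n * n / delta) / (2.0 * n))
        gap = abs(sums[0] / counts[0] - sums[1] / counts[1])
        if gap > 2.0 * width:  # empirical means are confidently separated
            break
    best = 0 if sums[0] / counts[0] > sums[1] / counts[1] else 1
    return best, t
```

With a large gap between the arms (say means 0.9 and 0.1) the rule stops after a few dozen pulls; as the gap shrinks, the sample complexity grows roughly like the inverse squared gap, which is the kind of distribution-dependent quantity the paper's lower bounds make precise.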


Related research:

- Multi-Fidelity Multi-Armed Bandits Revisited (06/13/2023): We study the multi-fidelity multi-armed bandit (MF-MAB), an extension of...
- On the Complexity of A/B Testing (05/13/2014): A/B testing refers to the task of determining the best option among two ...
- On Best-Arm Identification with a Fixed Budget in Non-Parametric Multi-Armed Bandits (09/30/2022): We lay the foundations of a non-parametric theory of best-arm identifica...
- Learning the distribution with largest mean: two bandit frameworks (01/31/2017): Over the past few years, the multi-armed bandit model has become increas...
- Guaranteed Fixed-Confidence Best Arm Identification in Multi-Armed Bandit (06/12/2021): We consider the problem of finding, through adaptive sampling, which of ...
- Pure Exploration in Bandits with Linear Constraints (06/22/2023): We address the problem of identifying the optimal policy with a fixed co...
- Making the Cut: A Bandit-based Approach to Tiered Interviewing (06/23/2019): Given a huge set of applicants, how should a firm allocate sequential re...
