An Optimal Elimination Algorithm for Learning a Best Arm

06/20/2020
by Avinatan Hassidim et al.

We consider the classic problem of (ϵ,δ)-PAC learning a best arm, where the goal is to identify, with confidence 1-δ, an arm whose mean is an ϵ-approximation to that of the arm with the highest mean in a multi-armed bandit setting. This is one of the most fundamental problems in statistics and learning theory, yet, somewhat surprisingly, its worst-case sample complexity is not well understood. In this paper, we propose a new approach to (ϵ,δ)-PAC learning a best arm. This approach leads to an algorithm whose sample complexity converges to exactly the optimal sample complexity of (ϵ,δ)-learning the mean of n arms separately, and we complement this result with a conditional matching lower bound. More specifically: […]
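As context for the benchmark the abstract refers to, the sketch below shows the classical "estimate every mean separately" baseline: pull each arm often enough that, by Hoeffding's inequality plus a union bound, every empirical mean is within ϵ/2 of its true mean with probability at least 1-δ, then return the empirically best arm. This is not the paper's elimination algorithm; the function name naive_pac_best_arm, the Bernoulli toy arms, and the particular constants are illustrative assumptions.

```python
import math
import random


def naive_pac_best_arm(arms, eps, delta):
    """Uniform-sampling baseline for (eps, delta)-PAC best-arm identification.

    `arms` is a list of zero-argument callables returning rewards in [0, 1].
    Each arm is pulled m times, where m is chosen so that (by Hoeffding's
    inequality) each empirical mean is within eps/2 of its true mean with
    probability at least 1 - delta/n; a union bound over the n arms then
    makes the empirically best arm eps-optimal with probability >= 1 - delta.
    Total samples: n * ceil((2 / eps^2) * ln(2n / delta)).
    """
    n = len(arms)
    m = math.ceil((2.0 / eps ** 2) * math.log(2.0 * n / delta))
    estimates = [sum(pull() for _ in range(m)) / m for pull in arms]
    return max(range(n), key=lambda i: estimates[i])


if __name__ == "__main__":
    # Toy instance: three Bernoulli arms; arm 2 (mean 0.8) is the best arm.
    means = [0.5, 0.6, 0.8]
    arms = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in means]
    print(naive_pac_best_arm(arms, eps=0.1, delta=0.05))
```

With n = 3, ϵ = 0.1, and δ = 0.05, this baseline draws roughly 960 samples per arm; the paper's result concerns closing the gap between this kind of selection procedure and the optimal cost of (ϵ,δ)-learning the n means themselves.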



Related research

Pure Exploration in Infinitely-Armed Bandit Models with Fixed-Confidence (03/13/2018)
We consider the problem of near-optimal arm identification in the fixed ...

Explicit Best Arm Identification in Linear Bandits Using No-Regret Learners (06/13/2020)
We study the problem of best arm identification in linearly parameterise...

Optimal Best-Arm Identification in Bandits with Access to Offline Data (06/15/2023)
Learning paradigms based purely on offline data as well as those based s...

Fractional Moments on Bandit Problems (02/14/2012)
Reinforcement learning addresses the dilemma between exploration to find...

The Max K-Armed Bandit: A PAC Lower Bound and tighter Algorithms (08/23/2015)
We consider the Max K-Armed Bandit problem, where a learning agent is fa...

Sparse Dueling Bandits (01/31/2015)
The dueling bandit problem is a variation of the classical multi-armed b...

Thresholding Bandit for Dose-ranging: The Impact of Monotonicity (11/13/2017)
We analyze the sample complexity of the thresholding bandit problem, wit...
