Top Two Algorithms Revisited

06/13/2022
by Marc Jourdan, et al.

Top Two algorithms arose as an adaptation of Thompson sampling to best arm identification in multi-armed bandit models with parametric families of arms (Russo, 2016). They select the next arm to sample by randomizing between two candidate arms: a leader and a challenger. Despite their good empirical performance, theoretical guarantees for fixed-confidence best arm identification have only been obtained when the arms are Gaussian with known variances. In this paper, we provide a general analysis of Top Two methods, which identifies desirable properties of the leader, the challenger, and the (possibly non-parametric) distributions of the arms. As a result, we obtain theoretically supported Top Two algorithms for best arm identification with bounded distributions. In particular, our proof method demonstrates that the Thompson-sampling-based sampling step used to select the leader can be replaced by other choices, such as selecting the empirical best arm.
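The leader/challenger structure described above can be illustrated with a short sketch. This is not the paper's exact algorithm: the leader is taken to be the empirical best arm (one of the choices the abstract says can replace Thompson-sampling-based leaders), the challenger is picked via a simple Gaussian transportation-cost proxy, and the mixing parameter `beta` is an illustrative default.

```python
import numpy as np

def top_two_step(means, counts, beta=0.5, rng=None):
    """One sampling step of a generic Top Two rule (illustrative sketch).

    means  -- empirical mean reward of each arm
    counts -- number of times each arm has been pulled
    beta   -- probability of sampling the leader rather than the challenger
    """
    rng = rng if rng is not None else np.random.default_rng()

    # Leader: the empirical best arm (a non-randomized leader choice).
    leader = int(np.argmax(means))

    # Challenger: the arm hardest to distinguish from the leader, scored
    # here by a Gaussian transportation-cost proxy (an assumption made
    # for illustration, not the paper's general criterion).
    gaps = means[leader] - means
    costs = gaps**2 / (2.0 * (1.0 / counts[leader] + 1.0 / counts))
    costs[leader] = np.inf  # the leader cannot challenge itself
    challenger = int(np.argmin(costs))

    # Randomize between the two candidates.
    return leader if rng.random() < beta else challenger
```

With `beta=1.0` the rule always pulls the leader; with `beta=0.0` it always pulls the challenger, so the two extremes expose both candidates.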


