MaxGap Bandit: Adaptive Algorithms for Approximate Ranking

06/03/2019
by Sumeet Katariya, et al.

This paper studies the problem of adaptively sampling from K distributions (arms) in order to identify the largest gap between any two adjacent means. We call this the MaxGap-bandit problem. This problem arises naturally in approximate ranking, noisy sorting, outlier detection, and top-arm identification in bandits. The key novelty of the MaxGap-bandit problem is that it aims to adaptively determine the natural partitioning of the distributions into a subset with larger means and a subset with smaller means, where the split is determined by the largest gap rather than a pre-specified rank or threshold. Estimating an arm's gap requires sampling its neighboring arms in addition to itself, and this dependence results in a novel hardness parameter that characterizes the sample complexity of the problem. We propose elimination and UCB-style algorithms and show that they are minimax optimal. Our experiments show that the UCB-style algorithms require 6-8x fewer samples than non-adaptive sampling to achieve the same error.
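The MaxGap objective described above can be illustrated with a minimal, non-adaptive sketch: estimate each arm's mean from samples, sort the empirical means, and split the arms at the largest gap between adjacent means. This is only an illustration of the objective under assumed Gaussian arms, not the paper's elimination or UCB-style algorithms; the function name `max_gap_split` and all parameters are hypothetical.

```python
import random

def max_gap_split(samples):
    """Given per-arm sample lists, estimate means, sort them, and
    return the partition induced by the largest gap between
    adjacent empirical means.

    Illustrative (non-adaptive) sketch of the MaxGap objective,
    not the adaptive algorithms proposed in the paper."""
    means = {arm: sum(x) / len(x) for arm, x in samples.items()}
    order = sorted(means, key=means.get)  # arms by ascending empirical mean
    # Gap between each pair of adjacent arms in the sorted order.
    gaps = [means[order[i + 1]] - means[order[i]]
            for i in range(len(order) - 1)]
    split = max(range(len(gaps)), key=gaps.__getitem__)
    bottom = set(order[: split + 1])   # arms below the largest gap
    top = set(order[split + 1:])       # arms above the largest gap
    return bottom, top, gaps[split]

# Hypothetical example: 5 Gaussian arms with a clear gap after arm 2.
random.seed(0)
true_means = [0.0, 0.1, 0.2, 1.0, 1.1]   # largest gap: 0.2 -> 1.0
samples = {a: [random.gauss(m, 0.1) for _ in range(500)]
           for a, m in enumerate(true_means)}
bottom, top, gap = max_gap_split(samples)
```

An adaptive algorithm would instead allocate more samples to the arms whose confidence intervals could still change the location of the largest gap, which is where the 6-8x sample savings reported in the abstract come from.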


