The Max K-Armed Bandit: PAC Lower Bounds and Efficient Algorithms

12/23/2015
by   Yahel David, et al.

We consider the Max K-Armed Bandit problem, where a learning agent is faced with several stochastic arms, each a source of i.i.d. rewards with an unknown distribution. At each time step the agent chooses an arm and observes the reward of the obtained sample. Each sample is treated as a separate item, with its reward designating its value, and the goal is to find an item with the highest possible value. Our basic assumption is a known lower bound on the tail function of the reward distributions. Under the PAC framework, we provide a lower bound on the sample complexity of any (ϵ,δ)-correct algorithm, and propose an algorithm that attains this bound up to logarithmic factors. We analyze the robustness of the proposed algorithm, and in addition we compare its performance to the variant in which the arms are not distinguishable by the agent and are chosen randomly at each stage. Interestingly, when the maximal rewards of the arms happen to be similar, the latter approach may provide better performance.
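The setting can be made concrete with a small simulation. The sketch below is an illustrative toy, not the paper's (ϵ,δ)-correct algorithm: the arm distributions and the naive greedy allocation rule are assumptions made here for demonstration only. It highlights the key feature of the max-bandit objective, namely that the highest single sample (tail behavior) matters rather than the mean.

```python
import random

rng = random.Random(0)

def max_bandit(arms, budget):
    """Spend `budget` pulls across `arms`; return the largest reward observed.

    arms: list of zero-argument callables, each drawing one i.i.d. reward.
    Allocation rule (illustrative only, not the paper's algorithm): pull each
    arm once, then repeatedly pull the arm whose observed maximum is largest.
    """
    best = [arm() for arm in arms]            # one exploratory pull per arm
    for _ in range(budget - len(arms)):
        i = max(range(len(arms)), key=lambda j: best[j])
        best[i] = max(best[i], arms[i]())     # refine the most promising arm
    return max(best)

# Two hypothetical arms with different right tails: the goal is the highest
# single sample, so the heavier-tailed arm is the one worth exploiting.
arms = [lambda: rng.gauss(0.0, 1.0),          # light (Gaussian) tail
        lambda: rng.paretovariate(2.5)]       # heavier (Pareto) tail, >= 1
print(max_bandit(arms, budget=1000))
```

The greedy rule here can lock onto the wrong arm after an unlucky first pull; the paper's contribution is precisely to quantify, via a known lower bound on the tail functions, how many samples suffice to avoid that with probability 1−δ.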


Related research

08/23/2015 · The Max K-Armed Bandit: A PAC Lower Bound and tighter Algorithms
We consider the Max K-Armed Bandit problem, where a learning agent is fa...

12/16/2022 · Materials Discovery using Max K-Armed Bandit
Search algorithms for the bandit problems are applicable in materials di...

11/16/2020 · DART: aDaptive Accept RejecT for non-linear top-K subset identification
We consider the bandit problem of selecting K out of N arms at each time...

05/28/2019 · Combinatorial Bandits with Full-Bandit Feedback: Sample Complexity and Regret Minimization
Combinatorial Bandits generalize multi-armed bandits, where k out of n a...

02/13/2022 · On the complexity of All ε-Best Arms Identification
We consider the problem introduced by <cit.> of identifying all the ε-op...

01/30/2020 · HAMLET – A Learning Curve-Enabled Multi-Armed Bandit for Algorithm Selection
Automated algorithm selection and hyperparameter tuning facilitates the ...

11/13/2017 · Thresholding Bandit for Dose-ranging: The Impact of Monotonicity
We analyze the sample complexity of the thresholding bandit problem, wit...
