Infinite Arms Bandit: Optimality via Confidence Bounds

05/30/2018
by Hock Peng Chan, et al.

The infinite arms bandit problem was introduced by Berry et al. (1997). They derived a regret lower bound that holds for all strategies when rewards are Bernoulli, and proposed bandit strategies based on success runs which, however, do not achieve this bound. We propose here a confidence bound target (CBT) algorithm that achieves extensions of their regret lower bound for general reward distributions and distribution priors. The algorithm does not require information on the reward distributions; for each arm we require only the mean and standard deviation of its rewards to compute a confidence bound. We play the arm with the smallest confidence bound, provided it is smaller than a target mean. If all the confidence bounds exceed the target mean, we play a new arm. We show how the target mean can be computed from the prior so that CBT achieves the smallest asymptotic regret among all infinite arms bandit algorithms. We also show that, in the absence of information on the prior, the target mean can be determined empirically, and that the resulting regret is comparable to the smallest achievable. Numerical studies show that CBT is versatile and outperforms its competitors.
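The abstract fully specifies the arm-selection rule, so a short sketch may help fix ideas. The Python snippet below is a minimal illustration, not the paper's implementation: rewards are treated as losses to be minimized, the confidence bound is taken to be the empirical mean minus a scaled standard error (the paper's exact bound and its tuning are not reproduced here), and the target mean is passed in as a parameter rather than derived from the prior. The names cbt, sample_new_arm, draw_reward and target_mean are illustrative.

    import math
    import random

    def cbt(sample_new_arm, draw_reward, target_mean, horizon, c=1.0):
        # Sketch of the CBT selection rule. Each arm tracks the count,
        # sum and sum of squares of its observed rewards (losses).
        # The bound "mean - c * standard error" is an assumed illustrative
        # form, not the paper's exact confidence bound.
        arms = []
        total = 0.0
        for _ in range(horizon):
            best, best_bound = None, float("inf")
            for arm in arms:
                mean = arm["sum"] / arm["n"]
                var = max(arm["sumsq"] / arm["n"] - mean * mean, 0.0)
                bound = mean - c * math.sqrt(var / arm["n"])
                if bound < best_bound:
                    best, best_bound = arm, bound
            if best is None or best_bound >= target_mean:
                # No confidence bound is below the target: draw a new arm
                # from the prior and play it.
                best = {"theta": sample_new_arm(), "n": 0, "sum": 0.0, "sumsq": 0.0}
                arms.append(best)
            x = draw_reward(best["theta"])
            best["n"] += 1
            best["sum"] += x
            best["sumsq"] += x * x
            total += x
        return total, arms

    # Bernoulli losses with Uniform(0,1) means; the target here is ad hoc,
    # whereas the paper derives it from the prior (or estimates it empirically).
    random.seed(0)
    total, arms = cbt(
        sample_new_arm=random.random,
        draw_reward=lambda p: 1.0 if random.random() < p else 0.0,
        target_mean=0.05,
        horizon=10000,
    )
    print(total / 10000, len(arms))

Note that with a single observation the variance estimate is zero and the bound collapses to the sample mean; the paper treats small sample sizes more carefully. The abstract's claim is that, with the target mean chosen from the prior, the resulting regret matches the lower bound asymptotically.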
