Optimal best arm selection for general distributions
Given a finite set of unknown distributions, or arms, that can be sampled from, we consider the problem of identifying the one with the largest mean using a delta-correct algorithm (an adaptive, sequential algorithm that restricts the probability of error to a specified delta) with minimum sample complexity. Lower bounds for delta-correct algorithms are well known. Further, delta-correct algorithms that match the lower bound asymptotically as delta decreases to zero have been developed in the literature when the arm distributions are restricted to a single-parameter exponential family. In this paper, we first observe a negative result: some restriction is essential, since otherwise, under a delta-correct algorithm, distributions with unbounded support would require an infinite expected number of samples. We then propose a delta-correct algorithm that matches the lower bound as delta decreases to zero, under the mild restriction that a known bound exists on the expectation of a non-negative, increasing convex function (for example, the second moment) of the underlying random variables. We also propose batch processing and identify optimal batch sizes to substantially speed up the proposed algorithm. This best arm selection problem is a well-studied classic problem in the simulation community, with many learning applications including recommendation systems and product selection.
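To make the delta-correct setting concrete, here is a minimal sketch of a generic delta-correct best-arm procedure using successive elimination with Hoeffding-style confidence bounds. This is not the paper's algorithm: it assumes rewards bounded in [0, 1], whereas the paper handles general distributions under a moment condition, and the function names (`successive_elimination`, `radius`) are illustrative.

```python
import math
import random

def successive_elimination(arms, delta, horizon=100_000):
    """Sketch of a delta-correct best-arm routine (successive elimination).

    `arms`: list of zero-argument callables returning rewards in [0, 1]
    (a simplifying assumption; the paper treats general distributions).
    `delta`: target error probability.
    Returns the index of the empirically best surviving arm.
    """
    k = len(arms)
    active = list(range(k))
    sums = [0.0] * k
    counts = [0] * k
    t = 0
    while len(active) > 1 and t < horizon:
        t += 1
        for i in active:           # sample every active arm once per round
            sums[i] += arms[i]()
            counts[i] += 1

        def radius(i):
            # Hoeffding confidence radius with a union bound over
            # the k arms and the (anytime) round counter.
            n = counts[i]
            return math.sqrt(math.log(4 * k * n * n / delta) / (2 * n))

        means = {i: sums[i] / counts[i] for i in active}
        leader = max(active, key=lambda i: means[i])
        # Eliminate arms whose upper confidence bound falls below
        # the leader's lower confidence bound.
        active = [i for i in active
                  if means[i] + radius(i) >= means[leader] - radius(leader)]
    return max(active, key=lambda i: sums[i] / counts[i])
```

A quick usage example: with three Bernoulli arms of means 0.9, 0.2, and 0.1 and delta = 0.1, the routine samples all arms in rounds and quickly eliminates the two clearly inferior ones. The paper's contribution, by contrast, is an algorithm whose sample complexity matches the instance-dependent lower bound as delta tends to zero, without the bounded-support assumption used here.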