Repeated A/B Testing

by Nicolò Cesa-Bianchi, et al.

We study a setting in which a learner faces a sequence of A/B tests and has to make as many good decisions as possible within a given amount of time. Each A/B test n is associated with an unknown (and potentially negative) reward μ_n ∈ [-1,1], drawn i.i.d. from an unknown and fixed distribution. For each A/B test n, the learner sequentially draws i.i.d. samples of a {-1,1}-valued random variable with mean μ_n until a halting criterion is met. The learner then decides to either accept the reward μ_n or to reject it and get zero instead. We measure the learner's performance as the sum of the expected rewards of the accepted μ_n divided by the total expected number of used time steps (which is different from the expected ratio between the total reward and the total number of used time steps). We design an algorithm and prove a data-dependent regret bound against any set of policies based on an arbitrary halting criterion and decision rule. Though our algorithm borrows ideas from multiarmed bandits, the two settings are significantly different and not directly comparable. In fact, the value of μ_n is never observed directly in our setting---unlike rewards in stochastic bandits. Moreover, the particular structure of our problem allows our regret bounds to be independent of the number of policies.
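To make the protocol concrete, here is a minimal simulation sketch of the setting described above. It is not the paper's algorithm: the halting criterion (stop when the running sum of samples leaves a fixed band or a sample budget runs out) and the decision rule (accept if the sum is positive) are simple illustrative assumptions, as are all names and parameters (`run_ab_test`, `reward_rate`, `threshold`, `max_samples`). The `reward_rate` function computes the performance measure from the abstract: the sum of accepted rewards divided by the total number of time steps used.

```python
import random

def run_ab_test(mu, max_samples=200, threshold=10):
    """Sample a {-1,+1} variable with mean mu until an illustrative
    halting criterion fires: stop when the running sum exits
    [-threshold, threshold] or the sample budget is exhausted.
    Returns (time steps used, accept decision)."""
    p_plus = (1 + mu) / 2  # P(X = +1) for a {-1,+1} variable with mean mu
    total, steps = 0, 0
    while steps < max_samples and abs(total) < threshold:
        total += 1 if random.random() < p_plus else -1
        steps += 1
    accept = total > 0  # illustrative decision rule: accept if evidence is positive
    return steps, accept

def reward_rate(mus, **kwargs):
    """Performance measure from the abstract: sum of the accepted
    rewards mu_n divided by the total number of used time steps
    (not the expected ratio of reward to time, as the abstract notes)."""
    total_reward, total_steps = 0.0, 0
    for mu in mus:
        steps, accept = run_ab_test(mu, **kwargs)
        total_steps += steps
        if accept:
            total_reward += mu
    return total_reward / total_steps

if __name__ == "__main__":
    random.seed(0)
    # Rewards mu_n drawn i.i.d. from a fixed distribution on [-1, 1]
    # (uniform here, purely for illustration).
    mus = [random.uniform(-1, 1) for _ in range(1000)]
    print(f"reward per time step: {reward_rate(mus):.4f}")
```

Under this toy policy, a strongly positive test (mu near 1) halts quickly and is accepted, while a strongly negative one halts quickly and is rejected; the hard cases with mu near 0 consume the most samples, which is exactly the time/decision-quality trade-off the paper's regret bound addresses.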




