Repeated A/B Testing

05/28/2019
by Nicolò Cesa-Bianchi et al.

We study a setting in which a learner faces a sequence of A/B tests and has to make as many good decisions as possible within a given amount of time. Each A/B test n is associated with an unknown (and potentially negative) reward μ_n ∈ [-1,1], drawn i.i.d. from a fixed but unknown distribution. For each A/B test n, the learner sequentially draws i.i.d. samples of a {-1,1}-valued random variable with mean μ_n until a halting criterion is met. The learner then decides either to accept the reward μ_n or to reject it and receive zero instead. We measure the learner's performance as the sum of the expected rewards of the accepted μ_n divided by the total expected number of time steps used (which is different from the expected ratio between the total reward and the total number of time steps used). We design an algorithm and prove a data-dependent regret bound against any set of policies based on an arbitrary halting criterion and decision rule. Although our algorithm borrows ideas from multiarmed bandits, the two settings are significantly different and not directly comparable. In fact, the value of μ_n is never observed directly in our setting, unlike rewards in stochastic bandits. Moreover, the particular structure of our problem allows our regret bounds to be independent of the number of policies.
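In symbols (our own notation, for illustration): writing T_n for the number of samples drawn in test n and A_n ∈ {0,1} for the accept decision, the performance measure over N tests is the ratio of expectations

$$\frac{\mathbb{E}\bigl[\sum_{n=1}^{N} \mu_n A_n\bigr]}{\mathbb{E}\bigl[\sum_{n=1}^{N} T_n\bigr]} \qquad\text{and not the expected ratio}\qquad \mathbb{E}\left[\frac{\sum_{n=1}^{N} \mu_n A_n}{\sum_{n=1}^{N} T_n}\right].$$

The following minimal Python sketch simulates the setting to make the protocol concrete. The halting criterion (stop when the empirical sum leaves a √t-width band) and the decision rule (accept iff the empirical mean is positive) are illustrative assumptions; they are one policy of the kind the paper's algorithm competes against, not the algorithm itself.

```python
import random

def run_ab_test(mu, max_samples=1000, threshold=2.0):
    """One A/B test with unknown mean mu in [-1, 1].

    Draws {-1, +1}-valued samples with mean mu until an (assumed)
    halting criterion fires, then applies an (assumed) decision rule.
    Returns (accepted, number_of_samples_used).
    """
    total = 0
    for t in range(1, max_samples + 1):
        # X in {-1, +1} with P(X = +1) = (1 + mu) / 2, so E[X] = mu.
        x = 1 if random.random() < (1 + mu) / 2 else -1
        total += x
        # Assumed halting criterion: stop once |sum| exceeds threshold * sqrt(t).
        if abs(total) > threshold * t ** 0.5:
            break
    # Assumed decision rule: accept iff the empirical mean is positive.
    return total > 0, t

def performance(mus, **kwargs):
    """Empirical analogue of the measure above: total true reward of the
    accepted tests divided by the total number of time steps used. Over
    many i.i.d. tests this approximates the ratio of expectations."""
    reward, steps = 0.0, 0
    for mu in mus:
        accepted, used = run_ab_test(mu, **kwargs)
        steps += used
        if accepted:
            reward += mu  # accepting yields mu_n itself, not the sample sum
    return reward / steps

# The means are drawn i.i.d. from a fixed distribution, here uniform on [-1, 1].
mus = [random.uniform(-1, 1) for _ in range(500)]
print(f"reward per time step: {performance(mus):.4f}")
```

A useful feature of the sketch: raising `threshold` makes the halting criterion more conservative, spending more time steps per test in exchange for fewer wrong accept/reject decisions, which is exactly the tension the performance measure captures.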

