Repeated A/B Testing

05/28/2019
by Nicolò Cesa-Bianchi et al.

We study a setting in which a learner faces a sequence of A/B tests and has to make as many good decisions as possible within a given amount of time. Each A/B test n is associated with an unknown (and potentially negative) reward μ_n ∈ [-1,1], drawn i.i.d. from an unknown and fixed distribution. For each A/B test n, the learner sequentially draws i.i.d. samples of a {-1,1}-valued random variable with mean μ_n until a halting criterion is met. The learner then decides to either accept the reward μ_n or to reject it and get zero instead. We measure the learner's performance as the sum of the expected rewards of the accepted μ_n divided by the total expected number of used time steps (which is different from the expected ratio between the total reward and the total number of used time steps). We design an algorithm and prove a data-dependent regret bound against any set of policies based on an arbitrary halting criterion and decision rule. Though our algorithm borrows ideas from multiarmed bandits, the two settings are significantly different and not directly comparable. In fact, the value of μ_n is never observed directly in our setting, unlike rewards in stochastic bandits. Moreover, the particular structure of our problem allows our regret bounds to be independent of the number of policies.
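To make the interaction protocol concrete, below is a minimal simulation sketch in Python. The fixed per-test sample size, the accept-if-empirical-mean-positive decision rule, and the uniform distribution over μ_n are illustrative assumptions, not the paper's algorithm; they simply instantiate one possible halting criterion and decision rule from the policy class the abstract describes.

```python
import numpy as np

def run_protocol(num_tests=1000, n_samples=50, rng=None):
    """Simulate the repeated A/B testing protocol.

    Each test n has an unknown reward mu_n in [-1, 1], drawn i.i.d.
    The learner draws {-1, 1}-valued samples with mean mu_n until a
    halting criterion fires, then accepts mu_n or rejects it (reward 0).

    Assumptions (illustrative only, not the paper's algorithm):
    fixed sample budget per test, accept iff the empirical mean is
    positive, and mu_n ~ Uniform[-1, 1].
    """
    rng = rng or np.random.default_rng(0)
    total_reward, total_steps = 0.0, 0
    for _ in range(num_tests):
        mu = rng.uniform(-1.0, 1.0)  # unknown mean of this A/B test
        # Draw X in {-1, 1} with E[X] = mu, i.e. P(X = 1) = (1 + mu) / 2.
        samples = rng.choice([1.0, -1.0], size=n_samples,
                             p=[(1 + mu) / 2, (1 - mu) / 2])
        total_steps += n_samples     # halting criterion: fixed budget
        if samples.mean() > 0:       # decision rule: accept if positive
            total_reward += mu       # the accepted reward is mu itself
    # Empirical analogue of the performance measure: sum of accepted
    # rewards over total time steps (ratio of sums, which estimates the
    # ratio of expectations as num_tests grows).
    return total_reward / total_steps

print(f"reward rate: {run_protocol():.4f}")
```

Note the trade-off this makes explicit: a larger per-test budget makes the accept/reject decision more reliable but inflates the denominator of the performance measure, which is why the choice of halting criterion matters in this setting.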
