Doubly-Adaptive Thompson Sampling for Multi-Armed and Contextual Bandits

02/25/2021
by Maria Dimakopoulou et al.

To balance exploration and exploitation, multi-armed bandit algorithms need to conduct inference on the true mean reward of each arm at every time step using the data collected so far. However, the history of arms and rewards observed up to that time step is adaptively collected, and there are known challenges in conducting inference with non-iid data. In particular, sample averages, which play a prominent role in traditional upper confidence bound and Thompson sampling algorithms, are neither unbiased nor asymptotically normal. We propose a variant of Thompson sampling that leverages recent advances in the causal inference literature and adaptively re-weights the terms of a doubly robust estimator of the true mean reward of each arm; hence its name, doubly-adaptive Thompson sampling. The regret of the proposed algorithm matches the optimal (minimax) regret rate. We evaluate it empirically in a semi-synthetic experiment based on data from a randomized controlled trial of a web service, where doubly-adaptive Thompson sampling outperforms existing baselines in terms of both cumulative regret and statistical power in identifying the best arm. Further, we extend the approach to contextual bandits, where additional sources of bias arise beyond the adaptive data collection, such as the mismatch between the true data-generating process and the reward model assumptions, or the unequal representation of certain regions of the context space in the initial stages of learning. For this setting we propose the linear contextual doubly-adaptive Thompson sampling and the non-parametric contextual doubly-adaptive Thompson sampling extensions of our approach.
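To make the core idea concrete, below is a minimal Python sketch of an adaptively weighted doubly robust (AIPW) mean-reward estimate feeding a Gaussian Thompson draw. All function and variable names are hypothetical, and the square-root-propensity weighting is an assumption borrowed from the adaptive-inference literature rather than the paper's exact weighting scheme.

```python
import numpy as np

def doubly_adaptive_estimate(arm, actions, rewards, propensities, model_preds):
    """Hypothetical sketch of an adaptively weighted doubly robust (AIPW)
    estimate of one arm's mean reward.

    actions[t]      : arm pulled at step t
    rewards[t]      : observed reward at step t
    propensities[t] : probability the algorithm assigned to `arm` at step t
    model_preds[t]  : plug-in model prediction of this arm's mean reward at step t
    """
    actions = np.asarray(actions)
    rewards = np.asarray(rewards)
    e = np.asarray(propensities)
    m = np.asarray(model_preds)

    # AIPW score per step: model prediction plus an inverse-propensity
    # correction that is nonzero only when this arm was actually pulled.
    scores = m + (actions == arm) / e * (rewards - m)

    # Adaptive weights; sqrt-propensity weighting is one variance-stabilizing
    # choice from the adaptive-inference literature (an assumption here).
    h = np.sqrt(e)
    est = np.sum(h * scores) / np.sum(h)
    # Plug-in variance of the weighted estimator, for a Gaussian posterior.
    var = np.sum(h ** 2 * (scores - est) ** 2) / np.sum(h) ** 2
    return est, var

def thompson_step(arms, actions, rewards, propensities, model_preds):
    """Draw one posterior sample per arm and play the argmax (sketch)."""
    draws = []
    for a in arms:
        est, var = doubly_adaptive_estimate(
            a, actions, rewards, propensities[a], model_preds[a]
        )
        draws.append(np.random.normal(est, np.sqrt(max(var, 1e-12))))
    return int(np.argmax(draws))
```

In the contextual extensions described in the abstract, the plug-in predictions would come from a fitted reward model (linear or non-parametric) evaluated at each step's context, while the inverse-propensity correction continues to guard against bias from the reward model and the adaptive data collection.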


Related research

03/06/2020 · Contextual Blocking Bandits
We study a novel variant of the multi-armed bandit problem, where at eac...

02/02/2019 · On the bias, risk and consistency of sample means in multi-armed bandits
In the classic stochastic multi-armed bandit problem, it is well known t...

07/03/2023 · Statistical Inference on Multi-armed Bandits with Delayed Feedback
Multi armed bandit (MAB) algorithms have been increasingly used to compl...

02/14/2022 · Statistical Inference After Adaptive Sampling in Non-Markovian Environments
There is a great desire to use adaptive sampling methods, such as reinfo...

11/22/2022 · Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning
We design and implement an adaptive experiment (a “contextual bandit”) t...

05/27/2019 · The bias of the sample mean in multi-armed bandits can be positive or negative
It is well known that in stochastic multi-armed bandits (MAB), the sampl...

07/23/2012 · MCTS Based on Simple Regret
UCT, a state-of-the-art algorithm for Monte Carlo tree search (MCTS) in ...
