GuideBoot: Guided Bootstrap for Deep Contextual Bandits

07/18/2021
by Feiyang Pan, et al.

The exploration/exploitation (E&E) dilemma lies at the core of interactive systems such as online advertising, for which contextual bandit algorithms have been proposed. Bayesian approaches provide guided exploration with principled uncertainty estimation, but their applicability is often limited by over-simplified assumptions. Non-Bayesian bootstrap methods, on the other hand, can handle complex problems with deep reward models, but lack clear guidance for the exploration behavior. Developing a practical method for complex deep contextual bandits thus remains largely an open problem. In this paper, we introduce Guided Bootstrap (GuideBoot for short), which combines the best of both worlds. GuideBoot provides explicit guidance for exploration by training multiple models on both real samples and noisy samples with fake labels, where the noise is added according to the predictive uncertainty. The proposed method is efficient, as it makes decisions on-the-fly using only one randomly chosen model, yet effective, as we show it can be viewed as a non-Bayesian approximation of Thompson sampling. Moreover, we extend it to an online version that learns solely from streaming data, which is favored in real applications. Extensive experiments on both synthetic tasks and large-scale advertising environments show that GuideBoot achieves significant improvements over previous state-of-the-art methods.
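The abstract describes the mechanism only at a high level; as a rough illustration of the idea, the sketch below keeps a small ensemble of linear reward models, acts with one randomly chosen model, and on each update adds a fake-label sample whose label noise scales with the ensemble's predictive disagreement. The class name, the linear models, the SGD updates, and the use of ensemble disagreement as the uncertainty proxy are all our assumptions for illustration, not the paper's actual implementation (which uses deep reward models).

```python
import numpy as np

rng = np.random.default_rng(0)

class GuideBootSketch:
    """Hedged sketch of a GuideBoot-style bandit (NOT the paper's method):
    an ensemble of linear reward models; actions query one randomly chosen
    model, and each update mixes the real sample with a fake-label sample
    whose noise scales with the ensemble's predictive disagreement."""

    def __init__(self, n_arms, dim, n_models=5, lr=0.1):
        # One weight vector per (model, arm) pair.
        self.weights = rng.normal(0.0, 0.01, size=(n_models, n_arms, dim))
        self.lr = lr

    def act(self, x):
        # On-the-fly decision: consult only one randomly chosen model.
        k = rng.integers(len(self.weights))
        return int(np.argmax(self.weights[k] @ x))

    def update(self, x, arm, reward):
        preds = self.weights[:, arm] @ x      # all models' predictions for this arm
        uncertainty = preds.std()             # disagreement as an uncertainty proxy
        for k in range(len(self.weights)):
            # Real sample: SGD step on squared error.
            err = reward - self.weights[k, arm] @ x
            self.weights[k, arm] += self.lr * err * x
            # Fake-label sample: per-model noisy label, noise scaled by uncertainty,
            # so models stay diverse where the ensemble is uncertain.
            fake = reward + uncertainty * rng.normal()
            err_fake = fake - self.weights[k, arm] @ x
            self.weights[k, arm] += self.lr * err_fake * x

# Toy usage: 3 arms, 4-dimensional contexts.
agent = GuideBootSketch(n_arms=3, dim=4)
x = rng.normal(size=4)
arm = agent.act(x)
agent.update(x, arm, reward=1.0)
```

When the ensemble agrees (low disagreement), the fake labels are nearly the real label and the models converge; where it disagrees, the injected noise keeps the bootstrap replicates spread out, which is the "guided" part of the exploration.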


research
10/07/2021

EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits

Contextual multi-armed bandits have been studied for decades and adapted...
research
06/05/2021

Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms

The stochastic contextual bandit problem, which models the trade-off bet...
research
02/26/2018

Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling

Recent advances in deep reinforcement learning have made significant str...
research
02/03/2023

Multiplier Bootstrap-based Exploration

Despite the great interest in the bandit problem, designing efficient al...
research
11/25/2020

Exploration in Online Advertising Systems with Deep Uncertainty-Aware Learning

Modern online advertising systems inevitably rely on personalization met...
research
08/03/2020

Deep Bayesian Bandits: Exploring in Online Personalized Recommendations

Recommender systems trained in a continuous learning fashion are plagued...
research
10/30/2019

Thompson Sampling via Local Uncertainty

Thompson sampling is an efficient algorithm for sequential decision maki...
