Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms

06/05/2021
by   Qin Ding, et al.

The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real-world applications, including recommender systems, online advertising, and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. For example, most stochastic contextual bandit algorithms with optimal regret guarantees rely on an unknown exploration parameter that controls the trade-off between exploration and exploitation. A proper choice of hyper-parameters is essential for contextual bandit algorithms to perform well. However, offline tuning methods cannot be used to select hyper-parameters in the contextual bandit setting, since there is no pre-collected dataset and decisions must be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter, and then generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in the contextual bandit setting. We show that the Syndicated Bandits framework achieves optimal regret upper bounds and is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, and UCB-GLM. Experiments on both synthetic and real datasets validate the effectiveness of the proposed framework.
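To make the two-layer idea concrete, here is a minimal runnable sketch, not the authors' implementation: an outer adversarial bandit (EXP3 is used here as one plausible choice) picks LinUCB's exploration parameter alpha from a small candidate set each round, the inner LinUCB layer selects an arm with that alpha, and the observed reward updates both layers. All names, candidate values, and simulation parameters are illustrative assumptions.

```python
# Hypothetical sketch of a two-layer hyper-parameter-tuning bandit:
# outer layer = EXP3 over candidate exploration parameters,
# inner layer = LinUCB with the currently selected parameter.
import numpy as np

rng = np.random.default_rng(0)


class EXP3:
    """Outer adversarial bandit over candidate hyper-parameter values."""

    def __init__(self, n_arms, gamma=0.1):
        self.n_arms = n_arms
        self.gamma = gamma
        self.weights = np.ones(n_arms)

    def probs(self):
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / self.n_arms

    def select(self):
        return rng.choice(self.n_arms, p=self.probs())

    def update(self, arm, reward):  # reward assumed clipped to [0, 1]
        est = reward / self.probs()[arm]          # importance-weighted estimate
        self.weights[arm] *= np.exp(self.gamma * est / self.n_arms)


def linucb_scores(A_inv, b, contexts, alpha):
    """Standard LinUCB index: x^T theta_hat + alpha * sqrt(x^T A^{-1} x)."""
    theta = A_inv @ b
    means = contexts @ theta
    widths = np.sqrt(np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
    return means + alpha * widths


# --- toy simulation: d-dimensional linear rewards, K arms per round ---
d, K, T = 5, 10, 2000
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

candidates = [0.1, 0.5, 1.0, 2.0]    # assumed candidate exploration parameters
outer = EXP3(len(candidates))
A_inv = np.eye(d)                     # inverse of the ridge matrix, lambda = 1
b = np.zeros(d)

total_reward = 0.0
for t in range(T):
    contexts = rng.normal(size=(K, d)) / np.sqrt(d)
    j = outer.select()                            # outer layer picks alpha
    alpha = candidates[j]
    a = int(np.argmax(linucb_scores(A_inv, b, contexts, alpha)))
    x = contexts[a]
    reward = x @ theta_star + 0.1 * rng.normal()
    Ax = A_inv @ x                                # Sherman-Morrison update
    A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
    b += reward * x
    outer.update(j, float(np.clip(reward, 0.0, 1.0)))  # feed reward upward
    total_reward += reward
```

The paper's Syndicated Bandits generalization replaces the single outer bandit with one outer bandit per hyper-parameter, so that the number of candidate combinations does not grow multiplicatively; the sketch above covers only the single-parameter, two-layer case.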

Related research

05/04/2020 | Hyper-parameter Tuning for the Contextual Bandit
We study here the problem of learning the exploration-exploitation trade...

02/18/2023 | Online Continuous Hyperparameter Optimization for Contextual Bandits
In stochastic contextual bandit problems, an agent sequentially makes ac...

05/31/2019 | Cascaded Algorithm-Selection and Hyper-Parameter Optimization with Extreme-Region Upper Confidence Bound Bandit
An automatic machine learning (AutoML) task is to select the best algori...

01/21/2019 | Parallel Contextual Bandits in Wireless Handover Optimization
As cellular networks become denser, a scalable and dynamic tuning of wir...

12/28/2021 | Learning Across Bandits in High Dimension via Robust Statistics
Decision-makers often face the "many bandits" problem, where one must si...

07/18/2021 | GuideBoot: Guided Bootstrap for Deep Contextual Bandits
The exploration/exploitation (E&E) dilemma lies at the core of interac...

06/01/2021 | Invariant Policy Learning: A Causal Perspective
In the past decade, contextual bandit and reinforcement learning algorit...
