Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms

06/05/2021
by   Qin Ding, et al.

The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real applications, including recommender systems, online advertising and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. For example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter that controls the trade-off between exploration and exploitation. A proper choice of hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in a contextual bandit environment, since there is no pre-collected dataset and decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter and then generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in a contextual bandit environment. We show that our Syndicated Bandits framework achieves optimal regret upper bounds and is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS and UCB-GLM. Experiments on both synthetic and real datasets validate the effectiveness of the proposed framework.
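To make the two-layer idea concrete, the sketch below is a minimal illustration (not the paper's exact algorithm): an outer adversarial bandit learner (here EXP3) picks an exploration parameter from a candidate grid at each round, an inner LinUCB plays with that parameter, and the observed reward updates both layers. The class names, the candidate grid, and the assumption that rewards lie in [0, 1] are illustrative choices, not taken from the paper.

```python
import numpy as np

class LinUCB:
    """Inner contextual bandit; its exploration parameter alpha is supplied per round."""
    def __init__(self, d, reg=1.0):
        self.A = reg * np.eye(d)   # regularized Gram matrix
        self.b = np.zeros(d)       # response vector

    def select(self, contexts, alpha):
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        widths = np.sqrt(np.einsum("ij,jk,ik->i", contexts, A_inv, contexts))
        return int(np.argmax(contexts @ theta + alpha * widths))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

class EXP3:
    """Outer layer choosing among candidate exploration parameters."""
    def __init__(self, K, gamma=0.1):
        self.w, self.gamma, self.K = np.ones(K), gamma, K

    def probs(self):
        return (1 - self.gamma) * self.w / self.w.sum() + self.gamma / self.K

    def sample(self, rng):
        return rng.choice(self.K, p=self.probs())

    def update(self, k, reward):
        # importance-weighted update; assumes reward is scaled to [0, 1]
        p = self.probs()[k]
        self.w[k] *= np.exp(self.gamma * reward / (p * self.K))

def run(T, d, contexts_fn, reward_fn, alphas=(0.5, 1.0, 2.0, 4.0), seed=0):
    rng = np.random.default_rng(seed)
    inner, outer = LinUCB(d), EXP3(len(alphas))
    for t in range(T):
        contexts = contexts_fn(t)                # (n_arms, d) feature matrix
        k = outer.sample(rng)                    # outer layer picks alpha_k
        arm = inner.select(contexts, alphas[k])  # inner LinUCB uses alpha_k
        r = reward_fn(t, arm)
        inner.update(contexts[arm], r)
        outer.update(k, r)
```

Replacing the inner LinUCB with LinTS or UCB-GLM, or running one outer learner per hyper-parameter, would follow the same pattern; the details of how the Syndicated Bandits framework coordinates multiple hyper-parameters are given in the paper itself.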

