Combining Online Learning and Offline Learning for Contextual Bandits with Deficient Support

07/24/2021
by Hung Tran-The, et al.

We address policy learning from logged data in contextual bandits. Current offline policy learning algorithms are mostly based on inverse propensity score (IPS) weighting, which requires the logging policy to have full support, i.e., a non-zero probability for every context/action pair used by the evaluation policy. Many real-world systems, however, do not guarantee such logging policies, especially when the action space is large and many actions yield poor or missing rewards. Under such support deficiency, offline learning fails to find an optimal policy. We propose a novel approach that combines offline learning with online exploration: online exploration is used to explore actions unsupported in the logged data, while offline learning exploits the supported actions from the logged data and avoids unnecessary exploration. Our approach determines an optimal policy with theoretical guarantees using a minimal number of online explorations. We demonstrate the effectiveness of our algorithms empirically on a diverse collection of datasets.

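The core difficulty described above is that IPS weighting divides by the logging propensity, which is zero for actions the logging policy never takes, so those actions simply cannot be evaluated from the log. The following is a minimal, hypothetical Python sketch, not the paper's algorithm: it builds a toy logging policy with deficient support, computes the standard IPS value estimate for a target policy, and flags the unsupported context/action pairs that would have to be handled by online exploration. The policies, reward model, and variable names are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' method): IPS off-policy value estimation
# under deficient support, plus detection of the context/action pairs that the
# logged data cannot cover and that would require online exploration.
import numpy as np

rng = np.random.default_rng(0)
n_contexts, n_actions = 5, 4

# Logging policy with deficient support: action 3 is never logged (probability 0).
logging_policy = np.full((n_contexts, n_actions), 1.0 / (n_actions - 1))
logging_policy[:, 3] = 0.0

# Simulated logged data: (context, action, reward, logging propensity).
logged = []
for _ in range(1000):
    x = rng.integers(n_contexts)
    a = rng.choice(n_actions, p=logging_policy[x])
    r = rng.binomial(1, 0.2 + 0.1 * a)          # toy reward model (assumption)
    logged.append((x, a, r, logging_policy[x, a]))

def ips_value(target_policy, logged):
    """IPS estimate of the target policy's value from logged data."""
    return np.mean([target_policy[x, a] / p * r for x, a, r, p in logged])

# A target policy that puts most of its mass on the unsupported action 3.
target = np.full((n_contexts, n_actions), 0.1)
target[:, 3] = 0.7

# Biased estimate: action 3 never appears in the log, so its reward is invisible.
print("IPS estimate:", ips_value(target, logged))

# Deficient-support pairs -- candidates for online exploration.
unsupported = np.argwhere(logging_policy == 0.0)
print("context/action pairs needing online exploration:", unsupported.tolist())
```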
