A Unified Framework of Policy Learning for Contextual Bandit with Confounding Bias and Missing Observations

03/20/2023
by   Siyu Chen, et al.

We study the offline contextual bandit problem, where we aim to learn an optimal policy from observational data. Such data usually suffers from two deficiencies: (i) some variables that confound the actions are not observed, and (ii) some observations are missing from the collected data. Unobserved confounders introduce confounding bias, while missing observations cause both bias and inefficiency. To overcome these challenges and learn the optimal policy from the observed dataset, we present a new algorithm called Causal-Adjusted Pessimistic (CAP) policy learning, which characterizes the reward function as the solution of an integral equation system, builds a confidence set around this solution, and greedily takes actions with pessimism. Under mild assumptions on the data, we establish an upper bound on the suboptimality of CAP for the offline contextual bandit problem.
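The following is a minimal, illustrative sketch (not the authors' implementation) of the pessimistic action-selection step described above: given a point estimate of the causally adjusted reward and a per-(context, action) confidence width, the policy greedily maximizes the lower confidence bound. The estimation of the reward function from the integral equation system and the construction of the confidence set are abstracted into the two input arrays, and all names and values here are hypothetical.

```python
import numpy as np

# Hypothetical point estimates of the causally adjusted reward r_hat(x, a)
# and confidence-set widths beta(x, a), for 3 contexts and 2 actions.
reward_estimate = np.array([[0.8, 0.6],
                            [0.4, 0.7],
                            [0.5, 0.5]])
reward_uncertainty = np.array([[0.3, 0.05],
                               [0.1, 0.4],
                               [0.2, 0.1]])

def pessimistic_policy(reward_estimate, reward_uncertainty):
    """Greedy action under the lower confidence bound (pessimism)."""
    lcb = reward_estimate - reward_uncertainty  # worst case over the confidence set
    return lcb.argmax(axis=1)                   # one action per context

print(pessimistic_policy(reward_estimate, reward_uncertainty))  # prints [1 0 1]
```

Pessimism guards against overestimating rewards for (context, action) pairs that are poorly covered by the offline data: actions with large uncertainty are penalized rather than chosen optimistically.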


Related research

02/04/2014  Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits
10/14/2022  Continuous-in-time Limit for Bayesian Bandits
06/01/2023  Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding
02/19/2023  Estimating Optimal Policy Value in General Linear Contextual Bandits
02/02/2020  Safe Exploration for Optimizing Contextual Bandits
12/23/2022  Offline Reinforcement Learning for Human-Guided Human-Machine Interaction with Private Information
01/15/2019  Imitation-Regularized Offline Learning
