Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding

03/12/2020
by Hongseok Namkoong, et al.

When observed decisions depend only on observed features, off-policy policy evaluation (OPE) methods for sequential decision-making problems can estimate the performance of evaluation policies before deploying them. This assumption is frequently violated due to unobserved confounders: unrecorded variables that impact both the decisions and their outcomes. We assess the robustness of OPE methods under unobserved confounding by developing worst-case bounds on the performance of an evaluation policy. When unobserved confounders can affect every decision in an episode, we demonstrate that even small amounts of per-decision confounding can heavily bias OPE methods. Fortunately, in a number of important settings found in healthcare, policy-making, operations, and technology, unobserved confounders may primarily affect only one of the many decisions made. Under this less pessimistic model of one-decision confounding, we propose an efficient loss-minimization-based procedure for computing worst-case bounds, and prove its statistical consistency. On two simulated healthcare examples (management of sepsis patients and developmental interventions for autistic children) where this is a reasonable model of confounding, we demonstrate that our method invalidates non-robust results and provides meaningful certificates of robustness, allowing reliable selection of policies even under unobserved confounding.
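To make the one-decision confounding model concrete, below is a minimal sketch (not the paper's loss-minimization procedure) of worst-case importance-sampling OPE when a single decision may be confounded. It assumes logged trajectories of (state, action, reward) triples, callables pi and mu giving evaluation and behavior action probabilities, and an illustrative sensitivity parameter gamma bounding how far the true importance ratio at the confounded step can deviate from the nominal one; all names (worst_case_return, ope_bounds, confounded_step) are hypothetical.

```python
# Simplified sketch: worst-case per-decision importance-sampling bounds
# when only one decision in each episode may be affected by an
# unobserved confounder. This is an illustration of the general idea,
# not the paper's exact estimator.
import numpy as np


def per_decision_is_weights(traj, pi, mu):
    """Nominal importance ratios pi(a_t|s_t) / mu(a_t|s_t) for one episode."""
    return np.array([pi(s, a) / mu(s, a) for (s, a, _) in traj])


def worst_case_return(traj, pi, mu, gamma, confounded_step, lower=True):
    """Bound one episode's weighted return when only `confounded_step` is confounded.

    Assumption: the true (unobserved) importance ratio at the confounded step
    lies in [w / gamma, w * gamma], where w is the nominal ratio; all other
    steps use their nominal ratios. Choosing the pessimistic or optimistic
    endpoint yields a lower or upper bound on this episode's contribution.
    """
    ratios = per_decision_is_weights(traj, pi, mu)
    total_return = sum(r for (_, _, r) in traj)

    w = ratios[confounded_step]
    candidates = (w / gamma, w * gamma)
    # The worst-case endpoint depends on the sign of the return being weighted.
    if lower:
        ratios[confounded_step] = min(candidates) if total_return >= 0 else max(candidates)
    else:
        ratios[confounded_step] = max(candidates) if total_return >= 0 else min(candidates)
    return np.prod(ratios) * total_return


def ope_bounds(dataset, pi, mu, gamma, confounded_step):
    """Average the per-episode bounds over a logged dataset of trajectories."""
    lo = np.mean([worst_case_return(t, pi, mu, gamma, confounded_step, lower=True) for t in dataset])
    hi = np.mean([worst_case_return(t, pi, mu, gamma, confounded_step, lower=False) for t in dataset])
    return lo, hi
```

At gamma = 1 the interval collapses to the standard (unconfounded) importance-sampling estimate; growing gamma widens the interval, which is the sense in which a narrow interval at a plausible gamma acts as a certificate of robustness.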


Related research:

02/11/2020 | Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning
Off-policy evaluation of sequential decision policies from observational...

09/08/2023 | Offline Recommender System Evaluation under Unobserved Confounding
Off-Policy Estimation (OPE) methods allow us to learn and evaluate decis...

02/01/2023 | Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders
Offline reinforcement learning is important in domains such as medicine,...

04/02/2022 | Model-Free and Model-Based Policy Evaluation when Causality is Uncertain
When decision-makers can directly intervene, policy evaluation algorithm...

02/26/2023 | Kernel Conditional Moment Constraints for Confounding Robust Inference
We study policy evaluation of offline contextual bandits subject to unob...

03/05/2017 | Controlling for Unobserved Confounds in Classification Using Correlational Constraints
As statistical classifiers become integrated into real-world application...

05/22/2018 | Confounding-Robust Policy Improvement
We study the problem of learning personalized decision policies from obs...
