Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning

02/11/2020
by Nathan Kallus, et al.

Off-policy evaluation of sequential decision policies from observational data is necessary in applications of batch reinforcement learning such as education and healthcare. In such settings, however, observed actions are often confounded with transitions by unobserved variables, rendering exact evaluation of new policies impossible, i.e., unidentifiable. We develop a robust approach that estimates sharp bounds on the (unidentifiable) value of a given policy in an infinite-horizon problem, using data collected under another policy with unobserved confounding, subject to a sensitivity model. We phrase the problem precisely as computing the support function of the set of all stationary state-occupancy ratios that agree with both the data and the sensitivity model. We show how to express this set using a new partially identified estimating equation, and we prove convergence to the sharp bounds as more confounded data are collected. We prove that membership in the set can be checked by solving a linear program, whereas the support function itself is given by a difficult nonconvex optimization problem. We leverage an analytical solution for the finite-state-space case to develop approximations based on nonconvex projected gradient descent, and we demonstrate the resulting bounds empirically.
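To make the pipeline concrete: schematically, the upper bound described in the abstract is a support function of the form

    \bar{\rho}(\pi) \;=\; \sup_{w \in \mathcal{W}} \mathbb{E}\big[ w(S)\, R \big],

where \mathcal{W} is the set of stationary state-occupancy ratios consistent with the confounded data and the sensitivity model (the symbols here are schematic, not the paper's exact notation). The Python sketch below illustrates the two computational ingredients the abstract names, under strong simplifying assumptions: a finite state space, placeholder data (p_hat, r), a sensitivity level Lam, and a polyhedral, hence convex, stand-in for \mathcal{W} built from box bounds 1/Lam <= w <= Lam plus a normalization constraint. It is not the paper's estimator; the names is_member, project, and upper_bound are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative placeholders (assumptions, not the paper's quantities):
# n states, empirical behavior-policy occupancy p_hat, per-state reward
# estimates r, and a sensitivity-model parameter Lam >= 1.
rng = np.random.default_rng(0)
n = 5
p_hat = rng.dirichlet(np.ones(n))   # behavior-policy state occupancy
r = rng.uniform(0.0, 1.0, size=n)   # per-state reward estimates
Lam = 2.0                           # sensitivity level

# Simplified constraint set W: ratios w with 1/Lam <= w_i <= Lam and the
# normalization sum_i p_hat_i * w_i = 1, so w * p_hat is a distribution.
# This polyhedron is a hedged stand-in for the paper's (nonconvex) set.
lo, hi = 1.0 / Lam, Lam

def is_member(w, tol=1e-8):
    # Membership check posed as an LP feasibility problem, mirroring the
    # abstract's claim that membership is checkable by a linear program.
    # With this simplified W there are no free auxiliary variables, so the
    # LP degenerates to verifying the box and normalization constraints.
    A_ub = np.vstack([np.eye(n), -np.eye(n)])
    b_ub = np.concatenate([np.full(n, hi), np.full(n, -lo)])
    res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=p_hat[None, :], b_eq=[1.0],
                  bounds=[(wi - tol, wi + tol) for wi in w])
    return res.status == 0  # status 0: a feasible point was found

def project(w):
    # Projection onto W in the p_hat-weighted Euclidean norm: the solution
    # has the form clip(w + nu, lo, hi) for a scalar dual variable nu,
    # found by bisection on the normalization constraint p_hat @ v = 1.
    lo_nu, hi_nu = lo - w.max() - 1.0, hi - w.min() + 1.0
    for _ in range(100):
        nu = 0.5 * (lo_nu + hi_nu)
        if p_hat @ np.clip(w + nu, lo, hi) < 1.0:
            lo_nu = nu
        else:
            hi_nu = nu
    return np.clip(w + 0.5 * (lo_nu + hi_nu), lo, hi)

def upper_bound(step=0.5, iters=500):
    # Projected gradient ascent on the support-function objective
    # w -> E_phat[w * r] = sum_i p_hat_i * w_i * r_i, started at w = 1.
    # On this convex stand-in the ascent reaches the exact supremum; on
    # the paper's nonconvex set the same scheme is only a local heuristic.
    w = np.ones(n)
    grad = p_hat * r  # gradient of the linear objective, constant in w
    for _ in range(iters):
        w = project(w + step * grad)
    return p_hat @ (w * r), w

ub, w_star = upper_bound()
print(f"upper bound on policy value: {ub:.4f} (member of W: {is_member(w_star)})")
```

The point of the sketch is only the division of labor the abstract describes: an LP-style membership check for the constraint set, and a projected-gradient routine for the support function that gives the bound.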


Related research

07/27/2020 · Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders
Off-policy evaluation (OPE) in reinforcement learning is an important pr...

03/12/2020 · Off-policy Policy Evaluation For Sequential Decisions Under Unobserved Confounding
When observed decisions depend only on observed features, off-policy pol...

09/20/2021 · A Reinforcement Learning Approach to the Stochastic Cutting Stock Problem
We propose a formulation of the stochastic cutting stock problem as a di...

02/01/2023 · Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders
Offline reinforcement learning is important in domains such as medicine,...

02/10/2020 · Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions
Off-policy evaluation in reinforcement learning offers the chance of usi...

02/26/2023 · Kernel Conditional Moment Constraints for Confounding Robust Inference
We study policy evaluation of offline contextual bandits subject to unob...

04/02/2022 · Model-Free and Model-Based Policy Evaluation when Causality is Uncertain
When decision-makers can directly intervene, policy evaluation algorithm...
