A Minimax Learning Approach to Off-Policy Evaluation in Partially Observable Markov Decision Processes

11/12/2021
by   Chengchun Shi, et al.

We consider off-policy evaluation (OPE) in Partially Observable Markov Decision Processes (POMDPs), where the evaluation policy depends only on observable variables while the behavior policy depends on unobservable latent variables. Existing works either assume no unmeasured confounders, or focus on settings where both the observation and the state spaces are tabular. As such, these methods suffer from either a large bias in the presence of unmeasured confounders, or a large variance in settings with continuous or large observation/state spaces. In this work, we first propose novel identification methods for OPE in POMDPs with latent confounders, by introducing bridge functions that link the target policy's value to the observed data distribution. In fully observable MDPs, these bridge functions reduce to the familiar value functions and marginal density ratios between the evaluation and the behavior policies. We next propose minimax estimation methods for learning these bridge functions. Our proposal permits general function approximation and is thus applicable to settings with continuous or large observation/state spaces. Finally, we construct three estimators based on these estimated bridge functions: a value function-based estimator, a marginalized importance sampling estimator, and a doubly robust estimator. Their nonasymptotic and asymptotic properties are investigated in detail.
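To make the three estimator families concrete: in the fully observable, unconfounded case the abstract mentions, the value function-based (direct), importance sampling, and doubly robust estimators take familiar forms. The sketch below illustrates them on a toy single-step (contextual-bandit-style) problem with synthetic data; the policies, rewards, and variable names are illustrative assumptions, not the paper's POMDP construction, which additionally requires the bridge functions to handle latent confounding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-step setup: 3 actions, known behavior and evaluation policies.
n, n_actions = 5000, 3
behavior = np.array([0.5, 0.3, 0.2])   # behavior policy pi_b(a)
target = np.array([0.2, 0.3, 0.5])     # evaluation policy pi_e(a)
true_mean = np.array([1.0, 2.0, 3.0])  # E[r | a]

a = rng.choice(n_actions, size=n, p=behavior)
r = true_mean[a] + rng.normal(0.0, 1.0, size=n)

# (1) Value function-based (direct method): plug a fitted reward model into pi_e.
q_hat = np.array([r[a == k].mean() for k in range(n_actions)])
dm = float((target * q_hat).sum())

# (2) Importance sampling: reweight observed rewards by the policy density ratio.
w = target[a] / behavior[a]
ips = float((w * r).mean())

# (3) Doubly robust: model prediction plus a reweighted residual correction;
#     consistent if either the model or the ratio is correct.
dr = dm + float((w * (r - q_hat[a])).mean())

truth = float((target * true_mean).sum())  # ground-truth policy value: 2.3
```

All three estimates converge to the same policy value here; the paper's contribution is recovering analogous quantities in POMDPs, where the behavior policy conditions on latent state and naive versions of these estimators are biased.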
