Statistical Estimation of Confounded Linear MDPs: An Instrumental Variable Approach

09/12/2022
by Miao Lu, et al.

In a Markov decision process (MDP), unobservable confounders may exist and affect the data-generating process, so that classic off-policy evaluation (OPE) estimators may fail to identify the true value function of the target policy. In this paper, we study the statistical properties of OPE in confounded MDPs with observable instrumental variables. Specifically, we propose a two-stage estimator based on the instrumental variables and establish its statistical properties in confounded MDPs with a linear structure. For the non-asymptotic analysis, we prove an 𝒪(n^{-1/2}) error bound, where n is the number of samples. For the asymptotic analysis, we prove that the two-stage estimator is asymptotically normal with the typical n^{1/2} rate. To the best of our knowledge, we are the first to show such statistical results for the two-stage estimator in confounded linear MDPs via instrumental variables.
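The two-stage idea above follows the classical instrumental-variable recipe: first regress the confounded quantity on the instrument, then use the fitted values in the second-stage regression. As a minimal sketch of that generic two-stage least-squares (2SLS) pattern — on a synthetic scalar example of my own construction, not the paper's linear-MDP setup — consider:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic confounded data (illustration only):
# instrument z affects treatment x but not the outcome y directly,
# while an unobserved confounder u shifts both x and y.
z = rng.normal(size=n)
u = rng.normal(size=n)                        # unobserved confounder
x = 1.5 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true causal effect is 2.0

# Stage 1: regress the treatment on the instrument, keep fitted values.
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress the outcome on the fitted (de-confounded) treatment.
X_hat = np.column_stack([np.ones(n), x_hat])
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

# For contrast: naive OLS on the confounded treatment is biased,
# since u enters both x and y.
X = np.column_stack([np.ones(n), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"2SLS estimate: {beta_2sls[1]:.2f}")  # near the true effect 2.0
print(f"OLS estimate:  {beta_ols[1]:.2f}")   # biased by the confounder
```

The paper's estimator applies this two-stage principle to the Bellman equation of a linear MDP rather than to a single regression, which is what enables the 𝒪(n^{-1/2}) error bound and asymptotic normality stated in the abstract; the sketch only illustrates why conditioning on instrument-predicted values removes confounding bias.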


