On Instrumental Variable Regression for Deep Offline Policy Evaluation

05/21/2021
by Yutian Chen, et al.

We show that the popular reinforcement learning (RL) strategy of estimating the state-action value (Q-function) by minimizing the mean squared Bellman error leads to a regression problem with confounding: the inputs and the output noise are correlated. Hence, direct minimization of the Bellman error can result in significantly biased Q-function estimates. We explain why fixing the target Q-network in Deep Q-Networks and Fitted Q Evaluation provides a way of overcoming this confounding, thus shedding new light on this popular but not well understood trick in the deep RL literature. An alternative approach to addressing confounding is to leverage techniques developed in the causality literature, notably instrumental variables (IV). Here we bring together the IV and RL literatures by investigating whether IV approaches can lead to improved Q-function estimates. This paper analyzes and compares a wide range of recent IV methods in the context of offline policy evaluation (OPE), where the goal is to estimate the value of a policy using logged data only. By applying different IV techniques to OPE, we are not only able to recover previously proposed OPE methods, such as model-based techniques, but also to obtain competitive new techniques. We find empirically that state-of-the-art OPE methods are closely matched in performance by some IV methods, such as AGMM, that were not developed for OPE. We open-source all our code and datasets at https://github.com/liyuan9988/IVOPEwithACME.
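
The core claim, that direct minimization of the mean squared Bellman error is a confounded regression while instrumenting with the current features (equivalently, freezing the target network) restores consistency, can be illustrated on a toy example. The sketch below is not from the paper's codebase; the two-state Markov reward process, the one-hot features, and all variable names are illustrative assumptions. It contrasts ordinary least squares on the Bellman residual with the classical LSTD estimator, which is exactly the two-stage least squares solution using phi(s) as the instrument.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state Markov reward process under a fixed target policy.
gamma = 0.9
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])       # transition probabilities
r_mean = np.array([1.0, -1.0])   # expected reward per state

# Ground-truth values solve (I - gamma * P) v = r_mean.
v_true = np.linalg.solve(np.eye(2) - gamma * P, r_mean)

# Logged transitions (s, r, s') with noisy rewards.
n = 50_000
s = rng.integers(0, 2, size=n)
s_next = (rng.random(n) < P[s, 1]).astype(int)   # sample s' ~ P(. | s)
r = r_mean[s] + rng.normal(scale=1.0, size=n)

Phi = np.eye(2)[s]               # one-hot features phi(s)
Phi_next = np.eye(2)[s_next]     # one-hot features phi(s')

# (a) Direct mean squared Bellman error minimization: ordinary least
# squares of r on (phi(s) - gamma * phi(s')). Because phi(s') is a noisy
# sample of E[phi(s') | s], the regressor shares noise with the residual,
# so the estimate is biased: the confounding discussed in the paper.
X = Phi - gamma * Phi_next
w_mse = np.linalg.lstsq(X, r, rcond=None)[0]

# (b) IV estimate: use phi(s) as the instrument for the noisy regressor.
# This is the classical LSTD estimator, and also the fixed point of
# fitted value iteration with a frozen target.
w_iv = np.linalg.solve(Phi.T @ X, Phi.T @ r)

print("true values :", v_true)
print("direct MSBE :", w_mse)   # systematically biased
print("IV / LSTD   :", w_iv)    # consistent
```

Iterating w_{k+1} = argmin_w ||Phi w - (r + gamma * Phi_next w_k)||^2, i.e., regressing against a frozen target, converges (when the iteration is stable) to the same w_iv, which is one way to see why the fixed-target trick in Deep Q-Networks and Fitted Q Evaluation sidesteps the confounding.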

Related research

06/19/2021
Boosting Offline Reinforcement Learning with Residual Generative Modeling
Offline reinforcement learning (RL) tries to learn the near-optimal poli...

06/04/2022
Hybrid Value Estimation for Off-policy Evaluation and Offline Reinforcement Learning
Value function estimation is an indispensable subroutine in reinforcemen...

07/21/2023
Model-based Offline Reinforcement Learning with Count-based Conservatism
In this paper, we propose a model-based offline reinforcement learning m...

08/28/2018
High-confidence error estimates for learned value functions
Estimating the value function for a fixed policy is a fundamental proble...

06/15/2018
Instrumental variables regression
IV regression in the context of a re-sampling is considered in the work...

06/01/2023
Improving and Benchmarking Offline Reinforcement Learning Algorithms
Recently, Offline Reinforcement Learning (RL) has achieved remarkable pr...

11/30/2020
IV-Posterior: Inverse Value Estimation for Interpretable Policy Certificates
Model-free reinforcement learning (RL) is a powerful tool to learn a bro...
