Minimax Confidence Interval for Off-Policy Evaluation and Policy Optimization

02/06/2020
by Nan Jiang, et al.

We study minimax methods for off-policy evaluation (OPE) using value functions and marginalized importance weights. Although these methods hold the promise of overcoming the exponential variance of traditional importance sampling, several key problems remain: (1) They require function approximation and are generally biased. For the sake of trustworthy OPE, is there any way to quantify the biases? (2) They are split into two styles ("weight-learning" vs. "value-learning"). Can we unify them? In this paper we answer both questions positively. By slightly altering the derivation of previous methods (one from each style; Uehara and Jiang, 2019), we unify them into a single confidence interval (CI) that automatically comes with a special type of double robustness: when either the value-function class or the importance-weight class is well specified, the CI is valid, and its length quantifies the misspecification of the other class. We can also tell which class is misspecified, which provides useful diagnostic information for the design of function approximation. Our CI also provides a unified view of, and new insights into, several recent methods: for example, one side of the CI recovers a version of AlgaeDICE (Nachum et al., 2019b), and we show that the two sides need to be used together; either side alone, used as a point estimate, may incur doubled approximation error. We further examine the potential of applying these bounds to two long-standing problems: off-policy policy optimization with poor data coverage (i.e., exploitation) and efficient exploration. With a well-specified value-function class, we show that optimizing the lower and the upper bounds leads to good exploitation and exploration, respectively.
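To make the construction concrete, the following is a hedged sketch in the notation of the minimax weight/value-learning framework of Uehara and Jiang (2019); the exact normalization of the initial-state term (e.g., a (1 - gamma) factor) and the precise correspondence to AlgaeDICE should be taken from the full paper rather than from this summary. Writing d_0 for the initial-state distribution, mu for the data distribution, J(pi) for the value of the target policy pi, and Q^pi and w^pi for the true value function and marginalized importance weight, a candidate value function q and a candidate weight w are coupled through a Lagrangian of the form

L(w,q) \;=\; \mathbb{E}_{s_0\sim d_0,\,a_0\sim\pi}\big[q(s_0,a_0)\big] \;+\; \mathbb{E}_{(s,a,r,s')\sim\mu}\Big[w(s,a)\big(r+\gamma\,\mathbb{E}_{a'\sim\pi}[q(s',a')]-q(s,a)\big)\Big].

If the value-function class Q contains Q^pi, the Bellman residual inside L vanishes in expectation at q = Q^pi, so L(w, Q^pi) = J(pi) for every w; bracketing over q and then tightening over w gives

\max_{w\in\mathcal{W}}\,\min_{q\in\mathcal{Q}} L(w,q) \;\le\; J(\pi) \;\le\; \min_{w\in\mathcal{W}}\,\max_{q\in\mathcal{Q}} L(w,q),

and the symmetric bounds (with the roles of q and w exchanged) hold when W contains w^pi. Below is a minimal Python sketch of the sample-based interval using brute-force search over small finite candidate classes; the toy data, the tabular classes, and all constants are synthetic placeholders, not the paper's construction or experiments.

# Illustrative sketch (not the authors' implementation): estimate the minimax
# interval by brute force over small finite candidate classes Q and W, using
# sample averages of L(w, q) as defined above (up to the normalization of J(pi)).
import numpy as np

gamma = 0.9
rng = np.random.default_rng(0)

# Toy logged data: transitions (s, a, r, s') on a synthetic 3-state, 2-action MDP.
n, nS, nA = 500, 3, 2
S  = rng.integers(0, nS, size=n)
A  = rng.integers(0, nA, size=n)
R  = rng.normal(size=n)
S2 = rng.integers(0, nS, size=n)
s0 = rng.integers(0, nS, size=n)          # draws from the initial-state distribution
pi = np.full((nS, nA), 1.0 / nA)          # stand-in target policy (uniform)

def lagrangian(w, q):
    """Sample-average Lagrangian L(w, q); w and q are (nS, nA) tables."""
    v = (pi * q).sum(axis=1)              # v(s) = E_{a ~ pi}[q(s, a)]
    init_term = v[s0].mean()
    bellman_residual = R + gamma * v[S2] - q[S, A]
    return init_term + (w[S, A] * bellman_residual).mean()

# Small finite candidate classes (hypothetical random tables, for illustration only).
Q_class = [rng.normal(size=(nS, nA)) for _ in range(20)]
W_class = [np.abs(rng.normal(size=(nS, nA))) for _ in range(20)]

# Interval that is valid when Q_class is well specified (contains Q^pi):
# for every w, J(pi) is bracketed by min_q L(w, q) and max_q L(w, q),
# so we keep the tightest such bracket over w.
lower = max(min(lagrangian(w, q) for q in Q_class) for w in W_class)
upper = min(max(lagrangian(w, q) for q in Q_class) for w in W_class)

# If lower > upper, the interval is empty; under this construction that can only
# happen when Q_class fails to contain Q^pi, i.e., it signals misspecification
# (likely here, since the random tables need not contain Q^pi).
print(f"estimated interval: [{lower:.3f}, {upper:.3f}]")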
