Near Optimal Provable Uniform Convergence in Off-Policy Evaluation for Reinforcement Learning

07/07/2020
by   Ming Yin, et al.

Off-policy evaluation (OPE) aims at estimating the performance of a target policy π using offline data collected by a logging policy μ. OPE has been studied intensively, and the recent marginalized importance sampling (MIS) approach achieves sample-efficient OPE. However, little is known about whether uniform convergence guarantees for OPE can be obtained efficiently. In this paper, we consider this new question and, for the first time, reveal the comprehensive relationship between OPE and offline learning. For the global policy class, using the fully model-based OPE estimator, our best result achieves ϵ-uniform convergence with complexity O(H^3·min(S,H)/d_mϵ^2), where d_m is an instance-dependent quantity determined by μ. This result is only one factor away from our uniform convergence lower bound, up to logarithmic factors. For the local policy class, ϵ-uniform convergence is achieved with the optimal complexity O(H^3/d_mϵ^2) in the off-policy setting. This result complements the work on sparse model-based planning (Agarwal et al. 2019) with a generative model. Lastly, an interesting corollary of our intermediate results yields a refined analysis of the simulation lemma.
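To make the "fully model-based OPE estimator" concrete, here is a minimal sketch of the standard tabular recipe it builds on: fit an empirical transition and reward model from the offline data, then evaluate the target policy π by backward induction under that model. This is an illustrative implementation under assumed conventions (a time-homogeneous model fit, episodes given as (s, a, r, s') tuples), not the paper's exact estimator; the function name `model_based_ope` and its signature are hypothetical.

```python
import numpy as np

def model_based_ope(trajectories, pi, S, A, H):
    """Sketch of a fully model-based OPE estimator for a tabular MDP.

    trajectories: list of episodes, each a list of (s, a, r, s_next) tuples
                  logged by some behavior policy mu
    pi: array of shape (H, S, A); pi[h, s, a] is the target policy's
        probability of action a in state s at step h
    Returns the estimated H-step value of pi at each initial state.
    """
    # Fit the empirical model (counts pooled over steps: time-homogeneous fit).
    counts = np.zeros((S, A, S))
    rew_sum = np.zeros((S, A))
    for episode in trajectories:
        for (s, a, r, s_next) in episode:
            counts[s, a, s_next] += 1
            rew_sum[s, a] += r
    n_sa = counts.sum(axis=2)                       # visit counts n(s, a)
    denom = np.maximum(n_sa, 1)                     # avoid division by zero
    P_hat = counts / denom[:, :, None]              # empirical transitions
    r_hat = rew_sum / denom                         # empirical mean rewards

    # Evaluate pi under the estimated model by backward induction.
    V = np.zeros(S)
    for h in reversed(range(H)):
        Q = r_hat + P_hat @ V                       # Q[s, a] at step h
        V = (pi[h] * Q).sum(axis=1)                 # V[s] = E_{a ~ pi}[Q[s, a]]
    return V
```

The point of the paper's uniform-convergence question is that this single fitted model (P_hat, r_hat) is reused to evaluate *every* policy π in the class simultaneously, which is exactly the regime where uniform (rather than pointwise) error guarantees matter.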


Related research

05/13/2021 · Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
This work studies the statistical limits of uniform convergence for offl...

03/07/2023 · On the Sample Complexity of Vanilla Model-Based Offline Reinforcement Learning with Dependent Samples
Offline reinforcement learning (offline RL) considers problems where lea...

01/29/2020 · Asymptotically Efficient Off-Policy Evaluation for Tabular Reinforcement Learning
We consider the problem of off-policy evaluation for reinforcement learn...

07/13/2022 · A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP
As an important framework for safe Reinforcement Learning, the Constrain...

07/13/2023 · An Improved Uniform Convergence Bound with Fat-Shattering Dimension
The fat-shattering dimension characterizes the uniform convergence prope...

02/21/2017 · Sample Efficient Policy Search for Optimal Stopping Domains
Optimal stopping problems consider the question of deciding when to stop...

01/05/2021 · Off-Policy Evaluation of Slate Policies under Bayes Risk
We study the problem of off-policy evaluation for slate bandits, for the...
