The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Theoretical guarantees in reinforcement learning (RL) are known to suffer from multiplicative blow-up factors with respect to the misspecification error of function approximation. Yet the nature of such approximation factors, especially their optimal form in a given learning problem, is poorly understood. In this paper we study this question in linear off-policy value function estimation, where many open questions remain. We study the approximation factor across a broad spectrum of settings: the weighted L_2 norm (where the weighting is the offline state distribution), the L_∞ norm, the presence vs. absence of state aliasing, and full vs. partial coverage of the state space. We establish the optimal asymptotic approximation factors (up to constants) for all of these settings. In particular, our bounds identify two instance-dependent factors for the L_2(μ) norm and only one for the L_∞ norm, which we show dictate the hardness of off-policy evaluation under misspecification.
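As a point of reference (a schematic illustration, not the paper's exact statement), guarantees of this kind typically take the following form, where φ denotes the linear feature map, μ the offline state distribution, V^π the target value function, V̂ the estimate, and α an instance-dependent approximation factor; all symbols here are illustrative placeholders:

\[
\big\| \hat{V} - V^{\pi} \big\|_{2,\mu}
\;\le\;
\alpha \cdot \inf_{\theta} \big\| \langle \theta, \phi(\cdot) \rangle - V^{\pi} \big\|_{\infty}
\;+\; o(1),
\]

where the infimum is the misspecification error of the linear function class, α is the multiplicative blow-up factor whose optimal value the paper characterizes, and the o(1) term collects statistical error that vanishes with more data.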