On the Convergence Rate of Off-Policy Policy Optimization Methods with Density-Ratio Correction

06/02/2021
by   Jiawei Huang, et al.

In this paper, we study the convergence properties of off-policy policy improvement algorithms with state-action density ratio correction under the function approximation setting, where the objective function is formulated as a max-max-min optimization problem. We characterize the bias of the learning objective and present two strategies with finite-time convergence guarantees. In our first strategy, we present the algorithm P-SREDA with convergence rate O(ϵ^-3), whose dependence on ϵ is optimal. In our second strategy, we propose a new off-policy actor-critic-style algorithm named O-SPIM. We prove that O-SPIM converges to a stationary point with total complexity O(ϵ^-4), which matches the convergence rate of some recent actor-critic algorithms in the on-policy setting.
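For orientation, a max-max-min objective with density-ratio correction of the kind described above can be written schematically as follows. This sketch follows the standard Lagrangian form from the density-ratio off-policy evaluation literature; the specific symbols (policy pi, density ratio w, critic f, behavior distribution d^mu, initial distribution rho) and the exact form of L are illustrative assumptions, not details taken from the abstract.

% Schematic max-max-min objective (illustrative, not the paper's exact formulation):
% maximize over the policy \pi and the density ratio w, minimize over a critic f.
\[
\max_{\pi}\;\max_{w\in\mathcal{W}}\;\min_{f\in\mathcal{F}}\; L(\pi, w, f),
\]
\[
L(\pi, w, f)
= \mathbb{E}_{(s,a,s')\sim d^{\mu}}\!\Big[ w(s,a)\,\big(r(s,a) + \gamma\,\mathbb{E}_{a'\sim\pi(\cdot\mid s')}[f(s',a')] - f(s,a)\big)\Big]
+ (1-\gamma)\,\mathbb{E}_{s_0\sim\rho,\,a_0\sim\pi(\cdot\mid s_0)}[f(s_0,a_0)].
\]

Here w plays the role of the state-action density ratio between the target policy's discounted occupancy and the behavior data distribution d^mu, and f acts as a critic enforcing the Bellman flow constraint; the outer maximization over pi is the policy improvement step.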
