Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Improving the sample efficiency in reinforcement learning has been a long-standing research problem. In this work, we aim to reduce the sample complexity of existing policy gradient methods. We propose a novel policy gradient algorithm called SRVR-PG, which only requires O(1/ϵ^{3/2}) episodes to find an ϵ-approximate stationary point of the nonconcave performance function J(θ) (i.e., θ such that ‖∇J(θ)‖_2^2 ≤ ϵ). This sample complexity improves the best known result of O(1/ϵ^{5/3}) for policy gradient algorithms by a factor of O(1/ϵ^{1/6}). In addition, we propose a variant of SRVR-PG with parameter exploration, which samples the initial policy parameter from a prior probability distribution. We conduct numerical experiments on classic control problems in reinforcement learning to validate the performance of our proposed algorithms.
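The abstract names the recursive variance-reduction idea but not the update rule itself. As a rough illustration only, the following Python sketch applies a SARAH/SPIDER-style recursive gradient estimator with importance weighting to a toy one-step Gaussian-policy problem; the environment, the helper names (rollout, grad_est, iw), and all constants are assumptions made for this sketch, not the paper's setting.

```python
import numpy as np

# Toy illustration of the recursive variance-reduction idea, not the paper's
# algorithm verbatim. The one-step Gaussian-policy "bandit", the helper names,
# and the constants below are assumptions; the paper treats general episodic
# MDPs with REINFORCE/GPOMDP-style estimators.

rng = np.random.default_rng(0)
SIGMA, TARGET = 1.0, 3.0          # policy std-dev and reward peak (toy choices)

def rollout(theta, n):
    """Sample n one-step episodes: action a ~ N(theta, SIGMA^2), reward -(a-TARGET)^2."""
    a = rng.normal(theta, SIGMA, size=n)
    return a, -(a - TARGET) ** 2

def grad_est(theta, a, r):
    """Per-episode policy gradient estimate: d/dtheta log N(a; theta, SIGMA^2) * r."""
    return (a - theta) / SIGMA ** 2 * r

def iw(theta_new, theta_old, a):
    """Importance weight pi_old(a) / pi_new(a) for the Gaussian policy."""
    return np.exp(((a - theta_new) ** 2 - (a - theta_old) ** 2) / (2 * SIGMA ** 2))

def srvr_pg_sketch(theta0=0.0, epochs=20, inner=5, N=200, B=20, lr=0.05):
    theta = theta0
    for _ in range(epochs):
        # Epoch anchor: a large batch of N episodes gives a reference gradient.
        a, r = rollout(theta, N)
        v = grad_est(theta, a, r).mean()
        theta_prev, theta = theta, theta + lr * v        # gradient ascent on J(theta)
        for _ in range(inner):
            # Recursive correction from a small batch of B episodes: reuse the
            # previous estimate v and correct it with the gradient difference,
            # importance-weighting the term evaluated at the old parameters.
            a, r = rollout(theta, B)
            v += (grad_est(theta, a, r)
                  - iw(theta, theta_prev, a) * grad_est(theta_prev, a, r)).mean()
            theta_prev, theta = theta, theta + lr * v
    return theta

print(srvr_pg_sketch())   # drifts toward TARGET = 3.0 on this toy problem
```

The point of the recursion is that each inner step reuses the running estimate v and only corrects it with a small batch, rather than recomputing a full gradient estimate, which is how the per-episode cost is kept low between the large reference batches.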