Momentum-Based Policy Gradient Methods

07/13/2020
by Feihu Huang, et al.

In this paper, we propose a class of efficient momentum-based policy gradient methods for model-free reinforcement learning, which use adaptive learning rates and do not require large batches. Specifically, we propose a fast importance-sampling momentum-based policy gradient (IS-MBPG) method based on a new momentum-based variance reduction technique and the importance sampling technique. We also propose a fast Hessian-aided momentum-based policy gradient (HA-MBPG) method based on the momentum-based variance reduction technique and the Hessian-aided technique. Moreover, we prove that both the IS-MBPG and HA-MBPG methods reach the best known sample complexity of O(ϵ^-3) for finding an ϵ-stationary point of the non-concave performance function, while requiring only one trajectory per iteration. In addition, we present a non-adaptive version of the IS-MBPG method, i.e., IS-MBPG*, which also reaches the best known sample complexity of O(ϵ^-3) without any large batches. In the experiments, we evaluate our algorithms on four benchmark tasks to demonstrate their effectiveness.
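To illustrate the flavor of a momentum-based variance-reduced policy gradient update with importance-sampling correction, the sketch below applies an IS-MBPG-style estimator to a toy two-armed bandit (a one-step MDP, so each "trajectory" is a single action and reward). The bandit environment, the constants k, m, c, and the single-step setting are illustrative assumptions for this sketch, not the paper's algorithmic details or experimental setup.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: a 2-armed bandit, so one trajectory = one action + reward.
TRUE_MEANS = np.array([0.2, 0.8])

def softmax(theta):
    e = np.exp(theta - np.max(theta))
    return e / e.sum()

def sample_trajectory(theta):
    """Sample one action from the softmax policy and observe a reward."""
    probs = softmax(theta)
    a = rng.choice(len(theta), p=probs)
    r = rng.normal(TRUE_MEANS[a], 0.1)
    return a, r

def pg_estimate(theta, a, r):
    """REINFORCE-style policy gradient estimate from one trajectory."""
    grad_log_prob = -softmax(theta)
    grad_log_prob[a] += 1.0
    return r * grad_log_prob

def importance_weight(theta_old, theta_new, a):
    """Likelihood ratio of the sampled action under the old vs. current policy."""
    return softmax(theta_old)[a] / softmax(theta_new)[a]

# Adaptive schedule in the spirit of the paper; exact constants are guesses.
k, m, c = 0.5, 4.0, 2.0

theta_prev = np.zeros(2)
theta = np.zeros(2)
u = None  # momentum-based variance-reduced gradient estimate

for t in range(1, 501):
    a, r = sample_trajectory(theta)        # only one trajectory per iteration
    g_new = pg_estimate(theta, a, r)
    eta = k / (m + t) ** (1.0 / 3.0)       # adaptive step size
    beta = min(1.0, c * eta ** 2)          # momentum coefficient

    if u is None:
        u = g_new                          # plain estimate at the first step
    else:
        # Re-evaluate the same trajectory under the previous policy, corrected
        # by the importance weight so the off-policy gradient is unbiased.
        w = importance_weight(theta_prev, theta, a)
        g_old = w * pg_estimate(theta_prev, a, r)
        u = beta * g_new + (1.0 - beta) * (u + g_new - g_old)

    theta_prev = theta.copy()
    theta = theta + eta * u                # gradient ascent on expected reward

print("final policy:", softmax(theta))     # should favor the better arm

Running the sketch, the policy concentrates on the higher-reward arm; the key structural point is the recursive estimator u, which blends the fresh single-trajectory gradient with an importance-weighted correction of the previous estimate, so no large batches are needed.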
