Sample Complexity of Policy Gradient Finding Second-Order Stationary Points

12/02/2020
by   Long Yang, et al.

The goal of policy-based reinforcement learning (RL) is to find a maximum of its objective. However, due to the inherent non-concavity of the objective, convergence to a first-order stationary point (FOSP) does not guarantee that policy gradient methods find a maximum: a FOSP may be a minimum or even a saddle point, which is undesirable for RL. Fortunately, if all saddle points are strict, the second-order stationary points (SOSP) are exactly the local maxima. We therefore adopt SOSP, rather than FOSP, as the convergence criterion for characterizing the sample complexity of policy gradient. Our result shows that policy gradient converges to an (ϵ, √(ϵχ))-SOSP with probability at least 1−𝒪(δ) at a total cost of 𝒪(ϵ^(−9/2) / ((1−γ)√χ) · log(1/δ)), where γ∈(0,1). This significantly improves on the state-of-the-art result, which requires 𝒪(ϵ^(−9) χ^(3/2) / δ · log(1/(ϵχ))). Our analysis rests on the key idea of decomposing the parameter space ℝ^p into three disjoint regions (non-stationary points, saddle points, and locally optimal points) and then making a local improvement of the RL objective in each region. This technique can potentially be generalized to a broad class of policy gradient methods.
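The three-region decomposition can be illustrated with a small classifier over first- and second-order information at a parameter point. This is a minimal sketch, not the paper's algorithm: the function name `classify_point` and the specific thresholds (gradient norm ϵ for stationarity, √(ϵχ) on the largest Hessian eigenvalue for the saddle test, in the maximization setting) are illustrative assumptions.

```python
import numpy as np

def classify_point(grad, hess, eps, chi):
    """Assign a parameter point to one of three disjoint regions
    (illustrative sketch; maximization setting, so a saddle direction
    corresponds to a sufficiently positive Hessian eigenvalue)."""
    if np.linalg.norm(grad) > eps:
        return "non-stationary"          # gradient step still makes progress
    lam_max = np.max(np.linalg.eigvalsh(hess))
    if lam_max > np.sqrt(eps * chi):
        return "saddle"                  # an escape direction exists
    return "approximate-SOSP"            # (eps, sqrt(eps*chi))-SOSP

# Toy usage on f(x, y) = x^2 - y^2 (to be maximized):
# at the origin the gradient vanishes but the Hessian has eigenvalue +2,
# so the origin is classified as a saddle point.
print(classify_point(np.zeros(2), np.diag([2.0, -2.0]), eps=0.01, chi=1.0))
```

The first branch corresponds to the region where a plain gradient step improves the objective; the second to the region where escaping the saddle drives the improvement; the third is the convergence target.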


