Geometry and convergence of natural policy gradient methods

11/03/2022
by Johannes Müller et al.

We study the convergence of several natural policy gradient (NPG) methods in infinite-horizon discounted Markov decision processes with regular policy parametrizations. For a variety of NPGs and reward functions we show that the trajectories in state-action space are solutions of gradient flows with respect to Hessian geometries, based on which we obtain global convergence guarantees and convergence rates. In particular, we show linear convergence for unregularized and regularized NPG flows with the metrics proposed by Kakade and by Morimura and co-authors, by observing that these arise from the Hessian geometries of conditional entropy and entropy, respectively. Further, we obtain sublinear convergence rates for Hessian geometries arising from other convex functions, such as log-barriers. Finally, we interpret the discrete-time NPG methods with regularized rewards as inexact Newton methods whenever the NPG is defined with respect to the Hessian geometry of the regularizer. This yields local quadratic convergence of these methods when the step size equals the penalization strength.
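
As a rough sketch of the geometric picture (the notation below is ours, not taken verbatim from the paper): write $\mu$ for the discounted state-action distribution, $R(\mu) = \sum_{s,a} \mu(s,a)\, r(s,a)$ for the reward objective, which is linear in $\mu$, and $\phi$ for a strictly convex potential. A gradient flow with respect to the Hessian geometry induced by $\phi$ then takes the form

\[
\dot{\mu}_t \;=\; \nabla^2 \phi(\mu_t)^{-1}\, \nabla R(\mu_t),
\]

restricted to the set of feasible state-action distributions. For the negative entropy $\phi(\mu) = \sum_{s,a} \mu(s,a) \log \mu(s,a)$ the Hessian is $\operatorname{diag}(\mu)^{-1}$, i.e. the Fisher-Rao metric, which the abstract associates with the metric of Morimura and co-authors; the negative conditional entropy $\phi(\mu) = \sum_{s,a} \mu(s,a) \log\!\big( \mu(s,a) / \sum_{a'} \mu(s,a') \big)$ plays the analogous role for Kakade's metric. The Newton interpretation can be read off from the same picture: for a regularized objective $R(\mu) - \lambda\, \phi(\mu)$, linearity of $R$ in $\mu$ makes the objective's Hessian equal to $-\lambda\, \nabla^2 \phi(\mu)$, so an NPG step preconditioned by $\nabla^2 \phi(\mu)^{-1}$ with step size matched to the penalization strength $\lambda$ is, up to the inexactness discussed in the paper, a Newton step.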


Related research

10/04/2022: Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies
We consider infinite-horizon discounted Markov decision processes and st...

01/19/2022: On the Convergence Rates of Policy Gradient Methods
We consider infinite-horizon discounted Markov decision problems with fi...

07/29/2015: A Gauss-Newton Method for Markov Decision Processes
Approximate Newton methods are a standard optimization tool which aim to...

06/06/2022: Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs
We study sequential decision making problems aimed at maximizing the exp...

11/01/2022: Convergence of policy gradient methods for finite-horizon stochastic linear-quadratic control problems
We study the global linear convergence of policy gradient (PG) methods f...

01/19/2022: Critic Algorithms using Cooperative Networks
An algorithm is proposed for policy evaluation in Markov Decision Proces...

08/02/2023: Subgradient Langevin Methods for Sampling from Non-smooth Potentials
This paper is concerned with sampling from probability distributions π o...
