Convergence of policy gradient for entropy regularized MDPs with neural network approximation in the mean-field regime

01/18/2022
by Bekzhan Kerimkulov, et al.

We study the global convergence of policy gradient for infinite-horizon, continuous state- and action-space, entropy-regularized Markov decision processes (MDPs). We consider a softmax policy with a (one-hidden-layer) neural network approximation in the mean-field regime. An additional entropic regularization on the associated mean-field probability measure is introduced, and the corresponding gradient flow is studied in the 2-Wasserstein metric. We show that the objective function is increasing along the gradient flow. Further, we prove that if the regularization in terms of the mean-field measure is sufficient, the gradient flow converges exponentially fast to the unique stationary solution, which is the unique maximizer of the regularized MDP objective. Lastly, we study the sensitivity of the value function along the gradient flow with respect to the regularization parameters and the initial condition. Our results rely on a careful analysis of the non-linear Fokker–Planck–Kolmogorov equation and extend the pioneering works of Mei et al. (2020) and Agarwal et al. (2020), which quantify the global convergence rate of policy gradient for entropy-regularized MDPs in the tabular setting.
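As a rough illustration of the setting described above, the sketch below trains a softmax policy whose logits come from a one-hidden-layer network in mean-field scaling, and discretizes the entropically regularized 2-Wasserstein gradient flow as noisy (Langevin-type) updates on the hidden-unit particles. It is a minimal sketch under strong simplifying assumptions, not the paper's algorithm: a single-state (bandit-like) surrogate with a discretized action grid stands in for the infinite-horizon continuous MDP, and the reward r(a), the weights tau (policy entropy), sigma (mean-field-measure entropy), the step size lr, and the particle count N are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate (illustrative only): single state, discretized action grid.
actions = np.linspace(-1.0, 1.0, 101)                     # action grid
reward = -(actions - 0.5) ** 2                            # hypothetical reward r(a)
phi = np.stack([actions, np.ones_like(actions)], axis=1)  # features phi(a) = (a, 1)

tau = 0.05    # entropy regularization of the policy (illustrative)
sigma = 0.01  # entropic regularization of the mean-field parameter measure (illustrative)
lr = 0.1      # step size of the discretized gradient flow
N = 256       # number of particles (hidden neurons)

# Particles theta_i = (c_i, w_i); mean-field network f(a) = (1/N) sum_i c_i tanh(w_i . phi(a)).
c = rng.normal(size=N)
W = rng.normal(size=(N, 2))

def softmax_policy(f, tau):
    z = f / tau
    z -= z.max()          # numerical stabilization
    p = np.exp(z)
    return p / p.sum()

for step in range(3000):
    h = np.tanh(W @ phi.T)            # (N, |A|) hidden activations
    f = (c @ h) / N                   # network logits f(a)
    pi = softmax_policy(f, tau)
    adv = reward - tau * np.log(pi)   # soft "advantage" r(a) - tau * log pi(a)
    # Exact gradient of J(pi) = E_pi[r] + tau * H(pi) with respect to the softmax logits.
    dJ_df = (pi / tau) * (adv - np.dot(pi, adv))

    # Chain rule to the particles; the 1/N mean-field scaling cancels against the
    # factor N in the per-particle velocity of the Wasserstein gradient flow.
    grad_c = h @ dJ_df                                             # (N,)
    grad_W = (c[:, None] * (1.0 - h ** 2) * dJ_df[None, :]) @ phi  # (N, 2)

    # Noisy (Langevin-type) ascent: particle discretization of the 2-Wasserstein
    # gradient flow with entropic regularization sigma on the parameter measure.
    c += lr * grad_c + np.sqrt(2.0 * lr * sigma) * rng.normal(size=N)
    W += lr * grad_W + np.sqrt(2.0 * lr * sigma) * rng.normal(size=(N, 2))

pi = softmax_policy((c @ np.tanh(W @ phi.T)) / N, tau)
J = np.dot(pi, reward) - tau * np.dot(pi, np.log(pi))
print(f"regularized objective J = {J:.4f}, modal action ~ {actions[pi.argmax()]:.2f}")
```

The Gaussian term of scale sqrt(2 * lr * sigma) is what the entropic regularization on the parameter measure contributes at the particle level; setting sigma = 0 reduces the update to plain mean-field policy gradient ascent on the entropy-regularized objective.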


Related research

- Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime (10/22/2020): We study the problem of policy optimization for infinite-horizon discoun...
- A unified view of entropy-regularized Markov decision processes (05/22/2017): We propose a general framework for entropy-regularized average-reward re...
- Linear Convergence of Entropy-Regularized Natural Policy Gradient with Linear Function Approximation (06/08/2021): Natural policy gradient (NPG) methods with function approximation achiev...
- Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory (06/08/2020): Temporal-difference and Q-learning play a key role in deep reinforcement...
- Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs (06/20/2023): We study the problem of computing an optimal policy of an infinite-horiz...
- Softmax Policy Gradient Methods Can Take Exponential Time to Converge (02/22/2021): The softmax policy gradient (PG) method, which performs gradient ascent ...
- Doubly Regularized Entropic Wasserstein Barycenters (03/21/2023): We study a general formulation of regularized Wasserstein barycenters th...
