Homotopic Policy Mirror Descent: Policy Convergence, Implicit Regularization, and Improved Sample Complexity

01/24/2022
by Yan Li, et al.

We propose the homotopic policy mirror descent (HPMD) method for solving discounted, infinite-horizon MDPs with finite state and action spaces, and study its policy convergence. We report three properties that appear to be new in the policy gradient literature: (1) HPMD exhibits global linear convergence of the value optimality gap, and local superlinear convergence of the policy to the set of optimal policies with order γ^-2. The superlinear convergence of the policy takes effect after at most 𝒪(log(1/Δ^*)) iterations, where Δ^* is a gap quantity associated with the optimal state-action value function; (2) HPMD also exhibits last-iterate convergence of the policy, with the limiting policy being exactly the optimal policy of maximal entropy at every state. Since no regularization is added to the optimization objective, this behavior arises purely as an algorithmic property of the homotopic policy gradient method; (3) for the stochastic HPMD method, assuming a generative model for policy evaluation, we further establish a sample complexity better than 𝒪(|𝒮| |𝒜| / ϵ^2) for small optimality gap ϵ.
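To make the update concrete, below is a minimal tabular sketch reconstructed from the abstract's description: a KL-based policy mirror descent step whose entropy-regularization weight τ_k is driven to zero along the iterations (the homotopy), applied to the unregularized objective. The toy MDP, the step size η_k = 1, and the schedule τ_k = 1/(k+1) are illustrative assumptions, not the schedules analyzed in the paper.

```python
# Illustrative HPMD sketch on a tabular MDP (a reconstruction from the
# abstract; step-size and regularization schedules are assumptions).
import numpy as np

def q_function(P, r, pi, gamma):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = r_pi,
    then Q(s, a) = r(s, a) + gamma * sum_{s'} P(s, a, s') V(s')."""
    S = r.shape[0]
    P_pi = np.einsum("sab,sa->sb", P, pi)   # state transitions under pi
    r_pi = np.einsum("sa,sa->s", r, pi)     # expected reward under pi
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return r + gamma * P @ V                # (S, A) array of Q-values

def hpmd(P, r, gamma, iters=300):
    """KL-based policy mirror descent with a vanishing entropy weight.
    Each iteration solves, per state s,
        pi_{k+1}(.|s) = argmax_p eta*(<Q_k(s,.), p> + tau_k * H(p)) - KL(p, pi_k(.|s)),
    whose closed form mixes pi_k and a softmax of Q_k:
        pi_{k+1} propto pi_k^{1/(1+eta*tau)} * exp(eta * Q_k / (1+eta*tau))."""
    S, A = r.shape
    pi = np.full((S, A), 1.0 / A)           # start from the uniform policy
    for k in range(iters):
        Q = q_function(P, r, pi, gamma)
        eta, tau = 1.0, 1.0 / (k + 1)       # assumed schedules; tau -> 0 is the homotopy
        logits = (np.log(pi) + eta * Q) / (1.0 + eta * tau)
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
    return pi

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 5, 3
    P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a, :] is a next-state distribution
    r = rng.uniform(size=(S, A))
    print(np.round(hpmd(P, r, gamma=0.9), 3))
```

Note that the entropy term is attached to the update rule, not to the MDP objective, so sending τ_k → 0 leaves the unregularized problem intact while biasing ties toward higher-entropy policies, consistent with the max-entropy limiting policy reported in property (2).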

