
On the Convergence of Policy in Unregularized Policy Mirror Descent

05/17/2022
by Dachao Lin, et al.
Peking University

In this short note, we give a convergence analysis of the policy iterates in the recently popular policy mirror descent (PMD). We mainly consider the unregularized setting of [11] with generalized Bregman divergences. The difference is that we directly establish convergence rates for the policy itself under generalized Bregman divergences. Our results are inspired by the convergence of the value function in previous works and extend the study of policy mirror descent. Although some of these results have appeared in prior work, we further show that a large class of Bregman divergences, including the classical Euclidean distance, yields finite-step convergence to an optimal policy.
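
To make the setting concrete: at each state s, unregularized PMD updates the policy by maximizing eta * <Q^{pi_k}(s, .), p> - D_h(p, pi_k(.|s)) over the probability simplex, where D_h is the chosen Bregman divergence. The sketch below specializes this update to the squared Euclidean distance, the case highlighted above for finite-step convergence, in which the mirror step reduces to a projected gradient step. It is a minimal tabular illustration under assumed names, step size, and action values; it is not the authors' implementation.

    import numpy as np

    def project_to_simplex(v):
        """Euclidean projection of a vector v onto the probability simplex."""
        u = np.sort(v)[::-1]                      # entries in decreasing order
        css = np.cumsum(u)
        idx = np.arange(1, v.size + 1)
        rho = np.nonzero(u * idx > css - 1.0)[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def pmd_step_euclidean(pi_s, q_s, eta):
        """One unregularized PMD update at a single state with
        D_h(p, q) = 0.5 * ||p - q||^2. With this choice the mirror step
        reduces to a projected gradient step:
            pi_{k+1}(.|s) = Proj_simplex( pi_k(.|s) + eta * Q^{pi_k}(s, .) ).
        """
        return project_to_simplex(pi_s + eta * q_s)

    # Illustrative usage on one state with 3 actions (values are made up).
    pi_s = np.array([1 / 3, 1 / 3, 1 / 3])   # current policy pi_k(.|s)
    q_s = np.array([1.0, 0.2, -0.5])         # action values Q^{pi_k}(s, .)
    print(pmd_step_euclidean(pi_s, q_s, eta=10.0))
    # -> [1. 0. 0.]: with a large enough step, the projection places all
    #    mass on the greedy action in a single step, matching the
    #    finite-step convergence behavior discussed above.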

01/30/2023

A Novel Framework for Policy Mirror Descent with General Parametrization and Linear Convergence

Modern policy optimization methods in applied reinforcement learning, su...
01/19/2022

On the Convergence Rates of Policy Gradient Methods

We consider infinite-horizon discounted Markov decision problems with fi...
01/24/2022

Homotopic Policy Mirror Descent: Policy Convergence, Implicit Regularization, and Improved Sample Complexity

We propose the homotopic policy mirror descent (HPMD) method for solving...
05/24/2021

Policy Mirror Descent for Regularized Reinforcement Learning: A Generalized Framework with Linear Convergence

Policy optimization, which learns the policy of interest by maximizing t...
06/17/2021

A Short Note of PAGE: Optimal Convergence Rates for Nonconvex Optimization

In this note, we first recall the nonconvex problem setting and introduc...
11/16/2022

Minimum information divergence of Q-functions for dynamic treatment resumes

This paper aims at presenting a new application of information geometry ...