On the Convergence of Approximate and Regularized Policy Iteration Schemes

09/20/2019
by Elena Smirnova, et al.

Algorithms based on the entropy-regularized framework, such as Soft Q-learning and Soft Actor-Critic, have recently shown state-of-the-art performance on a number of challenging reinforcement learning (RL) tasks. The regularized formulation modifies the standard RL objective and thus, in general, converges to a policy that differs from the optimal greedy policy of the original RL problem. In practice, it is important to control the suboptimality of the regularized optimal policy. In this paper, we propose an optimality-preserving regularized modified policy iteration (MPI) scheme that simultaneously (a) provides desirable properties to intermediate policies, such as targeted exploration, and (b) guarantees convergence to the optimal policy, with explicit rates depending on the decrease rate of the regularization parameter. This result rests on two more general results. First, we show that the approximate MPI scheme converges as fast as exact MPI if the decrease rate of the error sequence is sufficiently fast; otherwise, its rate of convergence slows down to the decrease rate of the errors. Second, we show that regularized MPI is an instance of approximate MPI in which regularization plays the role of the errors. In the special case of the negative entropy regularizer (which yields the popular Soft Q-learning algorithm), our result explicitly links the convergence rate of the policy / value iterates to exploration.
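To make the decaying-regularization idea concrete, the following is a minimal tabular sketch of the value-iteration special case (MPI with a single evaluation step) under negative entropy regularization: the temperature tau_k is decreased geometrically so that the soft backup approaches the greedy Bellman backup and the Boltzmann policy approaches the optimal greedy policy. This is not the paper's algorithm verbatim; the function name, the geometric decay schedule, and the random MDP below are illustrative assumptions.

```python
import numpy as np

def soft_value_iteration(P, R, gamma, n_iters=200, tau0=1.0, decay=0.95):
    """P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for k in range(n_iters):
        tau = tau0 * decay ** k            # regularization weight, decreased toward 0
        Q = R + gamma * P @ V              # one-step lookahead, shape (S, A)
        # Soft backup via a numerically stable log-sum-exp;
        # as tau -> 0 this approaches the greedy backup max_a Q[s, a].
        Qmax = Q.max(axis=1)
        V = Qmax + tau * np.log(np.exp((Q - Qmax[:, None]) / tau).sum(axis=1))
    # Boltzmann (soft-greedy) policy at the final temperature.
    pi = np.exp((Q - V[:, None]) / tau)
    pi /= pi.sum(axis=1, keepdims=True)
    return V, pi

# Tiny random MDP, purely for illustration.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(3, 2))  # 3 states, 2 actions
R = rng.uniform(size=(3, 2))
V, pi = soft_value_iteration(P, R, gamma=0.9)
```

With a faster or slower decay of tau_k, the gap between the returned Boltzmann policy and the greedy policy shrinks correspondingly faster or slower, mirroring the abstract's point that the convergence rate is governed by the decrease rate of the regularization parameter.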
