On the Convergence of Approximate and Regularized Policy Iteration Schemes

09/20/2019
by Elena Smirnova, et al.

Algorithms based on the entropy-regularized framework, such as Soft Q-learning and Soft Actor-Critic, have recently shown state-of-the-art performance on a number of challenging reinforcement learning (RL) tasks. The regularized formulation modifies the standard RL objective and thus, in general, converges to a policy different from the optimal greedy policy of the original RL problem. In practice, it is important to control the suboptimality of the regularized optimal policy. In this paper, we propose an optimality-preserving regularized modified policy iteration (MPI) scheme that simultaneously (a) provides desirable properties, such as targeted exploration, to the intermediate policies, and (b) guarantees convergence to the optimal policy, with explicit rates depending on the decrease rate of the regularization parameter. This result builds on two more general results. First, we show that the approximate MPI scheme converges as fast as exact MPI if the error sequence decreases sufficiently fast; otherwise, its convergence rate slows down to the decrease rate of the errors. Second, we show that regularized MPI is an instance of approximate MPI in which regularization plays the role of errors. In the special case of a negative-entropy regularizer (which yields the popular Soft Q-learning algorithm), our result explicitly links the convergence rate of the policy / value iterates to exploration.
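To make the idea of a vanishing regularization parameter concrete, the following is a minimal sketch of ours (not the paper's algorithm or proofs): tabular soft value iteration on a small random MDP in which the entropy temperature tau_k decays over iterations. The MDP, the 1/(k+1) decay schedule, and names such as soft_backup are assumptions introduced purely for illustration. As tau_k goes to zero, the log-sum-exp backup approaches the hard max, so the greedy policy recovered from the regularized values approaches the optimal greedy policy of the unregularized problem.

    import numpy as np

    # Illustrative sketch only: tabular soft value iteration with a decaying
    # entropy temperature tau_k. All quantities below (random MDP, decay
    # schedule, number of iterations) are arbitrary choices for the demo.

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 5, 3, 0.9
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

    def soft_backup(V, tau):
        """Entropy-regularized Bellman backup: log-sum-exp over actions (hard max when tau = 0)."""
        Q = R + gamma * (P @ V)                  # Q[s, a]
        if tau <= 0.0:
            return Q.max(axis=1), Q
        m = Q.max(axis=1, keepdims=True)         # stabilize the log-sum-exp
        V_soft = m[:, 0] + tau * np.log(np.exp((Q - m) / tau).sum(axis=1))
        return V_soft, Q

    # Regularized iteration with a vanishing regularization parameter.
    V = np.zeros(n_states)
    for k in range(200):
        V, Q = soft_backup(V, tau=1.0 / (k + 1))

    # Unregularized (exact) value iteration as the reference.
    V_ref = np.zeros(n_states)
    for _ in range(200):
        V_ref, Q_ref = soft_backup(V_ref, tau=0.0)

    print("max value gap:", np.abs(V - V_ref).max())
    print("greedy policies match:", np.array_equal(Q.argmax(axis=1), Q_ref.argmax(axis=1)))

In this toy setting the value gap shrinks with the temperature, mirroring (informally) the paper's point that the suboptimality of the regularized scheme is governed by the decrease rate of the regularization parameter.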
