Entropic Regularization of Markov Decision Processes

07/06/2019
by Boris Belousov, et al.

An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent has to discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed to bound the information loss, measured by the Kullback-Leibler (KL) divergence, at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α-divergences, which inherit the beneficial property of providing the policy improvement step in closed form while at the same time yielding a corresponding dual objective for policy evaluation. This entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ^2-divergence penalty. Other actor-critic pairs arise for various choices of the penalty generating function f. On a concrete instantiation of our framework with the α-divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on standard reinforcement learning problems.
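As a rough illustration of the ideas in the abstract (not code from the paper), the sketch below computes the α-divergence between two discrete policies under one common convention for the generating function, f_α(t) = (t^α − αt + α − 1) / (α(α − 1)), which recovers KL as α → 1, reverse KL as α → 0, and half the Pearson χ² divergence at α = 2. The two closed-form improvement rules are likewise illustrative sketches: exponential advantage weights for a KL penalty, and affine (clipped) advantage weights in the spirit of the χ² case; all function names and the temperature parameter `eta` are our own notation.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """D_alpha(p || q) for discrete distributions, via D_f(p||q) = sum_i q_i f(p_i/q_i).

    Uses f_alpha(t) = (t**alpha - alpha*t + alpha - 1) / (alpha*(alpha - 1)):
    alpha -> 1 gives KL(p || q), alpha -> 0 gives reverse KL,
    alpha = 2 gives half the Pearson chi^2 divergence.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):          # limit case: KL(p || q)
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):          # limit case: reverse KL(q || p)
        return float(np.sum(q * np.log(q / p)))
    t = p / q
    f = (t**alpha - alpha * t + alpha - 1) / (alpha * (alpha - 1))
    return float(np.sum(q * f))

def kl_policy_improvement(pi, adv, eta):
    """Closed-form improvement under a KL penalty: exponential advantage weights."""
    w = np.asarray(pi, float) * np.exp(np.asarray(adv, float) / eta)
    return w / w.sum()

def chi2_policy_improvement(pi, adv, eta):
    """Sketch of the chi^2-penalized case: weights affine in the advantage,
    clipped at zero so the result remains a valid distribution."""
    w = np.asarray(pi, float) * np.maximum(1.0 + np.asarray(adv, float) / eta, 0.0)
    return w / w.sum()

p, q = [0.5, 0.5], [0.25, 0.75]
print(alpha_divergence(p, q, 2.0))   # half Pearson chi^2 = 1/6 ≈ 0.1667
print(alpha_divergence(p, q, 1.0))   # KL(p || q)
```

Varying α interpolates between mode-seeking and mass-covering penalties, which is the knob the asymptotic analysis in the paper studies.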


Related research

f-Divergence constrained policy improvement (12/29/2017)
To ensure stability of learning, state-of-the-art generalized policy ite...

Divergence-Regularized Multi-Agent Actor-Critic (10/01/2021)
Entropy regularization is a popular method in reinforcement learning (RL...

A Theory of Regularized Markov Decision Processes (01/31/2019)
Many recent successful (deep) reinforcement learning algorithms make use...

Adversarially Guided Actor-Critic (02/08/2021)
Despite definite success in deep reinforcement learning problems, actor-...

Learning to Control Partially Observed Systems with Finite Memory (02/20/2022)
We consider the reinforcement learning problem for partially observed Ma...

The Actor Search Tree Critic (ASTC) for Off-Policy POMDP Learning in Medical Decision Making (05/29/2018)
Off-policy reinforcement learning enables near-optimal policy from subop...

Markov decision processes with observation costs (01/19/2022)
We present a framework for a controlled Markov chain where the state of ...
