Entropic Regularization of Markov Decision Processes

07/06/2019 · Boris Belousov et al., Technische Universität Darmstadt

An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent has to discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed to bound the information loss measured by the Kullback-Leibler (KL) divergence at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α-divergences, which inherit the beneficial property of providing the policy improvement step in closed form at the same time yielding a corresponding dual objective for policy evaluation. Such entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ^2-divergence penalty. Other actor-critic pairs arise for various choices of the penalty generating function f. On a concrete instantiation of our framework with the α-divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on common standard reinforcement learning problems.


1 Introduction

Sequential decision-making problems under uncertainty are described by the mathematical framework of Markov decision processes (MDPs) Puterman (1994). The core problem in MDPs is to find an optimal policy—a mapping from states to actions that maximizes the expected cumulative reward collected by an agent over its lifetime. In reinforcement learning (RL), the agent is additionally assumed to have no prior knowledge about the environment dynamics and the reward function Sutton and Barto (1998). Therefore, direct policy optimization in the RL setting can be seen as a form of stochastic black-box optimization: the agent proposes a query point in the form of a policy, the environment evaluates this point by computing the expected return, after which the agent updates the proposal and the process repeats Deisenroth et al. (2013). There are two conceptual steps in this scheme, known as policy evaluation and policy improvement Bellman (1957). Both steps require function approximation in high-dimensional and continuous state-action spaces due to the curse of dimensionality Bellman (1957). Therefore, statistical learning approaches are employed to approximate the value function of a policy and to perform policy improvement based on the data collected from the environment.

In contrast to traditional supervised learning, in reinforcement learning the data distribution changes with every policy update. State-of-the-art generalized policy iteration algorithms Kakade (2001); Peters et al. (2010); Schulman et al. (2015, 2017) are mindful of this covariate shift problem Shimodaira (2000) and take active measures to account for it. To smooth the learning dynamics, these algorithms limit the information loss between successive policy updates as measured by the KL divergence or approximations thereof Neu et al. (2017). In the optimization literature, such approaches are categorized as proximal (or trust region) algorithms Parikh (2014).

The choice of the divergence function determines the geometry of the information manifold Nielsen (2018). Recently, in particular in the area of implicit generative modeling Goodfellow et al. (2014), the choice of the divergence function was shown to have a dramatic effect both on the optimization performance Bottou et al. (2017) and on the perceptual quality of the generated data when various f-divergences were employed Nowozin et al. (2016). In this paper, we carry over the idea of using generalized entropic proximal mappings Teboulle (1992) given by an f-divergence to reinforcement learning. We show that relative entropy policy search Peters et al. (2010), framed as an instance of stochastic mirror descent Nemirovski and Yudin (1983); Beck and Teboulle (2003) as suggested by Neu et al. (2017), can be extended to use any divergence measure from the family of f-divergences. The resulting algorithm provides insights into the compatibility of policy and value function update rules in actor-critic architectures, which we exemplify on several instantiations of the generic f-divergence with representatives from the parametric family of α-divergences Chernoff (1952); Amari (1985); Cichocki and Amari (2010).

2 Background

This section provides the necessary background on policy gradients Deisenroth et al. (2013) and entropic penalties Teboulle (1992) for later derivations and analysis. Standard RL notation Thomas and Okal (2015) is used throughout.

2.1 Policy Gradient Methods

Policy search algorithms Deisenroth et al. (2013) commonly use the gradient estimator of the following form Sutton et al. (1999)

(1)   $\hat{g} = \hat{\mathbb{E}}_t\!\left[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t\right],$

where π_θ is a stochastic policy and Â_t is an estimator of the advantage function at timestep t. The expectation Ê_t[·] indicates an empirical average over a finite batch of samples, in an algorithm that alternates between sampling and optimization. The advantage estimate in (1) can be obtained from an estimate of the value function Peters and Schaal (2008); Schulman et al. (2016), which in its turn is found by least-squares estimation. Specifically, if V_ω denotes a parametric value function and V̂_t is taken as its rollout-based estimate, then the parameters ω can be found as

(2)   $\omega^{\star} = \arg\min_{\omega}\ \hat{\mathbb{E}}_t\!\left[\big(V_\omega(s_t) - \hat{V}_t\big)^2\right].$

The advantage estimate is then obtained by summing the temporal difference errors δ_t = r_t + γV_ω(s_{t+1}) − V_ω(s_t), also known as the Bellman residuals. Treating Â_t as fixed for the purpose of policy improvement, we can view (1) as the gradient of an advantage-weighted log-likelihood; therefore, the policy parameters θ can be found as

(3)   $\theta^{\star} = \arg\max_{\theta}\ \hat{\mathbb{E}}_t\!\left[\hat{A}_t \log \pi_\theta(a_t \mid s_t)\right].$

Thus, actor-critic algorithms that use the gradient estimator (1) to update the policy can be viewed as instances of the generalized policy iteration scheme, alternating between policy evaluation (2) and policy improvement (3). In the following, we will see that the actor-critic pair (2) and (3), which combines least-squares value function fitting with advantage-weighted maximum likelihood policy improvement, is just one representative from a family of such actor-critic pairs arising for different choices of the f-divergence penalty within our entropic proximal policy optimization framework.
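
To make the pair (2)–(3) concrete, the following sketch performs one round of least-squares value fitting followed by an advantage-weighted log-likelihood ascent step. It assumes a linear value function, a discrete-action softmax policy, and synthetic rollout data; all of these modeling choices are illustrative and not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rollout batch: state features phi(s_t), actions a_t, and
# Monte Carlo return targets Vhat_t (placeholders standing in for real data).
N, d, n_actions = 256, 8, 4
Phi = rng.normal(size=(N, d))               # state features
A_t = rng.integers(n_actions, size=N)       # actions taken by the current policy
V_hat = rng.normal(size=N)                  # rollout-based value targets

# Policy evaluation (2): least-squares fit of a linear value function V_w(s) = phi(s)^T w.
w, *_ = np.linalg.lstsq(Phi, V_hat, rcond=None)
V_w = Phi @ w

# Advantage estimates: Monte Carlo return target minus the fitted baseline.
adv = V_hat - V_w

# Policy improvement (3): one ascent step on the advantage-weighted log-likelihood
# of a softmax policy pi_theta(a|s) with logits phi(s)^T theta.
theta = np.zeros((d, n_actions))
logits = Phi @ theta
pi = np.exp(logits - logits.max(axis=1, keepdims=True))
pi /= pi.sum(axis=1, keepdims=True)

one_hot = np.eye(n_actions)[A_t]
# Gradient of (1/N) sum_t adv_t * log pi_theta(a_t|s_t) with respect to theta.
grad = Phi.T @ (adv[:, None] * (one_hot - pi)) / N
theta += 0.1 * grad                          # step size chosen arbitrarily

print("critic MSE:", np.mean((V_w - V_hat) ** 2), "| actor grad norm:", np.linalg.norm(grad))
```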

2.2 Entropic Penalties

The term entropic penalties Teboulle (1992) refers to both f-divergences and Bregman divergences. In this paper, we will focus on f-divergences, leaving the generalization to Bregman divergences for future work. The f-divergence Csiszár (1963) between two distributions P and Q with densities p and q is defined as

$D_f(p \,\|\, q) = \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx,$

where f is a convex function on (0, ∞) with f(1) = 0, and p is assumed to be absolutely continuous with respect to q. For example, the KL divergence corresponds to f(x) = x log x − x + 1, with the formula also applicable to unnormalized distributions Zhu and Rohwer (1995). Many common divergences lie on the curve of α-divergences Chernoff (1952); Amari (1985) defined by a special choice of the generator function Cichocki and Amari (2010)

(4)   $f_\alpha(x) = \frac{(x^\alpha - 1) - \alpha(x - 1)}{\alpha(\alpha - 1)},$

with the KL and reverse KL divergences recovered in the limits α → 1 and α → 0, respectively. The α-divergence will be used as the primary example of an f-divergence throughout the paper. For more details on the α-divergence and its properties, see Appendix A. Noteworthy is the symmetry of the α-divergence with respect to α = 0.5, which relates reverse divergences as D_{f_α}(p || q) = D_{f_{1−α}}(q || p).
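
The snippet below implements the α-divergence generator (4) together with its convex conjugate and conjugate derivative, as given later in (9), and numerically checks the Pearson and KL special cases as well as the conjugacy relation. The explicit formulas follow our reconstruction of (4) under the normalization f(1) = f'(1) = 0 and f''(1) = 1 used in Section 4; they are a sketch, not the paper's code.

```python
import numpy as np

def f_alpha(x, alpha):
    """Generator of the alpha-divergence, as in (4); KL and reverse KL arise as limits."""
    if np.isclose(alpha, 1.0):
        return x * np.log(x) - x + 1.0           # KL generator (alpha -> 1)
    if np.isclose(alpha, 0.0):
        return -np.log(x) + x - 1.0              # reverse-KL generator (alpha -> 0)
    return ((x ** alpha - 1.0) - alpha * (x - 1.0)) / (alpha * (alpha - 1.0))

def f_alpha_conj(y, alpha):
    """Convex conjugate of f_alpha, cf. (9); defined for 1 + (alpha - 1) * y >= 0."""
    if np.isclose(alpha, 1.0):
        return np.exp(y) - 1.0
    return ((1.0 + (alpha - 1.0) * y) ** (alpha / (alpha - 1.0)) - 1.0) / alpha

def f_alpha_conj_prime(y, alpha):
    """Derivative of the conjugate; this is the density-ratio map appearing later in (7)."""
    if np.isclose(alpha, 1.0):
        return np.exp(y)
    return (1.0 + (alpha - 1.0) * y) ** (1.0 / (alpha - 1.0))

x = np.linspace(0.1, 3.0, 50)
# Pearson chi^2 (alpha = 2): the generator reduces to (x - 1)^2 / 2.
assert np.allclose(f_alpha(x, 2.0), 0.5 * (x - 1.0) ** 2)
# KL is approached smoothly as alpha -> 1.
assert np.allclose(f_alpha(x, 1.0 + 1e-4), f_alpha(x, 1.0), atol=1e-3)
# The conjugate derivative equals 1 at zero for every alpha (cf. Section 4.2.1).
assert all(np.isclose(f_alpha_conj_prime(0.0, a), 1.0) for a in (-1.0, 0.0, 0.5, 1.0, 2.0))
# Conjugacy check: f*(y) should equal max_x (x * y - f(x)), here taken over a fine grid.
y = 0.3
grid = np.linspace(1e-3, 10.0, 20000)
for a in (-1.0, 0.5, 2.0):
    assert abs(np.max(grid * y - f_alpha(grid, a)) - f_alpha_conj(y, a)) < 1e-3
print("alpha-divergence generator and conjugate are consistent on the tested cases")
```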

3 Entropic Proximal Policy Optimization

Consider the average-reward RL setting Sutton and Barto (1998), where the dynamics of an ergodic MDP are given by the transition density p(s'|s, a). An intelligent agent can modulate the system dynamics by sampling actions a ~ π(a|s) from a stochastic policy π at every time step of the evolution of the dynamical system. The resulting modulated Markov chain with transition kernel p_π(s'|s) = ∫ π(a|s) p(s'|s, a) da converges to a stationary state distribution μ_π(s) as time goes to infinity. This stationary state distribution induces a state-action distribution μ_π(s, a) = μ_π(s) π(a|s), which corresponds to the visitation frequencies of state-action pairs Puterman (1994). The goal of the agent is to steer the system dynamics towards desirable states. Such an objective is commonly encoded by the expectation of a random variable r(s, a), called reward in this context. Thus, the agent seeks a policy π that maximizes the expected reward J(π) = E_{μ_π}[r(s, a)].

In reinforcement learning, neither the reward function nor the system dynamics are assumed to be known. Therefore, to maximize (or even evaluate) the objective J(π), the agent must sample a batch of experiences in the form of tuples (s, a, r, s') from the dynamics and use an empirical estimate as a surrogate for the original objective. Since the gradient of the expected reward with respect to the policy parameters θ can be written as Williams (1992)

$\nabla_\theta J(\theta) = \mathbb{E}_{(s,a) \sim \mu_{\theta}}\!\left[ r(s, a)\, \nabla_\theta \log \mu_{\theta}(s, a) \right],$

with a corresponding sample-based counterpart

$\hat{\nabla}_\theta J(\theta) = \frac{1}{N} \sum_{i=1}^{N} r_i\, \nabla_\theta \log \pi_\theta(a_i \mid s_i),$

one may be tempted to optimize the sample-based objective

$\hat{J}(\theta) = \frac{1}{N} \sum_{i=1}^{N} r_i \log \pi_\theta(a_i \mid s_i)$

on a fixed batch of data till convergence. However, such an approach ignores the fact that the sampling distribution itself depends on the policy parameters θ; therefore, such greedy optimization aims at a wrong objective Peters et al. (2010). To have the correct objective, the dataset must be sampled anew after every parameter update—doing otherwise will lead to overfitting and divergence. This problem is known in statistics as the covariate shift problem Shimodaira (2000).

3.1 Fighting Covariate Shift via Trust Regions

A principled way to account for the change in the sampling distribution at every policy update step is to construct an auxiliary local objective function that can be safely optimized till convergence. The relative entropy policy search (REPS) algorithm Peters et al. (2010) proposes a candidate for such an objective,

(5)   $J_\eta(\pi) = \mathbb{E}_{(s,a) \sim \mu_\pi}\!\left[ r(s, a) \right] - \eta\, D_{\mathrm{KL}}\!\left( \mu_\pi \,\|\, \mu_q \right),$

with q being the current policy under which the data samples were collected, π being the improvement policy that needs to be found, and η being a ‘temperature’ parameter that determines how much the next policy can deviate from the current one. The original formulation employs a relative entropy trust region constraint with radius ε instead of a penalty, which allows for finding the optimal temperature η as a function of the trust region radius ε.

Importantly, the objective function (5) can be optimized in closed form with respect to the policy (i.e., treating the policy itself as a variable and not its parameters, in contrast to standard policy gradients). To that end, several constraints on the state-action distribution are added to ensure stationarity with respect to the given MDP Peters et al. (2010). In a similar vein, we can solve Problem (5) with respect to π for any f-divergence with a twice differentiable generator function f.

3.2 Policy Optimization with Entropic Penalties

Following the intuition of REPS, we introduce an f-divergence penalized optimization problem that the learning agent must solve at every policy iteration step

(6)   $\max_{\mu}\ \mathbb{E}_{(s,a) \sim \mu}\!\left[ r(s, a) \right] - \eta\, D_f\!\left( \mu \,\|\, \mu_q \right)$

subject to

$\sum_{a} \mu(s', a) = \sum_{s, a} p(s' \mid s, a)\, \mu(s, a) \ \ \forall s', \qquad \sum_{s, a} \mu(s, a) = 1, \qquad \mu(s, a) \geq 0 \ \ \forall (s, a).$

The agent seeks a policy that maximizes the expected reward and does not deviate from the current policy too much. The first constraint in (6) ensures that μ is compatible with the system dynamics, and the latter two constraints ensure that μ is a proper probability distribution. Please note that the policy π enters Problem (6) only indirectly, through μ(s, a) = μ(s) π(a|s). Since the objective has the form of a free energy Wainwright and Jordan (2007) in μ, with an f-divergence playing the role of the usual KL, the solution can be expressed through the derivative of the convex conjugate function f*, as shown for general nonlinear problems in Teboulle (1992),

(7)   $\mu^{\ast}(s, a) = \mu_q(s, a)\, f^{\ast\prime}\!\left( \frac{r(s, a) + \mathbb{E}_{p(s' \mid s, a)}[V(s')] - V(s) - \lambda + \nu(s, a)}{\eta} \right).$

Here, V(s), λ, and ν(s, a) are the Lagrange dual variables corresponding to the three constraints in (6), respectively. Although we get a closed-form solution for μ, we still need to solve the dual optimization problem to obtain the optimal dual variables,

(8)   $\min_{V, \lambda, \nu}\ \lambda + \eta\, \mathbb{E}_{(s,a) \sim \mu_q}\!\left[ f^{\ast}\!\left( \frac{A_V(s, a) - \lambda + \nu(s, a)}{\eta} \right) \right]$

subject to

$\nu(s, a) \geq 0 \ \ \forall (s, a), \qquad \frac{A_V(s, a) - \lambda + \nu(s, a)}{\eta} \in \operatorname{dom} f^{\ast} \ \ \forall (s, a).$

Remarkably, the advantage function

$A_V(s, a) = r(s, a) + \mathbb{E}_{p(s' \mid s, a)}\!\left[ V(s') \right] - V(s)$

emerges automatically in the dual objective. The advantage function also appears in the penalty-free linear programming formulation of policy improvement Puterman (1994), which corresponds to the zero-temperature limit η → 0 of our formulation. Thanks to the fact that the dual objective in (8) is given as an expectation with respect to μ_q, it can be straightforwardly estimated from rollouts. The last constraint in (8), on the argument of f*, is easy to evaluate for common f-divergences. Indeed, the convex conjugate f_α* of the generator function (4) is given by

(9)   $f_\alpha^{\ast}(y) = \frac{1}{\alpha}\left[ \big(1 + (\alpha - 1)\, y\big)^{\frac{\alpha}{\alpha - 1}} - 1 \right], \qquad 1 + (\alpha - 1)\, y \geq 0.$

Thus, for any α-divergence, the domain constraint induced by the generator (4) is just a linear inequality in the dual variables.

3.3 Value Function Approximation

For small grid-world problems, one can solve Problem (8) exactly for V(s). However, for larger problems, or if the state space is continuous, one must resort to function approximation. Assume we plug an expressive function approximator V_ω(s) into (8); then the vector ω becomes the new vector of parameters in the dual objective. Later, it will be shown that minimizing the dual for α = 2 is closely related to minimizing the mean squared Bellman error.

3.4 Sample-Based Algorithm for Dual Optimization

To solve Problem (8) in practice, we gather a batch of samples {(s_i, a_i, r_i, s'_i)} from policy q and replace the expectation in the objective with a sample average. Please note that in principle one also needs to estimate the expectation of the future rewards E_{p(s'|s,a)}[V(s')]. However, since the probability of visiting the same state-action pair in a continuous space is zero, one commonly estimates this integral from a single sample Deisenroth et al. (2013), which is equivalent to assuming deterministic system dynamics. The inequality constraints in (8) are linear, and they must be imposed for every pair (s_i, a_i) in the dataset.
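
As a minimal illustration of this sample-based dual, the sketch below restricts attention to the KL penalty (α = 1), for which the non-negativity multipliers ν vanish (Section 4.1) and no inequality constraints are needed. The tabular value function, the synthetic transitions, and the use of scipy's L-BFGS-B optimizer are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic one-step transitions (s, a, r, s') standing in for samples from policy q.
n_states, N, eta = 5, 2000, 1.0
S = rng.integers(n_states, size=N)
R = rng.normal(size=N) + (S == 0)            # arbitrary reward signal
S_next = rng.integers(n_states, size=N)

def dual(params):
    """Empirical KL dual: lam + eta * mean(exp((A_V - lam)/eta) - 1), where the
    advantage A_V = r + V(s') - V(s) is estimated from a single sampled next state."""
    V, lam = params[:n_states], params[-1]
    adv = R + V[S_next] - V[S]
    return lam + eta * np.mean(np.exp((adv - lam) / eta) - 1.0)

res = minimize(dual, x0=np.zeros(n_states + 1), method="L-BFGS-B")
V_opt, lam_opt = res.x[:n_states], res.x[-1]

# Per-sample weights f*'((A_V - lam)/eta) = exp(...) for the policy improvement step (10).
weights = np.exp((R + V_opt[S_next] - V_opt[S] - lam_opt) / eta)
print("dual value:", res.fun, "| baseline lambda:", lam_opt, "| mean weight:", weights.mean())
```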

3.5 Parametric Policy Fitting

Assume Problem (8) is solved on the current batch of data sampled from q, so that the optimal dual variables (V*, λ*, ν*) are given. Equation (7) then allows one to evaluate the new density μ*(s, a) at any pair (s, a) from the dataset. However, it does not yield the new policy directly, because representation (7) is variational. A common approach Deisenroth et al. (2013) is to assume that the policy is represented by a parameterized conditional density π_θ(a|s) and to fit this density to the data using maximum likelihood.

To fit a parametric density π_θ to the true solution given by (7), we minimize the KL divergence between μ* and the state-action distribution induced by π_θ. Minimization of this KL is equivalent to maximization of a weighted log-likelihood. Unfortunately, the induced distribution is in general not known, because it depends not only on the policy but also on the system dynamics. Assuming the effect of the policy parameters on the stationary state distribution is small Deisenroth et al. (2013), we arrive at the following optimization problem for fitting the policy parameters

(10)   $\theta^{\ast} = \arg\max_{\theta}\ \mathbb{E}_{(s,a) \sim \mu_q}\!\left[ f^{\ast\prime}\!\left( \frac{A_{V^{\ast}}(s, a) - \lambda^{\ast} + \nu^{\ast}(s, a)}{\eta} \right) \log \pi_\theta(a \mid s) \right].$

Compare our policy improvement step (10) to the commonly used advantage-weighted maximum likelihood (ML) objective (3). They look surprisingly similar (especially when f*' is a linear function), which is not a coincidence and will be systematically explained in the following sections.
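
The following sketch illustrates the weighted maximum-likelihood step (10) for a tabular softmax policy. The weights stand in for f*'((A_V − λ)/η) computed from a dual solution, and both the synthetic data and the plain gradient ascent are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Batch of states, actions, and improvement weights w_i = f*'((A_V - lam)/eta);
# the weights below are synthetic stand-ins for the output of the dual solution.
n_states, n_actions, N = 5, 3, 2000
S = rng.integers(n_states, size=N)
A = rng.integers(n_actions, size=N)
w = np.exp(rng.normal(scale=0.5, size=N))    # positive weights, e.g. the KL case

# Tabular softmax policy pi_theta(a|s) = softmax(theta[s]).
theta = np.zeros((n_states, n_actions))
one_hot = np.eye(n_actions)[A]

for _ in range(200):                         # plain gradient ascent on (10)
    logits = theta[S]
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)
    grad = np.zeros_like(theta)
    np.add.at(grad, S, w[:, None] * (one_hot - pi) / N)
    theta += 0.5 * grad

logp = theta[S] - np.log(np.sum(np.exp(theta[S]), axis=1, keepdims=True))
print("weighted log-likelihood:", np.mean(w * logp[np.arange(N), A]))
# For a tabular policy the maximizer is also available in closed form as the
# weight-normalized empirical conditional distribution of actions given states.
```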

3.6 Temperature Scheduling

The ‘temperature’ parameter η trades off reward versus divergence, as can be seen in the objective function of Problem (6). In practice, devising a schedule for η may be hard because η is sensitive to reward scaling and policy parameterization. A more intuitive way to impose the f-divergence proximity condition is to add it as a constraint with a fixed radius ε and then treat the temperature η as an optimization variable. Such a formulation is easy to incorporate into the dual (8) by adding the term ηε to the objective and the constraint η ≥ 0 to the list of constraints. The constraint-based formulation was successfully used before with a KL divergence constraint Peters et al. (2010) and with its quadratic approximation Kakade (2001); Schulman et al. (2015).
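
As a sketch of this constrained variant for the KL penalty, eliminating λ analytically leaves a REPS-style dual g(V, η) = ηε + η log Ê[exp(A_V/η)], which can be minimized jointly over the value function and the temperature. The synthetic data and the optimizer choice below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Synthetic transitions, as before; epsilon is the trust-region radius on the KL.
n_states, N, epsilon = 5, 2000, 0.1
S = rng.integers(n_states, size=N)
R = rng.normal(size=N) + (S == 0)
S_next = rng.integers(n_states, size=N)

def constrained_dual(params):
    """KL-constrained dual with lambda eliminated analytically:
    g(V, eta) = eta * epsilon + eta * log mean exp(A_V / eta), minimized over V and eta > 0."""
    V, eta = params[:n_states], params[-1]
    adv = R + V[S_next] - V[S]
    m = adv.max()                            # log-sum-exp stabilization
    return eta * epsilon + m + eta * np.log(np.mean(np.exp((adv - m) / eta)))

x0 = np.concatenate([np.zeros(n_states), [1.0]])
bounds = [(None, None)] * n_states + [(1e-6, None)]   # keep the temperature positive
res = minimize(constrained_dual, x0, method="L-BFGS-B", bounds=bounds)
print("optimal temperature eta* =", res.x[-1])
```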

3.7 Practical Algorithm for Continuous State-Action Spaces

Our proposed approach to entropic proximal policy optimization is summarized in Algorithm 1. Following the generalized policy iteration scheme, we (i) collect data under a given policy, (ii) evaluate the policy by solving (8), and (iii) improve the policy by solving (10). In the following section, several instantiations of Algorithm 1 with different choices of the generator function f will be presented and studied.

Input: Initial actor-critic parameters (θ_0, ω_0), divergence function f, temperature η
while not converged do
       sample one-step transitions (s, a, r, s') under the current policy π_θ;
       policy evaluation: optimize the dual (8) with V_ω plugged in to obtain critic parameters ω;
       policy improvement: perform the weighted ML update (10) to obtain actor parameters θ;
end while
Output: Optimal policy π_θ and the corresponding value function V_ω
Algorithm 1 Primal-dual entropic proximal policy optimization with function approximation
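
The following self-contained sketch instantiates Algorithm 1 on a small randomly generated ergodic MDP with the KL penalty (α = 1), for which both steps simplify. The environment, the batch sizes, and the use of the known transition model in the improvement step are simplifications made for brevity, not part of the algorithm as stated.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
nS, nA, eta, n_iters, batch = 6, 3, 2.0, 15, 3000

# Random ergodic MDP: transition tensor P[s, a, s'] and reward table R[s, a].
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.normal(size=(nS, nA))
pi = np.full((nS, nA), 1.0 / nA)             # initial uniform policy

for it in range(n_iters):
    # (i) Collect one-step transitions under the current policy.
    s = rng.integers(nS)
    S, A, Rew, S2 = [], [], [], []
    for _ in range(batch):
        a = rng.choice(nA, p=pi[s])
        s2 = rng.choice(nS, p=P[s, a])
        S.append(s); A.append(a); Rew.append(R[s, a]); S2.append(s2)
        s = s2
    S, A, Rew, S2 = map(np.array, (S, A, Rew, S2))

    # (ii) Policy evaluation: minimize the empirical KL dual (8) over (V, lambda).
    def dual(params):
        V, lam = params[:nS], params[-1]
        adv = Rew + V[S2] - V[S]
        return lam + eta * np.mean(np.exp((adv - lam) / eta) - 1.0)
    res = minimize(dual, np.zeros(nS + 1), method="L-BFGS-B")
    V, lam = res.x[:nS], res.x[-1]

    # (iii) Policy improvement: reweight the current policy by the exponentiated
    # advantage, the closed-form conditional of (7) that the weighted ML fit (10)
    # approximates. For brevity the advantage uses the known model P; a purely
    # model-free variant would estimate E[V(s')] from the sampled transitions.
    adv_table = R + P @ V - V[:, None]
    pi = pi * np.exp(adv_table / eta)
    pi /= pi.sum(axis=1, keepdims=True)

    print(f"iter {it:2d}  average sampled reward {np.mean(Rew):+.3f}")
```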

4 High- and Low-Temperature Limits; α-Divergences; Analytic Solutions and Asymptotics

How does the f-divergence penalty influence policy optimization? How should one choose the generator function f? What role does the step size play in optimization? This section will try to answer these and related questions. First, two special choices of the penalty function are presented, which reveal that the common practice of mean squared Bellman error minimization coupled with an advantage-reweighted policy update is equivalent to imposing a Pearson χ²-divergence penalty. Second, the high- and low-temperature limits are studied, on the one hand revealing the special role played by the Pearson χ²-divergence, which is the high-temperature limit of all smooth f-divergences, and on the other hand establishing a link to the linear programming formulation of policy search as the low-temperature limit of our entropic penalty-based framework.

4.1 KL Divergence (α = 1) and Pearson χ²-Divergence (α = 2)

As can be deduced from the form of (10), great simplifications occur when f*' is a linear function (α = 2, see (9)) or an exponential function (α = 1). The fundamental reason for these simplifications lies in the fact that linear and exponential functions are homomorphisms with respect to addition. This allows, in particular, a closed-form solution for the dual variable λ to be found, eliminating it from the optimization. Moreover, in these two special cases, the dual variables ν(s, a) can also be eliminated. They are responsible for the non-negativity of probabilities: for α = 1 (KL), ν = 0 uniformly for all η, and for α = 2 (Pearson), ν = 0 for sufficiently big η. Table 1 gives the corresponding empirical actor-critic optimization objective pairs. A generic primal-dual actor-critic algorithm with an f-divergence penalty performs the two steps, policy evaluation (8) and policy improvement (10), inside a policy iteration loop. It is worth comparing the explicit formulas in Table 1 to the customarily used objectives (2) and (3). To make the comparison fair, notice that (2) and (3) correspond to the discounted infinite-horizon formulation with discount factor γ, whereas the formulas in Table 1 are derived for the average-reward setting. In general, the difference between these two settings can be ascribed to an additional baseline that must be subtracted in the average-reward setting Sutton and Barto (1998). In our derivations, the baseline corresponds to the dual variable λ, as in the classical linear programming formulation of policy iteration Puterman (1994), and it automatically gets subtracted from the advantage (see (8)).

KL Divergence (α = 1):
  policy evaluation (critic): $\min_{\omega, \lambda}\ \lambda + \eta\, \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ e^{(\hat{A}_\omega(s,a) - \lambda)/\eta} - 1 \right]$
  policy improvement (actor): $\max_{\theta}\ \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ e^{(\hat{A}_\omega(s,a) - \lambda)/\eta} \log \pi_\theta(a \mid s) \right]$
Pearson χ²-Divergence (α = 2):
  policy evaluation (critic): $\min_{\omega, \lambda}\ \lambda + \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ \hat{A}_\omega(s,a) - \lambda + \tfrac{1}{2\eta} \big( \hat{A}_\omega(s,a) - \lambda \big)^2 \right]$
  policy improvement (actor): $\max_{\theta}\ \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ \big( 1 + \tfrac{1}{\eta}(\hat{A}_\omega(s,a) - \lambda) \big) \log \pi_\theta(a \mid s) \right]$
Table 1: Empirical policy evaluation and policy improvement objectives for α = 1 (KL) and α = 2 (Pearson χ²), obtained by specializing (8) and (10) with the multipliers ν eliminated; Â_ω(s, a) = r(s, a) + V_ω(s') − V_ω(s) denotes the single-sample advantage estimate.

Mean Squared Error Minimization with Advantage Reweighting is Equivalent to Pearson Penalty

The baseline for α = 2 is given by the average advantage, λ* = Ê[Â_ω(s, a)], which also equals the average return in our setting Sutton and Barto (1998); Puterman (1994). Therefore, to translate the formulas from Table 1 to the discounted infinite-horizon form (2) and (3), we need to remove the baseline and add discounting to the advantage. Then the dual objective

(11)   $\min_{\omega}\ \frac{1}{2\eta}\, \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ \hat{A}_\omega(s, a)^2 \right]$

is proportional to the average squared advantage. Naive optimization of (11) leads to the family of residual gradient algorithms Baird (1995); Dann et al. (2014). However, if the same Monte Carlo estimate of the value function is used as in (2), then (11) and (2) are exactly equivalent. The same holds for the Pearson actor

(12)   $\max_{\theta}\ \hat{\mathbb{E}}_{(s,a) \sim \mu_q}\!\left[ \left( 1 + \frac{\hat{A}_\omega(s, a)}{\eta} \right) \log \pi_\theta(a \mid s) \right]$

and the standard policy improvement (3): (12) is equivalent to (3) if the weight η of the divergence penalty is equal to the expected return.

4.2 High- and Low-Temperature Limits

In the previous subsection, we established a direct correspondence between least-squares value function fitting coupled with advantage-weighted maximum likelihood estimation of the policy parameters, (2) and (3), and the dual-primal pair of optimization problems (11) and (12) arising from our Algorithm 1 for the special choice of the Pearson χ²-divergence penalty. In this subsection, we will show that this is not a coincidence but a manifestation of the fundamental fact that the Pearson χ²-divergence is the quadratic approximation of any smooth f-divergence about unity.

4.2.1 High Temperatures: All Smooth f-Divergences Tend Towards the Pearson χ²-Divergence

There are two ways to show the independence of the primal-dual solution (8)–(10) from the choice of the divergence penalty: either exactly solve an approximate problem or approximate the exact solution of the original problem. In the first case, the penalty is replaced by its Taylor expansion at x = 1, which turns out to be the Pearson χ²-divergence, and then the derivation becomes equivalent to the natural policy gradient derivation Kakade (2001). In the second case, the exact solution (8)–(10) is expanded by Taylor: for big η, the dual variables ν(s, a) can be dropped, which yields

(13)   $f^{\ast}(y) = y + \tfrac{1}{2} y^2 + O(y^3).$

By definition of the f-divergence, the generator function f satisfies the condition f(1) = 0. Without loss of generality Sason and Verdu (2016), one can impose the additional constraint f'(1) = 0 for convenience. Such a constraint ensures that the graph of the function f lies entirely in the upper half-plane, touching the x-axis at the single point x = 1. From the definition of the convex conjugate f*, we can deduce that f*(0) = 0 and f*'(0) = 1; by rescaling, it is moreover possible to set f''(1) = 1, and therefore f*''(0) = 1. These properties are automatically satisfied by the α-divergence, which can be verified by direct computation. With this in mind, it is straightforward to see that substitution of (13) into (8) yields precisely the quadratic objective from Table 1, with the difference being of second order in 1/η.

To obtain the asymptotic policy update objective, one can expand (10) in the high-temperature limit and observe that it equals the Pearson policy improvement objective from Table 1, with the difference again being of second order in 1/η. Therefore, it is established that the choice of the divergence function plays a minor role for big temperatures (small policy update steps). Since this is the mode in which the majority of iterative algorithms operate, our entropic proximal policy optimization point of view provides a rigorous justification for the common practice of using the mean squared Bellman error objective for value function fitting and the advantage-weighted maximum likelihood objective for policy improvement.
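
A quick numerical check of this high-temperature claim, using the α-conjugate as reconstructed in (9): for every smooth α, the gap between f*(y) and the Pearson conjugate y + y²/2 shrinks cubically as the argument y = (A − λ)/η becomes small.

```python
import numpy as np

def f_alpha_conj(y, alpha):
    """Convex conjugate of the alpha-generator, cf. (9), with the KL/reverse-KL limits."""
    if np.isclose(alpha, 1.0):
        return np.exp(y) - 1.0
    if np.isclose(alpha, 0.0):
        return -np.log(1.0 - y)
    return ((1.0 + (alpha - 1.0) * y) ** (alpha / (alpha - 1.0)) - 1.0) / alpha

# For high temperatures the argument y = (A - lam)/eta is small, and the gap to the
# Pearson conjugate y + y^2/2 should shrink like y^3 for every smooth alpha.
for alpha in (-1.0, 0.0, 0.5, 1.0, 2.0, 4.0):
    errs = [abs(f_alpha_conj(y, alpha) - (y + 0.5 * y ** 2)) for y in (1e-1, 1e-2, 1e-3)]
    print(f"alpha={alpha:+.1f}  gaps at y=1e-1,1e-2,1e-3:", ["%.1e" % e for e in errs])
```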

4.2.2 Low Temperatures: Linear Programming Formulation Emerges in the Limit

Setting η to a small number is equivalent to allowing large policy update steps, because η is the weight of the divergence penalty in the objective function (6). Such a regime is rather undesirable in reinforcement learning because of the covariate shift problem mentioned in the introduction. Problem (6) for η → 0 turns into the well-studied linear programming formulation Puterman (1994); Neu et al. (2017) that can be readily applied if the model is known.

It is not straightforward to derive the asymptotics of policy evaluation (8) and policy improvement (10) for a general smooth f-divergence in the low-temperature limit, because the dual variables ν(s, a) do not disappear, in contrast to the high-temperature limit (13). However, for the KL divergence penalty (see Table 1), one can show that the policy evaluation objective tends towards the supremum of the advantage, and the optimal policy becomes deterministic, placing all probability mass on the advantage-maximizing action in every state.

5 Empirical Evaluations

To develop an intuition regarding the influence of the entropic penalties on policy improvement, we first consider a simplified version of the reinforcement learning problem—namely the stochastic multi-armed bandit problem (Bubeck and Cesa-Bianchi, 2012). In this setting, our algorithm is closely related to the family of Exp3 algorithms (Auer et al., 2003), originally motivated by the adversarial bandit problem. Subsequently, we evaluate our approach in the standard reinforcement learning setting.

5.1 Illustrative Experiments on Stochastic Multi-Armed Bandit Problems

In the stochastic multi-armed bandit problem (Bubeck and Cesa-Bianchi, 2012), at every time step t, an agent chooses among K actions. After every choice a_t, it receives a noisy reward drawn from a distribution with mean μ_a. The goal of the agent is to maximize the expected total reward. Given the true values μ_a, the optimal strategy is to always choose the best action, a* = argmax_a μ_a. However, due to the lack of knowledge, the agent faces the exploration-exploitation dilemma. A generic way to encode the exploration-exploitation trade-off is by introducing a policy π, i.e., a distribution from which the agent draws actions a ~ π. Thus, the question becomes: given the current policy π and the current estimates of the action values, what should the policy at the next time step be? Unlike the choice of the best action under perfect information, such sampling policies are hard to derive from first principles (Ghavamzadeh et al., 2015).

We apply our generic Algorithm 1 to the stochastic multi-armed bandit problem to illustrate the effects of the divergence choice. The value function disappears because there is no state and no system dynamics in this problem. Therefore, the estimated arm values play the role of the advantage, and the dual optimization (8) is performed only with respect to the remaining Lagrange multipliers λ and ν.
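
The sketch below illustrates this bandit instantiation: the current policy is reweighted by f*' applied to the shifted, temperature-scaled value estimates, with the normalization multiplier λ found by root search. The clamping of out-of-domain actions to zero probability and the bracketing strategy are our own implementation choices; printing the updated distributions for several α makes the elimination versus greedy tendencies described below easy to inspect.

```python
import numpy as np
from scipy.optimize import brentq

def conj_prime(y, alpha):
    """(f_alpha*)'(y); the clamp max(0, .) realizes the non-negativity of probabilities."""
    if np.isclose(alpha, 1.0):
        return np.exp(y)
    base = 1.0 + (alpha - 1.0) * y
    if alpha > 1.0:
        base = np.maximum(base, 0.0)         # actions outside the domain get zero mass
    return base ** (1.0 / (alpha - 1.0))

def bandit_update(pi_old, mu_hat, eta, alpha):
    """One alpha-divergence proximal update for the stateless (bandit) case:
    pi_new(a) is proportional to pi_old(a) * (f*)'((mu_hat(a) - lam)/eta),
    with lam fixed by the normalization constraint and found by root search."""
    def excess_mass(lam):
        return np.sum(pi_old * conj_prime((mu_hat - lam) / eta, alpha)) - 1.0
    hi = mu_hat.max()                                        # total mass <= 1 here
    if alpha < 1.0 and not np.isclose(alpha, 1.0):
        lo = mu_hat.max() - eta / (1.0 - alpha) + 1e-9       # domain boundary of f*
    else:
        lo = mu_hat.max() - 50.0 * eta                       # total mass >= 1 here
    lam = brentq(excess_mass, lo, hi)
    pi_new = pi_old * conj_prime((mu_hat - lam) / eta, alpha)
    return pi_new / pi_new.sum()

mu_hat = np.array([1.0, 0.8, 0.5, 0.1])      # estimated arm values (synthetic)
pi = np.full(4, 0.25)
for alpha in (-4.0, 0.0, 0.5, 1.0, 2.0, 8.0):
    print(f"alpha={alpha:+.1f}", np.round(bandit_update(pi, mu_hat, eta=0.5, alpha=alpha), 3))
```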

5.1.1 Effects of α on Policy Improvement

Figure 1 shows the effects of the α-divergence choice on policy updates. We consider a multi-armed bandit problem with fixed arm values and keep the temperature η fixed at the same value for all values of α. Several iterations starting from an initial uniform policy are shown in the figure for comparison. Extremely large positive and negative values of α result in ε-elimination and ε-greedy policies, respectively. Small values of α, in contrast, weigh actions according to their values. Policies for α < 1 are peaked and heavy-tailed, eventually turning into ε-greedy policies when α → −∞. Policies for α > 1 are more uniform, but they put zero mass on bad actions, eventually turning into ε-elimination policies when α → ∞. For large positive α, policy iteration may spend a lot of time in the end deciding between the two best actions, whereas for large negative α the final convergence is faster.

Figure 1: Effects of α on policy improvement. Each row corresponds to a fixed α. The first four iterations of policy improvement, together with a later iteration, are shown in each row. Large positive α's eliminate bad actions one by one, keeping the exploration level equal among the rest. Small α's weigh actions according to their values; actions with low value get zero probability for α > 1, but remain possible with small probability for α < 1. Large negative α's focus on the best action, exploring the remaining actions with equal probability.

5.1.2 Effects of α on Regret

The average regret is shown in Figure 2 for different values of α as a function of the time step, with confidence error bars. The performance of the UCB algorithm (Bubeck and Cesa-Bianchi, 2012) is also shown for comparison. The presented results are obtained in a multi-armed bandit environment where the rewards have a Gaussian distribution. Arm values are estimated from the observed rewards, and the policy is updated after a fixed number of time steps. The temperature parameter η is decreased after every policy update according to an exponential schedule. Results are averaged over multiple runs. In general, extreme α's accumulate more regret. However, they eventually focus on a single action and flatten out. Small α's accumulate less regret, but they may keep exploring sub-optimal actions longer. Moderate values of α perform comparably with UCB once reliable estimates of the arm values have been obtained.

Figure 2: Average regret for various values of α.

Figure 3 shows the average regret after a given number of time steps as a function of the divergence type α. As can be seen from the figure, values of α of smaller magnitude result in lower regret. Large negative α's correspond to ε-greedy policies, which oftentimes prematurely converge to a sub-optimal action, failing to discover the optimal action for a long time if the exploration probability is small. Large positive α's correspond to ε-elimination policies, which may by mistake completely eliminate the best action, or spend a lot of time deciding between two options at the end of learning, accumulating more regret. The optimal value of the parameter α depends on the time horizon for which the policy is being optimized: the minimum of the curves shifts from slightly negative α's towards the range α ∈ [0, 1] with increasing time horizon.

Figure 3: Regret after a fixed time as a function of α.

5.2 Empirical Evaluations on Ergodic MDPs

We evaluate our policy iteration algorithm with the α-divergence penalty on standard grid-world reinforcement learning problems from OpenAI Gym (Brockman et al., 2016). Environments that terminate or have absorbing states are restarted during data collection to ensure ergodicity. Figure 4 demonstrates the learning dynamics on different environments for various choices of the divergence function. Parameter settings and other implementation details can be found in Appendix B. In summary, one can either promote risk-averse behavior by choosing large negative α, which may, however, result in sub-optimal exploration, or one can promote risk-seeking behavior with large positive α, which may lead to overly aggressive elimination of options. Our experiments suggest that the optimal balance should be found in the range α ∈ [0, 1]. It should be noted that the effect of the α-divergence on policy iteration is neither linear nor symmetric in α, contrary to what one could have expected given the symmetry of the α-divergence as a function of α. For example, replacing a divergence by its reverse counterpart may have little effect on policy iteration in one part of the range of α, whereas the same switch in another part of the range may have a much more pronounced influence on the learning dynamics.

Figure 4: Effects of the α-divergence on policy iteration. Each row corresponds to a given environment. Results for different values of α are split into three subplots within each row, from the more extreme α's on the left to the more refined values on the right. In all cases, more negative values initially show faster improvement because they immediately jump to the mode and keep the exploration level low; however, after a certain number of iterations they get overtaken by moderate values that weigh advantage estimates more evenly. Positive α's demonstrate high variance in the learning dynamics because they clamp the probability of good actions to zero if the advantage estimates are overly pessimistic, never being able to recover from such a mistake. Large positive α's may even fail to reach the optimum altogether, as exemplified in the plots. The most stable and reliable α-divergences lie between the reverse KL (α = 0) and the KL (α = 1), with the Hellinger distance (α = 0.5) outperforming both on the FrozenLake environment.

6 Related Work

Apart from computational advantages, information-theoretic approaches provide a solid framework for describing and studying aspects of intelligent behavior Tishby and Polani (2011), from autonomy Bertschinger et al. (2008) and curiosity Still and Precup (2012) to bounded rationality Genewein et al. (2015) and game theory Wolpert (2006).

Entropic proximal mappings were introduced in Teboulle (1992) as a general framework for constructing approximation and smoothing schemes for optimization problems. The problem formulation (6) presented here can be considered an application of this general theory to policy optimization in Markov decision processes. Following the recent work Neu et al. (2017), which establishes links between KL-divergence-regularized policy iteration algorithms popular in reinforcement learning Peters et al. (2010); Schulman et al. (2015) and the stochastic mirror descent algorithm well known in optimization Nemirovski and Yudin (1983); Beck and Teboulle (2003), one can view our Algorithm 1 as an analog of mirror descent with an f-divergence penalty.

Concurrent works Geist et al. (2019); Li et al. (2019) consider similar regularized formulations, although in the policy space instead of the state-action distribution space, and in the infinite-horizon discounted setting instead of the average-reward setting. The α-divergence in its entropic form, i.e., when the base measure is a uniform distribution, was used in several papers under the name Tsallis entropy Nachum et al. (2018); Lee et al. (2019, 2018a, 2018b), where its sparsifying effect was exploited in large discrete action spaces.

An alternative proximal reinforcement learning scheme was introduced in Mahadevan et al. (2014) based on the extragradient method for solving variational inequalities and leveraging operator splitting techniques. Although the idea of exploiting proximal maps and updates in the primal and dual spaces is similar to ours, regularization in Mahadevan et al. (2014) is applied in the value function space to smooth generalized TD learning algorithms, whereas we study regularization in the primal space.

7 Conclusions

We presented a framework for deriving actor-critic algorithms as pairs of primal-dual optimization problems resulting from regularization of the standard expected return objective with so-called entropic penalties in the form of an f-divergence. Several examples with α-divergence penalties have been worked out in detail. In the limit of small policy update steps, all f-divergences with a twice differentiable generator function are approximated by the Pearson χ²-divergence, which was shown to yield the pair of actor-critic updates most commonly used in reinforcement learning. Thus, our framework provides a sound justification for the common practice of minimizing the mean squared Bellman error in the policy evaluation step and fitting the policy parameters by advantage-weighted maximum likelihood in the policy improvement step.

In future work, incorporating non-differentiable generator functions, such as the absolute value function that corresponds to the total variation distance, may provide a principled explanation for the empirical success of algorithms not accounted for by our current smooth f-divergence framework, such as the proximal policy optimization algorithm Schulman et al. (2017). Establishing a tighter connection between online convex optimization, which employs Bregman divergences, and reinforcement learning will likely yield both a deeper understanding of the optimization dynamics in RL and improved practical algorithms building on the firm foundation of optimization theory.

Conceptualization, B.B. and J.P.; investigation, B.B. and J.P.; software, B.B.; supervision, J.P.; writing, B.B. and J.P.

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 640554.

Acknowledgements.
We thank Hany Abdulsamad for many insightful discussions. The authors declare no conflict of interest.

Appendix A

This section provides the background on the f-divergence, the α-divergence, and the convex conjugate function, highlighting the key properties required for our derivations.

The f-divergence (Csiszár, 1963; Morimoto, 1963; Ali and Silvey, 1966) generalizes many similarity measures between probability distributions (Sason and Verdu, 2016). For two distributions p and q on a finite set X, the f-divergence is defined as

$D_f(p \,\|\, q) = \sum_{x \in \mathcal{X}} q(x)\, f\!\left( \frac{p(x)}{q(x)} \right),$

where f is a convex function on (0, ∞) such that f(1) = 0. For example, the KL divergence corresponds to f(x) = x log x. Please note that p must be absolutely continuous with respect to q to avoid division by zero, i.e., q(x) = 0 implies p(x) = 0 for all x. We additionally assume f to be continuously differentiable, which includes all cases of interest for us. The f-divergence can be generalized to unnormalized distributions. For example, the generalized KL divergence (Zhu and Rohwer, 1995) corresponds to f(x) = x log x − x + 1. The derivations in this paper benefit from employing unnormalized distributions and subsequently imposing the normalization condition as a constraint.

The α-divergence (Chernoff, 1952; Amari, 1985) is a one-parameter family of f-divergences generated by the α-function f_α with α ∈ ℝ. The particular choice of the family of generator functions is motivated by a generalization of the natural logarithm (Cichocki and Amari, 2010). The α-logarithm log_α(x) = (x^{α−1} − 1)/(α − 1) is a power function for α ≠ 1 that turns into the natural logarithm for α → 1. Replacing the natural logarithm in the derivative of the KL divergence generator by the α-logarithm and integrating under the condition that f_α(1) = 0 yields the α-function

(14)   $f_\alpha(x) = \frac{(x^\alpha - 1) - \alpha(x - 1)}{\alpha(\alpha - 1)}.$

The α-divergence generalizes the KL divergence (α → 1), the reverse KL divergence (α → 0), the Hellinger distance (α = 0.5), the Pearson χ²-divergence (α = 2), and the Neyman (reverse Pearson) χ²-divergence (α = −1). Figure 5 displays these well-known divergences as points on a parabola. For every divergence, there is a reverse divergence symmetric with respect to the point α = 0.5, which corresponds to the Hellinger distance.

Figure 5: The α-divergence smoothly connects several prominent divergences.

The convex conjugate of f is defined as $f^{\ast}(y) = \sup_{x} \left\{ \langle x, y \rangle - f(x) \right\}$, where the angle brackets denote the dot product (Boyd and Vandenberghe, 2004). The key property relating the derivatives of f and f*, namely f*' = (f')^{-1}, yields Table 2, which lists common generator functions together with their convex conjugates and derivatives. In the general case (14), the convex conjugate and its derivative are given by

(15)   $f_\alpha^{\ast}(y) = \frac{1}{\alpha}\left[ \big( 1 + (\alpha - 1)\, y \big)^{\frac{\alpha}{\alpha - 1}} - 1 \right], \qquad f_\alpha^{\ast\prime}(y) = \big( 1 + (\alpha - 1)\, y \big)^{\frac{1}{\alpha - 1}}, \qquad 1 + (\alpha - 1)\, y \geq 0.$

The function f_α is convex and non-negative, and it attains its minimum at x = 1 with f_α(1) = 0. The function f_α* is convex with f_α*(0) = 0, and its derivative f_α*' is positive on its domain with f_α*'(0) = 1. The linear inequality constraint in (15) on the argument of f_α* delimits the domain on which the conjugate is defined. Another result from convex analysis crucial to our derivations is Fenchel's equality

(16)   $f(x) + f^{\ast}(y) = x\, y, \qquad \text{where } y = f'(x).$

We will occasionally put the conjugation symbol at the bottom, especially for the derivative of the conjugate function, writing f_*' for (f*)'.

Divergence (α) | $f_\alpha(x)$ | $f_\alpha'(x)$ | $f_\alpha^{\ast}(y)$ | $f_\alpha^{\ast\prime}(y)$
KL (α → 1) | $x \log x - x + 1$ | $\log x$ | $e^y - 1$ | $e^y$
Reverse KL (α → 0) | $-\log x + x - 1$ | $1 - 1/x$ | $-\log(1 - y)$ | $1/(1 - y)$
Pearson χ² (α = 2) | $\tfrac{1}{2}(x - 1)^2$ | $x - 1$ | $y + \tfrac{1}{2} y^2$ | $1 + y$
Neyman χ² (α = −1) | $\tfrac{(x - 1)^2}{2x}$ | $\tfrac{1}{2}\left(1 - \tfrac{1}{x^2}\right)$ | $1 - \sqrt{1 - 2y}$ | $\tfrac{1}{\sqrt{1 - 2y}}$
Hellinger (α = 1/2) | $2(\sqrt{x} - 1)^2$ | $2 - \tfrac{2}{\sqrt{x}}$ | $\tfrac{2y}{2 - y}$ | $\tfrac{4}{(2 - y)^2}$
Table 2: The generator function f_α, its convex conjugate f_α*, and their derivatives for some values of α, all normalized as in (14).
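
The entries of Table 2 can be checked numerically. The snippet below verifies Fenchel's equality (16) for each generator-conjugate pair, with the derivative f'(x) obtained by finite differences; the normalizations follow (14) and may differ from other references by constant factors.

```python
import numpy as np

# Generator / conjugate pairs from Table 2 (normalized as in (14)).
cases = {
    "KL (alpha -> 1)":         (lambda x: x * np.log(x) - x + 1,     lambda y: np.exp(y) - 1),
    "reverse KL (alpha -> 0)": (lambda x: -np.log(x) + x - 1,        lambda y: -np.log(1 - y)),
    "Pearson (alpha = 2)":     (lambda x: 0.5 * (x - 1) ** 2,        lambda y: y + 0.5 * y ** 2),
    "Neyman (alpha = -1)":     (lambda x: (x - 1) ** 2 / (2 * x),    lambda y: 1 - np.sqrt(1 - 2 * y)),
    "Hellinger (alpha = 1/2)": (lambda x: 2 * (np.sqrt(x) - 1) ** 2, lambda y: 2 * y / (2 - y)),
}

xs = np.linspace(0.2, 3.0, 200)
h = 1e-6
for name, (f, f_conj) in cases.items():
    # Fenchel's equality (16): f(x) + f*(y) = x * y at y = f'(x); f' via central differences.
    y = (f(xs + h) - f(xs - h)) / (2 * h)
    gap = np.max(np.abs(f(xs) + f_conj(y) - xs * y))
    print(f"{name:24s} max Fenchel gap: {gap:.2e}")
```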

Appendix B

In all experiments, the temperature parameter is exponentially decayed at each iteration k as η_k = η_0 γ_η^k. The choice of η_0 and γ_η depends on the scale of the rewards and the number of samples collected per policy update. The tables for each environment list these temperature parameters (η_0, γ_η) along with the number of samples per policy update, the number of policy iteration steps, and the number of runs used for averaging the results. Where applicable, environment-specific settings are also listed (see Tables 3–5).

Parameter Value
Number of states 8
Action success probability 0.9
Small and large rewards (2.0, 10.0)
Number of runs 10
Number of iterations 30
Number of samples 800
Temperature parameters (15.0, 0.9)
Table 3: Chain environment.
Parameter Value
Punishment for falling from the cliff
Reward for reaching the goal 100
Number of runs 10
Number of iterations 40
Number of samples 1500
Temperature parameters (50.0, 0.9)
Table 4: CliffWalking environment.
Parameter Value
Action success probability 0.8
Number of runs 10
Number of iterations 50
Number of samples 2000
Temperature parameters (1.0, 0.8)
Table 5: FrozenLake environment.

References


  • Puterman (1994) Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley & Sons: Hoboken, NJ, USA, 1994. [CrossRef]
  • Sutton and Barto (1998) Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998.
  • Deisenroth et al. (2013) Deisenroth, M.P.; Neumann, G.; Peters, J. A survey on policy search for robotics. Found. Trends® Robot. 2013, 2, 1–142. [CrossRef]
  • Bellman (1957) Bellman, R. Dynamic Programming. Science 1957, 70, 342. [CrossRef]
  • Kakade (2001) Kakade, S.M. A Natural Policy Gradient. In Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, Vancouver, BC, Canada, 3–8 December 2001; pp. 1531–1538. [CrossRef]
  • Peters et al. (2010) Peters, J.; Mülling, K.; Altun, Y. Relative Entropy Policy Search.

    In Proceedings of the 24th AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010; pp. 1607–1612.

  • Schulman et al. (2015) Schulman, J.; Levine, S.; Moritz, P.; Jordan, M.; Abbeel, P. Trust Region Policy Optimization.

    In Proceedings of the 32nd International Conference on International Conference on Machine Learning, Lille, France, 6–11 July 2015.

  • Schulman et al. (2017) Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; Klimov, O. Proximal policy optimization algorithms. arXiv 2017, arXiv:1707.06347.
  • Shimodaira (2000) Shimodaira, H. Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plann. Inference. 2000, 227–244. [CrossRef]
  • Neu et al. (2017) Neu, G.; Jonsson, A.; Gómez, V. A unified view of entropy-regularized Markov decision processes. arXiv 2017, arXiv:1705.07798.
  • Parikh (2014) Parikh, N. Proximal Algorithms. Found. Trends® Optim. 2014, 1, 127–239. [CrossRef]
  • Nielsen (2018) Nielsen, F. An elementary introduction to information geometry. arXiv 2018, arXiv:1808.08271.
  • Goodfellow et al. (2014) Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014.
  • Bottou et al. (2017) Bottou, L.; Arjovsky, M.; Lopez-Paz, D.; Oquab, M. Geometrical Insights for Implicit Generative Modeling. Braverman Read. Mach. Learn. 2018, 11100, 229–268.
  • Nowozin et al. (2016) Nowozin, S.; Cseke, B.; Tomioka, R. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 271–279.
  • Teboulle (1992) Teboulle, M. Entropic Proximal Mappings with Applications to Nonlinear Programming. Math. Operations Res. 1992, 17, 670–690. [CrossRef]
  • Nemirovski and Yudin (1983) Nemirovski, A.; Yudin, D. Problem complexity and method efficiency in optimization. J. Operational Res. Soc. 1984, 35, 455.
  • Beck and Teboulle (2003) Beck, A.; Teboulle, M. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Res. Lett. 2003, 31, 167–175. [CrossRef]
  • Chernoff (1952) Chernoff, H. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Stat. 1952, 23, 493–507. [CrossRef]
  • Amari (1985) Amari, S. Differential-Geometrical Methods in Statistics; Springer: New York, NY, USA, 1985. [CrossRef]
  • Cichocki and Amari (2010) Cichocki, A.; Amari, S. Families of alpha- beta- and gamma- divergences: Flexible and robust measures of Similarities. Entropy 2010, 12, 1532–1568. [CrossRef]
  • Thomas and Okal (2015) Thomas, P.S.; Okal, B. A notation for Markov decision processes. arXiv 2015, arXiv:1512.09075.
  • Sutton et al. (1999) Sutton, R.S.; Mcallester, D.; Singh, S.; Mansour, Y. Policy Gradient Methods for Reinforcement Learning with Function Approximation. In Proceedings of the 12th International Conference on Neural Information Processing Systems, Denver, CO, USA, 29 November–4 December 1999; pp. 1057–1063. [CrossRef]
  • Peters and Schaal (2008) Peters, J.; Schaal, S. Natural Actor-Critic. Neurocomputing 2008, 71, 1180–1190. [CrossRef]
  • Schulman et al. (2016) Schulman, J.; Moritz, P.; Levine, S.; Jordan, M.I.; Abbeel, P. High Dimensional Continuous Control Using Generalized Advantage Estimation. arXiv 2015, arXiv:1506.02438.
  • Csiszár (1963) Csiszár, I. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Publ. Math. Inst. Hungar. Acad. Sci. 1963, 8, 85–108.
  • Zhu and Rohwer (1995) Zhu, H.; Rohwer, R. Information Geometric Measurements of Generalisation; Technical Report; Aston University: Birmingham, UK, 1995.
  • Williams (1992) Williams, R.J. Simple statistical gradient-following methods for connectionist reinforcement learning. Mach. Learn. 1992, 8, 229–256. [CrossRef]
  • Wainwright and Jordan (2007) Wainwright, M.J.; Jordan, M.I. Graphical Models, Exponential Families, and Variational Inference. Found. Trends Mach. Learn. 2007, 1, 1–305. [CrossRef]
  • Baird (1995) Baird, L. Residual Algorithms: Reinforcement Learning with Function Approximation. In Proceedings of the 12th International Conference on Machine Learning, Tahoe City, CA, USA, 9–12 July 1995; pp. 30–37. [CrossRef]
  • Dann et al. (2014) Dann, C.; Neumann, G.; Peters, J. Policy Evaluation with Temporal Differences: A Survey and Comparison. J. Mach. Learn. Res. 2014, 15, 809–883.
  • Sason and Verdu (2016) Sason, I.; Verdu, S. F-divergence inequalities. IEEE Trans. Inf. Theory 2016, 62, 5973–6006. [CrossRef]
  • Bubeck and Cesa-Bianchi (2012) Bubeck, S.; Cesa-Bianchi, N. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Found. Trends Mach. Learn. 2012, 5, 1–122. [CrossRef]
  • Auer et al. (2003) Auer, P.; Cesa-Bianchi, N.; Freund, Y.; Schapire, R. The Non-Stochastic Multi-Armed Bandit Problem. SIAM J. Comput. 2003, 32, 48–77. [CrossRef]
  • Ghavamzadeh et al. (2015) Ghavamzadeh, M.; Mannor, S.; Pineau, J.; Tamar, A. Bayesian Reinforcement Learning: A Survey. Found. Trends Mach. Learn. 2015, 8, 359–483. [CrossRef]
  • Brockman et al. (2016) Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; Zaremba, W. OpenAI Gym. arXiv 2016, arXiv:1606.01540.
  • Tishby and Polani (2011) Tishby, N.; Polani, D. Information theory of decisions and actions. In Perception-Action Cycle; Cutsuridis, V., Hussain, A., Taylor, J., Eds.; Springer: New York, NY, USA, 2011; pp. 601–636.
  • Bertschinger et al. (2008) Bertschinger, N.; Olbrich, E.; Ay, N.; Jost, J. Autonomy: An information theoretic perspective. Biosystems 2008, 91, 331–345. [CrossRef] [PubMed]
  • Still and Precup (2012) Still, S.; Precup, D. An information-theoretic approach to curiosity-driven reinforcement learning. Theory Biosci. 2012, 131, 139–148. [CrossRef]
  • Genewein et al. (2015) Genewein, T.; Leibfried, F.; Grau-Moya, J.; Braun, D.A. Bounded rationality, abstraction, and hierarchical decision-making: An information-theoretic optimality principle. Front. Rob. AI 2015, 2, 27. [CrossRef]
  • Wolpert (2006) Wolpert, D.H. Information theory―the bridge connecting bounded rational game theory and statistical physics. In Complex Engineered Systems; Braha, D., Minai, A., Bar-Yam, Y., Eds.; Springer: Berlin, Germany, 2006; pp. 262–290.
  • Geist et al. (2019) Geist, M.; Scherrer, B.; Pietquin, O. A Theory of Regularized Markov Decision Processes. arXiv 2019, arXiv:1901.11275.
  • Li et al. (2019) Li, X.; Yang, W.; Zhang, Z. A Unified Framework for Regularized Reinforcement Learning. arXiv 2019, arXiv:1903.00725.
  • Nachum et al. (2018) Nachum, O.; Chow, Y.; Ghavamzadeh, M. Path consistency learning in Tsallis entropy regularized MDPs. arXiv 2018, arXiv:1802.03501.
  • Lee et al. (2019) Lee, K.; Kim, S.; Lim, S.; Choi, S.; Oh, S. Tsallis Reinforcement Learning: A Unified Framework for Maximum Entropy Reinforcement Learning. arXiv 2019, arXiv:1902.00137.
  • Lee et al. (2018a) Lee, K.; Choi, S.; Oh, S. Sparse Markov decision processes with causal sparse Tsallis entropy regularization for reinforcement learning. IEEE Rob. Autom. Lett. 2018, 3, 1466–1473. [CrossRef]
  • Lee et al. (2018b) Lee, K.; Choi, S.; Oh, S.

    Maximum Causal Tsallis Entropy Imitation Learning.

    In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 4408–4418.
  • Mahadevan et al. (2014) Mahadevan, S.; Liu, B.; Thomas, P.; Dabney, W.; Giguere, S.; Jacek, N.; Gemp, I.; Liu, J. Proximal reinforcement learning: A new theory of sequential decision making in primal-dual spaces. arXiv 2014, arXiv:1405.6757.
  • Morimoto (1963) Morimoto, T. Markov processes and the H-theorem. J. Phys. Soc. Jpn. 1963, 18, 328–331. [CrossRef]
  • Ali and Silvey (1966) Ali, S.M.; Silvey, S.D. A General Class of Coefficients of Divergence of One Distribution from Another. J. R. Stat. Soc. Ser. B (Methodol.) 1966, 28, 131–142. [CrossRef]
  • Boyd and Vandenberghe (2004) Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004; 487p. [CrossRef]