## 1 Introduction

Probabilistic graphical models (PGMs) offer a broadly applicable and useful toolbox for the machine learning researcher (Koller and Friedman, 2009): by couching the entirety of the learning problem in the parlance of probability theory, they provide a consistent and flexible framework to devise principled objectives, set up models that reflect the causal structure in the world, and allow a common set of inference methods to be deployed against a broad range of problem domains. Indeed, if a particular learning problem can be set up as a probabilistic graphical model, this can often serve as the first and most important step toward solving it. Crucially, in the framework of PGMs, it is sufficient to write down the model and pose the question, and the objectives for learning and inference emerge automatically.

Conventionally, decision making problems formalized as reinforcement learning or optimal control have been cast into a framework that aims to generalize probabilistic models by augmenting them with utilities or rewards, where the reward function is viewed as an extrinsic signal. In this view, determining an optimal course of action (a plan) or an optimal decision-making strategy (a policy) is a fundamentally distinct type of problem from probabilistic inference, although the underlying dynamical system might still be described by a probabilistic graphical model. In this article, we instead derive an alternative view of decision making, reinforcement learning, and optimal control, where the decision making problem is simply an inference problem in a particular type of graphical model. Formalizing decision making as inference in probabilistic graphical models can in principle allow us to bring to bear a wide array of approximate inference tools, extend the model in flexible and powerful ways, and reason about compositionality and partial observability.

Specifically, we will discuss how a generalization of the reinforcement learning or optimal control problem, which is sometimes termed maximum entropy reinforcement learning, is equivalent to exact probabilistic inference in the case of deterministic dynamics, and variational inference in the case of stochastic dynamics. This observation is not a new one, and the connection between probabilistic inference and control has been explored in the literature under a variety of names, including the Kalman duality (Todorov, 2008), maximum entropy reinforcement learning (Ziebart, 2010), KL-divergence control (Kappen et al., 2012; Kappen, 2011), and stochastic optimal control (Toussaint, 2009). While the specific derivations differ, the basic underlying framework and optimization objective are the same. All of these methods involve formulating control or reinforcement learning as a PGM, either explicitly or implicitly, and then deploying learning and inference methods from the PGM literature to solve the resulting inference and learning problems.

Formulating reinforcement learning and decision making as inference provides a number of other appealing tools: a natural exploration strategy based on entropy maximization, effective tools for inverse reinforcement learning, and the ability to deploy powerful approximate inference algorithms to solve reinforcement learning problems. Furthermore, the connection between probabilistic inference and control provides an appealing probabilistic interpretation for the meaning of the reward function, and its effect on the optimal policy. The design of reward or cost functions in reinforcement learning is oftentimes as much art as science, and the choice of reward often blurs the line between algorithm and objective, with task-specific heuristics and task objectives combined into a single reward. In the control as inference framework, the reward induces a distribution over random variables, and the optimal policy aims to explicitly match a probability distribution defined by the reward and system dynamics, which may in future work suggest a way to systematize reward design.

This article will present the probabilistic model that can be used to embed a maximum entropy generalization of control or reinforcement learning into the framework of PGMs, describe how to perform inference in this model – exactly in the case of deterministic dynamics, or via structured variational inference in the case of stochastic dynamics – and discuss how approximate methods based on function approximation fit within this framework. Although the particular variational inference interpretation of control differs somewhat from the presentation in prior work, the goal of this article is not to propose a fundamentally novel way of viewing the connection between control and inference. Rather, it is to provide a unified treatment of the topic in a self-contained and accessible tutorial format, and to connect this framework to recent research in reinforcement learning, including recently proposed deep reinforcement learning algorithms. In addition, this article presents a review of the recent reinforcement learning literature that relates to this view of control as probabilistic inference, and offers some perspectives on future research directions.

The basic graphical model for control will be presented in Section 2, variational inference for stochastic dynamics will be discussed in Section 3, approximate methods based on function approximation, including deep reinforcement learning, will be discussed in Section 4, and a survey and review of recent literature will be presented in Section 5. Finally, we will discuss perspectives on future research directions in Section 6.

## 2 A Graphical Model for Control as Inference

In this section, we will present the basic graphical model that allows us to embed control into the framework of PGMs, and discuss how this framework can be used to derive variants of several standard reinforcement learning and dynamic programming approaches. The PGM presented in this section corresponds to a generalization of the standard reinforcement learning problem, where the RL objective is augmented with an entropy term. The magnitude of the reward function trades off between reward maximization and entropy maximization, allowing the original RL problem to be recovered in the limit of infinitely large rewards. We will begin by defining notation, then defining the graphical model, and then presenting several inference methods and describing how they relate to standard algorithms in reinforcement learning and dynamic programming. Finally, we will discuss a few limitations of this method and motivate the variational approach in Section 3.

### 2.1 The Decision Making Problem and Terminology

First, we will introduce the notation we will use for the standard optimal control or reinforcement learning formulation. We will use $\mathbf{s} \in \mathcal{S}$ to denote states and $\mathbf{a} \in \mathcal{A}$ to denote actions, which may each be discrete or continuous. States evolve according to the stochastic dynamics $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, which are in general unknown. We will follow a discrete-time finite-horizon derivation, with horizon $T$, and omit discount factors for now. A discount can be readily incorporated into this framework simply by modifying the transition dynamics, such that any action produces a transition into an absorbing state with probability $1 - \gamma$, and all standard transition probabilities are multiplied by $\gamma$. A task in this framework can be defined by a reward function $r(\mathbf{s}_t, \mathbf{a}_t)$. Solving a task typically involves recovering a policy $p(\mathbf{a}_t \mid \mathbf{s}_t, \theta)$, which specifies a distribution over actions conditioned on the state, parameterized by some parameter vector $\theta$. A standard reinforcement learning policy search problem is then given by the following maximization:

$$\theta^\star = \arg\max_\theta \sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim p(\mathbf{s}_t, \mathbf{a}_t \mid \theta)}\left[ r(\mathbf{s}_t, \mathbf{a}_t) \right]. \qquad (1)$$

This optimization problem aims to find a vector of policy parameters $\theta$ that maximize the total expected reward of the policy. The expectation is taken under the policy’s *trajectory* distribution $p(\tau \mid \theta)$, given by

$$p(\tau \mid \theta) = p(\mathbf{s}_1) \prod_{t=1}^{T} p(\mathbf{a}_t \mid \mathbf{s}_t, \theta)\, p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t), \qquad (2)$$

where $\tau = (\mathbf{s}_1, \mathbf{a}_1, \dots, \mathbf{s}_T, \mathbf{a}_T)$ denotes the trajectory.

For conciseness, it is common to denote the action conditional as $\pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t)$, to emphasize that it is given by a parameterized policy with parameters $\theta$. These parameters might correspond, for example, to the weights in a neural network. However, we could just as well embed a standard planning problem in this formulation, by letting $\theta$ denote a sequence of actions in an open-loop plan.

Having formulated the decision making problem in this way, the next question we have to ask to derive the control as inference framework is: how can we formulate a probabilistic graphical model such that the most probable trajectory corresponds to the trajectory from the optimal policy? Or, equivalently, how can we formulate a probabilistic graphical model such that inferring the posterior action conditional gives us the optimal policy?
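To make the objective concrete, here is a minimal sketch of the maximization in Equation (1) for a toy tabular MDP, with the expectation estimated by Monte Carlo rollouts. The MDP, its sizes, and all variable names below are illustrative assumptions, not from the article:

```python
import numpy as np

# Toy tabular MDP (illustrative): random stochastic dynamics and rewards.
rng = np.random.default_rng(0)
S, A, T = 3, 2, 5
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = p(s' | s, a)
r = rng.standard_normal((S, A))              # reward r(s, a)
p1 = np.ones(S) / S                          # initial state distribution

def expected_return(policy, n=2000):
    """Monte Carlo estimate of E[sum_t r(s_t, a_t)] under a tabular policy."""
    total = 0.0
    for _ in range(n):
        s = rng.choice(S, p=p1)
        for _t in range(T):
            a = rng.choice(A, p=policy[s])
            total += r[s, a]
            s = rng.choice(S, p=P[s, a])
    return total / n

uniform = np.ones((S, A)) / A                # pi(a | s) = 1 / |A|
```

A policy search method would then maximize `expected_return` over the entries of the `policy` table (or over the parameters of a function approximator that produces it).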

### 2.2 The Graphical Model

To embed the control problem into a graphical model, we can begin simply by modeling the relationship between states, actions, and next states. This relationship is simple, and corresponds to a graphical model with factors of the form $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, as shown in Figure 1 (a). However, this graphical model is insufficient for solving control problems, because it has no notion of rewards or costs. We therefore have to introduce an additional variable into this model, which we will denote $\mathcal{O}_t$. This additional variable is a binary random variable, where $\mathcal{O}_t = 1$ denotes that time step $t$ is *optimal*, and $\mathcal{O}_t = 0$ denotes that it is not optimal. We will choose the distribution over this variable to be given by the following equation:

$$p(\mathcal{O}_t = 1 \mid \mathbf{s}_t, \mathbf{a}_t) = \exp\left( r(\mathbf{s}_t, \mathbf{a}_t) \right). \qquad (3)$$

The graphical model with these additional variables is summarized in Figure 1 (b). While this might at first seem like a peculiar and arbitrary choice, it leads to a very natural posterior distribution over actions when we condition on $\mathcal{O}_t = 1$ for all $t \in \{1, \dots, T\}$:

$$p(\tau \mid \mathcal{O}_{1:T}) \propto p(\tau, \mathcal{O}_{1:T}) = p(\mathbf{s}_1) \prod_{t=1}^{T} p(\mathcal{O}_t = 1 \mid \mathbf{s}_t, \mathbf{a}_t)\, p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) = \left[ p(\mathbf{s}_1) \prod_{t=1}^{T} p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) \right] \exp\left( \sum_{t=1}^{T} r(\mathbf{s}_t, \mathbf{a}_t) \right). \qquad (4)$$

That is, the probability of observing a given trajectory is given by the product between its probability to occur according to the dynamics (the term in square brackets on the last line), and the exponential of the total reward along that trajectory. It is most straightforward to understand this equation in systems with deterministic dynamics, where the first term is a constant for all trajectories that are dynamically feasible. In this case, the trajectory with the highest reward has the highest probability, and trajectories with lower reward have exponentially lower probability. If we would like to plan for an optimal action sequence starting from some initial state $\mathbf{s}_1$, we can condition on $\mathbf{s}_1$ and $\mathcal{O}_{1:T}$ and choose the most likely action sequence $\mathbf{a}_{1:T}$, in which case maximum a posteriori inference corresponds to a kind of planning problem. It is easy to see that this exactly corresponds to standard planning or trajectory optimization in the case where the dynamics are deterministic, in which case Equation (4) reduces to

$$p(\tau \mid \mathcal{O}_{1:T}) \propto \mathbb{1}\left[ p(\tau) \neq 0 \right] \exp\left( \sum_{t=1}^{T} r(\mathbf{s}_t, \mathbf{a}_t) \right). \qquad (5)$$

Here, the indicator function simply indicates that the trajectory $\tau$ is dynamically consistent (meaning that $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) \neq 0$ at each step) and the initial state is correct. The case of stochastic dynamics poses some challenges, and will be discussed in detail in Section 3. However, even under deterministic dynamics, we are often interested in recovering a policy rather than a plan. In this PGM, the optimal policy can be written as $p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{1:T} = 1)$ (we will drop the $= 1$ in the remainder of the derivation for conciseness). This distribution is somewhat analogous to $\pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t)$ in the previous section, with two major differences: first, it is independent of the parameterization $\theta$, and second, we will see later that it optimizes an objective that is slightly different from the standard reinforcement learning objective in Equation (1).
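For deterministic dynamics, Equation (5) can be checked directly by enumeration. The following sketch, using an arbitrary made-up deterministic MDP (all names and sizes are illustrative assumptions), enumerates every open-loop plan, scores each with the exponentiated total reward, and confirms that the highest-reward plan is also the most probable one:

```python
import itertools
import numpy as np

# Deterministic toy problem (illustrative): next state is a lookup table.
S, A, T = 4, 2, 3
f = (np.arange(S * A) * 7 % S).reshape(S, A)              # s' = f[s, a]
r = np.cos(np.arange(S * A, dtype=float)).reshape(S, A)   # reward r(s, a)

s1 = 0
returns = {}
for plan in itertools.product(range(A), repeat=T):
    s, R = s1, 0.0
    for a in plan:
        R += r[s, a]
        s = f[s, a]
    returns[plan] = R

# Unnormalized posterior over feasible plans from s1 is exp(total reward).
probs = {plan: np.exp(R) for plan, R in returns.items()}
Z = sum(probs.values())
probs = {plan: p / Z for plan, p in probs.items()}

best_plan = max(returns, key=returns.get)
# The maximum-reward plan is also the most probable one under Equation (5).
assert best_plan == max(probs, key=probs.get)
```

Since the exponential is monotonic, maximum a posteriori inference over plans coincides with reward maximization here, exactly as the text claims.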

### 2.3 Policy Search as Probabilistic Inference

We can recover the optimal policy $p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{1:T})$ using a standard sum-product inference algorithm, analogously to inference in HMM-style dynamic Bayesian networks. As we will see in this section, it is sufficient to compute backward messages of the form

$$\beta_t(\mathbf{s}_t, \mathbf{a}_t) = p(\mathcal{O}_{t:T} \mid \mathbf{s}_t, \mathbf{a}_t).$$

These messages have a natural interpretation: they denote the probability that a trajectory can be optimal for time steps from $t$ to $T$ if it begins in state $\mathbf{s}_t$ with the action $\mathbf{a}_t$. (Note that $\beta_t(\mathbf{s}_t, \mathbf{a}_t)$ is *not* a probability density over $\mathbf{s}_t$ and $\mathbf{a}_t$, but rather the probability of the event $\mathcal{O}_{t:T}$.) Slightly overloading the notation, we will also introduce the message

$$\beta_t(\mathbf{s}_t) = p(\mathcal{O}_{t:T} \mid \mathbf{s}_t).$$

These messages denote the probability that the trajectory from $t$ to $T$ is optimal if it begins in state $\mathbf{s}_t$. We can recover the state-only message from the state-action message by integrating out the action:

$$\beta_t(\mathbf{s}_t) = \int_{\mathcal{A}} \beta_t(\mathbf{s}_t, \mathbf{a}_t)\, p(\mathbf{a}_t \mid \mathbf{s}_t)\, d\mathbf{a}_t.$$

The factor $p(\mathbf{a}_t \mid \mathbf{s}_t)$ is the action *prior*. Note that it is not conditioned on $\mathcal{O}_{1:T}$ in any way: it does not denote the probability of an optimal action, but simply the prior probability of actions. The PGM in Figure 1 doesn’t actually contain this factor, and we can assume that $p(\mathbf{a}_t \mid \mathbf{s}_t) = \frac{1}{|\mathcal{A}|}$ for simplicity – that is, it is a constant corresponding to a uniform distribution over the set of actions. We will see later that this assumption does not actually introduce any loss of generality, because any non-uniform $p(\mathbf{a}_t \mid \mathbf{s}_t)$ can be incorporated instead into $p(\mathcal{O}_t \mid \mathbf{s}_t, \mathbf{a}_t)$ via the reward function.

The recursive message passing algorithm for computing $\beta_t(\mathbf{s}_t, \mathbf{a}_t)$ proceeds from the last time step $t = T$ backward through time to $t = 1$. In the base case, we note that $\beta_T(\mathbf{s}_T, \mathbf{a}_T)$ is simply proportional to $p(\mathcal{O}_T \mid \mathbf{s}_T, \mathbf{a}_T)$, since there is only one factor to consider. The recursive case is then given as follows:

$$\beta_t(\mathbf{s}_t, \mathbf{a}_t) = p(\mathcal{O}_t \mid \mathbf{s}_t, \mathbf{a}_t)\, E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ \beta_{t+1}(\mathbf{s}_{t+1}) \right]. \qquad (6)$$

From these backward messages, we can then derive the optimal policy $p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{1:T})$. First, note that $\mathcal{O}_{1:(t-1)}$ is conditionally independent of $\mathbf{a}_t$ given $\mathbf{s}_t$, which means that $p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{1:T}) = p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{t:T})$, and we can disregard the past when considering the current action distribution. This makes intuitive sense: in a Markovian system, the optimal action does not depend on the past. From this, we can easily recover the optimal action distribution using the two backward messages:

$$p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{t:T}) = \frac{p(\mathbf{s}_t, \mathbf{a}_t \mid \mathcal{O}_{t:T})}{p(\mathbf{s}_t \mid \mathcal{O}_{t:T})} = \frac{p(\mathcal{O}_{t:T} \mid \mathbf{s}_t, \mathbf{a}_t)\, p(\mathbf{a}_t \mid \mathbf{s}_t)\, p(\mathbf{s}_t) / p(\mathcal{O}_{t:T})}{p(\mathcal{O}_{t:T} \mid \mathbf{s}_t)\, p(\mathbf{s}_t) / p(\mathcal{O}_{t:T})} \propto \frac{\beta_t(\mathbf{s}_t, \mathbf{a}_t)}{\beta_t(\mathbf{s}_t)},$$

where the order of conditioning is flipped by using Bayes’ rule, cancelling the factor of $p(\mathbf{s}_t)$ that appears in both the numerator and denominator. The term $p(\mathbf{a}_t \mid \mathbf{s}_t)$ disappears, since we previously assumed it was a uniform distribution.

This derivation provides us with a solution, but perhaps not much intuition. The intuition can be recovered by considering what these equations are doing in log space. To that end, we will introduce the log-space messages

$$Q(\mathbf{s}_t, \mathbf{a}_t) = \log \beta_t(\mathbf{s}_t, \mathbf{a}_t), \qquad V(\mathbf{s}_t) = \log \beta_t(\mathbf{s}_t).$$

The use of $Q$ and $V$ here is not accidental: the log-space messages correspond to “soft” variants of the state and state-action value functions. First, consider the marginalization over actions in log space:

$$V(\mathbf{s}_t) = \log \int_{\mathcal{A}} \exp\left( Q(\mathbf{s}_t, \mathbf{a}_t) \right) d\mathbf{a}_t.$$

When the values of $Q(\mathbf{s}_t, \mathbf{a}_t)$ are large, the above equation resembles a hard maximum over $\mathbf{a}_t$. That is, for large $Q(\mathbf{s}_t, \mathbf{a}_t)$,

$$V(\mathbf{s}_t) \rightarrow \max_{\mathbf{a}_t} Q(\mathbf{s}_t, \mathbf{a}_t).$$

For smaller values of $Q(\mathbf{s}_t, \mathbf{a}_t)$, the maximum is soft. Hence, we can refer to $V$ and $Q$ as soft value functions and Q-functions, respectively. We can also consider the backup in Equation (6) in log space. In the case of deterministic dynamics, this backup is given by

$$Q(\mathbf{s}_t, \mathbf{a}_t) = r(\mathbf{s}_t, \mathbf{a}_t) + V(\mathbf{s}_{t+1}),$$

which exactly corresponds to the Bellman backup. However, when the dynamics are stochastic, the backup is given by

$$Q(\mathbf{s}_t, \mathbf{a}_t) = r(\mathbf{s}_t, \mathbf{a}_t) + \log E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ \exp\left( V(\mathbf{s}_{t+1}) \right) \right]. \qquad (7)$$

This backup is peculiar, since it does not consider the expected value at the next state, but a “soft max” over the next expected value. Intuitively, this produces Q-functions that are optimistic: if among the possible outcomes for the next state there is one outcome with a very high value, it will dominate the backup, even when there are other possible states that might be likely and have extremely low value. This creates risk seeking behavior: if an agent behaves according to this Q-function, it might take actions that have extremely high risk, so long as they have some non-zero probability of a high reward. Clearly, this behavior is not desirable in many cases, and the standard PGM described in this section is often not well suited to stochastic dynamics. In Section 3, we will describe a simple modification that makes the backup correspond to the soft Bellman backup in the case of stochastic dynamics also, by using the framework of variational inference.
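The backward pass of Section 2.3 is straightforward to sketch in log space for a small tabular MDP. In the following sketch (the toy MDP and all names are illustrative assumptions, not from the article), the assertion inside the loop checks numerically that the Equation (7) backup is optimistic, i.e., at least as large as the expected next value:

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log-sum-exp over the last axis."""
    m = x.max(-1, keepdims=True)
    return m[..., 0] + np.log(np.exp(x - m).sum(-1))

rng = np.random.default_rng(1)
S, A, T = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = p(s' | s, a)
r = rng.standard_normal((S, A))              # reward r(s, a)

Q = np.zeros((T, S, A))                      # Q[t] = log beta_t(s, a)
V = np.zeros((T, S))                         # V[t] = log beta_t(s)
for t in reversed(range(T)):
    if t == T - 1:
        Q[t] = r                             # base case: beta_T ∝ exp(r)
    else:
        # Equation (7): optimistic backup log E[exp(V(s'))]; by Jensen's
        # inequality this is never smaller than the expectation E[V(s')].
        Q[t] = r + np.log(P @ np.exp(V[t + 1]))
        assert np.all(np.log(P @ np.exp(V[t + 1])) >= P @ V[t + 1] - 1e-9)
    V[t] = logsumexp(Q[t])                   # soft max over actions

# Optimal policy from the two messages: pi(a | s) = exp(Q - V).
pi = np.exp(Q - V[..., None])
assert np.allclose(pi.sum(-1), 1.0)
```

The Jensen gap in the assertion is exactly the source of the risk-seeking behavior described above: a single high-value next state can dominate the backup.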

### 2.4 Which Objective does This Inference Procedure Optimize?

In the previous section, we derived an inference procedure that can be used to obtain the distribution over actions conditioned on all of the optimality variables, $p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{1:T})$. But which objective does this policy actually optimize? Recall that the overall distribution is given by

$$p(\tau \mid \mathcal{O}_{1:T}) \propto \left[ p(\mathbf{s}_1) \prod_{t=1}^{T} p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) \right] \exp\left( \sum_{t=1}^{T} r(\mathbf{s}_t, \mathbf{a}_t) \right), \qquad (8)$$

which we can simplify in the case of deterministic dynamics into Equation (5). In this case, the conditional distributions are simply obtained by marginalizing the full trajectory distribution and conditioning the policy at each time step on $\mathcal{O}_{t:T}$. We can adopt an optimization-based approximate inference approach to this problem, in which case the goal is to fit an approximation $\pi(\mathbf{a}_t \mid \mathbf{s}_t)$ such that the trajectory distribution

$$\hat{p}(\tau) \propto \mathbb{1}\left[ p(\tau) \neq 0 \right] \prod_{t=1}^{T} \pi(\mathbf{a}_t \mid \mathbf{s}_t)$$

matches the distribution in Equation (5). In the case of exact inference, as derived in the previous section, the match is exact, which means that $D_{\mathrm{KL}}(\hat{p}(\tau) \,\|\, p(\tau \mid \mathcal{O}_{1:T})) = 0$, where $D_{\mathrm{KL}}$ is the KL-divergence. We can therefore view the inference process as minimizing $D_{\mathrm{KL}}(\hat{p}(\tau) \,\|\, p(\tau \mid \mathcal{O}_{1:T}))$, which is given by

$$D_{\mathrm{KL}}(\hat{p}(\tau) \,\|\, p(\tau \mid \mathcal{O}_{1:T})) = -E_{\tau \sim \hat{p}(\tau)}\left[ \log p(\tau \mid \mathcal{O}_{1:T}) - \log \hat{p}(\tau) \right].$$

Negating both sides and substituting in the equations for $p(\tau \mid \mathcal{O}_{1:T})$ and $\hat{p}(\tau)$, we get

$$-D_{\mathrm{KL}}(\hat{p}(\tau) \,\|\, p(\tau \mid \mathcal{O}_{1:T})) = \sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ r(\mathbf{s}_t, \mathbf{a}_t) \right] + E_{\mathbf{s}_t \sim \hat{p}}\left[ \mathcal{H}\left( \pi(\mathbf{a}_t \mid \mathbf{s}_t) \right) \right].$$

Therefore, minimizing the KL-divergence corresponds to maximizing the expected reward *and* the expected conditional entropy, in contrast to the standard control objective in Equation (1), which only maximizes reward. Hence, this type of control objective is sometimes referred to as maximum entropy reinforcement learning or maximum entropy control.
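A quick numerical sanity check of this claim in a one-step problem (everything below is an illustrative sketch): the softmax policy $\pi(\mathbf{a}) \propto \exp(r(\mathbf{a}))$ produced by exact inference attains the highest value of $E[r] + \mathcal{H}(\pi)$ among candidate policies, and that optimal value equals the log-partition function:

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.standard_normal(4)                 # rewards for 4 actions, one state

def objective(pi):
    """Maximum entropy objective E_pi[r] + H(pi) for one time step."""
    pi = np.clip(pi, 1e-300, None)         # guard against log(0)
    return float(pi @ r - pi @ np.log(pi))

soft = np.exp(r) / np.exp(r).sum()         # pi(a) ∝ exp(r(a))

# Compare against many random policies sampled from the simplex.
others = rng.dirichlet(np.ones(4), size=10000)
assert objective(soft) >= max(objective(p) for p in others)
# The optimum equals the soft maximum log sum_a exp(r(a)).
assert np.isclose(objective(soft), np.log(np.exp(r).sum()))
```

The second assertion is the one-step version of the soft value function identity: plugging $\pi(\mathbf{a}) = \exp(r(\mathbf{a}) - V)$ into $E[r] + \mathcal{H}(\pi)$ leaves exactly $V = \log \sum_{\mathbf{a}} \exp(r(\mathbf{a}))$.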

Note, however, that in the case of stochastic dynamics, the solution is not quite so simple. Under stochastic dynamics, the optimized distribution is given by

$$p(\tau \mid \mathcal{O}_{1:T}) = p(\mathbf{s}_1 \mid \mathcal{O}_{1:T}) \prod_{t=1}^{T} p(\mathbf{a}_t \mid \mathbf{s}_t, \mathcal{O}_{t:T})\, p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_{t:T}), \qquad (9)$$

where the initial state distribution and the dynamics are *also* conditioned on optimality. As a result of this, the dynamics and initial state terms in the KL-divergence do *not* cancel, and the objective does not have the simple entropy maximizing form derived above. (In the deterministic case, we know that $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_{1:T}) = p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, since exactly one transition is ever possible.) We can still fall back on the original KL-divergence minimization at the trajectory level, and write the objective as

$$-D_{\mathrm{KL}}(\hat{p}(\tau) \,\|\, p(\tau \mid \mathcal{O}_{1:T})) = E_{\tau \sim \hat{p}(\tau)}\left[ \log p(\tau \mid \mathcal{O}_{1:T}) - \log \hat{p}(\tau) \right]. \qquad (10)$$

However, because of the posterior dynamics terms $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_{1:T})$ inside $\log p(\tau \mid \mathcal{O}_{1:T})$, this objective is difficult to optimize in a model-free setting. As discussed in the previous section, it also results in an optimistic policy that assumes a degree of control over the dynamics that is unrealistic in most control problems. In Section 3, we will derive a variational inference procedure that *does* reduce to the convenient entropy maximizing objective derived above even in the case of stochastic dynamics, and in the process also addresses the risk-seeking behavior discussed in Section 2.3.

### 2.5 Alternative Model Formulations

It’s worth pointing out that the definition of $p(\mathcal{O}_t \mid \mathbf{s}_t, \mathbf{a}_t)$ in Equation (3) requires an additional assumption, which is that the rewards are always negative. (This assumption is not actually very strong: if we assume the reward is bounded above, we can always construct an exactly equivalent reward simply by subtracting the maximum reward.) Otherwise, we end up with a negative probability for $\mathcal{O}_t = 0$. However, this assumption is not actually required: it’s quite possible to instead define the graphical model with an undirected factor on $(\mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_t)$, with an unnormalized potential given by $\Phi(\mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_t = 1) = \exp(r(\mathbf{s}_t, \mathbf{a}_t))$. The potential for $\mathcal{O}_t = 0$ doesn’t matter, since we always condition on $\mathcal{O}_t = 1$. This leads to the same exact inference procedure as the one we described above, but without the negative reward assumption. Once we are content to work with undirected graphical models, we can even remove the $\mathcal{O}_t$ variables completely, and simply add an undirected factor on $(\mathbf{s}_t, \mathbf{a}_t)$ with the potential $\exp(r(\mathbf{s}_t, \mathbf{a}_t))$, which is mathematically equivalent. This is the conditional random field formulation described by Ziebart (Ziebart, 2010). The analysis and inference methods in this model are identical to the ones for the directed model with explicit optimality variables $\mathcal{O}_t$, and the particular choice of model is simply a notational convenience. We will use the $\mathcal{O}_t$ variables in this article for clarity of derivation and stay within the directed graphical model framework, but all derivations are straightforward to reproduce in the conditional random field formulation.
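The reward-shifting construction mentioned above can be checked numerically on a one-step problem (an illustrative sketch, not code from the article): subtracting the maximum reward makes $\exp(r)$ a valid probability without changing the normalized posterior over actions.

```python
import numpy as np

# Reward-shift construction: subtract the maximum reward so that
# exp(r) <= 1 everywhere, leaving the conditioned distribution unchanged.
rng = np.random.default_rng(7)
r = rng.standard_normal(5)          # rewards for 5 candidate actions
r_shift = r - r.max()               # equivalent reward, always <= 0

assert np.all(np.exp(r_shift) <= 1.0)        # valid p(O = 1 | a)

post = np.exp(r) / np.exp(r).sum()           # posterior with original reward
post_shift = np.exp(r_shift) / np.exp(r_shift).sum()
assert np.allclose(post, post_shift)          # posterior is unchanged
```

The shift only multiplies every unnormalized probability by the same constant $\exp(-r_{\max})$, which cancels in the normalization.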

Another common modification to this framework is to incorporate an explicit temperature $\alpha$ into the CPD for $\mathcal{O}_t$, such that $p(\mathcal{O}_t = 1 \mid \mathbf{s}_t, \mathbf{a}_t) = \exp\left( \frac{1}{\alpha} r(\mathbf{s}_t, \mathbf{a}_t) \right)$. The corresponding maximum entropy objective can then be written equivalently as the expectation of the (original) reward, with an additional multiplier of $\alpha$ on the entropy term. This provides a natural mechanism to interpolate between entropy maximization and standard optimal control or RL: as $\alpha \rightarrow 0$, the optimal solution approaches the standard optimal control solution. Note that this does not actually increase the generality of the method, since the constant $\frac{1}{\alpha}$ can always be multiplied into the reward, but making this temperature constant explicit can help to illuminate the connection between standard and entropy maximizing optimal control.

Finally, it is worth remarking again on the role of discount factors: it is very common in reinforcement learning to use a Bellman backup of the form

$$Q(\mathbf{s}_t, \mathbf{a}_t) = r(\mathbf{s}_t, \mathbf{a}_t) + \gamma E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ V(\mathbf{s}_{t+1}) \right],$$

where $\gamma \in (0, 1)$ is a discount factor. This allows for learning value functions in infinite-horizon settings, where the backup would otherwise be non-convergent for $\gamma = 1$, and reduces variance for Monte Carlo advantage estimators in policy gradient algorithms (Schulman et al., 2016). The discount factor can be viewed as a simple redefinition of the system dynamics. If the initial dynamics are given by $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, adding a discount factor is equivalent to undiscounted value fitting under the modified dynamics $\bar{p}(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) = \gamma\, p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, where there is an additional transition with probability $1 - \gamma$, regardless of action, into an absorbing state with reward zero. We will omit $\gamma$ from the derivations in this article, but it can be inserted trivially in all cases simply by modifying the (soft) Bellman backups in any place where the expectation over $\mathbf{s}_{t+1}$ occurs, such as Equation (7) previously or Equation (15) in the next section.

## 3 Variational Inference and Stochastic Dynamics

The problematic nature of the maximum entropy framework in the case of stochastic dynamics, discussed in Section 2.3 and Section 2.4, in essence amounts to an assumption that the agent is allowed to control both its actions and the dynamics of the system in order to produce optimal trajectories, but its authority over the dynamics is penalized based on deviation from the true dynamics. Hence, the dynamics terms in Equation (10) can be factored out of the equations, producing additive terms that correspond to the cross-entropy between the posterior dynamics $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_{1:T})$ and the true dynamics $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$. This explains the risk-seeking nature of the method discussed in Section 2.3: if the agent is allowed to influence its dynamics, even a little bit, it would reasonably choose to remove unlikely but extremely bad outcomes of risky actions.

Of course, in practical reinforcement learning and control problems, such manipulation of system dynamics is not possible, and the resulting policies can lead to disastrously bad outcomes. We can correct this issue by modifying the inference procedure. In this section, we will derive this correction by freezing the system dynamics, writing down the corresponding maximum entropy objective, and deriving a dynamic programming procedure for optimizing it. Then we will show that this procedure amounts to a direct application of structured variational inference.

### 3.1 Maximum Entropy Reinforcement Learning with Fixed Dynamics

The issue discussed in Section 2.4 for stochastic dynamics can briefly be summarized as follows: since the posterior dynamics distribution $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t, \mathcal{O}_{1:T})$ does not necessarily match the true dynamics $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, the agent assumes that it can influence the dynamics to a limited extent. A simple fix to this issue is to explicitly disallow this control, by forcing the posterior dynamics and initial state distributions to match $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$ and $p(\mathbf{s}_1)$, respectively. Then, the optimized trajectory distribution is given simply by

$$\hat{p}(\tau) = p(\mathbf{s}_1) \prod_{t=1}^{T} \pi(\mathbf{a}_t \mid \mathbf{s}_t)\, p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t),$$

and the same derivation as the one presented in Section 2.4 for the deterministic case results in the following objective:

$$\sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ r(\mathbf{s}_t, \mathbf{a}_t) \right] + E_{\mathbf{s}_t \sim \hat{p}}\left[ \mathcal{H}\left( \pi(\mathbf{a}_t \mid \mathbf{s}_t) \right) \right]. \qquad (11)$$

That is, the objective is still to maximize reward and entropy, but now under stochastic transition dynamics. To optimize this objective, we can compute backward messages like we did in Section 2.3. However, since we are now starting from the maximization of the objective in Equation (11), we have to derive these backward messages from an optimization perspective as a dynamic programming algorithm. As before, we will begin with the base case of optimizing $\pi(\mathbf{a}_T \mid \mathbf{s}_T)$, which maximizes

$$E_{(\mathbf{s}_T, \mathbf{a}_T) \sim \hat{p}}\left[ r(\mathbf{s}_T, \mathbf{a}_T) - \log \pi(\mathbf{a}_T \mid \mathbf{s}_T) \right] = E_{\mathbf{s}_T \sim \hat{p}}\left[ -D_{\mathrm{KL}}\left( \pi(\mathbf{a}_T \mid \mathbf{s}_T) \,\Big\|\, \frac{\exp\left( r(\mathbf{s}_T, \mathbf{a}_T) \right)}{\exp\left( V(\mathbf{s}_T) \right)} \right) + V(\mathbf{s}_T) \right], \qquad (12)$$

where the equality holds from the definition of KL-divergence, and $\exp(V(\mathbf{s}_T))$ is the normalizing constant for $\exp(r(\mathbf{s}_T, \mathbf{a}_T))$ with respect to $\mathbf{a}_T$, where $V(\mathbf{s}_T) = \log \int_{\mathcal{A}} \exp\left( r(\mathbf{s}_T, \mathbf{a}_T) \right) d\mathbf{a}_T$, which is the same soft maximization as in Section 2.3. Since we know that the KL-divergence is minimized when the two arguments represent the same distribution, the optimal policy is given by

$$\pi(\mathbf{a}_T \mid \mathbf{s}_T) = \exp\left( r(\mathbf{s}_T, \mathbf{a}_T) - V(\mathbf{s}_T) \right). \qquad (13)$$

The recursive case can then be computed as follows: for a given time step $t$, $\pi(\mathbf{a}_t \mid \mathbf{s}_t)$ must maximize two terms:

$$E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ r(\mathbf{s}_t, \mathbf{a}_t) - \log \pi(\mathbf{a}_t \mid \mathbf{s}_t) \right] + E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ V(\mathbf{s}_{t+1}) \right] \right]. \qquad (14)$$

The first term follows directly from the objective in Equation (11), while the second term represents the contribution of $\pi(\mathbf{a}_t \mid \mathbf{s}_t)$ to the expectations of all subsequent time steps. The second term deserves a more in-depth derivation. First, consider the base case: given the equation for $\pi(\mathbf{a}_T \mid \mathbf{s}_T)$ in Equation (13), we can evaluate the objective for the policy by directly substituting this equation into Equation (12). Since the KL-divergence then evaluates to zero, we are left only with the $E_{\mathbf{s}_T \sim \hat{p}}[V(\mathbf{s}_T)]$ term. In the recursive case, we note that we can rewrite the objective in Equation (14) as

$$E_{\mathbf{s}_t \sim \hat{p}}\left[ -D_{\mathrm{KL}}\left( \pi(\mathbf{a}_t \mid \mathbf{s}_t) \,\Big\|\, \frac{\exp\left( Q(\mathbf{s}_t, \mathbf{a}_t) \right)}{\exp\left( V(\mathbf{s}_t) \right)} \right) + V(\mathbf{s}_t) \right],$$

where we now define

$$Q(\mathbf{s}_t, \mathbf{a}_t) = r(\mathbf{s}_t, \mathbf{a}_t) + E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ V(\mathbf{s}_{t+1}) \right], \qquad V(\mathbf{s}_t) = \log \int_{\mathcal{A}} \exp\left( Q(\mathbf{s}_t, \mathbf{a}_t) \right) d\mathbf{a}_t, \qquad (15)$$

which corresponds to a standard Bellman backup with a soft maximization for the value function. Choosing

$$\pi(\mathbf{a}_t \mid \mathbf{s}_t) = \exp\left( Q(\mathbf{s}_t, \mathbf{a}_t) - V(\mathbf{s}_t) \right), \qquad (16)$$

we again see that the KL-divergence evaluates to zero, leaving $E_{\mathbf{s}_t \sim \hat{p}}[V(\mathbf{s}_t)]$ as the only remaining term in the objective for time step $t$, just like in the base case of $t = T$. This means that, if we fix the dynamics and initial state distribution, and only allow the policy to change, we recover a Bellman backup operator that uses the expected value of the next state, rather than the optimistic estimate we saw in Section 2.3 (compare Equation (15) to Equation (7)). While this provides a solution to the practical problem of risk-seeking policies, it is perhaps a bit unsatisfying in its divergence from the convenient framework of probabilistic graphical models. In the next section, we will discuss how this procedure amounts to a direct application of structured variational inference.
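The resulting procedure, a finite-horizon "soft value iteration" with the backup in Equation (15) and the policy in Equation (16), can be sketched in a few lines for a tabular problem (the toy MDP below is an illustrative assumption, not from the article):

```python
import numpy as np

rng = np.random.default_rng(4)
S, A, T = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = p(s' | s, a)
r = rng.standard_normal((S, A))              # reward r(s, a)

Q = np.zeros((T, S, A))
V = np.zeros((T, S))
for t in reversed(range(T)):
    # Equation (15): expected next value E[V(s')], in contrast to the
    # optimistic log E[exp(V(s'))] backup of Section 2.3.
    next_v = P @ V[t + 1] if t < T - 1 else 0.0
    Q[t] = r + next_v
    m = Q[t].max(-1, keepdims=True)
    V[t] = m[..., 0] + np.log(np.exp(Q[t] - m).sum(-1))  # soft max over a

pi = np.exp(Q - V[..., None])                # Equation (16)
assert np.allclose(pi.sum(-1), 1.0)
```

By Jensen's inequality, the expected-value backup $E[V(\mathbf{s}_{t+1})]$ at each step is never larger than $\log E[\exp(V(\mathbf{s}_{t+1}))]$ applied to the same value targets, which is exactly how the variational backup removes the optimism.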

### 3.2 Connection to Structured Variational Inference

One way to interpret the optimization procedure in Section 3.1 is as a particular type of structured variational inference. In structured variational inference, our goal is to approximate some distribution $p$ with another, potentially simpler distribution $q$. Typically, $q$ is taken to be some tractable factorized distribution, such as a product of conditional distributions connected in a chain or tree, which lends itself to tractable exact inference. In our case, we aim to approximate $p(\tau \mid \mathcal{O}_{1:T})$, given by

$$p(\tau \mid \mathcal{O}_{1:T}) \propto \left[ p(\mathbf{s}_1) \prod_{t=1}^{T} p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) \right] \exp\left( \sum_{t=1}^{T} r(\mathbf{s}_t, \mathbf{a}_t) \right), \qquad (17)$$

via the distribution

$$q(\tau) = q(\mathbf{s}_1) \prod_{t=1}^{T} q(\mathbf{a}_t \mid \mathbf{s}_t)\, q(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t). \qquad (18)$$

If we fix $q(\mathbf{s}_1) = p(\mathbf{s}_1)$ and $q(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) = p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, then $q(\tau)$ is exactly the distribution $\hat{p}(\tau)$ from Section 3.1, which we’ve renamed here to $q(\tau)$ to emphasize the connection to structured variational inference. Note that we’ve also renamed $\pi(\mathbf{a}_t \mid \mathbf{s}_t)$ to $q(\mathbf{a}_t \mid \mathbf{s}_t)$ for the same reason. In structured variational inference, approximate inference is performed by optimizing the variational lower bound (also called the evidence lower bound). Recall that our evidence here is that $\mathcal{O}_t = 1$ for all $t \in \{1, \dots, T\}$, and the posterior is conditioned on the initial state $\mathbf{s}_1$. The variational lower bound is given by

$$\log p(\mathcal{O}_{1:T}) = \log \int \int p(\mathcal{O}_{1:T}, \mathbf{s}_{1:T}, \mathbf{a}_{1:T})\, d\mathbf{s}_{1:T}\, d\mathbf{a}_{1:T} = \log \int \int p(\mathcal{O}_{1:T}, \mathbf{s}_{1:T}, \mathbf{a}_{1:T})\, \frac{q(\mathbf{s}_{1:T}, \mathbf{a}_{1:T})}{q(\mathbf{s}_{1:T}, \mathbf{a}_{1:T})}\, d\mathbf{s}_{1:T}\, d\mathbf{a}_{1:T} \geq E_{(\mathbf{s}_{1:T}, \mathbf{a}_{1:T}) \sim q(\tau)}\left[ \log p(\mathcal{O}_{1:T}, \mathbf{s}_{1:T}, \mathbf{a}_{1:T}) - \log q(\mathbf{s}_{1:T}, \mathbf{a}_{1:T}) \right],$$

where the inequality on the last line is obtained via Jensen’s inequality. Substituting the definitions of $p(\tau \mid \mathcal{O}_{1:T})$ and $q(\tau)$ from Equations (17) and (18), and noting the cancellation due to $q(\mathbf{s}_1) = p(\mathbf{s}_1)$ and $q(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t) = p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, the bound reduces to

$$\sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim q}\left[ r(\mathbf{s}_t, \mathbf{a}_t) - \log q(\mathbf{a}_t \mid \mathbf{s}_t) \right] \qquad (19)$$

up to an additive constant. Optimizing this objective with respect to the policy corresponds exactly to the objective in Equation (11). Intuitively, this means that this objective attempts to find the closest match to the maximum entropy trajectory distribution, subject to the constraint that the agent is only allowed to modify the policy, and not the dynamics. Note that this framework can also easily accommodate any other structural constraints on the policy, including restriction to a particular distribution class (e.g., conditional Gaussian, or a categorical distribution parameterized by a neural network), or restriction to partial observability, where the entire state is not available as an input, but rather the policy only has access to some non-invertible function of the state.
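As a sketch, the bound in Equation (19) can be estimated by Monte Carlo rollouts under $q$, since the dynamics in $q$ are fixed to the true dynamics. The toy MDP and the fixed policy below are illustrative assumptions, not from the article:

```python
import numpy as np

rng = np.random.default_rng(5)
S, A, T = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))   # p(s' | s, a), shared by p and q
r = rng.standard_normal((S, A))              # reward r(s, a)
q = rng.dirichlet(np.ones(A), size=S)        # arbitrary fixed policy q(a | s)

def elbo_estimate(n=2000):
    """Monte Carlo estimate of E_q[sum_t r(s_t, a_t) - log q(a_t | s_t)]."""
    total = 0.0
    for _ in range(n):
        s = rng.integers(S)                  # uniform initial state
        for _t in range(T):
            a = rng.choice(A, p=q[s])
            total += r[s, a] - np.log(q[s, a])   # entropy-augmented reward
            s = rng.choice(S, p=P[s, a])
    return total / n
```

Maximizing this quantity over the rows of `q` would recover the same policy as the dynamic programming procedure of Section 3.1, since both optimize the objective in Equation (11).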

## 4 Approximate Inference with Function Approximation

We saw in the discussion above that a dynamic programming backward algorithm with updates that resemble Bellman backups can recover “soft” analogues of the value function and Q-function in the maximum entropy reinforcement learning framework, and the stochastic optimal policy can be recovered from the Q-function and value function. In this section, we will discuss how practical algorithms for high-dimensional or continuous reinforcement learning problems can be derived from this theoretical framework, with the use of function approximation. This will give rise to several prototypical methods that mirror corresponding techniques in standard reinforcement learning: policy gradients, actor-critic algorithms, and Q-learning.

### 4.1 Maximum Entropy Policy Gradients

One approach to performing structured variational inference is to directly optimize the evidence lower bound with respect to the variational distribution (Koller and Friedman, 2009). This approach can be directly applied to maximum entropy reinforcement learning. Note that the variational distribution consists of three terms: $q(\mathbf{s}_1)$, $q(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, and $q(\mathbf{a}_t \mid \mathbf{s}_t)$. The first two terms are fixed to $p(\mathbf{s}_1)$ and $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$, respectively, leaving only $q(\mathbf{a}_t \mid \mathbf{s}_t)$ to vary. We can parameterize this distribution with any expressive conditional, with parameters $\theta$, and will therefore denote it as $\pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t)$. The parameters $\theta$ could correspond, for example, to the weights in a deep neural network, which takes $\mathbf{s}_t$ as input and outputs the parameters of some distribution class. In the case of discrete actions, the network could directly output the parameters of a categorical distribution (e.g., via a soft max operator). In the case of continuous actions, the network could output the parameters of an exponential family distribution, such as a Gaussian. In all cases, we can directly optimize the objective in Equation (11) by estimating its gradient using samples. This gradient has a form that is nearly identical to the standard policy gradient (Williams, 1992), which we summarize here for completeness. First, let us restate the objective as follows:

$$J(\theta) = \sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ r(\mathbf{s}_t, \mathbf{a}_t) - \log \pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t) \right].$$

The gradient is then given by

$$\nabla_\theta J(\theta) = \sum_{t=1}^{T} E_{\tau \sim \hat{p}}\left[ \nabla_\theta \log \pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t) \left( \sum_{t'=t}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \log \pi_\theta(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) - 1 \right) \right] = \sum_{t=1}^{T} E_{\tau \sim \hat{p}}\left[ \nabla_\theta \log \pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t) \left( \sum_{t'=t}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \log \pi_\theta(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) - b(\mathbf{s}_{t'}) \right) \right],$$

where the first expression follows from applying the likelihood ratio trick (Williams, 1992) and the definition of entropy to obtain the $-\log \pi_\theta(\mathbf{a}_{t'} \mid \mathbf{s}_{t'})$ term. The $-1$ comes from the derivative of the entropy term. The second expression follows by noting that the gradient estimator is invariant to additive state-dependent constants, and replacing $1$ with a state-dependent baseline $b(\mathbf{s}_{t'})$. The resulting policy gradient estimator exactly matches a standard policy gradient estimator, with the only modification being the addition of the $-\log \pi_\theta(\mathbf{a}_{t'} \mid \mathbf{s}_{t'})$ term to the reward at each time step $t'$. Intuitively, the reward of each action is modified by subtracting the log-probability of that action under the current policy, which causes the policy to maximize entropy. This gradient estimator can be written more compactly as

$$\nabla_\theta J(\theta) = \sum_{t=1}^{T} E_{(\mathbf{s}_t, \mathbf{a}_t) \sim \hat{p}}\left[ \nabla_\theta \log \pi_\theta(\mathbf{a}_t \mid \mathbf{s}_t)\, \hat{A}(\mathbf{s}_t, \mathbf{a}_t) \right],$$

where $\hat{A}(\mathbf{s}_t, \mathbf{a}_t)$ is an advantage estimator. Any standard advantage estimator, such as the GAE estimator (Schulman et al., 2016), can be used in place of the standard baselined Monte Carlo return above. Again, the only necessary modification is to add $-\log \pi_\theta(\mathbf{a}_{t'} \mid \mathbf{s}_{t'})$ to the reward at each time step $t'$. As with standard policy gradients, a practical implementation of this method estimates the expectation by sampling trajectories from the current policy, and may be improved by following the natural gradient direction.
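A minimal sketch of this estimator for a tabular softmax policy follows. Everything about the MDP below is an illustrative assumption; a practical implementation would use a neural network policy and a learned advantage estimator rather than the plain reward-to-go used here:

```python
import numpy as np

rng = np.random.default_rng(6)
S, A, T = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))   # p(s' | s, a)
r = rng.standard_normal((S, A))              # reward r(s, a)

def policy(theta):
    """Tabular softmax policy pi_theta(a | s)."""
    z = np.exp(theta - theta.max(-1, keepdims=True))
    return z / z.sum(-1, keepdims=True)

def maxent_pg(theta, n=200):
    """REINFORCE-style estimate with -log pi added to each reward."""
    pi = policy(theta)
    grad = np.zeros_like(theta)
    for _ in range(n):
        s = rng.integers(S)
        steps, rewards = [], []
        for _t in range(T):
            a = rng.choice(A, p=pi[s])
            steps.append((s, a))
            rewards.append(r[s, a] - np.log(pi[s, a]))   # entropy-augmented
            s = rng.choice(S, p=P[s, a])
        for t, (s_t, a_t) in enumerate(steps):
            ret = sum(rewards[t:])                        # reward-to-go
            g = -pi[s_t].copy()                           # grad log softmax:
            g[a_t] += 1.0                                 # indicator - pi
            grad[s_t] += g * ret
    return grad / n

theta = np.zeros((S, A))
grad = maxent_pg(theta)
```

Repeatedly taking small steps `theta += lr * grad` would perform (noisy) gradient ascent on the maximum entropy objective.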

### 4.2 Maximum Entropy Actor-Critic Algorithms

Instead of directly differentiating the variational lower bound, we can adopt a message passing approach which, as we will see later, can produce lower-variance gradient estimates. First, note that we can write down the following equation for the optimal target distribution for $q(\mathbf{a}_t \mid \mathbf{s}_t)$:

$$q(\mathbf{a}_t \mid \mathbf{s}_t) \propto \exp\left( E_{q(\mathbf{s}_{t+1:T}, \mathbf{a}_{t+1:T} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ \sum_{t'=t}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \sum_{t'=t+1}^{T} \log q(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) \right] \right).$$

This is because conditioning on $\mathbf{s}_t$ makes the action completely independent of all past states, but the action still depends on all future states and actions. Note that the dynamics terms $p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$ and $q(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)$ do not appear in the above equation, since they perfectly cancel. We can simplify the expectation above as follows:

$$E_{q}\left[ \sum_{t'=t}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \sum_{t'=t+1}^{T} \log q(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) \right] = r(\mathbf{s}_t, \mathbf{a}_t) + E_{\mathbf{s}_{t+1} \sim p(\mathbf{s}_{t+1} \mid \mathbf{s}_t, \mathbf{a}_t)}\left[ E_{q}\left[ \sum_{t'=t+1}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \log q(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) \,\Big|\, \mathbf{s}_{t+1} \right] \right].$$

In this case, note that the inner expectation does not contain $\mathbf{s}_t$ or $\mathbf{a}_t$, and therefore makes for a natural representation for a message that can be sent from future states. We will denote this message $V(\mathbf{s}_{t+1})$, since it will correspond to a soft value function:

$$V(\mathbf{s}_{t+1}) = E_{q}\left[ \sum_{t'=t+1}^{T} r(\mathbf{s}_{t'}, \mathbf{a}_{t'}) - \log q(\mathbf{a}_{t'} \mid \mathbf{s}_{t'}) \,\Big|\, \mathbf{s}_{t+1} \right].$$
