Exploration by Distributional Reinforcement Learning

05/04/2018 · Yunhao Tang, et al. · Columbia University

We propose a framework based on distributional reinforcement learning and recent attempts to combine Bayesian parameter updates with deep reinforcement learning. We show that our proposed framework conceptually unifies multiple previous methods in exploration. We also derive a practical algorithm that achieves efficient exploration on challenging control tasks.


1 Introduction

Deep reinforcement learning (RL) has enjoyed numerous recent successes in various domains such as video games and robotics control [Schulman et al.2015, Duan et al.2016, Levine et al.2016]. Deep RL algorithms typically apply naive exploration strategies such as $\epsilon$-greedy [Mnih et al.2013, Lillicrap et al.2016]. However, such myopic strategies cannot lead to systematic exploration in hard environments [Osband et al.2017].

We provide an exploration algorithm based on distributional RL [Bellemare et al.2017] and recent attempts to combine Bayesian parameter updates with deep reinforcement learning. We show that the proposed algorithm provides a conceptual unification of multiple previous methods on exploration in the deep reinforcement learning setting. We also show that the algorithm achieves efficient exploration in challenging environments.

2 Background

2.1 Markov Decision Process and Value Based Reinforcement Learning

In a Markov Decision Process (MDP), at time step $t$ an agent is in state $s_t \in \mathcal{S}$, takes action $a_t \in \mathcal{A}$, receives reward $r_t$ and transitions to the next state $s_{t+1} \sim p(\cdot|s_t, a_t)$. At time $t = 0$ the agent's state follows an initial distribution $s_0 \sim \rho_0(\cdot)$. A policy $\pi(\cdot|s)$ is a mapping from a state to a distribution over actions. The objective is to find a policy $\pi$ that maximizes the discounted cumulative reward

$$J(\pi) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r_t\Big], \qquad (1)$$

where $\gamma \in (0,1)$ is a discount factor. In state $s$, the action-value function $Q^\pi(s,a)$ is defined as the expected cumulative reward that could be received by first taking action $a$ and following policy $\pi$ thereafter,

$$Q^\pi(s,a) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \gamma^t r_t \,\Big|\, s_0 = s,\ a_0 = a\Big].$$

From the above definition, it can be shown that $Q^\pi(s,a)$ satisfies the Bellman equation

$$Q^\pi(s,a) = \mathbb{E}_{s' \sim p(\cdot|s,a),\ a' \sim \pi(\cdot|s')}\big[r(s,a) + \gamma Q^\pi(s',a')\big].$$

Let $\pi^*$ be the optimal policy and $Q^*(s,a)$ its action-value function. $Q^*(s,a)$ satisfies the following Bellman equation

$$Q^*(s,a) = \mathbb{E}_{s' \sim p(\cdot|s,a)}\big[r(s,a) + \gamma \max_{a'} Q^*(s',a')\big].$$

The above equations illustrate the temporal consistency of the action-value functions that allows for the design of learning algorithms. Define the Bellman optimality operator $\mathcal{T}$ as

$$[\mathcal{T}Q](s,a) = \mathbb{E}_{s' \sim p(\cdot|s,a)}\big[r(s,a) + \gamma \max_{a'} Q(s',a')\big].$$

When $\gamma < 1$, starting from any $Q_0$, iteratively applying the operator $Q_{t+1} = \mathcal{T}Q_t$ leads to convergence $Q_t \rightarrow Q^*$ as $t \rightarrow \infty$.
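As a quick illustration (ours, not from the paper), the following sketch runs tabular Q-value iteration on a toy MDP with made-up transitions and rewards, repeatedly applying the Bellman optimality operator until the iterates stop changing.

import numpy as np

# Toy MDP with made-up transitions and rewards (illustration only).
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'] = p(s'|s, a)
R = rng.random((n_states, n_actions))                              # R[s, a] = r(s, a)

def bellman_operator(Q):
    """[T Q](s, a) = r(s, a) + gamma * E_{s'}[ max_{a'} Q(s', a') ]."""
    return R + gamma * P @ Q.max(axis=1)

# Iterate Q_{t+1} = T Q_t; for gamma < 1 this converges to Q*.
Q = np.zeros((n_states, n_actions))
for _ in range(1000):
    Q_next = bellman_operator(Q)
    if np.max(np.abs(Q_next - Q)) < 1e-10:
        break
    Q = Q_next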

In high dimensional cases, it is critical to use function approximation as a compact representation of action values. Let $Q_\theta(s,a)$ be such a function with parameter $\theta$ that approximates a table of action values with entry $Q(s,a)$. The aim is to find $\theta$ such that $Q_\theta \approx Q^*$. Let $\Pi$ be the operator that projects an arbitrary vector of action values $Q$ to the subspace spanned by the function class $\{Q_\theta\}$. Since the update of action values can now only take place in this subspace, the iterate is updated as $Q_{t+1} = \Pi\mathcal{T}Q_t$. In cases where $Q_\theta$ is linear in $\theta$, the above procedure can be shown to converge [Tsitsiklis and Van Roy1996]. However, in cases where $Q_\theta$ is nonlinear (e.g. a neural network), the function approximation becomes more expressive at the cost of losing convergence guarantees. Many deep RL algorithms are designed following the above formulation, such as Deep Q-Network (DQN) [Mnih et al.2013].
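The following sketch (ours, with made-up features and the same kind of toy MDP) illustrates one projected update $Q_{t+1} = \Pi\mathcal{T}Q_t$ with a linear approximator, where the projection $\Pi$ is implemented as a least-squares fit of the Bellman targets onto the span of the features.

import numpy as np

n_states, n_actions, n_features, gamma = 3, 2, 4, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = rng.random((n_states, n_actions))
Phi = rng.random((n_states * n_actions, n_features))                # one feature row per (s, a)

def projected_bellman_update(w):
    """One step of Q_{t+1} = Pi T Q_t with Q_w(s, a) = Phi[(s, a)] . w."""
    Q = (Phi @ w).reshape(n_states, n_actions)       # current approximate action values
    target = R + gamma * P @ Q.max(axis=1)           # Bellman targets T Q
    # Projection Pi: least-squares fit of the targets within the span of Phi.
    w_next, *_ = np.linalg.lstsq(Phi, target.reshape(-1), rcond=None)
    return w_next

w = np.zeros(n_features)
for _ in range(200):
    w = projected_bellman_update(w)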

2.2 Distributional Reinforcement Learning

Following [Bellemare et al.2017], instead of considering the action value $Q^\pi(s,a)$ under policy $\pi$, which is itself an expectation, consider the random return obtained at $(s,a)$ by following policy $\pi$, $Z^\pi(s,a) = \sum_{t=0}^{\infty} \gamma^t r_t$. It follows that $Q^\pi(s,a) = \mathbb{E}[Z^\pi(s,a)]$. Let $Z^\pi_{s,a}$ be the distribution of $Z^\pi(s,a)$. The Bellman equation for the random return is similar to that of the action-value functions,

$$Z^\pi(s,a) \overset{D}{=} r(s,a) + \gamma Z^\pi(s',a'), \quad s' \sim p(\cdot|s,a),\ a' \sim \pi(\cdot|s'),$$

where both sides are distributions and $\overset{D}{=}$ denotes equality in distribution (in later notation we do not distinguish between the random return $Z^\pi(s,a)$ and its distribution $Z^\pi_{s,a}$, for simplicity). Define the distributional Bellman operator $\mathcal{T}^\pi$ under policy $\pi$ as

$$[\mathcal{T}^\pi Z](s,a) \overset{D}{=} r(s,a) + \gamma Z(s',a'), \quad s' \sim p(\cdot|s,a),\ a' \sim \pi(\cdot|s').$$

Notice that $\mathcal{T}^\pi$ operates on distributions. Define the distributional Bellman optimality operator $\mathcal{T}$ analogously, with $a'$ chosen greedily with respect to the expected return. When $\gamma < 1$, starting from any distribution $Z_0$, applying the operator as $Z_{t+1} = \mathcal{T}Z_t$ leads to convergence in expectation, $\mathbb{E}[Z_t] \rightarrow Q^*$. However, the distribution itself may not weakly converge.

To design a practical algorithm, one must use a parametric family of distributions $Z_\theta$ with parameter $\theta$ to approximate $Z^\pi(s,a)$. Let $D(Z_1, Z_2)$ be a discrepancy measure between distributions $Z_1$ and $Z_2$. Define the projection operator $\Pi_D$ as follows

$$\Pi_D Z = \arg\min_{Z_\theta} D(Z_\theta, Z).$$

In other words, $\Pi_D$ projects a distribution $Z$ onto the distribution in the parametric family with smallest discrepancy from $Z$. Hence the distribution is updated as $Z_{t+1} = \Pi_D \mathcal{T} Z_t$. In practice, the operator is applied to different entries $(s,a)$ asynchronously. For a given pair $(s,a)$ with sampled next state $s'$, one first selects a greedy action for the next state,

$$a^* = \arg\max_{a'} \mathbb{E}[Z_\theta(s', a')],$$

then updates the distribution $Z_\theta(s,a)$ to match the target distribution $r + \gamma Z_\theta(s', a^*)$ by minimizing the discrepancy

$$\min_\theta\ D\big(r + \gamma Z_\theta(s', a^*),\, Z_\theta(s,a)\big). \qquad (2)$$

When only samples $\{z_i\}_{i=1}^N$ of the target are available, let the empirical distribution be $\hat{Z} = \frac{1}{N}\sum_{i=1}^N \delta_{z_i}$, where $\delta_z$ is the Dirac distribution that assigns a point mass of probability $1$ at $z$; then (2) reduces to minimizing $D(\hat{Z}, Z_\theta(s,a))$.
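As a small self-contained illustration (ours, not from the paper) of this sample-based projection step, the sketch below represents the target distribution by samples $z_i = r + \gamma z_i'$ and fits a parametric return model by minimizing the KL divergence to the empirical distribution, which for fixed samples reduces to a negative log-likelihood; the Gaussian parameterization and the placeholder samples are assumptions.

import torch

# Parametric return model Z_theta(s, a) for a single (s, a) entry: a Gaussian with
# learnable mean and log standard deviation (the Gaussian choice is an assumption here).
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

# Placeholder samples of the target distribution r + gamma * Z_theta(s', a*).
reward, gamma = 1.0, 0.99
next_return_samples = torch.randn(64)          # stand-in for samples of Z_theta(s', a*)
z = reward + gamma * next_return_samples       # samples z_i defining the empirical \hat{Z}

for _ in range(100):
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    loss = -dist.log_prob(z).mean()            # = KL(\hat{Z} || Z_theta) up to a constant
    opt.zero_grad()
    loss.backward()
    opt.step()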

3 Related Work

In reinforcement learning (RL), naive exploration strategies such as $\epsilon$-greedy [Mnih et al.2013, Lillicrap et al.2016] do not explore well because local perturbations of actions break the consistency between consecutive steps [Osband and Van Roy2015]. A number of prior works apply randomization to the parameter space [Fortunato et al.2017, Plappert et al.2016] to preserve consistency in exploration, but their formulations are built on heuristics. Posterior sampling is a principled exploration strategy in the bandit setting [Thompson1933, Russo2017], yet its extension to RL [Osband et al.2013] is hard to scale to large problems. More recent works have formulated the exploration strategy as sampling randomized value functions and interpreted the algorithm as approximate posterior sampling [Osband et al.2016, Osband et al.2017]. Instead of modeling value functions, our formulation is built on modeling return distributions, which reduces to exact posterior sampling in the bandit setting.

Following similar ideas of randomized value functions, multiple recent works have combined approximate Bayesian inference [Ranganath et al.2014, Blei et al.2017] with Q-learning and justified the efficiency of exploration by relating it to posterior sampling [Lipton et al.2016, Tang and Kucukelbir2017, Azizzadenesheli et al.2017, Moerland et al.2017]. Though their formulations are based on randomized value functions, we offer an alternate interpretation by modeling return distributions and provide a conceptual framework that unifies these previous methods (Section 5). We also provide a potential approach that extends the current framework to policy-based methods as in [Henderson et al.2017].

Modeling the return distribution dates back to the early work of [Dearden et al.1998, Morimura et al.2010, Morimura et al.2012], where learning a return distribution instead of only its expectation presents a more statistically challenging task but provides more information during control. More recently, [Bellemare et al.2017] applies a histogram to learn the return distribution and displays large performance gains over DQN [Mnih et al.2013]. Based on [Bellemare et al.2017], we provide a more general distributional learning paradigm that combines return distribution learning with exploration based on approximate posterior sampling.

4 Exploration by Distributional Reinforcement Learning

4.1 Formulation

Recall that $Z(s,a)$ is the return distribution for the state-action pair $(s,a)$. In practice, we approximate this distribution by a parametric distribution $p_\theta$ with parameter $\theta$. Following [Bellemare et al.2017], we take the discrepancy $D$ to be the KL divergence. Recall that $\hat{Z} = \frac{1}{N}\sum_{i=1}^N \delta_{z_i}$ is the empirical distribution of the samples $\{z_i\}_{i=1}^N$, hence the KL divergence reduces to

$$D_{\mathrm{KL}}\big(\hat{Z}\,\|\,p_\theta\big) = -\frac{1}{N}\sum_{i=1}^N \log p_\theta(z_i) + \text{const}, \qquad (3)$$

where we have dropped a constant in the last equality. Let $\theta$ follow a given distribution $q_\phi(\theta)$ with parameter $\phi$. We propose to minimize the following objective

$$\min_\phi\ \mathbb{E}_{\theta \sim q_\phi(\theta)}\Big[-\sum_{i=1}^N \log p_\theta(z_i)\Big] - \mathbb{H}\big(q_\phi\big), \qquad (4)$$

where $\mathbb{H}(q_\phi)$ is the entropy of $q_\phi$. Note that (3) corresponds to the projection step defined in (2), and the first term of (4) is, up to a constant and a positive scaling, an expectation of the projection discrepancy over the distribution $q_\phi$. The intuition behind (4) is that the first term encourages low expected discrepancy (which is equivalent to a Bellman error) so as to learn optimal policies, while the second term serves as an exploration bonus that encourages a dispersed distribution over $\theta$ for better exploration during learning.

We now draw the connection between (4) and approximate Bayesian inference. First assign an improper uniform prior on $\theta$, i.e. $p(\theta) \propto 1$. The posterior is defined by Bayes rule given the data $\mathcal{D} = \{z_i\}_{i=1}^N$ as $p(\theta|\mathcal{D}) \propto p(\mathcal{D}|\theta)\,p(\theta)$, where $p(\mathcal{D}|\theta) = \prod_{i=1}^N p_\theta(z_i)$ (we assume the samples drawn from the next-state distributions are i.i.d., as in [Bellemare et al.2017]). Since by definition $\log p(\mathcal{D}|\theta) = \sum_{i=1}^N \log p_\theta(z_i)$, (4) is equivalent to

$$\min_\phi\ D_{\mathrm{KL}}\big(q_\phi(\theta)\,\|\,p(\theta|\mathcal{D})\big). \qquad (5)$$

Hence to minimize the objective (4) is to search for a parametric distribution $q_\phi$ that approximates the posterior $p(\theta|\mathcal{D})$. From (5) we can see that the posterior is the minimizer of (4): it achieves the optimal balance between achieving low discrepancy and being as dispersed as possible. The close resemblance between our formulation and posterior sampling partially justifies the potential strength of our exploration strategy.
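As an illustration of how (4) could be estimated in practice, the sketch below (ours) uses a factorized Gaussian $q_\phi$, a one-sample reparameterized Monte Carlo estimate of the expected negative log-likelihood, and the closed-form Gaussian entropy; the toy return model at the end is an assumption for demonstration only.

import math
import torch

def gauss_entropy(log_sigma: torch.Tensor) -> torch.Tensor:
    """Entropy of a factorized Gaussian q_phi with log standard deviations log_sigma."""
    d = log_sigma.numel()
    return 0.5 * d * (1.0 + math.log(2.0 * math.pi)) + log_sigma.sum()

def objective_4(mu, log_sigma, neg_log_likelihood_fn):
    """One-sample reparameterized estimate of objective (4):
    E_{theta ~ q_phi}[ -sum_i log p_theta(z_i) ] - H(q_phi)."""
    eps = torch.randn_like(mu)
    theta = mu + log_sigma.exp() * eps      # theta ~ q_phi via the reparameterization trick
    return neg_log_likelihood_fn(theta) - gauss_entropy(log_sigma)

# Toy usage: p_theta = N(theta, 1) over scalar returns, so the NLL is a squared error.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
z = torch.randn(32) + 3.0                                  # placeholder return samples z_i
nll = lambda theta: 0.5 * ((z - theta) ** 2).sum()         # -sum_i log N(z_i; theta, 1) + const
loss = objective_4(mu, log_sigma, nll)
loss.backward()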

4.2 Generic Algorithm

A generic algorithm, Algorithm 1, can be derived from (5). We start with a proposed distribution $q_\phi(\theta)$ over the parameter $\theta$ and a return distribution model $p_\theta$. During control, in state $s$, we sample a parameter $\theta \sim q_\phi(\theta)$ and choose the action $a = \arg\max_{a'} \mathbb{E}[Z_\theta(s,a')]$. This is equivalent to taking an action based on the approximate posterior probability that it is optimal. During training, we sample returns from the one-step lookahead distribution of the greedy action, and update the parameter $\phi$ by optimizing (4).

1:  INPUT: generic return distribution $p_\theta$ with parameter $\theta$; parameter distribution $q_\phi(\theta)$ with parameter $\phi$.
2:  while not converged do
3:     // Control
4:     Sample $\theta \sim q_\phi(\theta)$.
5:     In state $s$, choose $a = \arg\max_{a'} \mathbb{E}[Z_\theta(s,a')]$, get transition $s'$ and reward $r$.
6:     // Training
7:     Given the state-action pair $(s,a)$, choose the greedy one-step lookahead distribution $r + \gamma Z_\theta(s', a^*)$, where $a^* = \arg\max_{a'} \mathbb{E}[Z_\theta(s',a')]$.
8:     Sample $\{z_i\}_{i=1}^N$ from this distribution, let $\hat{Z}$ be the empirical distribution of the samples, and update the parameter $\phi$ by minimizing objective (4).
9:  end while
Algorithm 1 Exploration by Distributional RL: Generic

4.3 Practical Algorithm: Gaussian Assumption

We turn Algorithm 1 into a practical algorithm by imposing an assumption on $p_\theta$. [Dearden et al.1998] assumes the return $Z(s,a)$ to be Gaussian, based on the assumption that the chain is ergodic and the discount factor $\gamma$ is close to $1$. We make this assumption here and let $p_\theta$ be a Gaussian with parametrized mean $Q_\theta(s,a)$ and fixed standard error $\sigma$. The objective (3) reduces to

$$D_{\mathrm{KL}}\big(\hat{Z}\,\|\,p_\theta\big) = \frac{1}{2\sigma^2 N}\sum_{i=1}^N \big(z_i - Q_\theta(s,a)\big)^2 + \text{const}. \qquad (6)$$

We now have an analytical form for $\log p_\theta(z_i)$. The objective (4) reduces to

$$\min_\phi\ \mathbb{E}_{\theta \sim q_\phi}\Big[\frac{1}{2\sigma^2}\sum_{i=1}^N \big(z_i - Q_\theta(s,a)\big)^2\Big] - \mathbb{H}\big(q_\phi\big). \qquad (7)$$
1:  INPUT: target parameter update period $\tau$; learning rate $\alpha$; Gaussian distribution parameter $\sigma$.
2:  INITIALIZE: parameters $\phi$ and target parameters $\phi^- \leftarrow \phi$; replay buffer $\mathcal{B} \leftarrow \emptyset$; step counter $c \leftarrow 0$.
3:  for each episode do
4:     while episode not terminated do
5:        $c \leftarrow c + 1$.
6:        Sample $\theta \sim q_\phi(\theta)$.
7:        In state $s$, choose $a = \arg\max_{a'} Q_\theta(s, a')$, get transition $s'$ and reward $r$.
8:        Save the experience tuple $(s, a, r, s')$ to the buffer $\mathcal{B}$.
9:        Sample parameters $\theta \sim q_\phi(\theta)$, $\theta^- \sim q_{\phi^-}(\theta)$ and sample a batch of tuples $\{(s_j, a_j, r_j, s_j')\}$ from $\mathcal{B}$.
10:        Sample the target $z_j \sim r_j + \gamma Z_{\theta^-}(s_j', a_j^*)$ for the $j$th tuple, where $a_j^*$ is greedy w.r.t. $Q_{\theta^-}(s_j', \cdot)$.
11:        Take the gradient $g$ of the KL divergence objective (7) w.r.t. $\phi$.
12:        $\phi \leftarrow \phi - \alpha g$.
13:        if $c \bmod \tau = 0$ then
14:           Update the target parameter $\phi^- \leftarrow \phi$.
15:        end if
16:     end while
17:  end for
Algorithm 2 Exploration by Distributional RL: Gaussian

In parallel to the principal network with parameter $\phi$, we maintain a target network with parameter $\phi^-$ to stabilize learning [Mnih et al.2013]. Target samples for the updates are generated by the target network $q_{\phi^-}$. We also maintain a replay buffer $\mathcal{B}$ to store off-policy data.
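To make the update concrete, here is a minimal sketch (ours, not the authors' code) of one inner step of Algorithm 2 under the Gaussian assumption: $q_\phi$ is a factorized Gaussian over the weights of a small Q-network, one parameter sample is drawn by reparameterization for the online network and one for the target network, and the loss is the Bellman-error term of (7) minus the entropy of $q_\phi$. The network sizes, $\sigma$, and the batch placeholders are arbitrary assumptions.

import math
import torch
import torch.nn.functional as F

obs_dim, n_actions, hidden, sigma, gamma = 4, 2, 64, 1.0, 0.99

# Variational parameters phi: mean and log-std for every weight of a 2-layer Q-network.
shapes = [(hidden, obs_dim), (hidden,), (n_actions, hidden), (n_actions,)]
mus = [torch.zeros(s, requires_grad=True) for s in shapes]
log_sigmas = [torch.full(s, -3.0, requires_grad=True) for s in shapes]
opt = torch.optim.Adam(mus + log_sigmas, lr=1e-3)

def sample_theta(mus, log_sigmas):
    return [m + ls.exp() * torch.randn_like(m) for m, ls in zip(mus, log_sigmas)]

def q_values(theta, obs):
    w1, b1, w2, b2 = theta
    return F.linear(torch.relu(F.linear(obs, w1, b1)), w2, b2)

def entropy(log_sigmas):
    d = sum(ls.numel() for ls in log_sigmas)
    return 0.5 * d * (1.0 + math.log(2.0 * math.pi)) + sum(ls.sum() for ls in log_sigmas)

# Placeholder batch (s, a, r, s') and a frozen copy of phi acting as the target network.
obs, next_obs = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
acts, rews = torch.randint(n_actions, (32,)), torch.randn(32)
target_mus = [m.detach().clone() for m in mus]
target_log_sigmas = [ls.detach().clone() for ls in log_sigmas]

theta = sample_theta(mus, log_sigmas)                      # theta ~ q_phi
theta_tgt = sample_theta(target_mus, target_log_sigmas)    # theta^- ~ q_{phi^-}
with torch.no_grad():
    z = rews + gamma * q_values(theta_tgt, next_obs).max(dim=1).values  # targets via theta^-
q_sa = q_values(theta, obs).gather(1, acts.unsqueeze(1)).squeeze(1)
loss = (0.5 / sigma ** 2) * ((z - q_sa) ** 2).sum() - entropy(log_sigmas)  # objective (7)
opt.zero_grad()
loss.backward()
opt.step()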

4.4 Randomized Value Function as Randomized Critic for Policy Gradient

In an off-policy optimization algorithm like Deep Deterministic Policy Gradient (DDPG) [Lillicrap et al.2016], a policy $\pi_\psi$ with parameter $\psi$ and a critic $Q_\theta$ with parameter $\theta$ are trained at the same time. The policy gradient of the reward objective (1) is

$$\nabla_\psi J(\pi_\psi) = \mathbb{E}_{s}\Big[\nabla_a Q_\theta(s, a)\big|_{a = \pi_\psi(s)}\, \nabla_\psi \pi_\psi(s)\Big], \qquad (8)$$

where replacing the true action-value function $Q^{\pi_\psi}$ by a critic $Q_\theta$ introduces bias but largely reduces variance [Lillicrap et al.2016].

To extend the formulation of Algorithm 2 to policy-based methods, we can interpret $Q_\theta$ as a randomized critic with a distribution induced by $q_\phi(\theta)$. At each update we sample a parameter $\theta \sim q_\phi(\theta)$ and compute the policy gradient (8) through the sampled critic to update $\psi$. The distributional parameter $\phi$ is updated as in Algorithm 2, with the greedy actions replaced by actions produced by the policy $\pi_\psi$.

Policy gradients computed from a randomized critic may lead to better exploration directly in the policy space, as in [Plappert et al.2016], since the uncertainty in the value function can be propagated into the policy via gradients through the uncertain value function.
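As an illustration of this extension, here is a minimal sketch (ours, not the authors' implementation) of a DDPG-style actor update through a sampled critic: the critic weights are drawn from a factorized Gaussian $q_\phi$ once per update, and the policy gradient (8) flows through the sampled critic. The single-linear-layer critic, network sizes, and placeholder batch are assumptions.

import torch
import torch.nn as nn

obs_dim, act_dim, hidden = 4, 1, 64

actor = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                      nn.Linear(hidden, act_dim), nn.Tanh())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

# Variational parameters phi of a factorized-Gaussian critic Q_theta(s, a);
# the critic here is a single linear layer on [s, a] for brevity.
mu_w = torch.zeros(obs_dim + act_dim, requires_grad=True)
log_sigma_w = torch.full((obs_dim + act_dim,), -3.0, requires_grad=True)

def sampled_critic(obs, act):
    """Q_theta(s, a) with theta ~ q_phi, sampled once per actor update."""
    theta = (mu_w + log_sigma_w.exp() * torch.randn_like(mu_w)).detach()
    return torch.cat([obs, act], dim=1) @ theta

obs = torch.randn(32, obs_dim)                           # placeholder batch of states
actor_loss = -sampled_critic(obs, actor(obs)).mean()     # ascend (8) through the sampled critic
actor_opt.zero_grad()
actor_loss.backward()
actor_opt.step()
# phi (mu_w, log_sigma_w) is updated separately as in Algorithm 2, with greedy actions
# replaced by actions produced by the actor.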

5 Connections with Previous Methods

We now argue that the above formulation provides a conceptual unification to multiple previous methods. We can recover the same objective functions as previous methods by properly choosing the parametric form of return distribution , the distribution over model parameter and the algorithm to optimize the objective (5).

5.1 Posterior Sampling for Bandits

In the bandit setting, we only have a set of actions $\mathcal{A} = \{a_1, \dots, a_K\}$. Assume the underlying reward for each action is Gaussian distributed. To model the return distribution of action $a$, we set $p_\theta$ to be Gaussian with unknown mean parameter $\mu_a$, i.e. $Z(a) \sim \mathcal{N}(\mu_a, \sigma^2)$. We assume the distribution $q_\phi$ over the parameters $\{\mu_a\}$ to be Gaussian as well. Due to the conjugacy between the improper uniform prior (assumed in Section 4.1) and the Gaussian likelihood, the posterior is still Gaussian. We can minimize (5) exactly by setting $q_\phi$ equal to the posterior. During control, Algorithm 1 selects the action with the largest sampled mean, which is exact posterior sampling. This shows that our proposed algorithm reduces to exact posterior sampling for bandits. For general RL cases, the equivalence is not exact, but this connection partially justifies that our algorithm can achieve very efficient exploration.
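The bandit case can be made concrete with a short sketch (ours): with a Gaussian reward likelihood of known variance and a flat prior, the posterior over each arm's mean has a closed Gaussian form, and acting greedily with respect to a sample from it is exactly Thompson (posterior) sampling. The true means, noise level, and horizon below are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.5, 0.9])     # unknown to the agent (illustration only)
K, sigma = len(true_means), 1.0            # known observation noise

counts = np.zeros(K)                       # number of pulls per arm
sums = np.zeros(K)                         # sum of observed rewards per arm

for t in range(1000):
    # Posterior over each arm mean under a flat prior: N(sums/counts, sigma^2/counts).
    post_mean = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
    post_std = np.where(counts > 0, sigma / np.sqrt(np.maximum(counts, 1)), 1e3)
    sampled_means = rng.normal(post_mean, post_std)   # theta ~ posterior
    a = int(np.argmax(sampled_means))                 # act greedily w.r.t. the sample
    r = rng.normal(true_means[a], sigma)
    counts[a] += 1
    sums[a] += r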

5.2 Deep Q Network with Bayesian Updates

Despite minor algorithmic differences, Algorithm 2 has a very similar objective to Variational DQN [Tang and Kucukelbir2017], BBQ Networks [Lipton et al.2016] and Bayesian DQN [Azizzadenesheli et al.2017]: all three algorithms can be interpreted as making a Gaussian assumption on the return distribution and proposing a Gaussian distribution over the parameters $\theta$. However, it is worth recalling that Algorithm 2 is formulated by modeling return distributions, while the previous methods are formulated by randomizing value functions.

If we are to interpret these three algorithms as instantiations of Algorithm 2, the difference lies in how they optimize (7). Variational DQN and BBQ apply variational inference to minimize the divergence between $q_\phi(\theta)$ and the posterior $p(\theta|\mathcal{D})$, while Bayesian DQN applies exact analytical updates (exact minimization of (7)) by using the conjugacy of prior and likelihood distributions as discussed above. Algorithm 1 generalizes these variants of DQN with Bayesian updates by allowing for other parametric likelihood models $p_\theta$, though in practice the Gaussian distribution is very popular due to its simple analytical form.

To recover NoisyNet [Fortunato et al.2017] from (7), we can properly scale the objective (so that the Gaussian parameter $\sigma$ cancels from the Bellman-error term) and drop the entropy bonus. This implies that NoisyNet makes a less strict assumption on the return distribution (the Gaussian parameter does not appear in the objective) but does not explicitly encourage exploration by adding an entropy bonus, hence its exploration relies purely on the randomization of the parameter $\theta$. To further recover the objective of DQN [Mnih et al.2013], we set $q_\phi(\theta)$ to be a Dirac distribution. Finally, since DQN has no randomness in the parameter $\theta$, its exploration relies on $\epsilon$-greedy action perturbations.

5.3 Distributional RL

Distributional RL [Bellemare et al.2017] models the return distribution using a categorical distribution and does not introduce parameter uncertainties. Since there is no distribution over the parameter $\theta$, Algorithm 1 recovers the exact objective of distributional RL from (4) by setting $q_\phi(\theta)$ to be a Dirac distribution and letting $p_\theta$ be categorical distributions. As the number of atoms in the categorical distribution increases, the model becomes increasingly close to non-parametric estimation. Though having more atoms makes the parametric distribution more expressive, it also poses a bigger statistical challenge during learning due to the larger number of parameters. As with general $p_\theta$, choosing a parametric form with appropriate representation power is critical for learning.

6 Experiments

In all experiments, we implement Algorithm 2 and refer to it as GE (Gauss exploration) in the following. We aim to answer the following questions:

  • In environments that require consistent exploration, does GE achieve more efficient exploration than conventional naive exploration strategies like $\epsilon$-greedy in DQN and direct parameter randomization in NoisyNet?

  • When a deterministic critic in an off-policy algorithm like DDPG [Lillicrap et al.2016] is replaced by a randomized critic, does the algorithm achieve better exploration?

6.1 Testing Environment

Chain MDP.

The chain MDP [Osband et al.2016] (Figure 1) serves as a benchmark to test whether an algorithm entails consistent exploration. The environment consists of $N$ states and each episode lasts a fixed number of time steps. The agent has two actions at each state $s_i$, while the two end states $s_1$ and $s_N$ are both absorbing. The transitions are deterministic. At state $s_1$ the agent receives a small reward, at state $s_N$ the agent receives a large reward, and no reward anywhere else. The initial state is always $s_2$, making it hard for the agent to escape the local optimum at $s_1$. If the agent explores uniformly at random, the expected number of time steps required to reach $s_N$ grows quickly with $N$. For large $N$, it is almost impossible for a randomly exploring agent to reach $s_N$ in a single episode, and the optimal strategy of reaching $s_N$ will never be learned.
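For concreteness, below is a gym-style sketch (ours) of a chain MDP of this kind; the exact reward magnitudes, episode length, and action semantics are placeholder assumptions rather than the paper's specification.

class ChainMDP:
    """N-state chain: small reward at the left end, large reward at the right end.
    Reward values, episode length, and deterministic action semantics are assumptions."""

    def __init__(self, n_states=16, episode_len=None):
        self.n = n_states
        self.episode_len = episode_len or n_states + 9   # assumed horizon
        self.reset()

    def reset(self):
        self.state, self.t = 1, 0        # states are 0..n-1; start at the second state
        return self.state

    def step(self, action):              # action 0 = left, action 1 = right
        self.t += 1
        if 0 < self.state < self.n - 1:  # the two end states are absorbing
            self.state += 1 if action == 1 else -1
        if self.state == 0:
            reward = 0.001               # small reward at the left end (assumed value)
        elif self.state == self.n - 1:
            reward = 1.0                 # large reward at the right end (assumed value)
        else:
            reward = 0.0
        done = self.t >= self.episode_len
        return self.state, reward, done, {}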

Figure 1: Chain MDP with $N$ states

Sparse Reward Environments.

All RL agents require reward signals to learn good policies. In a sparse reward environment, agents with naive exploration strategies stumble around randomly most of the time and require many more samples to learn good policies than agents that explore consistently. We modify the reward signals in OpenAI gym [Brockman et al.2016] and MuJoCo benchmark tasks [Todorov et al.2012] to be sparse as follows (a wrapper sketch follows the list).

  • MountainCar, Acrobot: a positive reward when the episode terminates (the goal is reached) and zero reward otherwise.

  • CartPole, InvertedPendulum, InvertedDoublePendulum: a negative reward when the episode terminates (the pole falls) and zero reward otherwise.
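A minimal wrapper sketch (ours, using the classic 4-tuple gym step API) showing one way to implement such sparsification; the terminal reward value is a placeholder.

import gym

class SparseRewardWrapper(gym.Wrapper):
    """Replace the dense per-step reward with a single terminal reward.
    The terminal value is a placeholder assumption (e.g. positive for goal-reaching
    tasks, negative for balancing tasks); every non-terminal step yields zero reward."""

    def __init__(self, env, terminal_reward=1.0):
        super().__init__(env)
        self.terminal_reward = terminal_reward

    def step(self, action):
        obs, _, done, info = self.env.step(action)   # discard the dense reward
        reward = self.terminal_reward if done else 0.0
        return obs, reward, done, info

# Example: env = SparseRewardWrapper(gym.make("MountainCar-v0"), terminal_reward=1.0)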

6.2 Experiment Results

Exploration in Chain MDP.

In Figure 2 (a)-(c) we compare DQN, NoisyNet and GE in Chain MDP environments with different numbers of states $N$. For small $N$, all three algorithms can solve the task. For moderate $N$, DQN cannot explore properly and makes no progress, while GE explores more efficiently and converges to the optimal policy faster than NoisyNet. For large $N$, both NoisyNet and DQN get stuck while GE makes progress more consistently. Compared to Bootstrapped DQN (BDQN) [Osband et al.2016], GE has higher variance for large $N$. This might be because BDQN represents the distribution using multiple heads and can approximate more complex distributions, enabling better exploration on this particular task. In general, however, our algorithm is much more computationally feasible than BDQN yet still achieves very efficient exploration.

Figure 2 (d) plots the state visit frequency for GE vs. DQN within the first 10 episodes of training. DQN mostly visits states near $s_2$ (the initial state), while GE visits a much wider range of states. Such active exploration allows the agent to consistently visit $s_N$ and learn the optimal policy within a small number of iterations.

Figure 2: Comparison of DQN vs NoisyNet vs GE on Chain MDP environments with increasing numbers of states $N$ (panels (a)-(c)). Panel (d) plots the state visit frequency within the first iteration of training for Gauss exploration vs. DQN on the Chain MDP. For each state $s$, an indicator is set to $1$ if $s$ is ever visited in an episode and $0$ otherwise; the moving average of this indicator across multiple episodes gives the state visit frequency. Each iteration consists of a fixed number of episodes.

Exploration in Sparse Reward Environments.

In Figure 3 (a)-(c) we present the comparison of the three algorithms in sparse reward environments. For each environment, we plot the rewards on a different scale. In CartPole, the plotted cumulative reward is the episode length; in MountainCar, the plotted cumulative reward indicates whether the target is reached within the episode; in Acrobot, the plotted cumulative reward is the negative of the episode length. In all sparse reward tasks, GE makes much faster progress than the other two algorithms. For example, in Sparse MountainCar, within the given number of iterations, DQN and NoisyNet never (or very rarely) reach the target, hence they make no (or little) progress in cumulative reward. GE, on the other hand, reaches the target more frequently from an early stage of training and makes progress more steadily.

In Figure 3 (d) we plot the state visit trajectories of GE vs. DQN in Sparse MountainCar. The vertical and horizontal axes of the plot correspond to two coordinates of the state space. The two panels of (d) correspond to an earlier and a later stage of training, respectively. As training proceeds, the state visits of DQN increasingly cluster in a small region of the state space and fail to explore efficiently. By contrast, GE maintains a widespread distribution over states and explores more systematically.

Figure 3: Comparison of DQN vs NoisyNet vs GE on sparse reward environments: (a) Sparse CartPole, (b) Sparse MountainCar, (c) Sparse Acrobot. Each iteration corresponds to a fixed number of episodes, and rewards are plotted using moving windows over episodes. Panel (d) plots state visit trajectories for Gauss exploration vs. DQN in Sparse MountainCar; the left panel of (d) shows an earlier stage of training and the right panel a later stage. The vertical and horizontal axes correspond to two coordinates of the state space.

Randomized Critic for Exploration.

We evaluate the performance of DDPG with different critics. When DQN is used as the critic, the agent explores by injecting noise into the actions produced by the policy [Lillicrap et al.2016]. When the critics are NoisyNet or randomized DQN with GE, the agent explores by updating its parameters using policy gradients computed through randomized critics, effectively injecting noise into the parameter space. In conventional continuous control tasks (Figure 4 (a) and (b)), randomized critics do not enjoy much advantage: for example, in a simple control task like InvertedPendulum, where exploration is not important, DDPG with action noise injection makes progress much faster (Figure 4 (a)), though DDPG with randomized critics seems to make progress in a steadier manner. In sparse reward environments (Figure 4 (c) and (d)), however, DDPG with randomized critics tends to make progress at a slightly higher rate than with action noise injection.

Figure 4: Comparison of the original Q-function (DQN) vs NoisyNet vs GE as critics for DDPG on (a) InvertedPendulum, (b) InvertedDoublePendulum, (c) Sparse InvertedPendulum and (d) Sparse InvertedDoublePendulum.

Hyper-parameter.

In all experiments, we set $q_\phi(\theta)$ to be a factorized Gaussian. In GE, as in NoisyNet [Fortunato et al.2017], each parameter of a fully connected layer (weight and bias) has two distributional parameters: a mean and a standard error. The standard errors are the actual hyper-parameters to tune: if they are large, the distribution over $\theta$ is widespread and the agent can execute a larger range of policies before committing to a solution. For both NoisyNet and GE, we require all standard errors to be the same, denoted $\sigma_\theta$, and perform a grid search over its range. A second hyper-parameter for GE is the Gaussian parameter $\sigma$ that determines the balance between the expected Bellman error and the entropy in (7); in our experiments, we tune $\sigma$ on a log scale.

We empirically find that both hyper-parameters are critical to the performance of GE. For each algorithm, we use a fairly exhaustive grid search to obtain the best hyper-parameters. Each experiment is performed multiple times and the reward plots in Figures 2, 3 and 4 are averaged over five different seeds. In Figure 5, we plot the performance of GE under different $\sigma_\theta$ and $\sigma$ on Sparse CartPole. From Figure 5 we see that the performance is not monotonic in these hyper-parameters: settings that inject more parameter noise generally lead to more active exploration but may hinder fast convergence, and vice versa. One must strike a proper balance between exploration and exploitation to obtain good performance. In DQN, we fix the $\epsilon$-greedy exploration constant. In all experiments, we also tune the learning rate.

Figure 5: Performance of Gauss Exploration (GE) on Sparse CartPole under different settings of the two hyper-parameters (panels (a) and (b)).

7 Conclusion

We have provided a framework based on distributional RL that unifies multiple previous methods on exploration in reinforcement learning, including posterior sampling for bandits as well as recent efforts in Bayesian updates of DQN parameters. We have also derived a practical algorithm based on a Gaussian assumption on the return distribution, which allows for efficient control and parameter updates. We have observed that the proposed algorithm obtains good performance on challenging tasks that require consistent exploration. A further extension of our current algorithm is to relax the Gaussian assumption on return distributions. We leave to future work whether a more flexible assumption can lead to better performance and whether it can be combined with model-based RL.

References