1 Introduction
Deep reinforcement learning (RL) with neural network function approximators has achieved superhuman performance in several challenging domains
(Mnih et al., 2015; Silver et al., 2016, 2018). Some of the most successful recent applications of deep RL to difficult environments such as Dota 2 (OpenAI, 2018a), Capture the Flag (Jaderberg et al., 2019), StarCraft II (DeepMind, 2019), and dexterous object manipulation (OpenAI, 2018b) have used policy gradient-based methods such as Proximal Policy Optimization (PPO) (Schulman et al., 2017) and the Importance-Weighted Actor-Learner Architecture (IMPALA) (Espeholt et al., 2018), both in the approximately on-policy setting. Policy gradients, however, can suffer from large variance that may limit performance, especially for high-dimensional action spaces (Wu et al., 2018). In practice, moreover, policy gradient methods typically employ carefully tuned entropy regularization in order to prevent policy collapse. As an alternative to policy gradient-based algorithms, in this work we introduce an approximate policy iteration algorithm that adapts Maximum a Posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018a,b) to the on-policy setting. The modified algorithm, V-MPO, relies on a learned state-value function instead of the state-action value function used in MPO. Like MPO, rather than directly updating the parameters in the direction of the policy gradient, V-MPO first constructs a target distribution for the policy update subject to a sample-based KL constraint, then calculates the gradient that partially moves the parameters toward that target, again subject to a KL constraint.
As we are particularly interested in scalable RL algorithms that can be applied to multi-task settings where a single agent must perform a wide variety of tasks, we show for the case of discrete actions that the proposed algorithm surpasses previously reported performance in the multi-task setting for both the Atari-57 (Bellemare et al., 2012) and DMLab-30 (Beattie et al., 2016) benchmark suites, and does so reliably without population-based tuning of hyperparameters (Jaderberg et al., 2017a). For a few individual levels in DMLab and Atari we also show that V-MPO can achieve scores substantially higher than previously reported, especially in the challenging Ms. Pacman.
V-MPO is also applicable to problems with high-dimensional, continuous action spaces. We demonstrate this in the context of learning to control both a 22-dimensional simulated humanoid from full state observations—where V-MPO reliably achieves higher asymptotic performance than previous algorithms—and a 56-dimensional simulated humanoid from pixel observations (Tassa et al., 2018; Merel et al., 2019). In addition, for several OpenAI Gym tasks (Brockman et al., 2016) we show that V-MPO achieves higher asymptotic performance than previously reported.
2 Background and setting
We consider the discounted RL setting, where we seek to optimize a policy $\pi_\theta(a|s)$ for a Markov Decision Process described by states $s \in \mathcal{S}$, actions $a \in \mathcal{A}$, initial state distribution $\rho(s_0)$, transition probabilities $\mathcal{P}(s_{t+1}|s_t, a_t)$, reward function $r(s_t, a_t)$, and discount factor $\gamma \in (0, 1)$. In deep RL, the policy $\pi_\theta(a_t|s_t)$, which specifies the probability that the agent takes action $a_t$ in state $s_t$ at time $t$, is described by a neural network with parameters $\theta$. We consider problems where both the states and actions may be discrete or continuous. Two functions play a central role in RL: the state-value function $V^\pi(s_t) = \mathbb{E}\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k}\big]$ and the state-action value function $Q^\pi(s_t, a_t) = r_t + \gamma\,\mathbb{E}_{s_{t+1}}\big[V^\pi(s_{t+1})\big]$, where $r_t = r(s_t, a_t)$, $a_t \sim \pi_\theta(\cdot|s_t)$, and $s_{t+1} \sim \mathcal{P}(\cdot|s_t, a_t)$. In the usual formulation of the RL problem, the goal is to find a policy that maximizes the expected return $J(\pi) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r_t\big]$. In policy gradient algorithms (Williams, 1992; Sutton et al., 2000; Mnih et al., 2016), for example, this objective is directly optimized by estimating the gradient of the expected return. An alternative approach to finding optimal policies derives from research that treats RL as a problem in probabilistic inference, including Maximum a Posteriori Policy Optimization (MPO) (Levine, 2018; Abdolmaleki et al., 2018a,b). Here our objective is subtly different: namely, given a suitable criterion for what are good actions to take in a certain state, how do we find a policy that achieves this goal?
As was the case for the original MPO algorithm, the following derivation is valid for any such criterion. However, the policy improvement theorem (Sutton & Barto, 1998) tells us that a policy update performed by exact policy iteration, $\pi'(s) = \arg\max_a Q^\pi(s, a)$, can improve the policy if there is at least one state-action pair with a positive advantage and nonzero probability of visiting the state. Motivated by this classic result, in this work we specifically choose an exponential function of the advantages $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$.
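To connect the exponential weighting of advantages to exact policy iteration: as the temperature shrinks, weighting actions by an exponential of their advantages approaches the greedy argmax update of classic policy iteration, while a large temperature keeps the weighting nearly uniform. A small numerical sketch (plain numpy; the name `eta` anticipates the temperature introduced in Section 4, and the function name is illustrative):

```python
import numpy as np

def exp_advantage_weights(advantages, eta):
    """Weights proportional to exp(A / eta), normalized over actions."""
    logits = np.asarray(advantages, dtype=np.float64) / eta
    logits -= logits.max()  # subtract the max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

adv = np.array([1.0, 0.0, -1.0])              # advantages of three actions
soft = exp_advantage_weights(adv, eta=10.0)   # high eta: nearly uniform
sharp = exp_advantage_weights(adv, eta=0.01)  # low eta: approximately greedy
```

At low temperature nearly all weight falls on the highest-advantage action, recovering the greedy update as a limiting case.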
Notation. In the following we use $\sum$ to indicate both discrete and continuous sums (i.e., integrals) over states and actions, depending on the setting. A sum with indices only, such as $\sum_{s,a}$, denotes a sum over all possible states and actions, while $\sum_{s,a \in \mathcal{D}}$, for example, denotes a sum over sample states and actions from a batch of trajectories (the “dataset”) $\mathcal{D}$.
3 Related work
V-MPO shares many similarities, and thus relevant related work, with the original MPO algorithm (Abdolmaleki et al., 2018a,b). In particular, the general idea of using KL constraints to limit the size of policy updates is present in both Trust Region Policy Optimization (TRPO; Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017); we note, however, that this corresponds to the E-step constraint in V-MPO. Meanwhile, the introduction of the M-step KL constraint and the use of top-half advantages distinguishes V-MPO from Relative Entropy Policy Search (REPS) (Peters et al., 2008). Interestingly, previous attempts to use REPS with neural network function approximators reported very poor performance, being particularly prone to local optima (Duan et al., 2016). In contrast, we find that the principles of EM-style policy optimization, when combined with appropriate constraints, can reliably train powerful neural networks, including transformers, for RL tasks.
Like V-MPO, Supervised Policy Update (SPU) (Vuong et al., 2019) seeks to exactly solve an optimization problem and fit the parametric policy to this solution. As we argue in Appendix D, however, SPU uses this non-parametric distribution quite differently from V-MPO; as a result, the final algorithm is closer to a policy gradient algorithm such as PPO.
4 Method
V-MPO is an approximate policy iteration (Sutton & Barto, 1998) algorithm with a specific prescription for the policy improvement step. In general, policy iteration uses the fact that the true state-value function $V^\pi$ corresponding to a policy $\pi$ can be used to obtain an improved policy $\pi'$. Thus we can

1. Generate trajectories $\tau$ from an old “target” policy $\pi_{\theta_{\text{old}}}(a|s)$ whose parameters $\theta_{\text{old}}$ are fixed. To control the amount of data generated by a particular policy, we use a target network which is fixed for $T_{\text{target}}$ learning steps (Fig. 5a in the Appendix).

2. Evaluate the policy by learning the value function $V^{\pi_{\text{old}}}(s)$ from empirical returns and estimating the corresponding advantages $A^{\pi_{\text{old}}}(s,a)$ for the actions that were taken.

3. Estimate an improved “online” policy $\pi_\theta(a|s)$ based on $A^{\pi_{\text{old}}}(s,a)$.
The first two steps are standard, and describing V-MPO’s approach to step (3) is the essential contribution of this work. At a high level, our strategy is to first construct a non-parametric target distribution for the policy update, then partially move the parametric policy towards this distribution subject to a KL constraint. Ultimately, we use gradient descent to optimize a single, relatively simple loss, which we provide here in complete form in order to ground the derivation of the algorithm.
Consider a batch of data $\mathcal{D}$ consisting of a number of trajectories, with $|\mathcal{D}|$ total state-action samples. Each trajectory consists of an unroll of length $n$ of the form $(s_t, a_t, r_t), \ldots, (s_{t+n-1}, a_{t+n-1}, r_{t+n-1})$ including the bootstrapped state $s_{t+n}$, where $r_t = r(s_t, a_t)$. The total loss is the sum of a policy evaluation loss and a policy improvement loss,
$$\mathcal{L}(\phi, \theta, \eta, \alpha) = \mathcal{L}_V(\phi) + \mathcal{L}_{\text{V-MPO}}(\theta, \eta, \alpha), \tag{1}$$
where $\phi$ are the parameters of the value network, $\theta$ the parameters of the policy network, and $\eta$ and $\alpha$ are Lagrange multipliers. In practice, the policy and value networks share most of their parameters in the form of a shared convolutional network (a ResNet) and recurrent LSTM core, and are optimized together (Fig. 5b in the Appendix) (Mnih et al., 2016). We note, however, that the value network parameters are considered fixed for the policy improvement loss, and gradients are not propagated.
The policy evaluation loss for the value function, $\mathcal{L}_V(\phi)$, is the standard regression to $n$-step returns and is given by Eq. 6 below. The policy improvement loss is given by
$$\mathcal{L}_{\text{V-MPO}}(\theta, \eta, \alpha) = \mathcal{L}_\pi(\theta) + \mathcal{L}_\eta(\eta) + \mathcal{L}_\alpha(\theta, \alpha). \tag{2}$$
Here the policy loss $\mathcal{L}_\pi(\theta)$ is the weighted maximum likelihood loss
$$\mathcal{L}_\pi(\theta) = -\sum_{s,a \in \tilde{\mathcal{D}}} \psi(s,a) \log \pi_\theta(a|s), \qquad \psi(s,a) = \frac{\exp\!\big(A^{\text{target}}(s,a)/\eta\big)}{\sum_{s',a' \in \tilde{\mathcal{D}}} \exp\!\big(A^{\text{target}}(s',a')/\eta\big)}, \tag{3}$$
where the advantages $A^{\text{target}}(s,a)$ for the target network policy are estimated according to the standard method described below. The tilde over the dataset, $\tilde{\mathcal{D}}$, indicates that we take the samples corresponding to the top half of the advantages in the batch of data. The $\eta$, or “temperature”, loss is
$$\mathcal{L}_\eta(\eta) = \eta\,\varepsilon_\eta + \eta \log\bigg(\frac{1}{|\tilde{\mathcal{D}}|} \sum_{s,a \in \tilde{\mathcal{D}}} \exp\!\big(A^{\text{target}}(s,a)/\eta\big)\bigg). \tag{4}$$
The KL constraint loss $\mathcal{L}_\alpha(\theta, \alpha)$, which can be viewed as a form of trust-region loss, is given by
$$\mathcal{L}_\alpha(\theta, \alpha) = \frac{1}{|\mathcal{D}|} \sum_{s \in \mathcal{D}} \Big( \alpha \big( \varepsilon_\alpha - \mathrm{sg}\big[\!\big[ D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big) \big]\!\big] \big) + \mathrm{sg}[\![\alpha]\!]\, D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big) \Big), \tag{5}$$
where $\mathrm{sg}[\![\cdot]\!]$ indicates a stop gradient, i.e., that the enclosed term is assumed constant with respect to all variables. Note that here we use the full batch $\mathcal{D}$, not $\tilde{\mathcal{D}}$.
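To make the sample-based policy improvement losses concrete, the following is a minimal numpy sketch of the policy loss (Eq. 3) and temperature loss (Eq. 4). The function names and the median-based top-half selection are illustrative choices, not the paper's implementation, and the KL constraint loss (Eq. 5) is omitted since its stop gradients only matter under automatic differentiation:

```python
import numpy as np

def top_half(advantages):
    """Boolean mask for the samples with top-half advantages (the D-tilde set).

    Comparison to the median is one simple way to select the top half.
    """
    adv = np.asarray(advantages, dtype=np.float64)
    return adv >= np.median(adv)

def policy_and_temperature_losses(logp_actions, advantages, eta, eps_eta):
    """Sample-based policy loss (Eq. 3) and temperature loss (Eq. 4).

    logp_actions: log pi_theta(a_t | s_t) for each sample in the batch.
    advantages:   A^target(s_t, a_t) for the same samples.
    Both losses use the *same* top-half subset, as required for consistency.
    """
    logp = np.asarray(logp_actions, dtype=np.float64)
    adv = np.asarray(advantages, dtype=np.float64)
    mask = top_half(adv)
    scaled = adv[mask] / eta
    m = scaled.max()                    # subtract the max for stability
    w = np.exp(scaled - m)
    psi = w / w.sum()                   # normalized weights psi(s, a)
    policy_loss = -(psi * logp[mask]).sum()
    # log-mean-exp of A/eta over the top-half samples, computed stably
    log_mean_exp = m + np.log(np.mean(np.exp(scaled - m)))
    temperature_loss = eta * eps_eta + eta * log_mean_exp
    return policy_loss, temperature_loss
```

In a full implementation these scalars would be added to the value and KL constraint losses and differentiated jointly, with gradients flowing to the policy parameters through `logp_actions` and to the temperature through `eta`.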
We used the Adam optimizer (Kingma & Ba, 2015) with default TensorFlow hyperparameters to optimize the total loss in Eq. 1. In particular, the learning rate was fixed for all experiments.
4.1 Policy evaluation
In the present setting, policy evaluation means learning an approximate state-value function $V^\pi(s)$ given a policy $\pi$, which we keep fixed for $T_{\text{target}}$ learning steps (i.e., batches of trajectories). We note that the value function corresponding to the target policy $\pi_{\theta_{\text{old}}}$ is instantiated in the “online” network receiving gradient updates; bootstrapping uses the online value function, as it is the best available estimate of the value function for the target policy. Thus $V^{\text{target}}$ in this section refers to $V^{\pi_{\text{old}}}$, while the value function update is performed on the current $V_\phi$, which may share parameters with the current $\pi_\theta$.
We fit a parametric value function $V_\phi(s)$ with parameters $\phi$ by minimizing the squared loss
$$\mathcal{L}_V(\phi) = \frac{1}{2|\mathcal{D}|} \sum_{s_t \in \mathcal{D}} \big( G_t^{(n)} - V_\phi(s_t) \big)^2, \tag{6}$$
where $G_t^{(n)}$ is the standard $n$-step target for the value function at state $s_t$ at time $t$ (Sutton & Barto, 1998). This return uses the actual rewards in the trajectory and bootstraps from the value function for the rest: for each state $s_t$ in an unroll, $G_t^{(n)} = \sum_{k=0}^{n-1} \gamma^k r_{t+k} + \gamma^n V_\phi(s_{t+n})$. The advantages, which are the key quantity of interest for the policy improvement step in V-MPO, are then given by $A^{\text{target}}(s_t, a_t) = G_t^{(n)} - V_\phi(s_t)$ for each state-action pair in the batch of trajectories.
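The bootstrapped targets and advantages above can be computed with a single backward pass over the unroll. A minimal sketch for one unroll (plain numpy; function and argument names are illustrative):

```python
import numpy as np

def n_step_targets(rewards, values, bootstrap_value, gamma):
    """Return targets G_t and advantages for one unroll.

    rewards:         r_t, ..., r_{t+n-1} for an unroll of length n.
    values:          V(s_t), ..., V(s_{t+n-1}) at the same states.
    bootstrap_value: V(s_{t+n}) at the bootstrapped final state.
    Each target sums the actual rewards remaining in the unroll and
    bootstraps from the value of the final state; the advantage is
    the target minus the value estimate at that state.
    """
    returns = np.empty(len(rewards))
    acc = bootstrap_value
    for t in reversed(range(len(rewards))):
        acc = rewards[t] + gamma * acc  # G_t = r_t + gamma * G_{t+1}
        returns[t] = acc
    advantages = returns - np.asarray(values, dtype=np.float64)
    return returns, advantages
```

The backward recursion makes each target reuse the one after it, so the full unroll costs O(n) rather than O(n²).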
PopArt normalization. As we are interested in the multi-task setting where a single agent must learn a large number of tasks with differing reward scales, we used PopArt (van Hasselt et al., 2016; Hessel et al., 2018) for the value function, even when training on a single task. Specifically, the value function outputs a separate value for each task in a normalized space, which is converted to actual returns by a shift and scaling operation, the statistics of which are learned during training. We used fixed lower and upper bounds on the scale and a fixed learning rate for the statistics. The lower bound guards against numerical issues when rewards are extremely sparse.
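A minimal sketch of the statistics side of this scheme, assuming a single task for simplicity. The constructor defaults are illustrative placeholders, not the paper's values, and the weight-rescaling step of full PopArt (which preserves unnormalized outputs when the statistics change) is omitted:

```python
import numpy as np

class PopArtStats:
    """Running shift/scale statistics for PopArt-style normalization (sketch).

    Tracks first and second moments of the returns with an exponential
    moving average; the scale is clipped to [lb, ub] to guard against
    numerical issues when rewards are extremely sparse.
    """
    def __init__(self, lr=1e-4, lb=1e-4, ub=1e6):
        self.lr, self.lb, self.ub = lr, lb, ub
        self.mean, self.mean_sq = 0.0, 1.0

    def update(self, returns):
        g = np.asarray(returns, dtype=np.float64)
        self.mean += self.lr * (g.mean() - self.mean)
        self.mean_sq += self.lr * ((g ** 2).mean() - self.mean_sq)

    @property
    def scale(self):
        var = max(self.mean_sq - self.mean ** 2, 0.0)
        return float(np.clip(np.sqrt(var), self.lb, self.ub))

    def normalize(self, g):
        return (np.asarray(g, dtype=np.float64) - self.mean) / self.scale
```

In the multi-task case a separate `(mean, mean_sq)` pair would be tracked per task, matching the per-task value outputs described above.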
Importance-weighting for off-policy data. It is possible to importance-weight the samples using V-trace to correct for off-policy data (Espeholt et al., 2018), for example when data is taken from a replay buffer. For simplicity, however, no importance weighting was used for the experiments presented in this work, which were mostly on-policy.
4.2 Policy improvement in V-MPO
In this section we show how, given the advantage function $A^{\pi_{\text{old}}}(s,a)$ for the state-action distribution $p_{\theta_{\text{old}}}(s,a)$ induced by the old policy $\pi_{\theta_{\text{old}}}$, we can estimate an improved policy $\pi_\theta$. More formally, let $\mathcal{I}$ denote the binary event that the new policy is an improvement (in a sense to be defined below) over the previous policy: $\mathcal{I} = 1$ if the policy is successfully improved and 0 otherwise. Then we would like to find the mode of the posterior distribution over parameters $\theta$ conditioned on this event, i.e., we seek the maximum a posteriori (MAP) estimate
$$\theta^* = \arg\max_\theta \big[ \log p_\theta(\mathcal{I}=1) + \log p(\theta) \big], \tag{7}$$
where we have written $p(\mathcal{I}=1 \mid \theta)$ as $p_\theta(\mathcal{I}=1)$ to emphasize the parametric nature of the dependence on $\theta$. We use the well-known identity $\log p_\theta(X) = \mathbb{E}_{q(Z)}\big[\log\frac{p_\theta(X, Z)}{q(Z)}\big] + D_{\mathrm{KL}}\big(q(Z) \,\|\, p_\theta(Z \mid X)\big)$ for any latent distribution $q(Z)$, where $D_{\mathrm{KL}}$ is the Kullback-Leibler divergence between $q(Z)$ and $p_\theta(Z \mid X)$, and the first term is a lower bound on $\log p_\theta(X)$ because the KL divergence is always nonnegative. Then considering $s, a$ as latent variables,
$$\log p_\theta(\mathcal{I}=1) = \mathbb{E}_{\psi}\bigg[\log\frac{p_\theta(\mathcal{I}=1, s, a)}{\psi(s,a)}\bigg] + D_{\mathrm{KL}}\big(\psi(s,a) \,\|\, p_\theta(s, a \mid \mathcal{I}=1)\big). \tag{8}$$
Policy improvement in V-MPO consists of the following two steps, which have direct correspondences to the expectation maximization (EM) algorithm (Neal & Hinton, 1998): In the expectation (E) step, we choose the variational distribution $\psi(s,a)$ such that the lower bound on $\log p_\theta(\mathcal{I}=1)$ is as tight as possible, by minimizing the KL term. In the maximization (M) step we then find parameters $\theta$ that maximize the corresponding lower bound, together with the prior term in Eq. 7.
4.2.1 E-step
In the E-step, our goal is to choose the variational distribution $\psi(s,a)$ such that the lower bound on $\log p_\theta(\mathcal{I}=1)$ is as tight as possible, which is the case when the KL term in Eq. 8 is zero. Given the old parameters $\theta_{\text{old}}$, this simply leads to $\psi(s,a) = p_{\theta_{\text{old}}}(s, a \mid \mathcal{I}=1)$, or
$$\psi(s,a) = \frac{p_{\theta_{\text{old}}}(s,a)\, p(\mathcal{I}=1 \mid s, a)}{\sum_{s',a'} p_{\theta_{\text{old}}}(s',a')\, p(\mathcal{I}=1 \mid s', a')}. \tag{9}$$
Intuitively, this solution weights the probability of each state-action pair with its relative improvement probability $p(\mathcal{I}=1 \mid s, a)$. We now choose a distribution that leads to our desired outcome. As we prefer actions that lead to a higher advantage in each state, we suppose that this probability is given by
$$p(\mathcal{I}=1 \mid s, a) \propto \exp\!\bigg(\frac{A^{\text{target}}(s,a)}{\eta}\bigg), \tag{10}$$
for some temperature $\eta > 0$, from which we obtain the expression for $\psi(s,a)$ on the right in Eq. 3. This probability depends on the old parameters $\theta_{\text{old}}$ and not on the new parameters $\theta$. Meanwhile, the value of $\eta$ allows us to control the diversity of actions that contribute to the weighting, but at the moment is arbitrary. It turns out, however, that we can tune $\eta$ as part of the optimization, which is desirable since the optimal value of $\eta$ changes across iterations. The convex loss that achieves this, Eq. 4, is derived in Appendix A by minimizing the KL term in Eq. 8 subject to a hard constraint on $\psi(s,a)$.
Top half advantages. We found that learning improves substantially if we take only the samples corresponding to the highest 50% of advantages in each batch for the E-step, corresponding to the use of $\tilde{\mathcal{D}}$ rather than $\mathcal{D}$ in Eqs. 3, 4. Importantly, these must be consistent between the maximum likelihood weights in Eq. 3 and the temperature loss in Eq. 4, since, mathematically, this is justified by choosing the corresponding policy improvement probability in Eq. 10 to only use the top half of the advantages. This is similar to the technique used in Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen et al., 1997; Abdolmaleki et al., 2017), and is a special case of the more general feature that any rank-preserving transformation is allowed under this formalism.
Importance weighting for off-policy corrections. As for the value function, importance weights can be used in the policy improvement step to correct for off-policy data. While not used for the experiments presented in this work, details for how to carry out this correction are given in Appendix E.
4.2.2 M-step: Constrained supervised learning of the parametric policy
In the E-step we found the non-parametric variational state-action distribution $\psi(s,a)$, Eq. 9, that gives the tightest lower bound to $\log p_\theta(\mathcal{I}=1)$ in Eq. 8. In the M-step we maximize this lower bound together with the prior term $\log p(\theta)$ with respect to the parameters $\theta$, which effectively leads to a constrained weighted maximum likelihood problem. Thus the introduction of the non-parametric distribution in Eq. 9 separates the RL procedure from the neural network fitting.
We would like to find new parameters $\theta$ that minimize
$$\mathcal{L}(\theta) = -\,\mathbb{E}_{s,a \sim \psi}\big[\log p_\theta(s, a)\big] - \log p(\theta). \tag{11}$$
Note, however, that so far we have worked with the joint state-action distribution $p_\theta(s,a)$, while we are in fact optimizing for the policy, which is the conditional distribution $\pi_\theta(a|s)$. Writing $p_\theta(s,a) = p(s)\,\pi_\theta(a|s)$, since only the policy is parametrized by $\theta$, and dropping terms that are not parametrized by $\theta$, the first term of Eq. 11 is seen to be the weighted maximum likelihood policy loss
$$\mathcal{L}_\pi(\theta) = -\sum_{s,a} \psi(s,a) \log \pi_\theta(a|s). \tag{12}$$
In the sample-based computation of this loss, we assume that any state-action pairs not in the batch of trajectories have zero weight, leading to the normalization in Eq. 3.
As in the original MPO algorithm, a useful prior is to keep the new policy $\pi_\theta$ close to the old policy $\pi_{\theta_{\text{old}}}$: $\log p(\theta) \approx -\lambda\, \mathbb{E}_{s}\big[D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big)\big]$. While intuitive, we motivate this more formally in Appendix B. It is again more convenient to specify a bound on the KL divergence instead of tuning $\lambda$ directly, so we solve the constrained optimization problem
$$\theta = \arg\min_\theta \mathcal{L}_\pi(\theta) \quad \text{s.t.} \quad \mathbb{E}_{s}\big[D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big)\big] < \varepsilon_\alpha. \tag{13}$$
Intuitively, the constraint in the E-step expressed by Eq. 19 in Appendix A for tuning the temperature $\eta$ only constrains the non-parametric distribution; it is the constraint in Eq. 13 that directly limits the change in the parametric policy, in particular for states and actions that were not in the batch of samples and which rely on the generalization capabilities of the neural network function approximator.
To make the constrained optimization problem amenable to gradient descent, we use Lagrangian relaxation to write the unconstrained objective as
$$\mathcal{J}(\theta, \alpha) = \mathcal{L}_\pi(\theta) + \alpha\Big(\varepsilon_\alpha - \mathbb{E}_{s}\big[D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big)\big]\Big), \tag{14}$$
which we can optimize by following a coordinate-descent strategy, alternating between the optimization over $\theta$ and $\alpha$. Thus, in addition to the policy loss, we arrive at the constraint loss
$$\mathcal{L}_\alpha(\theta, \alpha) = \mathbb{E}_{s}\Big[\alpha\big(\varepsilon_\alpha - \mathrm{sg}\big[\!\big[ D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big)\big]\!\big]\big) + \mathrm{sg}[\![\alpha]\!]\, D_{\mathrm{KL}}\big(\pi_{\theta_{\text{old}}}(\cdot|s) \,\|\, \pi_\theta(\cdot|s)\big)\Big]. \tag{15}$$
Replacing the sum over states with samples gives Eq. 5. Since $\eta$ and $\alpha$ are Lagrange multipliers that must be positive, after each gradient update we project the resulting $\eta$ and $\alpha$ to a small positive lower bound, used throughout the results presented below.
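The projection step above can be sketched in a few lines (the floor value and function name here are hypothetical placeholders, not the paper's constants):

```python
def projected_step(multiplier, grad, lr, floor=1e-8):
    """One gradient step on a Lagrange multiplier (eta or alpha),
    followed by projection onto the positive orthant."""
    return max(multiplier - lr * grad, floor)

# A step that would drive the multiplier negative is clipped to the floor.
eta = projected_step(0.1, grad=20.0, lr=0.01)   # clipped to the floor
alpha = projected_step(0.5, grad=1.0, lr=0.1)   # ordinary step to 0.4
```

Keeping the multipliers strictly positive preserves their interpretation as Lagrange multipliers of inequality constraints, so the temperature and trust-region terms never change sign.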
For continuous action spaces parametrized by Gaussian distributions, we use decoupled KL constraints for the M-step in Eq. 15, as in Abdolmaleki et al. (2018b); the precise form is given in Appendix C.
5 Experiments
Details on the network architecture and hyperparameters used for each task are given in Appendix F.
5.1 Discrete actions: DMLab, Atari
DMLab. DMLab-30 (Beattie et al., 2016) is a collection of visually rich, partially observable 3D environments played from the first-person point of view. Like IMPALA, for DMLab we used pixel control as an auxiliary loss for representation learning (Jaderberg et al., 2017b; Hessel et al., 2018). However, we did not employ the optimistic asymmetric reward scaling used by previous IMPALA experiments to aid exploration on a subset of the DMLab levels, which weights positive rewards more than negative rewards (Espeholt et al., 2018; Hessel et al., 2018; Kapturowski et al., 2019). Unlike Hessel et al. (2018), we also did not use population-based training (PBT) (Jaderberg et al., 2017a). Additional details for the settings used in DMLab can be found in Table 5 of the Appendix.
Fig. 1a shows the results for multi-task DMLab-30, comparing the V-MPO learning curves to data obtained from Hessel et al. (2018) for the PopArt IMPALA agent with pixel control. We note that the result for V-MPO at 10B environment frames across all levels matches the result for the Recurrent Replay Distributed DQN (R2D2) agent (Kapturowski et al., 2019) trained on individual levels for 10B environment steps per level. Fig. 2 shows example individual levels in DMLab where V-MPO achieves scores substantially higher than previously reported for both R2D2 and IMPALA. The pixel-control IMPALA agents shown here were carefully tuned for DMLab and are similar to the “experts” used in Schmitt et al. (2018); in all cases these results match or exceed previously published results for IMPALA (Espeholt et al., 2018; Kapturowski et al., 2019).
Atari. The Arcade Learning Environment (ALE) (Bellemare et al., 2012) is a collection of 57 Atari 2600 games that has served as an important benchmark for recent deep RL methods. We used the standard preprocessing scheme and a maximum episode length of 30 minutes (108,000 frames); see Table 6 in the Appendix. For the multi-task setting we followed Hessel et al. (2018) in setting the discount to zero on loss of life; for the example single tasks we did not employ this trick, since it can prevent the agent from achieving the highest possible score by sacrificing lives. Similarly, while in the multi-task setting we followed previous work in clipping the maximum reward to 1.0, no such clipping was applied in the single-task setting, in order to preserve the original reward structure. Additional details for the settings used in Atari can be found in Table 6 in the Appendix.
Fig. 1b shows the results for multi-task Atari-57, demonstrating that it is possible for a single agent to achieve “superhuman” median performance on Atari-57 in approximately 4 billion (70 million per level) environment frames.
We also compare the performance of V-MPO on a few individual Atari levels to R2D2 (Kapturowski et al., 2019), which previously achieved some of the highest scores reported for Atari. Again, V-MPO can match or exceed previously reported scores while requiring fewer interactions with the environment. In Ms. Pacman, the final performance approaches 300,000 with a 30-minute timeout (and the maximum 1M without), effectively solving the game. Inspired by the argument in Kapturowski et al. (2019) that in a fully observable environment LSTMs enable the agent to utilize more useful representations than are available in the immediate observation, for the single-task setting we used a Transformer-XL (TrXL) (Dai et al., 2019) in place of the LSTM core. Unlike previous work on single Atari levels, we did not employ any reward clipping (Mnih et al., 2015; Espeholt et al., 2018) or nonlinear value function rescaling (Kapturowski et al., 2019).
5.2 Continuous control
To demonstrate V-MPO’s effectiveness in high-dimensional, continuous action spaces, here we present examples of learning to control both a simulated humanoid with 22 degrees of freedom from full state observations and one with 56 degrees of freedom from pixel observations (Tassa et al., 2018; Merel et al., 2019). As shown in Fig. 4a, for the 22-dimensional humanoid V-MPO reliably achieves higher asymptotic returns than previously reported, including for Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015), Stochastic Value Gradients (SVG) (Heess et al., 2015), and MPO. These algorithms are far more sample-efficient but reach a lower final performance.
In the “gaps” task the 56-dimensional humanoid must run forward to match a target velocity of 4 m/s and jump over the gaps between platforms by learning to actuate joints with position control (Merel et al., 2019). Previously, only an agent operating in the space of pre-learned motor primitives was able to solve this task from pixel observations (Merel et al., 2018, 2019); here we show that V-MPO can learn a challenging visuomotor task from scratch (Fig. 4b). For this task we also demonstrate the importance of the parametric KL constraint, without which the agent learns poorly.
6 Conclusion
In this work we have introduced a scalable on-policy deep reinforcement learning algorithm, V-MPO, that is applicable to both discrete and continuous control domains. For the results presented in this work neither importance weighting nor entropy regularization was used; moreover, since the size of neural network parameter updates is limited by KL constraints, we were also able to use the same learning rate for all experiments. This suggests that a scalable, performant RL algorithm may not require some of the tricks that have been developed over the past several years. Interestingly, both the original MPO algorithm for replay-based off-policy learning (Abdolmaleki et al., 2018a,b) and V-MPO for on-policy learning are derived from similar principles, providing evidence for the benefits of this approach as an alternative to popular policy gradient-based methods.
Acknowledgments
We thank Lorenzo Blanco, Trevor Cai, Greg Wayne, Chloe Hillier, and Vicky Langston for their assistance and support.
References

Abdolmaleki et al. (2017)
Abbas Abdolmaleki, Bob Price, Nuno Lau, Luis P Reis, and Gerhard Neumann.
Deriving and Improving CMAES with Information Geometric Trust
Regions.
Proceedings of the Genetic and Evolutionary Computation Conference
, 2017.  Abdolmaleki et al. (2018a) Abbas Abdolmaleki, Jost Tobias Springenberg, Jonas Degrave, Steven Bohez, Yuval Tassa, Dan Belov, Nicolas Heess, and Martin Riedmiller. Relative Entropy Regularized Policy Iteration. arXiv preprint, 2018a. URL https://arxiv.org/pdf/1812.02256.pdf.
 Abdolmaleki et al. (2018b) Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Remi Munos, Nicolas Heess, and Martin Riedmiller. Maximum a Posteriori Policy Optimisation. Int. Conf. Learn. Represent., 2018b. URL https://arxiv.org/pdf/1806.06920.pdf.
 Anonymous Authors (2019) Anonymous Authors. Off-Policy Actor-Critic with Shared Experience Replay. Under review, Int. Conf. Learn. Represent., 2019.
 Beattie et al. (2016) Charles Beattie, Joel Z Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, et al. DeepMind Lab. arXiv preprint arXiv:1612.03801, 2016.

Bellemare et al. (2012)
Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling.
The Arcade Learning Environment: An Evaluation Platform for General
Agents.
Journal of Artificial Intelligence Research
, 47, 2012.  Brockman et al. (2016) Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint, 2016. URL http://arxiv.org/abs/1606.01540.
 Buchlovsky et al. (2019) Peter Buchlovsky, David Budden, Dominik Grewe, Chris Jones, John Aslanides, Frederic Besse, Andy Brock, Aidan Clark, Sergio Gomez Colmenarejo, Aedan Pope, Fabio Viola, and Dan Belov. TF-Replicator: Distributed Machine Learning for Researchers. arXiv preprint, 2019. URL http://arxiv.org/abs/1902.00465.
 Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv preprint, 2019. URL http://arxiv.org/abs/1901.02860.
 DeepMind (2019) DeepMind. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II, 2019. URL https://deepmind.com/blog/alphastarmasteringrealtimestrategygamestarcraftii/.
 Duan et al. (2016) Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking Deep Reinforcement Learning for Continuous Control. arXiv preprint, 2016. URL http://arxiv.org/abs/1604.06778.
 Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures. arXiv preprint, 2018. URL http://arxiv.org/abs/1802.01561.
 Google (2018) Google. Cloud TPU, 2018. URL https://cloud.google.com/tpu/.
 Haarnoja et al. (2018) Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. arXiv preprint, 2018. URL http://arxiv.org/abs/1801.01290.
 Hansen et al. (1997) Nikolaus Hansen and Andreas Ostermeier. Convergence Properties of Evolution Strategies with the Derandomized Covariance Matrix Adaptation: CMA-ES. 1997. URL http://www.cmap.polytechnique.fr/~nikolaus.hansen/CMAES2.pdf.
 Heess et al. (2015) Nicolas Heess, Greg Wayne, David Silver, Timothy P. Lillicrap, Yuval Tassa, and Tom Erez. Learning continuous control policies by stochastic value gradients. arXiv preprint, 2015. URL http://arxiv.org/abs/1510.09142.
 Hessel et al. (2018) Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado van Hasselt. Multitask Deep Reinforcement Learning with PopArt. arXiv preprint, 2018. URL https://arxiv.org/pdf/1809.04474.pdf.
 Jaderberg et al. (2017a) Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population Based Training of Neural Networks. arXiv preprint, 2017a. URL http://arxiv.org/abs/1711.09846.
 Jaderberg et al. (2017b) Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement Learning with Unsupervised Auxiliary Tasks. Int. Conf. Learn. Represent., 2017b. URL https://openreview.net/pdf?id=SJ6yPD5xg.
 Jaderberg et al. (2019) Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science, 364:859–865, 2019. URL https://science.sciencemag.org/content/364/6443/859.
 Kapturowski et al. (2019) Steven Kapturowski, Georg Ostrovski, John Quan, Rémi Munos, and Will Dabney. Recurrent Experience Replay in Distributed Reinforcement Learning. Int. Conf. Learn. Represent., 2019. URL https://openreview.net/pdf?id=r1lyTjAqYX.
 Kingma & Ba (2015) Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. Int. Conf. Learn. Represent., 2015. URL https://arxiv.org/abs/1412.6980.
 Levine (2018) Sergey Levine. Reinforcement Learning and Control as Probabilistic Inference: Tutorial and Review. arXiv preprint, 2018. URL http://arxiv.org/abs/1805.00909.
 Lillicrap et al. (2015) Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint, 2015. URL http://arxiv.org/abs/1509.02971.
 Merel et al. (2018) Josh Merel, Leonard Hasenclever, Alexandre Galashov, Arun Ahuja, Vu Pham, Greg Wayne, Yee Whye Teh, and Nicolas Heess. Neural probabilistic motor primitives for humanoid control. arXiv preprint, 2018. URL http://arxiv.org/abs/1811.11711.
 Merel et al. (2019) Josh Merel, Arun Ahuja, Vu Pham, Saran Tunyasuvunakool, Siqi Liu, Dhruva Tirumala, Nicolas Heess, and Greg Wayne. Hierarchical Visuomotor Control of Humanoids. Int. Conf. Learn. Represent., 2019. URL https://openreview.net/pdf?id=BJfYvo09Y7.
 Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-Level Control through Deep Reinforcement Learning. Nature, 518:529–533, 2015. URL http://dx.doi.org/10.1038/nature14236.
 Mnih et al. (2016) Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P Lillicrap, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. arXiv:1602.01783, 2016. URL http://arxiv.org/abs/1602.01783.
 Neal & Hinton (1998) Radford M. Neal and Geoffrey E. Hinton. A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants. In M.I. Jordan (ed.), Learn. Graph. Model. NATO ASI Ser. vol. 89. Springer, Dordrecht, 1998.
 OpenAI (2018a) OpenAI. OpenAI Five, 2018a. URL https://openai.com/blog/openaifive/.
 OpenAI (2018b) OpenAI. Learning Dexterity, 2018b. URL https://openai.com/blog/learningdexterity/.
 Peters et al. (2008) Jan Peters, Katharina Mülling, and Yasemin Altün. Relative Entropy Policy Search. Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, pp. 1607–1612, 2008.
 Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. 2019. URL https://d4mucfpksywv.cloudfront.net/betterlanguagemodels/language_models_are_unsupervised_multitask_learners.pdf.
 Schmitt et al. (2018) Simon Schmitt, Jonathan J. Hudson, Augustin Zídek, Simon Osindero, Carl Doersch, Wojciech M. Czarnecki, Joel Z. Leibo, Heinrich Küttler, Andrew Zisserman, Karen Simonyan, and S. M. Ali Eslami. Kickstarting Deep Reinforcement Learning. arXiv preprint, 2018. URL http://arxiv.org/abs/1803.03835.
 Schulman et al. (2015) John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust Region Policy Optimization. arXiv preprint, 2015. URL http://arxiv.org/abs/1502.05477.
 Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint, 2017. URL http://arxiv.org/abs/1707.06347.
 Silver et al. (2016) David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016. URL http://www.nature.com/doifinder/10.1038/nature16961.
 Silver et al. (2018) David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362:1140–1144, 2018. URL https://science.sciencemag.org/content/362/6419/1140.
 Sutton & Barto (1998) Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
 Sutton et al. (2000) Richard S Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In S. A. Solla, T. K. Leen, and K. Müller (eds.), Advances in Neural Information Processing Systems 12, pp. 1057–1063. MIT Press, 2000. URL http://papers.nips.cc/paper/1713policygradientmethodsforreinforcementlearningwithfunctionapproximation.pdf.
 Tassa et al. (2018) Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. DeepMind Control Suite. arXiv preprint, 2018. URL http://arxiv.org/abs/1801.00690.
 van Hasselt et al. (2016) Hado van Hasselt, Arthur Guez, Matteo Hessel, and David Silver. Learning values across many orders of magnitude. arXiv preprint, 2016. URL http://arxiv.org/abs/1602.07714.
 Vuong et al. (2019) Quan Vuong, Keith Ross, and Yiming Zhang. Supervised Policy Update for Deep Reinforcement Learning. arXiv preprint, 2019. URL http://arxiv.org/abs/1805.11706.
 Williams (1992) Ronald J. Williams. Simple statistical gradient-following methods for connectionist reinforcement learning. Mach. Learn., 8:229–256, 1992. URL http://dx.doi.org/10.1007/BF00992696.
 Wu et al. (2018) Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, and Pieter Abbeel. Variance reduction for policy gradient with actiondependent factorized baselines. arXiv preprint, 2018. URL http://arxiv.org/abs/1803.07246.
Appendix A Derivation of the V-MPO temperature loss
In this section we derive the E-step temperature loss in Eq. 23. To this end, we explicitly commit to the more specific improvement criterion in Eq. 10 by plugging it into the original objective in Eq. 8. We seek the variational distribution $\psi(s,a)$ that minimizes

(16)  $J(\psi) = \mathrm{E}_{s,a\sim\psi}\big[\log\psi(s,a) - \log p_{\theta_{\text{old}}}(s,a) - A^{\text{target}}(s,a)/\eta\big]$

(17)  $\eta\,J(\psi) = -\mathrm{E}_{s,a\sim\psi}\big[A^{\text{target}}(s,a)\big] + \eta\,\mathrm{KL}\big(\psi(s,a)\,\big\|\,p_{\theta_{\text{old}}}(s,a)\big),$

where in Eq. 17 we have multiplied through by $\eta$, which up to this point in the derivation is given. We wish to automatically tune $\eta$ so as to enforce a bound on the KL term multiplying it in Eq. 17, in which case the temperature optimization can also be viewed as a nonparametric trust region for the variational distribution $\psi$ with respect to the old distribution $p_{\theta_{\text{old}}}$. We therefore consider the constrained optimization problem

(18)  $\max_{\psi}\ \mathrm{E}_{s,a\sim\psi}\big[A^{\text{target}}(s,a)\big]$

(19)  s.t.  $\mathrm{KL}\big(\psi(s,a)\,\big\|\,p_{\theta_{\text{old}}}(s,a)\big) < \varepsilon_\eta.$

We can now use Lagrangian relaxation to transform the constrained optimization problem into one that maximizes the unconstrained objective

(20)  $J(\psi,\eta,\lambda) = \mathrm{E}_{s,a\sim\psi}\big[A^{\text{target}}(s,a)\big] + \eta\big(\varepsilon_\eta - \mathrm{KL}(\psi\,\|\,p_{\theta_{\text{old}}})\big) + \lambda\Big(1 - \iint\psi(s,a)\,da\,ds\Big)$

with $\eta > 0$. (Note that we are reusing the variables $\eta$ and $\varepsilon_\eta$ for the new optimization problem.) Differentiating with respect to $\psi(s,a)$ and setting equal to zero, we obtain

(21)  $\psi(s,a) = p_{\theta_{\text{old}}}(s,a)\,\exp\big(A^{\text{target}}(s,a)/\eta\big)\,\exp\big(-1 - \lambda/\eta\big).$

Normalizing over $s,a$ (using the freedom given by $\lambda$) then gives

(22)  $\psi(s,a) = \dfrac{p_{\theta_{\text{old}}}(s,a)\,\exp\big(A^{\text{target}}(s,a)/\eta\big)}{\mathrm{E}_{s',a'\sim p_{\theta_{\text{old}}}}\big[\exp\big(A^{\text{target}}(s',a')/\eta\big)\big]},$

which reproduces the general solution in Eq. 9 for our specific choice of policy improvement in Eq. 10. However, the value of $\eta$ can now be found by optimizing the corresponding dual function. Plugging Eq. 22 into the unconstrained objective in Eq. 20 gives rise to the $\eta$-dependent term

(23)  $\mathcal{L}(\eta) = \eta\,\varepsilon_\eta + \eta\log\mathrm{E}_{s,a\sim p_{\theta_{\text{old}}}}\big[\exp\big(A^{\text{target}}(s,a)/\eta\big)\big].$

Replacing the expectation with samples from $p_{\theta_{\text{old}}}$ in the batch of trajectories leads to the loss in Eq. 4.
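The sample-based computation described above is simple to express. The following sketch is our own illustration (variable names such as `advantages` and `eta` are not from the paper); it evaluates the nonparametric weights of Eq. 22 and the dual loss of Eq. 23 on a batch of sampled advantages:

```python
import math

def psi_weights(advantages, eta):
    """Sample-based version of the nonparametric target distribution (Eq. 22):
    psi_i proportional to exp(A_i / eta), normalized over the batch."""
    w = [math.exp(a / eta) for a in advantages]
    z = sum(w)
    return [wi / z for wi in w]

def temperature_loss(advantages, eta, eps_eta):
    """Sample-based dual (Eq. 23): L(eta) = eta*eps + eta*log mean exp(A/eta)."""
    n = len(advantages)
    log_mean_exp = math.log(sum(math.exp(a / eta) for a in advantages) / n)
    return eta * eps_eta + eta * log_mean_exp
```

Note that for numerical stability a real implementation would subtract the maximum advantage before exponentiating; we omit this here for clarity.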
Appendix B M-step KL constraint
Here we give a somewhat more formal motivation for the Gaussian prior used in the M-step. Consider a normal prior $p(\theta) = \mathcal{N}(\theta;\,\theta_{\text{old}}, \Sigma)$ with mean $\theta_{\text{old}}$ and covariance $\Sigma$. We choose $\Sigma = F(\theta_{\text{old}})^{-1}/\lambda$, where $\lambda$ is a scaling parameter and $F(\theta_{\text{old}})$ is the Fisher information for $\pi_\theta$ evaluated at $\theta_{\text{old}}$. Then

$-\log p(\theta) = \tfrac{\lambda}{2}\,(\theta - \theta_{\text{old}})^{\mathsf{T}} F(\theta_{\text{old}})\,(\theta - \theta_{\text{old}}) + \text{const},$

where the first term is precisely the second-order approximation to $\lambda\,\mathrm{KL}\big(\pi_{\theta_{\text{old}}}\,\|\,\pi_\theta\big)$. We now follow TRPO (Schulman et al., 2015) in heuristically approximating this as the state-averaged expression $\lambda\,\mathrm{E}_{s}\big[\mathrm{KL}\big(\pi_{\theta_{\text{old}}}(\cdot|s)\,\|\,\pi_\theta(\cdot|s)\big)\big]$. We note that the KL divergence in either direction has the same second-order expansion, so our choice of direction for the KL is an empirical one (Abdolmaleki et al., 2018a).

Appendix C Decoupled KL constraints for continuous control
As in Abdolmaleki et al. (2018b), for continuous action spaces parametrized by Gaussian distributions we use decoupled KL constraints for the M-step. This uses the fact that the KL divergence between two $d$-dimensional multivariate normal distributions with means $\mu_1, \mu_2$ and covariances $\Sigma_1, \Sigma_2$ can be written as

(24)  $\mathrm{KL}\big(\mathcal{N}_1\,\|\,\mathcal{N}_2\big) = \tfrac{1}{2}\Big(\mathrm{tr}\big(\Sigma_2^{-1}\Sigma_1\big) - d + \log\tfrac{|\Sigma_2|}{|\Sigma_1|} + (\mu_2 - \mu_1)^{\mathsf{T}}\Sigma_2^{-1}(\mu_2 - \mu_1)\Big),$

where $|\cdot|$ is the matrix determinant. Since the first distribution, and hence $\mu_1, \Sigma_1$, in the KL divergence of Eq. 14 depends on the old target network parameters, we see that we can separate the overall KL divergence into a mean component and a covariance component:

(25)  $\mathrm{KL}_\mu = \tfrac{1}{2}\,(\mu_2 - \mu_1)^{\mathsf{T}}\Sigma_2^{-1}(\mu_2 - \mu_1)$

(26)  $\mathrm{KL}_\Sigma = \tfrac{1}{2}\Big(\mathrm{tr}\big(\Sigma_2^{-1}\Sigma_1\big) - d + \log\tfrac{|\Sigma_2|}{|\Sigma_1|}\Big).$

With the replacement of $\varepsilon_\alpha$ by the pair $\varepsilon_{\alpha_\mu}, \varepsilon_{\alpha_\Sigma}$ and corresponding Lagrange multipliers $\alpha_\mu, \alpha_\Sigma$ in Eq. 15, we obtain the total loss

(27)  $\mathcal{L} = \mathcal{L}_\pi + \mathcal{L}_V + \mathcal{L}_\eta + \mathcal{L}_{\alpha_\mu} + \mathcal{L}_{\alpha_\Sigma},$

where $\mathcal{L}_\pi$, $\mathcal{L}_V$, and $\mathcal{L}_\eta$ are the same as before. Note, however, that unlike in Abdolmaleki et al. (2018a) we do not decouple the policy loss.
We generally set $\varepsilon_{\alpha_\Sigma}$ to be much smaller than $\varepsilon_{\alpha_\mu}$ (see Table 7). Intuitively, this allows the policy to learn quickly in action space while preventing premature collapse of the policy, and, conversely, lets the policy increase its variance ("exploration") without moving in action space.
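For diagonal Gaussians the two components in Eqs. 25 and 26 reduce to sums over dimensions. A minimal sketch (our own, assuming diagonal covariances stored as per-dimension variances, with distribution 1 the old policy):

```python
import math

def decoupled_gaussian_kl(mu1, var1, mu2, var2):
    """Mean and covariance components of KL(N1 || N2) for diagonal Gaussians,
    following the decomposition in Eqs. 25-26. Returns (kl_mu, kl_sigma);
    their sum is the full KL divergence."""
    d = len(mu1)
    kl_mu = 0.5 * sum((m2 - m1) ** 2 / v2
                      for m1, m2, v2 in zip(mu1, mu2, var2))
    kl_sigma = 0.5 * (sum(v1 / v2 for v1, v2 in zip(var1, var2)) - d
                      + sum(math.log(v2 / v1) for v1, v2 in zip(var1, var2)))
    return kl_mu, kl_sigma
```

With separate bounds on the two returned components, the mean can move under a loose constraint while a tight covariance constraint guards against premature collapse.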
Appendix D Relation to Supervised Policy Update
Like V-MPO, Supervised Policy Update (SPU) (Vuong et al., 2019)
adopts the strategy of first solving a nonparametric constrained optimization problem exactly, then fitting a neural network to the resulting solution via a supervised loss function. There is, however, an important difference from V-MPO, which we describe here.
In SPU, the KL loss, which is the sole loss in SPU, leads to a parametric optimization problem that is equivalent to the nonparametric optimization problem posed initially. To see this, we observe that the SPU loss seeks parameters (note the direction of the KL divergence)

(28)  $\theta^* = \arg\min_\theta\ \mathrm{E}_{s}\Big[\mathrm{KL}\Big(\pi_\theta(\cdot|s)\ \Big\|\ \pi_{\theta_{\text{old}}}(\cdot|s)\,e^{A(s,\cdot)/\eta}/Z(s)\Big)\Big]$

(29)  $= \arg\min_\theta\ \mathrm{E}_{s}\Big[\mathrm{KL}\big(\pi_\theta(\cdot|s)\,\|\,\pi_{\theta_{\text{old}}}(\cdot|s)\big) - \tfrac{1}{\eta}\,\mathrm{E}_{a\sim\pi_\theta}\big[A(s,a)\big]\Big]$

(30)  $= \arg\max_\theta\ \mathrm{E}_{s}\Big[\tfrac{1}{\eta}\,\mathrm{E}_{a\sim\pi_\theta}\big[A(s,a)\big] - \mathrm{KL}\big(\pi_\theta(\cdot|s)\,\|\,\pi_{\theta_{\text{old}}}(\cdot|s)\big)\Big].$

Multiplying by $\eta$, since it can be treated as a constant up to this point, we then see that this corresponds exactly to the Lagrangian form of the problem

(31)  $\max_\theta\ \mathrm{E}_{s}\,\mathrm{E}_{a\sim\pi_\theta(\cdot|s)}\big[A(s,a)\big]$

(32)  s.t.  $\mathrm{E}_{s}\big[\mathrm{KL}\big(\pi_\theta(\cdot|s)\,\|\,\pi_{\theta_{\text{old}}}(\cdot|s)\big)\big] < \varepsilon,$

which is the original nonparametric problem posed in Vuong et al. (2019).
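The equivalence can be checked numerically for a discrete toy policy. The following sketch is our own (`p`, `A`, and `eta` are arbitrary illustrative values); it verifies that the KL to the exponentiated-advantage target in Eq. 28 equals the KL-regularized objective of Eqs. 29–30 up to the constant $\log Z$:

```python
import math

def kl(q, p):
    """Discrete KL divergence KL(q || p)."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

# Toy discrete policy p, advantages A, and temperature eta (illustrative values).
p = [0.5, 0.3, 0.2]
A = [1.0, -0.5, 0.2]
eta = 0.7

# Nonparametric target: psi proportional to p * exp(A / eta).
unnorm = [pi * math.exp(ai / eta) for pi, ai in zip(p, A)]
Z = sum(unnorm)
psi = [u / Z for u in unnorm]

def lhs(q):
    """KL(q || psi), as minimized in Eq. 28."""
    return kl(q, psi)

def rhs(q):
    """KL(q || p) - E_q[A]/eta + log Z, the regularized form of Eqs. 29-30."""
    return kl(q, p) - sum(qi * ai for qi, ai in zip(q, A)) / eta + math.log(Z)
```

Because $\log Z$ does not depend on the policy parameters, minimizing either form yields the same optimum.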
Appendix E Importance weighting for off-policy corrections
The network that generates the data may lag behind the target network in common distributed, asynchronous implementations (Espeholt et al., 2018). We can compensate for this by multiplying the exponentiated advantages by clipped importance weights $\bar\rho(s,a)$, so that Eqs. 3 and 4 become

(33)  $\psi(s,a) = \dfrac{\bar\rho(s,a)\,\exp\big(A^{\text{target}}(s,a)/\eta\big)}{\sum_{s',a'} \bar\rho(s',a')\,\exp\big(A^{\text{target}}(s',a')/\eta\big)}$

(34)  $\mathcal{L}_\eta(\eta) = \eta\,\varepsilon_\eta + \eta\log\Big(\tfrac{1}{|\mathcal{B}|}\sum_{s,a\in\mathcal{B}} \bar\rho(s,a)\,\exp\big(A^{\text{target}}(s,a)/\eta\big)\Big),$

where $\theta_b$ are the parameters of the behavior policy that generated $s,a$ and which may be different from $\theta_{\text{old}}$. The clipped importance weights are given by

(35)  $\bar\rho(s,a) = \min\Big(1,\ \dfrac{\pi_{\theta_{\text{old}}}(a|s)}{\pi_{\theta_b}(a|s)}\Big).$
As was the case with V-trace for the value function, however, we did not find importance weighting to be necessary, and for simplicity none of the experiments presented in this work used it.
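Although we did not use these corrections, Eq. 35 is straightforward to implement. A sketch (ours) in terms of log-probabilities, where clipping the ratio at 1 is our assumption in the spirit of V-trace-style truncation:

```python
import math

def clipped_importance_weight(logp_target, logp_behavior, clip=1.0):
    """Clipped importance weight as in Eq. 35; the clip level of 1.0 is an
    assumption here, mirroring V-trace-style truncation."""
    return min(clip, math.exp(logp_target - logp_behavior))

def weighted_psi(advantages, logp_target, logp_behavior, eta):
    """Importance-weighted nonparametric distribution (Eq. 33), evaluated
    on a batch of samples."""
    w = [clipped_importance_weight(lt, lb) * math.exp(a / eta)
         for a, lt, lb in zip(advantages, logp_target, logp_behavior)]
    z = sum(w)
    return [wi / z for wi in w]
```

When behavior and target policies coincide, all weights equal 1 and the expression reduces to the unweighted distribution of Eq. 3.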
Appendix F Network architecture and hyperparameters
For DMLab the visual observations were 72×96 RGB images, while for Atari the observations were 4 stacked frames of 84×84 grayscale images. The ResNet used to process visual observations is similar to the 3-section ResNet used in Hessel et al. (2018), except that the number of channels was multiplied by 4 in each section, so that the numbers of channels were (64, 128, 128) (Anonymous Authors, 2019). For individual DMLab levels we used the same numbers of channels as Hessel et al. (2018), i.e., (16, 32, 32). Each section consisted of a convolution and max-pooling operation (stride 2), followed by residual blocks of size 2, i.e., a convolution followed by a ReLU nonlinearity, repeated twice, with a skip connection from the residual block's input to its output. The entire stack was passed through one more ReLU nonlinearity. All convolutions had a kernel size of 3 and a stride of 1. For the humanoid control tasks from vision, the numbers of channels in each section were (16, 32, 32).
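Because only the stride-2 max-pools reduce resolution (all convolutions are stride 1), the final feature-map size follows directly from the input size. A quick sanity check (our own, assuming 'SAME'-style padding so each pool performs a ceiling division):

```python
def resnet_output_shape(height, width, num_sections=3):
    """Spatial size after the stride-2 max-pool in each ResNet section.
    Convolutions are stride 1 with padding, so only pooling downsamples."""
    for _ in range(num_sections):
        height = (height + 1) // 2  # ceiling division per stride-2 pool
        width = (width + 1) // 2
    return height, width
```

For the 72×96 DMLab inputs this gives a 9×12 final feature map, and 11×11 for the 84×84 Atari frames.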
Since some of the levels in DMLab require simple language processing, for DMLab the agents contained an additional 256-unit LSTM receiving an embedding of hashed words as input. The output of the language LSTM was then concatenated with the output of the visual processing pathway as well as the previous reward and action, then fed to the main LSTM.
For multi-task DMLab we used a 3-layer LSTM, each layer with 256 units, and an unroll length of 95 with batch size 128. For the single-task setting we used a 2-layer LSTM. For multi-task Atari and the 56-dimensional humanoid-gaps control task a single 256-unit LSTM was used, while for the 22-dimensional humanoid-run task the core consisted only of a 2-layer MLP with 512 and 256 units (no LSTM). For single-task Atari a Transformer-XL was used in place of the LSTM. Note that we followed Radford et al. (2019) in placing the layer normalization on only the inputs to each sub-block. For Atari the unroll length was 63 with a batch size of 128. For both humanoid control tasks the batch size was 64, but the unroll length was 40 for the 22-dimensional humanoid and 63 for the 56-dimensional humanoid.
In all cases the policy logits (for discrete actions) and Gaussian distribution parameters (for continuous actions) consisted of a 256-unit MLP followed by a linear readout, and similarly for the value function.
The initial values for the Lagrange multipliers in the V-MPO loss are given in Table 1.
Implementation note. We implemented V-MPO in an actor-learner framework (Espeholt et al., 2018) that utilizes TF-Replicator (Buchlovsky et al., 2019) for distributed training on TPU 8-core and 16-core configurations (Google, 2018). One practical consequence of this is that a full batch of data was in fact split into 8 or 16 minibatches, one per core/replica, and the overall result was obtained by averaging the computations performed on each minibatch. More specifically, the determination of the highest advantages and the normalization of the nonparametric distribution, Eq. 3, were performed within minibatches. While it is possible to perform the full-batch computation by utilizing cross-replica communication, we found this to be unnecessary.
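The per-replica computation can be sketched as follows (our own illustration; we take the highest half of the minibatch advantages for concreteness and normalize only over that subset, i.e., Eq. 3 restricted to one replica's minibatch):

```python
import math

def replica_psi(minibatch_advantages, eta):
    """Within one replica's minibatch: keep the highest half of advantages
    and normalize exp(A / eta) over that subset only. The top-half selection
    here is our assumption for concreteness."""
    k = len(minibatch_advantages) // 2
    top = sorted(minibatch_advantages, reverse=True)[:k]
    w = [math.exp(a / eta) for a in top]
    z = sum(w)
    return [wi / z for wi in w]
```

The full-batch version would instead select and normalize across all replicas, which requires cross-replica communication; as noted above, we found the per-minibatch approximation sufficient.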
Hyperparameter  DMLab  Atari  Continuous control
Initial η  1.0  1.0  1.0
Initial α  5.0  5.0  —
Initial α_μ  —  —  1.0
Initial α_Σ  —  —  1.0
DMLab action set. Ignoring the "jump" and "crouch" actions, which we do not use, an action in the native DMLab action space consists of 5 integers whose meanings and allowed values are given in Table 2. Following previous work on DMLab (Hessel et al., 2018), we used the reduced action set given in Table 3, with an action repeat of 4.
Action name  Range
LOOK_LEFT_RIGHT_PIXELS_PER_FRAME  [-512, 512]
LOOK_DOWN_UP_PIXELS_PER_FRAME  [-512, 512]
STRAFE_LEFT_RIGHT  [-1, 1]
MOVE_BACK_FORWARD  [-1, 1]
FIRE  [0, 1]
Action  Native DMLab action
Forward (FW)  [0, 0, 0, 1, 0]
Backward (BW)  [0, 0, 0, -1, 0]
Strafe left  [0, 0, -1, 0, 0]
Strafe right  [0, 0, 1, 0, 0]
Small look left (LL)  [-10, 0, 0, 0, 0]
Small look right (LR)  [10, 0, 0, 0, 0]
Large look left (LL)  [-60, 0, 0, 0, 0]
Large look right (LR)  [60, 0, 0, 0, 0]
Look down  [0, 10, 0, 0, 0]
Look up  [0, -10, 0, 0, 0]
FW + small LL  [-10, 0, 0, 1, 0]
FW + small LR  [10, 0, 0, 1, 0]
FW + large LL  [-60, 0, 0, 1, 0]
FW + large LR  [60, 0, 0, 1, 0]
Fire  [0, 0, 0, 0, 1]
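In an implementation, the reduced action set above is simply a lookup from a discrete agent action to a native DMLab action tuple. A sketch (ours; the key names and ordering are our own):

```python
# Native action layout: (look_lr, look_du, strafe_lr, move_bf, fire).
REDUCED_ACTION_SET = {
    "forward": (0, 0, 0, 1, 0),
    "backward": (0, 0, 0, -1, 0),
    "strafe_left": (0, 0, -1, 0, 0),
    "strafe_right": (0, 0, 1, 0, 0),
    "small_look_left": (-10, 0, 0, 0, 0),
    "small_look_right": (10, 0, 0, 0, 0),
    "large_look_left": (-60, 0, 0, 0, 0),
    "large_look_right": (60, 0, 0, 0, 0),
    "look_down": (0, 10, 0, 0, 0),
    "look_up": (0, -10, 0, 0, 0),
    "forward_small_look_left": (-10, 0, 0, 1, 0),
    "forward_small_look_right": (10, 0, 0, 1, 0),
    "forward_large_look_left": (-60, 0, 0, 1, 0),
    "forward_large_look_right": (60, 0, 0, 1, 0),
    "fire": (0, 0, 0, 0, 1),
}

def native_action(name):
    """Look up the native DMLab 5-tuple for a reduced action."""
    return REDUCED_ACTION_SET[name]
```

The policy then outputs logits over these 15 discrete actions, and the selected action is translated to its native tuple before being sent to the environment (with an action repeat of 4).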

Level name  Episode reward (IMPALA)  Episode reward (V-MPO)  Human-normalized (IMPALA)  Human-normalized (V-MPO)
alien  1163.00 ± 148.43  2332.00 ± 290.16  13.55 ± 2.15  30.50 ± 4.21
amidar  192.50 ± 9.16  423.60 ± 20.53  10.89 ± 0.53  24.38 ± 1.20
assault  4215.30 ± 294.51  1225.90 ± 60.64  768.46 ± 56.68  193.13 ± 11.67
asterix  4180.00 ± 303.91  9955.00 ± 2043.48  47.87 ± 3.66  117.50 ± 24.64
asteroids  3473.00 ± 381.30  2982.00 ± 164.35  5.90 ± 0.82  4.85 ± 0.35
atlantis  997530.00 ± 3552.89  940310.00 ± 6085.96  6086.50 ± 21.96  5732.81 ± 37.62
bank_heist  1329.00 ± 2.21  1563.00 ± 15.81  177.94 ± 0.30  209.61 ± 2.14
battle_zone  43900.00 ± 4738.04  61400.00 ± 5958.52  119.27 ± 13.60  169.52 ± 17.11
beam_rider  4598.00 ± 618.09  3868.20 ± 666.55  25.56 ± 3.73  21.16 ± 4.02
berzerk  1018.00 ± 72.63  1424.00 ± 150.93  35.68 ± 2.90  51.87 ± 6.02
bowling  63.60 ± 0.84  27.60 ± 0.62  29.43 ± 0.61  3.27 ± 0.45
boxing  93.10 ± 0.94  100.00 ± 0.00  775.00 ± 7.86  832.50 ± 0.00
breakout  484.30 ± 57.24  400.70 ± 18.82  1675.69 ± 198.77  1385.42 ± 65.36
centipede  6037.90 ± 994.99  3015.00 ± 404.97  39.76 ± 10.02  9.31 ± 4.08
chopper_command  4250.00 ± 417.91  4340.00 ± 714.45  52.29 ± 6.35  53.66 ± 10.86
crazy_climber  100440.00 ± 9421.56  116760.00 ± 5312.12  357.94 ± 37.61  423.09 ± 21.21
defender  41585.00 ± 4194.42  98395.00 ± 17552.17  244.78 ± 26.52  604.01 ± 110.99
demon_attack  77880.00 ± 8798.44  20243.00 ± 5434.41  4273.35 ± 483.72  1104.56 ± 298.77
double_dunk  -0.80 ± 0.31  12.60 ± 1.94  809.09 ± 14.08  1418.18 ± 88.19
enduro  1187.90 ± 76.10  1453.80 ± 104.37  138.05 ± 8.84  168.95 ± 12.13
fishing_derby  21.60 ± 3.46  33.80 ± 2.10  213.77 ± 6.54  236.79 ± 3.96
freeway  32.10 ± 0.17  33.20 ± 0.28  108.45 ± 0.58  112.16 ± 0.93
frostbite  250.00 ± 0.00  260.00 ± 0.00  4.33 ± 0.00  4.56 ± 0.00
gopher  11720.00 ± 1687.71  7576.00 ± 973.13  531.92 ± 78.32  339.62 ± 45.16
gravitar  1095.00 ± 232.75  3125.00 ± 191.87  29.01 ± 7.32  92.88 ± 6.04
hero  13159.50 ± 68.90  29196.50 ± 752.06  40.71 ± 0.23  94.53 ± 2.52
ice_hockey  4.80 ± 1.31  10.60 ± 2.00  132.23 ± 10.83  180.17 ± 16.50
jamesbond  1015.00 ± 91.39  3805.00 ± 595.92  360.12 ± 33.38  1379.11 ± 217.65
kangaroo  1780.00 ± 18.97  12790.00 ± 629.52  57.93 ± 0.64  427.02 ± 21.10
krull  9738.00 ± 360.95  7359.00 ± 1064.84  762.53 ± 33.81  539.67 ± 99.75
kung_fu_master  44340.00 ± 2898.70  38620.00 ± 2346.48  196.11 ± 12.90  170.66 ± 10.44
montezuma_revenge  0.00 ± 0.00  0.00 ± 0.00  0.00 ± 0.00  0.00 ± 0.00
ms_pacman  1953.00 ± 227.12  2856.00 ± 324.54  24.77 ± 3.42  38.36 ± 4.88
name_this_game  5708.00 ± 354.92  9295.00 ± 679.83  59.33 ± 6.17  121.64 ± 11.81
phoenix  37030.00 ± 6415.95  19560.00 ± 1843.44  559.60 ± 98.99  290.05 ± 28.44
pitfall  -4.90 ± 2.34  -2.80 ± 1.40  3.35 ± 0.04  3.39 ± 0.02
pong  20.80 ± 0.19  21.00 ± 0.00  117.56 ± 0.54  118.13 ± 0.00
private_eye  100.00 ± 0.00  100.00 ± 0.00  0.11 ± 0.00  0.11 ± 0.00
qbert  5512.50 ± 741.08  15297.50 ± 1244.47  40.24 ± 5.58  113.86 ± 9.36
riverraid  8237.00 ± 97.09  11160.00 ± 733.06  43.72 ± 0.62  62.24 ± 4.65
road_runner  28440.00 ± 1215.99  51060.00 ± 1560.72  362.91 ± 15.52  651.67 ± 19.92
robotank  29.60 ± 2.15  46.80 ± 3.42  282.47 ± 22.22  459.79 ± 35.29
seaquest  1888.00 ± 63.26  9953.00 ± 973.02  4.33 ± 0.15  23.54 ± 2.32
skiing  -16244.00 ± 592.28  -15438.10 ± 1573.39  6.69 ± 4.64  13.01 ± 12.33
solaris  1794.00 ± 279.04  2194.00 ± 417.91  5.03 ± 2.52  8.64 ± 3.77
space_invaders  793.50 ± 90.61  1771.50 ± 201.95  42.45 ± 5.96  106.76 ± 13.28
star_gunner  44860.00 ± 5157.74  60120.00 ± 1953.60  461.05 ± 53.80  620.24 ± 20.38
surround  2.50 ± 1.04  4.00 ± 0.62  75.76 ± 6.31  84.85 ± 3.74
tennis  -0.10 ± 0.09  23.10 ± 0.26  152.90 ± 0.61  302.58 ± 1.69
time_pilot  10890.00 ± 787.46  22330.00 ± 2443.11  440.77 ± 47.40  1129.42 ± 147.07
tutankham  218.50 ± 13.53  254.60 ± 9.99  132.59 ± 8.66  155.70 ± 6.40
up_n_down  175083.00 ± 16341.05  82913.00 ± 12142.08  1564.09 ± 146.43  738.18 ± 108.80
venture  0.00 ± 0.00  0.00 ± 0.00  0.00 ± 0.00  0.00 ± 0.00
video_pinball  59898.40 ± 23875.14  198845.20 ± 98768.54  339.02 ± 135.13  1125.46 ± 559.03
wizard_of_wor  6960.00 ± 1730.97  7890.00 ± 1595.77  152.55 ± 41.28  174.73 ± 38.06
yars_revenge  12825.70 ± 2065.90  41271.70 ± 4726.72  18.90 ± 4.01  74.16 ± 9.18
zaxxon  11520.00 ± 646.81  18820.00 ± 754.69  125.67 ± 7.08  205.53 ± 8.26
Median  117.56  155.70  
Setting  Single-task  Multi-task
Agent discount  0.99
Image height  72
Image width  96
Number of action repeats  4
Number of LSTM layers  2  3
Pixel-control cost  10
(log-uniform)
Setting  Single-task  Multi-task
Environment discount on end of life  1  0
Agent discount  0.997  0.99
Clipped reward range  no clipping
Max episode length  30 mins (108,000 frames)
Image height  84
Image width  84
Grayscale  True
Number of stacked frames  4
Number of action repeats  4
TrXL: Key/Value size  32
TrXL: Number of heads  4
TrXL: Number of layers  8
TrXL: MLP size  512
1000  100
(log-uniform)
Setting  Humanoid-Pixels  Humanoid-state  OpenAI Gym
Agent discount  0.99
Unroll length  63  63  39
Image height  64
Image width  64
Target update period  100
0.1  0.01
(log-uniform)
(log-uniform)