Proximal Policy Optimization with Mixed Distributed Training

07/15/2019 · Zhenyu Zhang, et al. · Shanghai University

Instability and slowness are two main problems in deep reinforcement learning. Even though proximal policy optimization (PPO) is the state of the art, it still suffers from both. We introduce an improved algorithm based on PPO, mixed distributed proximal policy optimization (MDPPO), and show that it accelerates and stabilizes training. In our algorithm, multiple different policies are trained simultaneously and each of them controls several identical agents that interact with environments. Actions are sampled by each policy separately as usual, but the trajectories used for training are collected from the agents of all policies, instead of only one. We find that if we elaborately select some auxiliary trajectories for training, the algorithm becomes more stable and converges more quickly, especially in environments with sparse rewards.


I Introduction

Function approximation has become the standard way to represent a policy or value function in reinforcement learning, replacing tabular forms [25]. An emerging trend is to combine it with deep learning and much more complicated neural networks, such as the convolutional networks in DQN [14], for large-scale or continuous state spaces. However, while DQN solves problems with high-dimensional state spaces, it can only handle discrete action spaces. Policy-gradient [26] and actor-critic [11] algorithms directly model and train the policy, and the output of a policy network is usually a continuous Gaussian distribution. However, compared with traditional lookup tables or linear function approximation, nonlinear function approximation cannot be guaranteed to converge [2] because of its nonconvex optimization. Thus the performance of algorithms using a nonlinear value function or policy is very sensitive to the initial values of the network weights.

Although PPO, which we mainly discuss in this paper, is the state of the art, it is built on the actor-critic framework. According to [17], actor-critic is, to some extent, a kind of generative adversarial network, which has the tricky disadvantage of instability during training. Combining it with nonlinear function approximation makes training even worse. Thus, the speed of convergence varies greatly, or divergence happens, even when two policies have the same hyperparameters and environments, which leaves developers confused about whether the cause is the algorithm itself, the hyperparameters, or just the initial values of the neural network weights. So, for now, instability and slowness of convergence are still two significant problems in deep reinforcement learning with nonlinear function approximation [3] and actor-critic style architectures.

In this paper, we propose a mixed distributed training approach which is more stable and converges more quickly than standard PPO. We set multiple policies with different initial parameters to control several agents at the same time. Decisions that affect the environments can only be made by each agent, but all trajectories are gathered and, after elaborate selection, allocated to the policies for training. We define two selection criteria: (i) complete trajectories, where an agent successfully completes a task, such as reaching a goal, or lasts until the end of an episode, and (ii) auxiliary trajectories, where an agent does not complete a task but its cumulative reward is much higher than that of other trajectories. In previous work, [23], [16], [8], [5] only distribute agents, not policies. [13] and RL with evolution strategies such as [18] distribute policies, but they only choose and dispatch the best policy with the highest cumulative rewards. So, this work is the first, to our knowledge, to distribute both policies and agents and to mix trajectories to stabilize and accelerate RL training.

For simplicity of implementation, we evaluated our approach on the Unity platform with the Unity ML-Agents Toolkit [10], which makes it easy to run parallel physical scenes. We tested four Unity scenarios: Simple Roller, 3D Ball, Obstacle Roller, and USV. Our algorithm performs very well in all scenarios with random initial weights. The source code can be accessed at https://github.com/BlueFisher/RL-PPO-with-Unity.

II Related Work

Several approaches have been proposed to address instability. In deep Q networks (DQN [15]), a separate target network generates the target Q value in the online Q-learning update, which makes training much more stable. The experience replay technique stores the agent's transitions at each time step in a data set and samples a mini-batch from it during training, in order to break the correlation between transitions, since the input of a neural network is preferably independent and identically distributed. Due to the maximization step, conventional DQN is affected by an overestimation bias; double Q-learning [27] addresses this by decoupling the selection of the action from its evaluation. Prioritized experience replay [20] improves data efficiency by replaying important transitions more often. The dueling network architecture [28] estimates the advantage function instead of the state-action value function. Deep deterministic policy gradient (DDPG [12]) applies both experience replay and target networks to the deterministic policy gradient [24], using two separate target networks to represent the deterministic policy and the Q value. However, with stochastic policy gradients, almost all algorithms based on the actor-critic style [11] suffer from divergence, because the difficulty of making both actor and critic converge makes the whole algorithm even more unstable. Trust region policy optimization (TRPO [21]) and proximal policy optimization (PPO [23], [7]) bound the update between the policies before and after an optimization step, so that the update does not become large enough to go out of control.

In terms of accelerating training, the general reinforcement learning architecture (Gorila [16]), asynchronous advantage actor-critic (A3C [13]), and distributed PPO (DPPO [7]) take full advantage of multi-core processors and distributed systems. Multiple agents act in their own copies of the environment simultaneously. Gradients are computed separately and sent to a central parameter server, which updates a central copy of the model asynchronously; the updated parameters are then sent back to every agent at a specific frequency. Distributed prioritized experience replay [8] overcomes the fact that reinforcement learning cannot efficiently utilize GPU resources, but it can only be applied to algorithms that use a replay buffer, such as DQN or DDPG. Besides, a distributed system is neither easy nor economical to implement. Reinforcement learning with unsupervised auxiliary tasks (UNREAL [9]) accelerates training by adding additional tasks to learn. However, these tasks mainly target environments with graphical state input, where the change of pixels is predicted, and they are difficult to transfer to simpler problems with vector state spaces.

III Background

In reinforcement learning, algorithms based on the policy gradient provide an outstanding paradigm for continuous action space problems. The purpose of all such algorithms is to maximize the expected cumulative reward. The most commonly used gradient of the objective function with a baseline can be written in the following form

∇_θ J(θ) = Σ_{s∈S} ρ^π(s) Σ_{a∈A} ∇_θ π_θ(a|s) A^π(s,a)   (1)

where S denotes the set of all states and A the set of all actions, ρ^π is the on-policy distribution over states, π_θ(a|s) is a stochastic policy that maps state s to action a, and A^π(s,a) is an advantage function.
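To make (1) concrete, the sketch below (not the authors' code) estimates the gradient from sampled transitions for a simple linear Gaussian policy with fixed standard deviation; the linear parameterization and the value of `sigma` are assumptions for illustration only.

```python
import numpy as np

def pg_estimate(states, actions, advantages, W, sigma=0.5):
    """Monte-Carlo estimate of the policy gradient (1) for a linear
    Gaussian policy a ~ N(W @ s, sigma^2 I). Returns dJ/dW."""
    grad = np.zeros_like(W)
    for s, a, adv in zip(states, actions, advantages):
        mu = W @ s                                   # policy mean
        score = np.outer((a - mu) / sigma ** 2, s)   # d log pi / d W
        grad += score * adv                          # weight by advantage
    return grad / len(states)

# toy usage: 3 transitions, state dim 4, action dim 2
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
states = rng.normal(size=(3, 4))
actions = rng.normal(size=(3, 2))
advantages = rng.normal(size=3)
print(pg_estimate(states, actions, advantages, W).shape)  # (2, 4)
```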

TRPO replaces the objective function with a surrogate objective, which has a constraint on the extent of the update from an old policy to a new one. Specifically,

maximize_θ  Ê_t[ (π_θ(a_t|s_t) / π_{θ_old}(a_t|s_t)) Â_t ]   subject to   Ê_t[ KL(π_{θ_old}(·|s_t) ‖ π_θ(·|s_t)) ] ≤ δ   (2)

where π_{θ_old} is the old policy that generates actions but whose parameters are fixed during training, and π_θ is the policy that we try to optimize, but not too much, so an upper limit δ is added to constrain the KL divergence between π_{θ_old} and π_θ.

PPO can be treated as an approximate but much simpler version of TRPO. It clips the ratio between π_θ and π_{θ_old} and changes the surrogate objective function into the form

L^CLIP(θ) = Ê_t[ min( r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t ) ]   (3)

where r_t(θ) = π_θ(a_t|s_t) / π_{θ_old}(a_t|s_t) and ε is the clipping bound. Note that PPO here is an actor-critic style algorithm where the actor is the policy π_θ and the critic is the advantage function Â_t.
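As a minimal NumPy sketch of (3) (illustrative, not the authors' implementation), given per-transition probability ratios and advantage estimates:

```python
import numpy as np

def ppo_clip_objective(ratio, advantages, eps=0.2):
    """Clipped surrogate objective of equation (3), averaged over a batch."""
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))

# example: ratio changes beyond the clip range stop contributing gradient signal
ratio = np.array([0.5, 1.0, 1.5])
adv = np.array([1.0, -1.0, 2.0])
print(ppo_clip_objective(ratio, adv, eps=0.2))
```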

Intuitively, the advantage function represents how good a state-action pair is compared with the average value of the current state, i.e., A^π(s,a) = Q^π(s,a) − V^π(s). The most widely used technique for computing the advantage function is generalized advantage estimation (GAE [22]). One popular form that can easily be applied to PPO or any policy-gradient-like algorithm is

Â_t = −V(s_t) + r_t + γ r_{t+1} + ⋯ + γ^{T−t−1} r_{T−1} + γ^{T−t} V(s_T)   (4)

where T denotes the maximum length of a trajectory rather than the terminal time step of a complete task, and γ is a discount factor. If the episode terminates, we only need to set V(s_T) to zero, without bootstrapping, and (4) becomes Â_t = G_t − V(s_t), where G_t is the discounted return following time t as defined in [25]. An alternative option is to use a more general version of GAE:

Â_t^{GAE(γ,λ)} = Σ_{l=0}^{T−t−1} (γλ)^l δ_{t+l}   (5)

where δ_t = r_t + γ V(s_{t+1}) − V(s_t) is the TD error. Note that (5) reduces to (4) when λ = 1, which has high variance, and reduces to δ_t when λ = 0, which introduces bias. Furthermore, in DPPO, both data collection and gradient computation are distributed. All workers share one centralized network, synchronizing gradients and updating parameters.
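Returning to the advantage estimators, a minimal NumPy sketch of (4) and (5) is shown below; it assumes one trajectory of rewards, value predictions for s_0..s_T, and a flag indicating whether the final state is terminal.

```python
import numpy as np

def gae(rewards, values, gamma=0.99, lam=0.95, terminal=False):
    """Generalized advantage estimation, equation (5).
    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T).
    lam=1 recovers (4); lam=0 reduces to the one-step TD error."""
    T = len(rewards)
    v_next = values[1:].copy()
    if terminal:
        v_next[-1] = 0.0                 # no bootstrapping at episode end
    deltas = rewards + gamma * v_next - values[:-1]
    advantages = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * lam * acc
        advantages[t] = acc
    return advantages

rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.1, 0.2, 0.5, 0.0])
print(gae(rewards, values, terminal=True))
```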

We only need to approximate V^π instead of Q^π to compute the advantage estimate, since Â_t in (4) and (5) depends only on rewards and state values. So, besides optimizing the surrogate objective, we also have to minimize the value loss

L^V(φ) = Ê_t[ ( V_φ(s_t) − (Â_t + V_{φ_old}(s_t)) )² ]   (6)

where V_{φ_old} is the fixed value function that is used to compute Â_t.
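A matching sketch of the value loss (6); it assumes the regression target is the advantage estimate added back to the old value prediction, which stays fixed during the update.

```python
import numpy as np

def value_loss(v_pred, v_old, advantages):
    """Squared-error value loss of equation (6): the regression target
    advantages + V_old(s) is computed with the old, fixed value function."""
    target = advantages + v_old
    return np.mean((v_pred - target) ** 2)

print(value_loss(np.array([0.4, 0.1]), np.array([0.5, 0.2]), np.array([1.0, -0.5])))
```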

IV Mixed Distributed Proximal Policy Optimization

We distribute both policies and agents, in contrast to DPPO, where only agents are distributed. We set N policies that have exactly the same architectures of policy network and value function network as well as the same hyperparameters. Each policy π_{θ_i}, like a command center, controls a group of M identical agents, where i denotes the serial number of the policy that the group belongs to. So, N × M environments run in parallel, interacting with N × M agents divided into N groups. Each agent interacts with its own shared policy π_{θ_i}, sending states and receiving actions. When all agents have finished a complete episode, each policy is fed not only with trajectories from the group of agents that it controls but also with a carefully selected set of trajectories from the other groups, which may accelerate and stabilize the training process. Figure 1 illustrates how decision and training data flow.

Fig. 1: MDPPO: There are N policies in the figure. Each policy controls M agents (red lines), giving them the decisions on how to interact with the environments (blue and green lines). The data for computing gradients flow through the gray lines, where each policy is updated not only with the data generated by the agents it controls, but also with data from other groups of agents.

We define two sorts of trajectories to be used for mixed training.

Complete trajectories, where an agent successfully completes a task. We simply divide tasks into two categories: (i) tasks with a specific goal, like letting an unmanned vehicle reach a designated target point or making a robotic arm grab an object, and (ii) tasks without a goal, where the agent tries to make the trajectory as long as possible until the maximum episode length is reached, like keeping a balance ball on a wobbling plate.

Auxiliary trajectories, where an agent does not complete a task but its cumulative reward is much higher than that of other trajectories. In many real-world scenarios, extrinsic rewards are extremely sparse: an unmanned surface vehicle may get no reward, or may fail to reach the target point, for a long time at the beginning. However, we can utilize the trajectories that have the highest cumulative rewards to encourage all policies to learn from the currently best behavior.
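The selection of mixed trajectories might look like the following sketch (illustrative only; the `done` flag and the 40% auxiliary fraction are assumptions, the latter taken from the hyperparameters in Appendix A):

```python
def select_mixed(trajectories, aux_ratio=0.4):
    """trajectories: list of dicts with keys 'transitions', 'reward' (cumulative
    episode reward) and 'done' (True if the task goal was reached or the
    episode lasted the maximum number of steps)."""
    complete = [t for t in trajectories if t["done"]]
    rest = sorted((t for t in trajectories if not t["done"]),
                  key=lambda t: t["reward"], reverse=True)
    auxiliary = rest[: int(len(rest) * aux_ratio)]   # highest cumulative rewards
    return complete + auxiliary                      # mixed data shared by all policies
```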

for iteration = 1, 2, ... do
      for policy i = 1, ..., N do
            for agent j = 1, ..., M do
                  Run policy π_{θ_i} in its environments for T timesteps and compute advantage estimates.
                  Store trajectories and cumulative rewards in dataset D_i.
            end for
      end for
      Compute "auxiliary" and "complete" trajectories from D_1, ..., D_N and store them in D_mix.
      for policy i = 1, ..., N do
            Optimize the surrogate objective (3) (or its subtraction form (7)) and the value loss (6) separately, or as a combined loss, with mixed transitions from D_i and D_mix.
      end for
end for
Algorithm 1 Mixed Distributed Proximal Policy Optimization (MDPPO)

We mix the complete trajectories and auxiliary trajectories and allocate them to every policy during training. Each policy then computes gradients with its own data and the mixed data together, updating its policy network and value function independently.

Note that, in practice, mixed transitions from other policies may cause the denominator of the ratio in (3) to be zero if π_{θ_old}(a_t|s_t) is so small that it falls below the floating-point range of the computer. Gradients then cannot be computed correctly, resulting in "NaN" problems when using deep learning frameworks like TensorFlow [1]. We introduce four solutions: (i) simply removing, before training, all transitions whose old-policy probability underflows to zero, (ii) adding a small number to the denominator so that it has a lower bound, (iii) rewriting the ratio in exponential form without modifying the clipping bound, and (iv), which we used in our experiments, rewriting the ratio in subtraction form instead of fractional form and limiting the update of the new policy to the range [π_{θ_old}(a_t|s_t) − ε, π_{θ_old}(a_t|s_t) + ε]:

L^SUB(θ) = Ê_t[ min( (π_θ(a_t|s_t) − π_{θ_old}(a_t|s_t)) Â_t, clip(π_θ(a_t|s_t) − π_{θ_old}(a_t|s_t), −ε, ε) Â_t ) ]   (7)

We no longer need to worry about NaN problems since there is no denominator in (7).
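A sketch of the subtraction-form surrogate (7), assuming per-transition probabilities (or densities) of the new and old policies; the clipping bound 0.02 follows Appendix A:

```python
import numpy as np

def sub_clip_objective(pi_new, pi_old, advantages, eps=0.02):
    """Subtraction-form surrogate of equation (7): the difference
    pi_new - pi_old replaces the fractional ratio, so a near-zero pi_old
    can no longer blow up the objective."""
    diff = pi_new - pi_old
    unclipped = diff * advantages
    clipped = np.clip(diff, -eps, eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))

print(sub_clip_objective(np.array([0.30, 0.01]),
                         np.array([0.28, 0.00]),
                         np.array([1.0, -2.0])))
```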

Furthermore, there are two common forms of loss functions and network architectures: (i) training the policy and value losses separately, as in standard actor-critic style algorithms, with completely different weights for the policy and value function networks, and (ii) letting the policy and value function networks share a part of their weights [23]. In the latter case, the surrogate objective and the value loss should be combined into one loss function to make sure the update of the shared weights is stable.

Note that the entropy term in [23] only acts as a regularizer, rather than maximizing the entropy as in [6]. We find that in our approach the entropy term is optional, because a part of the data used to update each policy is actually generated by other, different policies, which is exactly why MDPPO encourages exploration. Although all policies have identical architectures and hyperparameters, the initial weights of their neural networks are totally different; this difference, which is a source of instability in standard PPO, gives MDPPO a more powerful ability to explore.
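For the shared-weights case, the combined loss might be assembled as in the sketch below (illustrative only): it uses the closed-form entropy of a diagonal Gaussian, the entropy coefficient 0.001 from Appendix A, and an assumed value-loss coefficient of 0.5 that is not specified in the paper.

```python
import numpy as np

def combined_loss(ratio, advantages, v_pred, v_target, log_std,
                  eps=0.2, v_coef=0.5, ent_coef=0.001):
    """Combined objective for shared weights: clipped surrogate minus value
    loss plus an entropy regularizer (to be maximized; negate for a minimizer).
    v_coef is an assumed coefficient, not taken from the paper."""
    surrogate = np.mean(np.minimum(ratio * advantages,
                                   np.clip(ratio, 1 - eps, 1 + eps) * advantages))
    v_loss = np.mean((v_pred - v_target) ** 2)
    # entropy of a diagonal Gaussian: sum over dims of 0.5*log(2*pi*e) + log(sigma)
    entropy = np.sum(0.5 * np.log(2 * np.pi * np.e) + log_std)
    return surrogate - v_coef * v_loss + ent_coef * entropy
```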

Algorithm 1 summarizes mixed distributed proximal policy optimization (MDPPO).

In contrast to DPPO, where the computation of gradients is distributed to every agent and the gradients are then aggregated for the central policy update, in our approach agents are only used to generate data, and the transition data are transferred back to the policy that controls them, so gradients are computed centrally within each policy; this takes full advantage of GPU resources. In order to break the correlation between transitions, we shuffle all transitions before training.

IV-A MDPPO with Separated Critic

Although the red boxes in Figure 1 are labeled "policy", the data flowing through the gray lines are used to update both the policy and the value function, which are usually called the actor and the critic in actor-critic style algorithms. We find that giving each policy its own value function network is not really necessary. We want more actors, mixed to some extent, to encourage exploration and increase diversity, which may speed up training and make it more stable. The critic, however, only needs as many state-reward transitions as possible to make the approximation of state values more accurate. Besides, the value loss is independent of the policy or action. So, we can separate the value function network from Figure 1 and feed all transitions to one centralized critic. Figure 2 shows the separated value function extracted from the policies in Figure 1.

Fig. 2: MDPPO with Separated Critic (MDPPOSC): The architecture is the same as in Figure 1, except for the separated value function. The data flowing through the gray lines are now fed not only to the actual policies, but also to the centralized value function (gray box), which consumes all transitions generated by the agents without filtering.

Algorithm 2 illustrates MDPPO with a separated critic, where the separated value function is denoted as V_φ.

for iteration = 1, 2, ... do
      for policy i = 1, ..., N do
            for agent j = 1, ..., M do
                  Run policy π_{θ_i} in its environments for T timesteps and compute advantage estimates.
                  Store trajectories and cumulative rewards in dataset D_i.
            end for
      end for
      Compute "auxiliary" and "complete" trajectories from D_1, ..., D_N and store them in D_mix.
      Minimize the value loss (6) for V_φ with transitions from D_1, ..., D_N.
      for policy i = 1, ..., N do
            Optimize the surrogate objective (3) (or its subtraction form (7)) with mixed transitions from D_i and D_mix.
      end for
end for
Algorithm 2 Mixed Distributed Proximal Policy Optimization with Separated Critic (MDPPOSC)

However, under sparse positive rewards, the value function may be fed with a large number of useless transitions generated by a bad policy at the beginning, which can make the critic converge to a local optimum very quickly and make it hard to escape, since the algorithm would then learn a terrible policy under the guidance of a terrible advantage function. In practice, we propose two tricks: (i) reducing the number of transitions used for training, i.e., using only a portion of the data to train the value function, and (ii) borrowing an idea from prioritized experience replay [19], but simply feeding only transitions whose TD error is greater than a threshold. We also let the threshold decay exponentially, because we would like the value function to explore more possibilities at the beginning but converge to the true optimum in the end.
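A sketch of the second trick, assuming a threshold that decays exponentially with the training step; the initial threshold 0.2 matches the Obstacle Roller experiment, while the decay rate is an illustrative assumption:

```python
import numpy as np

def filter_by_td_error(transitions, td_errors, step,
                       init_threshold=0.2, decay=0.999):
    """Keep only transitions whose absolute TD error exceeds a threshold
    that decays exponentially as training proceeds."""
    threshold = init_threshold * decay ** step
    keep = np.abs(np.asarray(td_errors)) > threshold
    return [t for t, k in zip(transitions, keep) if k]
```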

V Experiments

We tested our approach in four environments, Simple Roller, 3D Ball, Obstacle Roller, and USV, on the Unity platform with the Unity ML-Agents Toolkit [10]. Distributed experiment environments are difficult and expensive to build in the physical world. Even on one computer, it is still not easy to implement a parallel system that has to manage multiple agents, threads, and physical simulations. However, as a game engine, Unity provides fast and distributed simulation: all we need to do is copy an environment several times, and Unity runs all copies in parallel without any extra coding. Besides, Unity has a powerful physics engine, which makes it easy to implement an environment that obeys physical laws without implementing details such as gravity or collision detection. Simple Roller and 3D Ball are two standard example environments in the Unity ML-Agents Toolkit. Obstacle Roller is an obstacle avoidance environment based on Simple Roller, and USV is a custom target tracking environment on the water; both are introduced in detail in Section V-A. The details of the experimental hyperparameters are listed in Appendix A.

V-A Environments Setup

Simple Roller (Figure 3(a)) is a tutorial project of the Unity ML-Agents Toolkit. The agent roller's goal is to hit the target as quickly as possible while avoiding falling off the floor plane. The state space size is 8, representing the relative position between the agent and the target, the agent's distance to the edges of the floor plane, and the agent's velocity. The action space size is 2, controlling the forces applied to the roller in the x and z directions. We set the initial state of each episode to the terminal state of the previous episode if the agent hit the target in that episode; otherwise, we reset the initial state. Hence, a little oscillation around the optimal reward is normal.

3D Ball (Figure 3(b)) is an example project of the Unity ML-Agents Toolkit, where we try to control a platform agent to catch a falling ball and keep it balanced on the platform. The state space size is 8, representing the agent's rotation, the relative position between the ball and the agent, and the ball's velocity. The action space size is 2, controlling the agent's rotation along the x and z axes.

There is a harder version of 3D Ball, 3D Ball Hard, which has the same environment setting and goal except for the state information. The state space size is only 6, since the velocity of the ball is removed. The agent platform is therefore only aware of part of the environment, and training becomes a partially observed MDP problem. The agent has to take the current observation and a stack of past observations as the input state of the policy and value function networks. We stacked 9 observations, as recommended by ML-Agents, so the state space size was actually 54.

Obstacle Roller (Figure 3(c)) is a custom obstacle avoidance environment based on Simple Roller. We added a white ball between the target and the agent roller and gave it a static policy that tries to prevent the agent from hitting the target. If the agent roller hit the yellow target, it received a positive reward; if it fell off the plane or hit the obstacle, it received a negative reward; otherwise, it received the usual time penalty as in Simple Roller. In addition to the time penalty, to tackle the sparse extrinsic reward signal, we added a small shaping reward at every step based on whether the agent's velocity direction pointed toward the target. The initial state of an episode was the last state of the previous episode unless the agent fell off the platform or hit the obstacle. We tried to make the roller stay on the platform as long as possible while avoiding the obstacle.

USV (Figure 3(d)) is a target tracking environment similar to Simple Roller. The agent is an unmanned surface vessel sailing on the water, trying to hit the white cuboid target while avoiding sailing out of a circular area. The target reappears at a random position in the area after the USV reaches it. We developed a realistic double-rudder, twin-engine USV and an ocean surface effect, taking full advantage of the Unity physics system. Compared to the simple 2D environment in [4], our environment exhibits a more realistic physics engine (gravity, water friction, drift), graphics (water reflection, illumination, etc.), and collision system. The USV model has realistic thrust and rudder designs comparable to a real-world USV. Table I shows the main physical coefficients of the USV. For simplicity of training, the state space size is 12, representing the agent's position, velocity, and heading angle and the target's position. The actions control the USV's thrust and steering. In order to tackle the sparse extrinsic reward, the time penalty of the reward function is shaped by two terms that indicate whether the USV's velocity and heading directions point toward the target. So the goal of the USV is to reach the target as soon as possible while keeping its heading angle and velocity direction toward the target as much as possible.

max angle 35 degree
rotation speed 50 degree/s
max speed 35 m/s
max RPM 800
min RPM 6000
max thrust 5000
reverse thrust coefficient 0.5
TABLE I: Physical coefficients of USV
(a) Simple Roller
(b) 3D Ball
(c) Obstacle Roller
(d) USV
Fig. 3: The four environments in which we tested our algorithms. Note that the environment of 3D Ball Hard is the same as 3D Ball, except that the ball's velocity is removed from the state.

V-B Our Experiments

Simple Roller. We first tested Simple Roller with 5 policies, i.e., N = 5. Each policy was modeled as a multivariate Gaussian distribution with a mean vector and a diagonal covariance matrix, parameterized by θ. We set 20 distributed agents and environments in all, i.e., each policy controlled 4 agents and M = 4. The "complete" trajectories were simply those in which the roller hit the target. As for the "auxiliary" trajectories, we chose the top 40% of trajectories with the greatest cumulative rewards. Figure 4 shows our four experiments: MDPPO, MDPPO with the subtraction ratio, and standard PPO with four and with twenty distributed agents. We divided them into two groups, one with shared weights and the other without.

(a) MDPPO with separated weights
(b) MDPPO with shared weights
Fig. 4: MDPPO (red) and MDPPO with the subtraction ratio (green) were tested with 5 policies, each controlling 4 agents. Yellow and blue lines are standard PPO with 4 agents and 20 agents respectively. PPO with 4 agents was tested 5 times, while PPO with 20 agents was tested 10 times. The lighter color in the background shows the best and worst performance of each algorithm. The narrower the background, the more stable the algorithm.

Both MDPPO and MDPPO with the subtraction ratio outperformed standard PPO. The time to convergence was shorter and, most importantly, MDPPO was much more stable: all five policies converged almost simultaneously and had similar performance. In contrast, only a few of the policies optimized by standard PPO converged, which shows the instability of PPO. Besides, the speed of convergence varied considerably between policies; some converged as quickly as MDPPO, but most converged much later or not at all. We could not find a pattern relating the time to convergence to the different initial parameters. Table II compares the four algorithms in terms of convergence counts and times.

Algorithm       Parameters   Policies   Converged   Episode*
MDPPO           separated    5          5           200 - 250
MDPPO_Sub       separated    5          5           125 - 750
PPO_4agents     separated    5          0           N/A
PPO_20agents    separated    10         6           350 - 2k+**
MDPPO           shared       5          5           150 - 250
MDPPO_Sub       shared       5          5           125 - 400
PPO_4agents     shared       5          2           450 - 2k+**
PPO_20agents    shared       10         9           150 - 2k+**
MDPPOSC         -            5          5           400 - 400
MDPPOSC_Sub     -            5          5           200 - 400

  • *There is no strict criterion for time to convergence, so the episode counts here are somewhat subjective and carry some error.

  • **We ran these algorithms for 2000 episodes. They showed a trend toward convergence around the 2000th episode but did not actually converge to the optimal reward. We use "2k+" because we believe these algorithms may converge in later episodes.

TABLE II: Count and time to convergence of different algorithms

We also tested the performance of our approach with different combinations of N and M, keeping the total number of agents at 20, i.e., N × M = 20. Figure 5(a) shows the comparison between five combinations. We found that with fewer policies the algorithm becomes more unstable, especially when N = 1, i.e., standard PPO. Besides, even the best-performing run with a single policy converged more slowly than any run with more policies. However, it is also not always good to increase N: in this experiment, a larger N could indeed accelerate the training process, but it could also make the algorithm much more unstable.

(a) Comparison between combinations of N and M
(b) MDPPOSC
Fig. 5: (a) compares five combinations of the number of policies N and the number of agents per policy M, showing that with fewer policies the algorithm becomes more unstable and slower to converge. However, too many policies may also cause instability; a moderate number of policies is the better choice in our experiments. (b) shows MDPPO with a separated critic. The two algorithms perform similarly and both outperform standard PPO.

Finally, we tested MDPPO with a separated critic. Simple Roller is not actually a sparse-positive-reward environment, so we did not set any TD-error threshold. Figure 5(b) illustrates the results of the algorithms with the two ratio forms. Both showed similarly good performance.

3D Ball. Similar to the Simple Roller experiments, we tested MDPPO and PPO with separated and shared weights, as well as MDPPOSC. However, 3D Ball has a different reward structure, so the criterion for "complete" trajectories had to be modified. Unlike Simple Roller, where we defined "complete" trajectories as those in which the agent hit the target, there is no such specific goal in 3D Ball. What the agent tries to do is keep the ball on the platform as long as it can until the maximum number of steps is reached. Thus, we modified the definition of "complete" trajectories to those that lasted the maximum number of steps, which we set to 500. Figure 6 shows the results on 3D Ball. Due to the simplicity and full observability of the environment, the performance of our algorithms was quite similar to PPO, but still showed a small improvement in stability.

On 3D Ball Hard, Figure 7 shows that both MDPPO and MDPPOSC outperform PPO. Although the speed of convergence was not our algorithms' advantage in this environment, they showed strong stability compared to PPO: all five policies converged quickly and almost simultaneously.

(a) MDPPO (separated)
(b) MDPPO (shared)
(c) MDPPOSC
Fig. 6: The results of 3D Ball.
(a) MDPPO (separated)
(b) MDPPO (shared)
(c) MDPPOSC
Fig. 7: The results of 3D Ball Hard.

Obstacle Roller. Due to the sparse extrinsic reward in this scenario, we set the TD-error threshold in MDPPOSC to 0.2 in order to filter out the large number of negative transitions that might otherwise lead the value function to converge to a local optimum too soon, which could cause the agent to learn a terrible policy under the guidance of a false advantage.

Our experimental results are shown in Figure 8. Both MDPPO and MDPPOSC performed better and were more stable, and all policies performed similarly. In contrast, standard PPO, even when trained with 20 parallel agents, could not perform well and had much greater uncertainty: most initial parameters resulted in very poor policies.

(a) MDPPO (separated)
(b) MDPPO (shared)
(c) MDPPOSC
Fig. 8: The results of Obstacle Roller.

USV. Figure 9 shows the results in the USV environment. MDPPOSC did not perform well in this scenario: only 2 of the 5 fractional-form policies and 4 of the 5 subtraction-form policies converged. However, MDPPO showed robust performance, especially MDPPO with separated weights. Although it is very difficult for a person to steer a USV sailing on a dynamic water surface to hit a small target, the trained USV learned when to speed up or slow down and when to steer right or left according to the new position of the target, so that it could head toward the target as quickly as possible. We held a match between human control and our trained policy to see which could complete the task faster. The trained USV won: it not only reached the target faster, but also left the boundary less often.

(a) MDPPO (separated)
(b) MDPPO (shared)
(c) MDPPOSC
Fig. 9: The results of USV

VI Conclusion

Slowness and instability are two major problems in reinforcement learning, which make it difficult to apply RL algorithms directly to real-world environments. To tackle these two problems, we presented Mixed Distributed Proximal Policy Optimization (MDPPO) and Mixed Distributed Proximal Policy Optimization with Separated Critic (MDPPOSC) in this paper and made three contributions. (i) We proposed a method that distributes not only agents but also policies: multiple policies are trained simultaneously and each of them controls multiple identical agents. (ii) Transitions from all policies are mixed and, after elaborate selection, fed to each policy for training; we defined complete trajectories and auxiliary trajectories to accelerate the training process. (iii) We formulated a practical MDPPO algorithm based on the Unity platform with the Unity ML-Agents Toolkit to simplify the implementation of a distributed reinforcement learning system. Our experiments indicate that MDPPO is much more robust to random initial neural network weights and can accelerate training compared with standard PPO. However, because of the limitations of our training equipment, we could not test our algorithms on a real distributed system. Moreover, the implementation of such a system is quite complex, and the data transmission and mixing may slow down training. We leave these engineering problems for future work.

References

  • [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283.
  • [2] Leemon Baird. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning Proceedings 1995, pages 30–37. Elsevier.
  • [3] Shalabh Bhatnagar, Doina Precup, David Silver, Richard S Sutton, Hamid R. Maei, and Csaba Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1204–1212. Curran Associates, Inc.
  • [4] Sam Michael Devlin and Daniel Kudenko. Dynamic potential-based reward shaping. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems, pages 433–440. IFAAMAS, 2012.
  • [5] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
  • [6] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. 2018.
  • [7] Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, and David Silver. Emergence of locomotion behaviours in rich environments.
  • [8] Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado van Hasselt, and David Silver. Distributed prioritized experience replay.
  • [9] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks.
  • [10] Arthur Juliani, Vincent-Pierre Berges, Esh Vckay, Yuan Gao, Hunter Henry, Marwan Mattar, and Danny Lange. Unity: A general platform for intelligent agents.
  • [11] Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In Advances in neural information processing systems, pages 1008–1014.
  • [12] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning.
  • [13] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning.
  • [14] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning.
  • [15] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. 518(7540):529–533.
  • [16] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, and others. Massively parallel methods for deep reinforcement learning.
  • [17] David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods.
  • [18] Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
  • [19] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay.
  • [20] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
  • [21] John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization.
  • [22] John Schulman, Philipp Moritz, Sergey Levine, Michael I Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation.
  • [23] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms.
  • [24] David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms.
  • [25] Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press.
  • [26] Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000.
  • [27] Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
  • [28] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Van Hasselt, Marc Lanctot, and Nando De Freitas. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.

Appendix A Hyperparameters

batch size 2048
epoch size 10
entropy coefficient 0.001
epsilon 0.2
epsilon (subtraction form) 0.02
lambda 0.99
auxiliary transitions ratio 0.4
TABLE III: General hyperparameters for MDPPO and MDPPOSC