A Logarithmic Barrier Method For Proximal Policy Optimization

12/16/2018 ∙ by Cheng Zeng, et al.

Proximal policy optimization (PPO) has been proposed as a first-order optimization method for reinforcement learning. We note that an exterior penalty method is used in it. Often, the minimizers of exterior penalty functions approach feasibility only in the limit as the penalty parameter grows large, which can result in low sampling efficiency. Our method, which we call proximal policy optimization with a barrier method (PPO-B), keeps almost all of the advantages of PPO, such as easy implementation and good generalization. Specifically, a new surrogate objective based on an interior penalty method is proposed to avoid the defects arising from the exterior penalty method. Since PPO-B achieves clearly better performance than PPO on the Atari and MuJoCo environments, we conclude that PPO-B outperforms PPO in terms of sampling efficiency.


1 Introduction

Reinforcement learning is a computational approach to understanding and automating goal-directed learning and decision making. It is distinguished from other computational approaches by its emphasis on learning by an agent from direct interaction with its environment, without requiring exemplary supervision or complete models of the environment.

The integration of reinforcement learning and neural networks has a long history [Sutton and Barto1998, Bertsekas et al.1996, Schmidhuber2015]. With recent exciting achievements of deep learning [Lecun et al.2015, Heaton2017], benefiting from big data, new algorithmic techniques and powerful computation, we have been witnessing a renaissance of reinforcement learning, especially the combination of reinforcement learning and deep neural networks, i.e., deep reinforcement learning.

Deep reinforcement learning methods have shown tremendous success in a large variety of tasks, such as Atari games [Mnih et al.2013], continuous control [Lillicrap et al.2015, Schulman et al.2015] and even Go at the human grandmaster level [Silver et al.2016, Silver et al.2017]. Policy gradient methods [Williams1992] are an important family of methods in model-free reinforcement learning.

Policy gradient methods use a neural network to describe the relationship between a state and an action, or a distribution over actions. From this parameterization we obtain an expression for the expected return under the policy π_θ. Writing its derivative as a mathematical expectation yields a formula for the gradient estimate under the current policy, which can be estimated by the Monte Carlo approach from data obtained through interaction between the current policy and the environment. A gradient ascent step is then applied to the parameters of the neural network. Once the parameters are finally determined, they determine an optimal policy.

There are some problems with policy gradient methods: for instance, they are liable to converge to a local optimum, and sample inefficiency is also a major issue. Moreover, if the optimization is done without any constraint, the parameters may be updated on a large scale and eventually diverge. To avert this problem, TRPO introduces a constraint on the KL divergence between the action distributions before and after an update, and proves that under this constraint the optimal solution of the constrained problem guarantees that the expected return is increasing. However, TRPO is complex to implement and computationally expensive. The PPO algorithm was proposed to avoid this: it transforms a constrained optimization into an unconstrained one. It is not only easy to implement, but also performs fairly well in experiments. In a sense, PPO is one of the best policy gradient algorithms.

In the original PPO, two objective functions are proposed: one penalizes the KL divergence between the two distributions, and the other is a well-designed, pessimistic "clipped" surrogate objective. Experimental results showed that the PPO algorithm with the clipped surrogate objective was better on more games. Although PPO with the KL penalty does not perform as well in experiments, it gives some enlightenment to our follow-up work. From the point of view of optimization, PPO with a penalty on the KL divergence is actually an exterior penalty method: it continually penalizes large KL divergences, forcing the iterates to converge toward the constraint region.

The exterior penalty method has some drawbacks. Although every iteration penalizes solutions outside the constraint region, it still cannot guarantee that every update stays within the feasible region, in other words, within the range allowed by the inequality constraint. When the iterate leaves the feasible region, the gradient can no longer be estimated reliably. To avoid this situation, we propose using a barrier function to constrain the search set of every update. Barrier functions are a family of functions that tend to infinity as the iterate approaches the edge of the constraint region, forcing every update to remain in the feasible region. The collected data then allow us to estimate the gradient accurately, so that the parameters can be updated more effectively. We conducted complete experiments in the Atari and MuJoCo environments and achieved very competitive results relative to PPO.

The rest of this paper is organized as follows: first, we elaborate on the background of this paper and summarize previous work; next, we introduce the logarithmic barrier method; then a new surrogate objective is proposed to improve sampling efficiency; finally, we present and analyze the experimental results.

2 Background: Policy Optimization

2.1 Policy Gradient

Policy gradient methods are an important family of reinforcement learning algorithms. They assume that the policy π_θ(a|s) is a map from a state to an action (or to a distribution over actions), where θ is the parameter of the policy; a fixed θ determines a unique policy. In policy gradient methods, the expected cumulative reward is written as a function of the policy. That is

(1)   L^PG(θ) = Ê_t[ log π_θ(a_t|s_t) Â_t ]

where Â_t is an estimator of the advantage function at timestep t. We take a fixed θ_old to interact with the environment, and the collected data can be used to update θ many times. In this way, the gradient estimator has the form

(2)   ĝ = Ê_t[ ∇_θ log π_θ(a_t|s_t) Â_t ]

The core idea of the policy gradient algorithm is to interact with the environment using the current policy and to estimate the gradient by the Monte Carlo method. The data obtained are used to update the parameters, yielding a new policy with which we again interact with the environment; this cycle improves the expected return of the policy.
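The loop above can be sketched in a few lines of code. The tabular softmax policy and the helper names below are illustrative assumptions for exposition, not the paper's actual neural-network implementation:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of action logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def pg_surrogate(theta, samples):
    """Monte Carlo estimate of L^PG(theta) = mean_t[ log pi_theta(a_t|s_t) * A_t ].
    theta maps a discrete state to its action logits (tabular stand-in for a
    network); samples is a list of (state, action, advantage) tuples collected
    by interacting with the environment under the current policy."""
    total = 0.0
    for s, a, adv in samples:
        probs = softmax(theta[s])
        total += math.log(probs[a]) * adv
    return total / len(samples)
```

Differentiating this surrogate with respect to the policy parameters gives exactly the gradient estimator of Equation (2); in practice the policy is a neural network and the differentiation is done by automatic differentiation, followed by a gradient ascent step.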

2.2 TRPO

If the algorithm is on-policy, the data derived from π_θ can only be used once: as soon as θ is updated, the previously collected data become useless. This is not what we want; it is appealing to perform multiple steps of optimization on the loss using the same trajectory.

In this case, a more effective way is to adopt an off-policy method, which requires importance sampling. TRPO gives the objective function that we need to optimize:

(3)   L(θ) = Ê_t[ (π_θ(a_t|s_t) / π_θ_old(a_t|s_t)) Â_t ]

where Â_t is the advantage function under the old parameters θ_old.

TRPO proves that maximizing the above form at every iteration guarantees that the expected return increases monotonically. Due to the properties of importance sampling, if we want to use the data more efficiently, it is necessary to keep the KL divergence between the old and new action distributions from becoming too large. So what we want to solve is the optimization problem in Equation (3) under the constraint Ê_t[ KL[π_θ_old(·|s_t), π_θ(·|s_t)] ] ≤ δ.

Therefore, the constrained optimization problem proposed by TRPO is

(4)   maximize_θ  Ê_t[ (π_θ(a_t|s_t) / π_θ_old(a_t|s_t)) Â_t ]
      subject to  Ê_t[ KL[π_θ_old(·|s_t), π_θ(·|s_t)] ] ≤ δ

A number of techniques are used in TRPO to handle this complex constrained optimization problem, such as making a linear approximation to the objective and a quadratic approximation to the constraint, which also increase the difficulty of computation and implementation.
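The per-state quantities appearing in Equations (3) and (4) — the importance ratio, the surrogate term, and the KL divergence — can be sketched for discrete action distributions. The helper below is illustrative, not TRPO's actual parameter-space implementation:

```python
import math

def surrogate_and_kl(pi_old, pi_new, action, adv):
    """For one state: the importance-sampled surrogate term of Eq. (3),
    r_t * A_t with r_t = pi_new(a|s) / pi_old(a|s), and the divergence
    KL[pi_old || pi_new] that Eq. (4) constrains to be at most delta.
    pi_old and pi_new are discrete action distributions (lists of probs)."""
    r = pi_new[action] / pi_old[action]          # probability ratio r_t
    surrogate = r * adv                           # r_t(theta) * A_t
    kl = sum(po * math.log(po / pn) for po, pn in zip(pi_old, pi_new))
    return surrogate, kl
```

In TRPO the expectation of the first quantity is maximized while the expectation of the second is held below δ; PPO instead folds the two together, as described next.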

2.3 PPO

The PPO method was proposed to inherit the reliability and stability of TRPO with the goals of simpler implementation, better generalization and better empirical sample complexity. PPO puts forward two ways to simplify TRPO's constrained optimization problem. The first is to use a penalty instead of a constraint, as shown in Equation (5); in essence, this is an exterior penalty method. For the penalty parameter, PPO proposes an adaptive way to adjust the coefficient.

(5)   L^KLPEN(θ) = Ê_t[ r_t(θ) Â_t − β KL[π_θ_old(·|s_t), π_θ(·|s_t)] ],   where r_t(θ) = π_θ(a_t|s_t) / π_θ_old(a_t|s_t)

The second is relatively ingenious. It replaces the original constrained problem with a “clipped” surrogate objective, which is

(6)   L^CLIP(θ) = Ê_t[ min( r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t ) ]

From the experimental results, PPO-clip is better.
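For concreteness, the term inside the expectation of Equation (6) can be written as a small helper (a sketch; ε = 0.2 is a common default, not necessarily the value used in every experiment here):

```python
def ppo_clip_term(r, adv, eps=0.2):
    """Per-sample clipped objective of Eq. (6):
    min( r * A, clip(r, 1 - eps, 1 + eps) * A ).
    Taking the min makes the objective a pessimistic lower bound on the
    unclipped surrogate, removing any incentive to move r far from 1."""
    clipped = max(min(r, 1.0 + eps), 1.0 - eps)   # clip(r, 1-eps, 1+eps)
    return min(r * adv, clipped * adv)
```

With a positive advantage the gain is capped once r exceeds 1+ε; with a negative advantage the penalty is floored at r = 1−ε, so the worst case is always accounted for.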

For the first method, the exterior penalty function is used to solve the problem. However, the exterior penalty method causes some problems: it cannot guarantee that the two distributions are strictly within the constraint region at each update. Therefore, this paper considers using the barrier method to ensure that the parameters strictly satisfy the constraint at each update. From the experimental results, our method is better than PPO-clip and achieves state-of-the-art performance among policy gradient methods. We elaborate on this method in detail in the following sections.

3 The Logarithmic Barrier Method

The first method in PPO is sometimes known as an exterior penalty method, because the penalty term for a constraint is nonzero only when the variables are infeasible with respect to that constraint. Often, the minimizers of the penalty functions are infeasible with respect to the original problem and approach feasibility only in the limit as the penalty parameter grows large.

Since the exterior penalty method in PPO does not guarantee that the KL divergence satisfies the constraint, an interior penalty method is proposed here to avoid this. The interior penalty method is also called the barrier method. We introduce the concept of barrier functions via a generic inequality-constrained optimization problem. Consider the problem

(7)   min_x f(x)   subject to   c_i(x) ≥ 0,  i ∈ I

The strictly feasible region is defined by

(8)   F° = { x : c_i(x) > 0 for all i ∈ I }

We assume that F° is nonempty for the purposes of this discussion. Barrier functions for this problem have the properties that (a) they are smooth inside F°; (b) they are infinite everywhere except in F°; (c) their value approaches +∞ as x approaches the boundary of F°.

The most commonly used barrier function is the logarithmic barrier function, which for the constraint set {x : c_i(x) ≥ 0, i ∈ I} has the form

(9)   B(x) = − Σ_{i∈I} ln c_i(x)

where ln(·) denotes the natural logarithm.

For the inequality-constrained optimization problem, the combined objective/barrier function is given by

(10)   P(x; μ) = f(x) − μ Σ_{i∈I} ln c_i(x)

where μ > 0 is referred to here as the barrier parameter. From now on, we refer to P(x; μ) itself as the "logarithmic barrier function" for Equation (7), or simply the "log barrier function" for short.
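A minimal executable sketch of Equation (10); the constructor name and the convention of returning +∞ outside the strictly feasible region (property (b) above) are our own:

```python
import math

def log_barrier_objective(f, constraints, mu):
    """Build P(x; mu) = f(x) - mu * sum_i ln c_i(x) from Eq. (10).
    f is the objective, constraints a list of callables c_i with the
    convention c_i(x) > 0 on the strictly feasible region."""
    def P(x):
        cs = [c(x) for c in constraints]
        if any(c <= 0 for c in cs):
            return math.inf          # infeasible: barrier is infinite
        return f(x) - mu * sum(math.log(c) for c in cs)
    return P
```

Because every ln c_i(x) diverges to −∞ as c_i(x) → 0⁺, P(x; μ) blows up near the boundary, which is exactly what prevents a minimizer from leaving the feasible set.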

Consider the following problem in a single variable x:

(11)   min x   subject to   x − 1 ≥ 0,  2 − x ≥ 0

for which we have

(12)   P(x; μ) = x − μ ln(x − 1) − μ ln(2 − x)

We graph this function for different values of μ in Figure 1. Naturally, for small values of μ, the function P(x; μ) is close to the objective x over most of the feasible set; it approaches +∞ only in narrow "boundary layers." (In Figure 1, the curve for the smallest μ is almost indistinguishable from x at the resolution of our plot, though this function also approaches +∞ when x is very close to the endpoints 1 and 2.) Also, it is clear that as μ → 0, the minimizer of P(x; μ) approaches the solution of the constrained problem.

Figure 1: P(x; μ) for different values of μ

Since the minimizer of P(x; μ) lies in the strictly feasible set F°, we can in principle search for it using unconstrained minimization algorithms. Unfortunately, the minimizer becomes more and more difficult to find as μ → 0, so we should choose a suitable μ rather than one as small as possible.
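The behavior just described — the minimizer of P(x; μ) drifting toward the constrained solution x = 1 as μ shrinks — can be verified numerically with a crude grid search (illustrative code, not part of the original experiments):

```python
import math

def barrier_minimizer(mu, grid=20000):
    """Grid-search minimizer of P(x; mu) = x - mu ln(x-1) - mu ln(2-x)
    from Eq. (12), over the strictly feasible interval (1, 2)."""
    best_x, best_v = None, math.inf
    for i in range(1, grid):          # x stays strictly inside (1, 2)
        x = 1.0 + i / grid
        v = x - mu * (math.log(x - 1.0) + math.log(2.0 - x))
        if v < best_v:
            best_x, best_v = x, v
    return best_x
```

Setting P′(x; μ) = 0 shows the minimizer sits at roughly x ≈ 1 + μ for small μ, so shrinking μ pulls it toward the constrained solution x = 1, at the cost of an increasingly ill-conditioned one-sided landscape.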

Wright [Wright and Holt1985] proved the effectiveness of the log barrier function method in the case of convex functions.

Theorem 1

Suppose that f and the c_i, i ∈ I, in Equations (7) and (8) are all convex functions, and that the strictly feasible region F° defined by Equation (8) is nonempty. Assume that the solution set of Equation (7) is nonempty and bounded. Then for any μ > 0, P(x; μ) is convex in F° and attains a minimizer x(μ) (not necessarily unique) on F°. Any local minimizer x(μ) is also a global minimizer of P(x; μ).

In fact, other functions can be used as barrier functions. We compared the experimental results of several barrier functions and finally chose the log function as our barrier function.

4 Barrier Method for PPO

In this section, we apply the logarithmic barrier method to solve the constrained optimization problem of Equation (4), which improves the sampling efficiency of PPO.

The essence of PPO with the penalized objective is an exterior penalty method. The constraint is:

(13)   Ê_t[ KL[π_θ_old(·|s_t), π_θ(·|s_t)] ] ≤ δ

so we penalize the KL divergence with a coefficient β, chosen in an adaptive way. When the iterate leaves the constraint region, the penalization increases, forcing it back toward the feasible area; inside the feasible area, the difference between the penalized and original objectives is small enough to be neglected, so the two have similar optimal solutions. Unlike the exterior penalty function, we use the interior penalty method to solve this problem. The essence of the exterior penalty method is to approach the optimal solution of the constrained problem from outside the feasible region, while the interior penalty method does so from inside: each update keeps the constraint strictly satisfied. This makes the method well suited to inequality-constrained optimization problems. Specifically, we use the logarithmic barrier function proposed in the previous section.

The barrier method is also called the interior penalty function method. The minimum point of the barrier function is a strictly feasible point, that is, a point strictly satisfying the inequality constraint. As the sequence of minimizers approaches the boundary of the feasible region from inside, the barrier function tends to infinity, preventing the iterates from leaving the feasible region. At the same time, for a suitable μ the extremum of the objective with the barrier term is close to that of the original function, so we only need to solve an unconstrained optimization problem.

Compared with the penalty function in PPO, the objective with a barrier function is more interpretable and, by the property of the barrier function, strictly ensures that the difference between the distributions of two successive iterations is not too big, so that the samples can be fully used. The experimental results in the following sections also confirm our idea.

When applied to the problem we need to deal with, we can transform it into

(14)   maximize_θ  Ê_t[ r_t(θ) Â_t ] + μ ln( δ − Ê_t[ KL[π_θ_old(·|s_t), π_θ(·|s_t)] ] )

But in practice, the KL divergence is not a very robust distance. Inspired by PPO-clip, we instead hope to use a distance between π_θ(a_t|s_t) and π_θ_old(a_t|s_t) to limit the difference between the two distributions. There are many ways to measure this distance. We conducted several sets of controlled trials on several games and finally chose one particular distance, which we denote d(θ). It can easily be proved that this distance is less than the angular distance between the two action distributions.

In this way, the objective function we need to optimize is changed to:

(15)   maximize_θ  Ê_t[ r_t(θ) Â_t ] + μ ln( δ − d(θ) )

where d(θ) denotes the chosen distance between the old and new action distributions.

We solve the whole problem in the framework of A2C [Mnih et al.2016]. It should be noted that our method introduces two hyperparameters, but fortunately experiments show that it is sufficient to simply choose fixed coefficients (the values 1 and 0.5 given in Appendix A) and optimize the penalized objective of Equation (15) with SGD.
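A minimal per-sample sketch of a barrier-augmented surrogate in the spirit of Equation (15). Since the exact form of the distance d(θ) is not fully specified in this text, the divergence is passed in as an argument; the function name, signature, and default coefficients are assumptions, not the authors' code:

```python
import math

def ppo_b_term(r, adv, d, mu=1.0, delta=0.5):
    """Per-sample barrier surrogate: r * A + mu * ln(delta - d), where r is the
    probability ratio, adv the advantage estimate, and d the (nonnegative)
    distance between the old and new action distributions.  The log term
    tends to -inf as d approaches delta, so maximizing this objective keeps
    every update strictly inside the feasible region."""
    if d >= delta:
        return -math.inf   # infeasible: the barrier rejects such an update
    return r * adv + mu * math.log(delta - d)
```

Unlike the exterior KL penalty of Equation (5), which merely subtracts a finite amount once the constraint is violated, this term makes violation infinitely costly, which is the interior-penalty behavior the paper argues for.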

The pseudocode is shown in Algorithm 1. It is almost the same as that of the PPO algorithm, except that the objective function is replaced by Equation (15).

  Input: max iterations N, number of actors K, time steps T, number of epochs E
  for iteration = 1, 2, ..., N do
     for actor = 1, 2, ..., K do
        Run policy π_θ_old for T time steps
        Compute advantage estimates Â_1, ..., Â_T
     end for
     for epoch = 1, 2, ..., E do
        Optimize the loss objective of Equation (15) w.r.t. θ with mini-batch size M, then update θ_old ← θ.
     end for
  end for
Algorithm 1 PPO-B, Actor-Critic Style

5 Experiments

In this section, we experimentally compare PPO-B and PPO on the 49 benchmark Atari game playing tasks (version 4) provided by OpenAI Gym [Brockman et al.2016] and on 7 benchmark control tasks (version 2) provided by the robotics RL environments of PyBullet [Plappert et al.2018]. We focus on a detailed quantitative comparison with PPO to check whether PPO-B improves sampling efficiency and performance.

In our setting, both algorithms adopt the same policy network architecture given in [Mnih et al.2015] for the Atari game playing tasks and the same network architecture given in [Schulman et al.2017, Duan et al.2016] for the benchmark control tasks. We also use the same number of training steps and the same amount of game frames (40M for Atari and 10M for MuJoCo). Meanwhile, we strictly follow the hyperparameter settings used in [Schulman et al.2017] for both PPO-B and PPO and initialize parameters using the same policy as [Schulman et al.2017]. The only exception is the number of actors, which is set to 8 in [Schulman et al.2017] but to 16 in our experiments for both tasks. In addition to the hyperparameters used in PPO, PPO-B requires two extra hyperparameters. In our experiments, we tested several different settings for them and chose the values (1 and 0.5) that performed best on the 7 MuJoCo environments.

For searching over hyperparameters for our algorithm, we used the computationally cheap benchmark proposed by [Schulman et al.2017]: 7 simulated robotics tasks implemented in OpenAI Gym using the MuJoCo environment, with only 1 million time steps for each environment. Each algorithm setting was run on all 7 environments, with 3 random seeds on each. We scored each run by computing the average total reward of the last 100 episodes, shifted and scaled the scores for each environment so that the random policy gives a score of 0 and the best result gives 1, and averaged over the 21 runs to produce a single score for each algorithm setting. The parameter values (1 and 0.5) performed best on the 7 MuJoCo environments, so we used them in our experiment settings.
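The shift-and-scale scoring just described can be sketched as follows (helper names are hypothetical; in the actual benchmark the random-policy and best scores are determined per environment before averaging):

```python
def normalize_score(score, random_score, best_score):
    """Affinely rescale a run's average reward so that the random policy
    maps to 0 and the best observed result maps to 1."""
    return (score - random_score) / (best_score - random_score)

def benchmark_score(run_scores, random_score, best_score):
    """Average the normalized scores of all runs (here, 7 envs x 3 seeds = 21)
    into a single number for one hyperparameter setting."""
    normed = [normalize_score(s, random_score, best_score) for s in run_scores]
    return sum(normed) / len(normed)
```

This normalization makes rewards from environments with very different reward scales comparable before they are averaged into one scalar per setting.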

First of all, the performance of both algorithms will be examined based on the learning curves presented in Figure 2 and Figure 3. Afterwards, we will compare the sample efficiency of PPO-B and PPO by using the performance scores summarized in Table 1 and Table 2.

5.1 Comparison with PPO on the Atari Domain

We compared PPO-B with PPO in the Atari environment. It is noteworthy that PPO used its best parameters in this experiment; the specific parameters are shown in Tables 3 and 4. PPO-B does not modify any of the parameters in PPO and only adjusts the newly introduced ones; in fact, this setting is beneficial to PPO. We conducted experiments on 49 Atari games, with three runs in each game environment.

The final result is as follows:

PPO   PPO-B
 15      22
Table 1: Number of games "won" by each algorithm on Atari

We present the learning curves of the two algorithms on the 49 Atari games in Figure 3. As can be clearly seen in these figures, PPO-B outperforms PPO on 34 out of the 49 games. On some games, such as DemonAttack and Gopher, PPO-B performed better than PPO throughout. On other games, such as Breakout and KungFuMaster, the two algorithms exhibited similar performance at the start, but PPO-B managed to achieve better performance towards the end of the learning process. On still other games, such as Kangaroo, performance differences can be witnessed shortly after learning starts.

To compare PPO-B and PPO in terms of sample efficiency, we adopt the two scoring metrics introduced in [Schulman et al.2017]: (1) average reward per episode over the entire training period, which measures fast learning, and (2) average reward per episode over the last 100 episodes of training, which measures final performance. As evidenced in Tables 1 and 2, PPO-B is clearly more sample-efficient than PPO on 34 out of 49 Atari games under the final-performance metric.

5.2 Comparison with PPO on the MuJoCo Domain

We also run PPO-B and PPO in MuJoCo, a continuous-control environment. The two algorithms have the same parameters as those in the previous section. For each MuJoCo environment, we ran three experiments for each algorithm. The experimental results are shown below; our algorithm is superior or competitive to PPO.

PPO   PPO-B
  2       5
Table 2: Number of games "won" by each algorithm on MuJoCo

In Table 7 we present the mean reward of the last 100 training episodes. Notably, PPO-B outperforms PPO on HalfCheetah, Hopper, InvertedDoublePendulum, InvertedPendulum and Walker2d. In the Swimmer and Reacher environments, however, we observe the opposite result (see Table 2), with PPO outperforming PPO-B.

6 Conclusion

We have proposed an improved PPO algorithm that uses a barrier method instead of an exterior penalty method and achieves good results in the Atari and MuJoCo environments. PPO-B makes full use of the advantages of the barrier method, increasing the sampling efficiency of each actor, while keeping PPO's advantages of simple implementation and good generalization. It achieves better performance than the PPO algorithm and can also give some enlightenment to follow-up work.

Acknowledgements

The authors are grateful to Ruoyu Wang, Minghui Qin and others at AMSS for insightful comments.

References

  • [Bertsekas et al.1996] D. P. Bertsekas, J. N. Tsitsiklis, and A. Volgenant. Neuro-dynamic programming. Encyclopedia of Optimization, 27(6):1687–1692, 1996.
  • [Brockman et al.2016] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016.
  • [Duan et al.2016] Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In

    International Conference on International Conference on Machine Learning

    , pages 1329–1338, 2016.
  • [Heaton2017] Jeff Heaton. Ian goodfellow, yoshua bengio, and aaron courville: Deep learning. Genetic Programming & Evolvable Machines, 19(1-2):1–3, 2017.
  • [Lecun et al.2015] Y Lecun, Y Bengio, and G Hinton. Deep learning. Nature, 521(7553):436, 2015.
  • [Lillicrap et al.2015] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
  • [Mnih et al.2013] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. Computer Science, 2013.
  • [Mnih et al.2015] V Mnih, K Kavukcuoglu, D Silver, A. A. Rusu, J Veness, M. G. Bellemare, A Graves, M Riedmiller, A. K. Fidjeland, and G Ostrovski. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
  • [Mnih et al.2016] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. 2016.
  • [Plappert et al.2018] Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, Vikash Kumar, and Wojciech Zaremba. Multi-goal reinforcement learning: Challenging robotics environments and request for research, 2018.
  • [Schmidhuber2015] Jurgen Schmidhuber. Deep learning in neural networks: An overview. Neural Netw, 61:85–117, 2015.
  • [Schulman et al.2015] John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy optimization. Computer Science, pages 1889–1897, 2015.
  • [Schulman et al.2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. 2017.
  • [Silver et al.2016] D Silver, A. Huang, C. J. Maddison, A Guez, L Sifre, den Driessche G Van, J Schrittwieser, I Antonoglou, V Panneershelvam, and M Lanctot. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • [Silver et al.2017] D Silver, J Schrittwieser, K Simonyan, I Antonoglou, A. Huang, A Guez, T Hubert, L Baker, M. Lai, and A Bolton. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
  • [Sutton and Barto1998] R Sutton and A Barto. Reinforcement Learning:An Introduction. MIT Press, 1998.
  • [Williams1992] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
  • [Wright and Holt1985] S. J. Wright and J. N. Holt. An inexact levenberg-marquardt method for large sparse nonlinear least squares. Journal of the Australian Mathematical Society, 26(4):387–403, 1985.

Appendix A Hyperparameters

HYPER-PARAMETER Value
HORIZON (T) 128
ADAM STEP-SIZE
NUM EPOCHS 3
MINI-BATCH SIZE
DISCOUNT (γ) 0.99
GAE PARAMETER (λ) 0.95
NUMBER OF ACTORS 16
CLIPPING PARAMETER (ε)
VF COEFF 1
ENTROPY COEFF 0.01
Table 3: PPO's hyper-parameters for Atari games.
HYPER-PARAMETER Value
HORIZON (T) 128
ADAM STEP-SIZE
NUM EPOCHS 3
MINI-BATCH SIZE
DISCOUNT (γ) 0.99
GAE PARAMETER (λ) 0.95
NUMBER OF ACTORS 16
VF COEFF 1
ENTROPY COEFF
BARRIER FUNCTION PARAMETER () 1
BARRIER FUNCTION PARAMETER () 0.5
Table 4: PPO-B's hyper-parameters for Atari games.
HYPER-PARAMETER Value
HORIZON (T) 2048
ADAM STEP-SIZE
NUM. EPOCHS 10
MINI-BATCH SIZE
DISCOUNT (γ) 0.99
GAE PARAMETER (λ) 0.95
Table 5: PPO's hyper-parameters for MuJoCo tasks.
HYPER-PARAMETER Value
HORIZON (T) 2048
ADAM STEP-SIZE
NUM. EPOCHS 10
MINI-BATCH SIZE
DISCOUNT (γ) 0.99
GAE PARAMETER (λ) 0.95
BARRIER FUNCTION PARAMETER () 1
BARRIER FUNCTION PARAMETER () 0.5
Table 6: PPO-B's hyper-parameters for MuJoCo tasks.

Appendix B Performance on Mujoco Games

Figure 2: Comparison of several algorithms on several MuJoCo environments
PPO-B PPO
HalfCheetah 4784.27 2252.16
Hopper 2968.56 2187.83
InvertedDoublePendulum 8562.62 8377.86
InvertedPendulum 999.43 905.88
Reacher -6.31 -4.51
Swimmer 85.28 113.57
Walker2d 4201.24 3794.44
Table 7: Mean final scores (last 100 episodes) of PPO-B and PPO on Mujoco

Appendix C Performance on More Atari Games

Figure 3: Comparison of PPO-B and PPO on all 49 ATARI games included in OpenAI Gym
PPO-B PPO
Alien 1629.73
Amidar 563.07
Assault 3666.45
Asterix 2890.33
Asteroids 1276.87
Atlantis 1956856.33
BankHeist 1160.53
BattleZone 4826.67
BeamRider 1567.07 2809.02
Bowling 41.06
Boxing 90.36
Breakout 6 200.26
Centipede 3710.1
ChopperCommand 3280.67
CrazyClimber 106883.67
DemonAttack 12197.92
DoubleDunk -9.53
Enduro 627.78
FishingDerby -28.78
Freeway 29.59
Frostbite 273.1
Gopher 1296.47
Gravitar 282.0
IceHockey -4.68
Jamesbond 476.12
Kangaroo 2849.33
Krull 7823.15
KungFuMaster 20762.67
MontezumaRevenge 0.0
MsPacman 2104.13
NameThisGame 5842.5
Pitfall -6.41
Pong 19.03
PrivateEye -2.45
Qbert 10885.08
Riverraid 6921.53
RoadRunner 34237.0
Robotank 2.99
Seaquest 1409.27
SpaceInvaders 805.98
StarGunner 28097.33
Tennis -17.63
TimePilot 5474.67
Tutankham 205.35
UpNDown 145988.73
Venture 0.0 0.0
VideoPinball 27183.01
WizardOfWor 3987.67
Zaxxon 2420.67
Table 8: Mean final scores (last 100 episodes) of PPO-B and PPO on Atari games