Proximal Policy Optimization with Relative Pearson Divergence

10/07/2020 · by Taisuke Kobayashi, et al.

Deep reinforcement learning (DRL) is a promising approach for introducing robots into complicated environments. The recent remarkable progress of DRL rests on regularization of the policy: by constraining policy updates, DRL allows the policy to improve stably and efficiently. A popular method of this kind is proximal policy optimization (PPO). PPO clips the density ratio between the latest and baseline policies at a given threshold, but the divergence this clipping minimizes is unclear. A further problem of PPO is that the threshold is numerically symmetric, whereas the density ratio lives in an asymmetric domain, causing unbalanced regularization of the policy. This paper therefore proposes a new PPO variant, called PPO-RPE, that formulates the regularization as minimization of the relative Pearson (RPE) divergence. This regularization yields a clear minimization target that constrains the latest policy toward the baseline one, and its analysis admits an intuitive threshold-based design consistent with the asymmetric domain of the density ratio. Four benchmark tasks were simulated to compare PPO-RPE with conventional methods; PPO-RPE outperformed them on all tasks in terms of the performance of the learned policy.
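For context, the clipping the abstract criticizes is the standard PPO clipped surrogate objective (Schulman et al., 2017), not the proposed PPO-RPE. A minimal NumPy sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate loss (to be minimized).

    ratio     : density ratio pi_new(a|s) / pi_old(a|s) per sample
    advantage : advantage estimate per sample
    eps       : clipping threshold (symmetric about 1 by convention)
    """
    unclipped = ratio * advantage
    # Clip the ratio into [1 - eps, 1 + eps]; note the interval is
    # symmetric about 1 even though the ratio's domain (0, inf) is not --
    # the asymmetry issue the abstract points out.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # Pessimistic bound: take the elementwise minimum, negate for a loss.
    return -np.mean(np.minimum(unclipped, clipped))

# Example: a ratio of 1.5 is clipped to 1.2 when the advantage is positive.
loss = ppo_clip_loss(np.array([1.5]), np.array([1.0]))  # -> -1.2
```

With a positive advantage, the clip caps how much an update can exploit a sample once the new policy has moved a factor of 1 + eps away from the baseline; with a negative advantage, the minimum keeps the penalty unclipped, so the bound is always pessimistic.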
