Maximum Entropy Model Rollouts: Fast Model Based Policy Optimization without Compounding Errors

06/08/2020
by   Chi Zhang, et al.

Model usage is the central challenge of model-based reinforcement learning. Although dynamics models based on deep neural networks generalize well for single-step prediction, this ability is over-exploited when the model is used to predict long-horizon trajectories, where errors compound. In this work, we propose a Dyna-style model-based reinforcement learning algorithm, which we call Maximum Entropy Model Rollouts (MEMR). To eliminate compounding errors, we use the model to generate only single-step rollouts. Furthermore, we propose to generate diverse model rollouts by non-uniformly sampling the environment states such that the entropy of the model rollouts is maximized. To accomplish this, we utilize a prioritized experience replay. We show mathematically that the entropy of the model rollouts is maximally increased when the sampling criterion is the negative likelihood under the historical model rollout distribution. Our preliminary experiments on challenging locomotion benchmarks show that our approach matches the sample efficiency of the best model-based algorithms and the asymptotic performance of the best model-free algorithms, while significantly reducing the computation requirements of other model-based methods.
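The abstract describes three ingredients: single-step model rollouts branched from real states, prioritized sampling of those states, and a priority equal to the negative likelihood of a transition under the historical model rollout distribution. The sketch below (not the authors' code) illustrates one way these pieces could fit together, assuming a diagonal-Gaussian fit to past rollouts and generic `policy` and `model` callables; all names are hypothetical.

import numpy as np

def priorities(transitions, rollout_mean, rollout_std):
    """Negative log-likelihood of each transition under a diagonal-Gaussian
    density fitted to historical model rollouts (assumed density choice)."""
    z = (transitions - rollout_mean) / rollout_std
    log_prob = -0.5 * np.sum(z ** 2 + np.log(2 * np.pi * rollout_std ** 2), axis=1)
    return -log_prob  # transitions unlikely under past rollouts get high priority

def sample_states(env_states, prio, batch_size, rng):
    """Non-uniform (prioritized) sampling of real environment states."""
    p = prio / prio.sum()
    idx = rng.choice(len(env_states), size=batch_size, p=p)
    return env_states[idx]

def single_step_rollouts(states, policy, model):
    """Branch exactly one model step from real states, so model errors
    cannot compound over a long horizon."""
    actions = policy(states)
    next_states, rewards = model(states, actions)
    return states, actions, rewards, next_states

In a Dyna-style loop, the single-step transitions produced this way would be added to a model buffer and used to train a model-free learner such as SAC, which is consistent with the setup the abstract outlines.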
