
Evaluating model-based planning and planner amortization for continuous control

by Arunkumar Byravan, et al.

There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC. We find that well-tuned model-free agents are strong baselines even for high-DoF control problems, but MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency in hard multi-task/multi-goal settings. Finally, we show that it is possible to distil a model-based planner into a policy that amortizes the planning computation without any loss of performance. Videos of agents performing different tasks can be seen at
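The two ideas in the abstract, a proposal policy guiding sampling-based MPC and the later distillation of the planner into a reactive policy, can be illustrated with a minimal sketch. Everything below is a toy assumption, not the paper's implementation: the "learned" dynamics is a hand-written double integrator, the proposal is a crude feedback law, the planner is simple random-shooting MPC, and the distilled policy is a linear least-squares fit to the planner's actions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned components, kept linear so
# the sketch is self-contained: dynamics model, reward, and proposal.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
goal = np.array([1.0, 0.0])

def model_step(s, a):
    return A @ s + B @ a                      # "learned" dynamics: s' = As + Ba

def reward(s):
    return -float(np.sum((s - goal) ** 2))    # negative squared distance to goal

def proposal_policy(s):
    # Crude feedback proposal standing in for a model-free policy.
    return np.clip(goal[:1] - s[:1], -1.0, 1.0)

def mpc_with_proposal(s0, horizon=10, n_samples=64, noise=0.3):
    """Sampling-based MPC with a policy proposal: perturb the proposal's
    actions, roll each candidate out in the learned model, and return
    the first action of the best-scoring sequence."""
    best_ret, best_first = -np.inf, None
    for _ in range(n_samples):
        s, ret, first = s0, 0.0, None
        for t in range(horizon):
            a = proposal_policy(s) + noise * rng.normal(size=1)
            first = a if t == 0 else first
            s = model_step(s, a)
            ret += reward(s)
        if ret > best_ret:
            best_ret, best_first = ret, first
    return best_first

# Amortization: distil the (comparatively expensive) planner into a
# reactive linear policy by regressing onto (state, planner action)
# pairs the planner produces.
states = rng.uniform(-1.0, 2.0, size=(200, 2))
acts = np.stack([mpc_with_proposal(s) for s in states])
X = np.hstack([states, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(X, acts, rcond=None)

def distilled_policy(s):
    return np.append(s, 1.0) @ w              # no planning at run time
```

The distillation step here is plain behaviour cloning onto planner outputs; the paper's setting involves far richer models and policies, but the division of labour, plan at training time and amortize for execution, is the same.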




Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning

Model-free deep reinforcement learning algorithms have been shown to be ...

Learning Dynamics Models for Model Predictive Agents

Model-Based Reinforcement Learning involves learning a dynamics model fr...

Model-based Lookahead Reinforcement Learning

Model-based Reinforcement Learning (MBRL) allows data-efficient learning...

On the model-based stochastic value gradient for continuous reinforcement learning

Model-based reinforcement learning approaches add explicit domain knowle...

Data Efficient Reinforcement Learning for Legged Robots

We present a model-based framework for robot locomotion that achieves wa...

Model Predictive Control with Self-supervised Representation Learning

Over the last few years, we have not seen any major developments in mode...

Adaptive Online Planning for Continual Lifelong Learning

We study learning control in an online lifelong learning scenario, where...