Continual Model-Based Reinforcement Learning with Hypernetworks

09/25/2020
by Yizhou Huang, et al.

Effective planning in model-based reinforcement learning (MBRL) and model-predictive control (MPC) relies on the accuracy of the learned dynamics model. In many instances of MBRL and MPC, this model is assumed to be stationary and is periodically re-trained from scratch on state transition experience collected since the beginning of environment interactions. This implies that the time required to train the dynamics model, and the pause required between plan executions, grows linearly with the size of the collected experience. We argue that this is too slow for lifelong robot learning and propose HyperCRL, a method that continually learns the encountered dynamics in a sequence of tasks using task-conditional hypernetworks. Our method has three main attributes: first, it enables constant-time dynamics learning sessions between planning and only needs to store the most recent fixed-size portion of the state transition experience; second, it uses fixed-capacity hypernetworks to represent non-stationary and task-aware dynamics; third, it outperforms existing continual learning alternatives that rely on fixed-capacity networks and performs competitively with baselines that remember an ever-increasing coreset of past experience. We show that HyperCRL is effective in continual model-based reinforcement learning in robot locomotion and manipulation scenarios, such as tasks involving pushing and door opening. Our project website with code and videos is at http://rvl.cs.toronto.edu/blog/2020/hypercrl/
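To make the core idea concrete, the sketch below shows a task-conditional hypernetwork that generates the weights of a small MLP dynamics model, in the spirit of the abstract. It is a minimal illustration in PyTorch, not the authors' released code; the layer sizes, embedding dimension, and residual next-state parameterization are assumptions made for the example.

```python
# Minimal sketch of a task-conditional hypernetwork for dynamics learning.
# NOTE: illustrative assumptions only; dimensions and layer sizes are arbitrary
# and this is not the authors' implementation.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperDynamics(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=64,
                 embed_dim=32, num_tasks=5):
        super().__init__()
        # One learned embedding per task; the hypernetwork is conditioned on it.
        self.task_embeddings = nn.Embedding(num_tasks, embed_dim)
        # Shapes of the generated ("main") dynamics network: a two-layer MLP
        # mapping (state, action) -> predicted change in state.
        in_dim = state_dim + action_dim
        self.shapes = [(hidden_dim, in_dim), (hidden_dim,),
                       (state_dim, hidden_dim), (state_dim,)]
        out_dim = sum(math.prod(s) for s in self.shapes)
        # The hypernetwork itself: maps a task embedding to all parameters
        # of the dynamics model, flattened into one vector.
        self.hypernet = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def generate_weights(self, task_id):
        """Generate the dynamics-model parameters for the given task."""
        emb = self.task_embeddings(torch.tensor(task_id))
        flat = self.hypernet(emb)
        params, offset = [], 0
        for shape in self.shapes:
            numel = math.prod(shape)
            params.append(flat[offset:offset + numel].view(shape))
            offset += numel
        return params

    def forward(self, task_id, state, action):
        """Predict the next state for (state, action) under task `task_id`."""
        w1, b1, w2, b2 = self.generate_weights(task_id)
        x = torch.cat([state, action], dim=-1)
        h = F.relu(F.linear(x, w1, b1))
        delta = F.linear(h, w2, b2)
        return state + delta  # residual prediction of the next state


# Example usage with hypothetical dimensions: task 0, a batch of 8 transitions.
model = HyperDynamics(state_dim=11, action_dim=3, num_tasks=3)
next_state = model(task_id=0,
                   state=torch.randn(8, 11),
                   action=torch.randn(8, 3))
```

Under this kind of parameterization, only the hypernetwork and the per-task embeddings are trained, so model capacity stays fixed as tasks accumulate. Continual-learning hypernetwork approaches typically also add a regularizer that keeps the hypernetwork's outputs for earlier task embeddings close to stored snapshots; that detail is omitted from the sketch above.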
