Predictive Experience Replay for Continual Visual Control and Forecasting

by Wendong Zhang, et al.

Learning physical dynamics in a series of non-stationary environments is a challenging but essential task for model-based reinforcement learning (MBRL) with visual inputs. It requires the agent to consistently adapt to novel tasks without forgetting previous knowledge. In this paper, we present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting. The key assumption is that an ideal world model can provide a non-forgetting environment simulator, which enables the agent to optimize the policy in a multi-task learning manner based on the imagined trajectories from the world model. To this end, we first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting, which we call predictive experience replay. Finally, we extend these methods to continual RL and further address the value estimation problems with the exploratory-conservative behavior learning approach. Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks. It is also shown to effectively alleviate the forgetting of spatiotemporal dynamics in video prediction datasets with evolving domains.
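To make the core idea concrete, here is a minimal, hypothetical sketch of predictive experience replay. It is not the paper's implementation: the world model is reduced to one scalar dynamics coefficient per task (standing in for the Gaussian-mixture latent priors), and `MixtureWorldModel`, `train_continually`, and all parameter names are illustrative inventions. The point it shows is the training loop: when a new task arrives, data for earlier tasks is regenerated from the model itself rather than stored in raw replay buffers, and the mixed batch is trained on jointly.

```python
import random

class MixtureWorldModel:
    """Toy stand-in for the mixture world model: one scalar dynamics
    coefficient per task, modeling s' ~ k_task * s. The paper instead
    learns task-specific dynamics priors with a mixture of Gaussians."""

    def __init__(self):
        self.k = {}  # task_id -> learned dynamics coefficient

    def fit_step(self, task_id, s, s_next, lr=0.5):
        # Move k toward the observed ratio s_next / s (stable for lr < 2).
        k = self.k.get(task_id, 0.0)
        if abs(s) > 1e-8:
            self.k[task_id] = k + lr * (s_next / s - k)

    def generate(self, task_id, s0, length):
        """Imagine a trajectory under the stored task-specific dynamics."""
        k, s, traj = self.k[task_id], s0, []
        for _ in range(length):
            s_next = k * s
            traj.append((task_id, s, s_next))
            s = s_next
        return traj


def train_continually(model, task_stream, epochs=50):
    """task_stream: list of (task_id, [(s, s_next), ...]), seen in sequence."""
    seen = []
    for task_id, real in task_stream:
        for _ in range(epochs):
            batch = [(task_id, s, sn) for s, sn in real]
            # Predictive experience replay: regenerate trajectories for
            # every earlier task from the model instead of storing them.
            for old_id in seen:
                batch += model.generate(old_id, s0=1.0, length=len(real))
            random.shuffle(batch)
            for tid, s, sn in batch:
                model.fit_step(tid, s, sn)
        seen.append(task_id)
    return model
```

In this toy setting, training on task B with replayed task-A trajectories leaves the task-A coefficient at its fixed point, which is the mechanism by which replay counteracts forgetting; the paper's exploratory-conservative behavior learning, which addresses value estimation on top of this, is not sketched here.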



