Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning

12/08/2020
by   Riccardo Volpi, et al.

Most standard learning approaches lead to fragile models that are prone to drift when sequentially trained on samples of a different nature - the well-known "catastrophic forgetting" issue. In particular, when a model consecutively learns from different visual domains, it tends to forget past domains in favor of the most recent one. In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization - for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy in which a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains, while also easing adaptation to them. Such meta-domains are likewise generated through randomized image manipulations. We empirically demonstrate, in a variety of experiments spanning from classification to semantic segmentation, that our approach results in models that are less prone to catastrophic forgetting when transferred to new domains.
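To make the strategy concrete, below is a minimal PyTorch-style sketch of the idea: auxiliary meta-domains are generated by heavy, randomized image manipulations, and a MAML-style regularizer penalizes the loss incurred after a single adaptation step on such a meta-domain. The transform pool, `sample_meta_domain`, `inner_lr`, and `reg_weight` are illustrative assumptions, not the paper's exact recipe.

```python
import random
import torch
from torch.func import functional_call
import torchvision.transforms as T

# Illustrative pool of heavy image manipulations used to generate
# randomized auxiliary meta-domains (the specific transforms are assumptions).
MANIPULATIONS = [
    T.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8),
    T.GaussianBlur(kernel_size=9),
    T.RandomInvert(p=1.0),
    T.RandomSolarize(threshold=0.5, p=1.0),
]

def sample_meta_domain(images):
    """Create an auxiliary meta-domain by randomizing the current batch."""
    return random.choice(MANIPULATIONS)(images)

def meta_objective(model, loss_fn, images, labels, inner_lr=0.01, reg_weight=1.0):
    """Current-domain loss plus a MAML-style regularizer: take one inner
    gradient step on a randomized meta-domain, then penalize the loss the
    adapted model still incurs there (encouraging easy, low-loss transfer)."""
    params = dict(model.named_parameters())
    base_loss = loss_fn(model(images), labels)

    # Inner step: adapt the current parameters to the auxiliary meta-domain.
    meta_images = sample_meta_domain(images)
    meta_loss = loss_fn(functional_call(model, params, (meta_images,)), labels)
    grads = torch.autograd.grad(meta_loss, list(params.values()), create_graph=True)
    adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

    # Regularizer: loss remaining on the meta-domain after adaptation.
    reg = loss_fn(functional_call(model, adapted, (meta_images,)), labels)
    return base_loss + reg_weight * reg
```

In a continual-training loop, `meta_objective` would simply replace the standard task loss before calling `backward()`, so each update both fits the current domain and keeps transfer to randomized domains cheap.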


