Frosting Weights for Better Continual Training

01/07/2020
by Xiaofeng Zhu, et al.

Training a neural network model can be a lifelong process and is a computationally intensive one. A severe adverse effect in deep neural network models is catastrophic forgetting: when retrained on new data, they can abruptly lose performance on what they learned before. One appealing way to avoid such disruptions in continual learning is the additive nature of ensemble models. In this paper, we propose two generic ensemble approaches, gradient boosting and meta-learning, to solve the catastrophic forgetting problem when tuning pre-trained neural network models.
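The abstract does not spell out the training procedure, but the additive, boosting-style idea can be sketched in a few lines: the pre-trained network is frozen, and a small new ensemble member is trained on the new data so that its predictions are added on top of the old model rather than overwriting its weights. The names and layer sizes below (`AdditiveEnsemble`, the toy `base` and `booster` networks) are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's exact method): a boosting-style additive
# ensemble for continual training. The pre-trained "base" network is frozen,
# and a small new member ("booster") is trained on new data; their outputs
# are summed, so knowledge stored in the base weights is never overwritten.

class AdditiveEnsemble(nn.Module):
    def __init__(self, base: nn.Module, booster: nn.Module):
        super().__init__()
        self.base = base
        self.booster = booster
        for p in self.base.parameters():        # freeze pre-trained weights
            p.requires_grad = False

    def forward(self, x):
        return self.base(x) + self.booster(x)   # additive combination

# Hypothetical usage on a toy 10-class problem.
base = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
booster = nn.Sequential(nn.Linear(32, 10))       # small new ensemble member
model = AdditiveEnsemble(base, booster)

optimizer = torch.optim.Adam(model.booster.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a batch drawn from the *new* data.
x_new = torch.randn(16, 32)
y_new = torch.randint(0, 10, (16,))
optimizer.zero_grad()
loss = loss_fn(model(x_new), y_new)
loss.backward()
optimizer.step()
```

Because the base weights never receive gradient updates, performance on previously learned data cannot be erased by retraining; only the booster's additive correction changes, which is the appeal of the ensemble's additive nature noted above.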


