
Frosting Weights for Better Continual Training

01/07/2020
by Xiaofeng Zhu, et al.
Iowa State University of Science and Technology
Florida Atlantic University
Northwestern University

Training a neural network model can be a lifelong, computationally intensive learning process. A severe adverse effect in deep neural network models is catastrophic forgetting: when retrained on new data, they can abruptly lose performance on previously learned tasks. One appealing property for avoiding such disruptions in continual learning is the additive nature of ensemble models. In this paper, we propose two generic ensemble approaches, gradient boosting and meta-learning, to solve the catastrophic forgetting problem when tuning pre-trained neural network models.
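
The abstract does not spell out the training procedure, but the additive-ensemble idea it appeals to can be sketched in code. The snippet below is a minimal, hypothetical PyTorch illustration, not the paper's actual "frosting" method: a frozen pre-trained network is combined additively with a small new learner trained only on the new data, so the old weights, and hence the old behaviour, stay untouched. The names AdditiveEnsemble, train_on_new_data, and the booster architecture are assumptions made purely for illustration.

# Sketch of boosting-style continual learning under the assumptions above:
# freeze the pre-trained network, fit a small additive correction on new data.
import torch
import torch.nn as nn

class AdditiveEnsemble(nn.Module):
    def __init__(self, pretrained: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():
            p.requires_grad = False          # old weights are never updated
        # hypothetical new weak learner added for the new data
        self.booster = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        with torch.no_grad():
            old_logits = self.pretrained(x)  # predictions of the frozen model
        return old_logits + self.booster(x)  # additive correction learned on new data

def train_on_new_data(model, loader, epochs=1, lr=1e-3):
    # Only the booster's parameters are optimized, so performance on
    # previously learned data is preserved by construction.
    opt = torch.optim.Adam(model.booster.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

Because only the added learner is trained, this sketch trades some capacity on the new task for a hard guarantee that the original model's outputs are recoverable, which is the property the abstract attributes to ensemble methods.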


Related research

Neural Network Retraining for Model Serving (04/29/2020)
We propose incremental (re)training of a neural network model to cope wi...

Lifelong Learning Starting From Zero (06/24/2019)
We present a deep neural-network model for lifelong learning inspired by...

Improving Fictitious Play Reinforcement Learning with Expanding Models (11/27/2019)
Fictitious play with reinforcement learning is a general and effective f...

Meta-learnt priors slow down catastrophic forgetting in neural networks (09/09/2019)
Current training regimes for deep learning usually involve exposure to a...

Continuous Learning in a Hierarchical Multiscale Neural Network (05/15/2018)
We reformulate the problem of encoding a multi-scale representation of a...

Theoretical Investigation of Composite Neural Network (10/18/2019)
A composite neural network is a rooted directed acyclic graph combining ...

Composite Neural Network: Theory and Application to PM2.5 Prediction (10/22/2019)
This work investigates the framework and performance issues of the compo...