Incremental Learning Using a Grow-and-Prune Paradigm with Efficient Neural Networks

05/27/2019
by Xiaoliang Dai, et al.

Deep neural networks (DNNs) have become widely deployed models for numerous machine learning applications. However, their fixed architecture, substantial training cost, and significant model redundancy make it difficult to efficiently update them to accommodate previously unseen data. To solve these problems, we propose an incremental learning framework based on a grow-and-prune neural network synthesis paradigm. When new data arrive, the network first grows new connections based on gradients to increase its capacity to accommodate the new data. The framework then iteratively prunes away connections based on weight magnitude to enhance network compactness, and hence recover efficiency. Finally, the model rests at a lightweight DNN that is both ready for inference and suitable for future grow-and-prune updates. The proposed framework improves accuracy, shrinks network size, and significantly reduces the additional training cost for incoming data compared to conventional approaches, such as training from scratch and network fine-tuning. For the LeNet-300-100 and LeNet-5 architectures derived for the MNIST dataset, the framework reduces training cost by up to 64% compared to training from scratch (network fine-tuning). For the ResNet-18 architecture derived for the ImageNet dataset and DeepSpeech2 for the AN4 dataset, the corresponding training cost reduction against training from scratch (network fine-tuning) is 64%. The derived models contain fewer network parameters but achieve higher accuracy relative to conventional baselines.
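The grow-then-prune update can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch-style illustration of the two core operations described in the abstract, gradient-based growth and magnitude-based pruning; the function names, the mask-based sparsity representation, and the growth/pruning fractions are assumptions made for exposition, not the authors' implementation:

```python
import torch

def grow_connections(weight, grad, mask, grow_frac=0.05):
    """Activate inactive connections whose loss gradients have the
    largest magnitude: a large |gradient| suggests that adding the
    connection would reduce the loss quickly (gradient-based growth)."""
    inactive = mask == 0
    n_grow = min(int(grow_frac * mask.numel()), int(inactive.sum()))
    if n_grow == 0:
        return mask
    scores = grad.abs().clone()
    scores[mask == 1] = -float('inf')        # consider only inactive slots
    _, idx = scores.view(-1).topk(n_grow)
    new_mask = mask.clone()
    new_mask.view(-1)[idx] = 1.0
    weight.data.view(-1)[idx] = 0.0          # new connections start at zero
    return new_mask

def prune_connections(weight, mask, prune_frac=0.05):
    """Deactivate the active connections with the smallest weight
    magnitude (magnitude-based pruning)."""
    active = mask == 1
    n_prune = min(int(prune_frac * mask.numel()), int(active.sum()))
    if n_prune == 0:
        return mask
    scores = weight.data.abs().clone()
    scores[~active] = float('inf')           # never "re-prune" inactive slots
    _, idx = scores.view(-1).topk(n_prune, largest=False)
    new_mask = mask.clone()
    new_mask.view(-1)[idx] = 0.0
    return new_mask

# During forward passes, the effective weight is weight * mask,
# so pruned connections contribute nothing.
```

In such a mask-based scheme, one incremental update round would grow connections after observing gradients on the new data, train briefly to fit that data, and then prune iteratively until the network returns to a compact target size, matching the grow-and-prune cycle described above.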

Related research

04/09/2023 · CILIATE: Towards Fairer Class-based Incremental Learning by Dataset and Training Refinement
Due to the model aging problem, Deep Neural Networks (DNNs) need updates...

11/06/2017 · NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
Neural networks (NNs) have begun to have a pervasive impact on various a...

02/20/2018 · Bayesian Incremental Learning for Deep Neural Networks
In industrial machine learning pipelines, data often arrive in parts. Pa...

11/21/2021 · Accretionary Learning with Deep Neural Networks
One of the fundamental limitations of Deep Neural Networks (DNN) is its ...

10/12/2020 · TUTOR: Training Neural Networks Using Decision Rules as Model Priors
The human brain has the ability to carry out new tasks with limited expe...

12/12/2019 · STEERAGE: Synthesis of Neural Networks Using Architecture Search and Grow-and-Prune Methods
Neural networks (NNs) have been successfully deployed in many applicatio...

06/08/2015 · Learning both Weights and Connections for Efficient Neural Networks
Neural networks are both computationally intensive and memory intensive,...
