CGaP: Continuous Growth and Pruning for Efficient Deep Learning

05/27/2019
by Xiaocong Du, et al.

Today, a canonical approach to reducing the computation cost of Deep Neural Networks (DNNs) is to pre-define an over-parameterized model before training to guarantee learning capacity, and then prune unimportant learning units (filters and neurons) during training to improve model compactness. We argue that it is unnecessary to introduce redundancy at the beginning of training only to remove it again for the final inference model. In this paper, we propose a Continuous Growth and Pruning (CGaP) scheme that minimizes redundancy from the start. CGaP begins training from a small network seed, expands the model continuously by reinforcing important learning units, and finally prunes the network to obtain a compact and accurate model. Because the growth phase favors important learning units, it gives the pruning phase a clear learning purpose. Experimental results on representative datasets and DNN architectures demonstrate that CGaP outperforms previous pruning-only approaches that operate on pre-defined structures. For VGG-19 on the CIFAR-100 and SVHN datasets, CGaP reduces the number of parameters by 78.9% and 85.8% and FLOPs by … and 64.0%, respectively.
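To make the three-stage flow concrete, the sketch below shows one way a grow-then-prune training loop could look in PyTorch. It is a minimal illustration written for this summary, not the authors' implementation: the saliency score (L1 norm of each unit's incoming weights), the split-style growth rule, the schedule, and the helper names (saliency, grow, prune, rebuild) are all assumptions made for the example; the paper defines its own growth and pruning policies for filters and neurons.

import torch
import torch.nn as nn
import torch.nn.functional as F


def saliency(fc_in: nn.Linear) -> torch.Tensor:
    # Assumed importance score: L1 norm of each hidden unit's incoming weights.
    return fc_in.weight.abs().sum(dim=1)


def rebuild(weight: torch.Tensor, bias: torch.Tensor) -> nn.Linear:
    # Wrap a (possibly resized) weight/bias pair in a fresh nn.Linear.
    layer = nn.Linear(weight.shape[1], weight.shape[0])
    layer.weight.data.copy_(weight)
    layer.bias.data.copy_(bias)
    return layer


def grow(fc_in: nn.Linear, fc_out: nn.Linear, k: int):
    # Add k hidden units by splitting the k most salient existing units;
    # halving the parents' outgoing weights keeps the function roughly unchanged.
    parents = saliency(fc_in).topk(k).indices
    child_w = fc_in.weight.data[parents] + 0.01 * torch.randn(k, fc_in.in_features)
    w_in = torch.cat([fc_in.weight.data, child_w], dim=0)
    b_in = torch.cat([fc_in.bias.data, fc_in.bias.data[parents]], dim=0)
    w_out = fc_out.weight.data.clone()
    w_out[:, parents] *= 0.5
    w_out = torch.cat([w_out, w_out[:, parents]], dim=1)
    return rebuild(w_in, b_in), rebuild(w_out, fc_out.bias.data)


def prune(fc_in: nn.Linear, fc_out: nn.Linear, k: int):
    # Remove the k least salient hidden units and their outgoing connections.
    keep = saliency(fc_in).topk(fc_in.out_features - k).indices.sort().values
    return (rebuild(fc_in.weight.data[keep], fc_in.bias.data[keep]),
            rebuild(fc_out.weight.data[:, keep], fc_out.bias.data))


torch.manual_seed(0)
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))   # toy data
fc1, fc2 = nn.Linear(20, 4), nn.Linear(4, 2)                # small network seed

for epoch in range(30):
    # Layers are rebuilt whenever the width changes, so the optimizer is recreated.
    opt = torch.optim.SGD(list(fc1.parameters()) + list(fc2.parameters()), lr=0.1)
    for _ in range(10):
        opt.zero_grad()
        loss = F.cross_entropy(fc2(torch.relu(fc1(x))), y)
        loss.backward()
        opt.step()
    if epoch < 20 and epoch % 5 == 4:       # growth phase: reinforce salient units
        fc1, fc2 = grow(fc1, fc2, k=2)
    elif epoch == 20:                       # pruning phase, followed by fine-tuning
        fc1, fc2 = prune(fc1, fc2, k=4)
    print(f"epoch {epoch:2d}  hidden units {fc1.out_features:2d}  loss {loss.item():.3f}")

In this sketch the seed has only four hidden units, growth is triggered every few epochs during the first two thirds of training, and a single pruning step near the end removes the least important units before fine-tuning; CGaP's actual criteria for selecting which filters and neurons to grow or prune are described in the paper.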

Related research

05/27/2019 · Efficient Network Construction through Structural Plasticity
Deep Neural Networks (DNNs) on hardware is facing excessive computation ...

11/11/2019 · Structural Pruning in Deep Neural Networks: A Small-World Approach
Deep Neural Networks (DNNs) are usually over-parameterized, causing exce...

05/04/2021 · Alternate Model Growth and Pruning for Efficient Training of Recommendation Systems
Deep learning recommendation systems at scale have provided remarkable g...

11/06/2017 · NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
Neural networks (NNs) have begun to have a pervasive impact on various a...

05/23/2019 · Disentangling Redundancy for Multi-Task Pruning
Can prior network pruning strategies eliminate redundancy in multiple co...

06/15/2018 · Detecting Dead Weights and Units in Neural Networks
Deep Neural Networks are highly over-parameterized and the size of the n...

09/19/2020 · Redundancy of Hidden Layers in Deep Learning: An Information Perspective
Although the deep structure guarantees the powerful expressivity of deep...
