Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks

02/23/2020
by   Sai Aparna Aketi, et al.

The enormous inference cost of deep neural networks can be scaled down by network compression, and pruning is one of the predominant approaches used for deep network compression. However, existing pruning techniques have one or more of the following limitations: 1) additional energy cost on top of the compute-heavy training stage due to separate pruning and fine-tuning stages, 2) layer-wise pruning based on the statistics of a particular layer, ignoring the effect of error propagation through the network, 3) lack of an efficient estimate for determining the important channels globally, 4) unstructured pruning, which requires specialized hardware for effective use. To address all of the above issues, we present a simple yet effective gradual channel pruning while training methodology using a novel data-driven metric referred to as the feature relevance score. The proposed technique eliminates the additional retraining cycles by pruning the least important channels in a structured fashion at fixed intervals during the actual training phase. Feature relevance scores efficiently evaluate the contribution of each channel to the discriminative power of the network. We demonstrate the effectiveness of the proposed methodology on architectures such as VGG and ResNet using the CIFAR-10, CIFAR-100 and ImageNet datasets, and successfully achieve significant model compression while trading off less than 1% accuracy. Notably, on CIFAR-10 with ResNet-110, our approach achieves 2.4× compression and a 56% reduction in FLOPs with an accuracy drop of only 0.01% compared to the unpruned network.
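The abstract's core loop, pruning the globally least important channels in a structured way at fixed intervals during training, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mean absolute activation used here is a stand-in proxy for the paper's feature relevance score, the "training" is simulated with random data, and all array shapes and the pruning schedule are assumptions chosen for brevity.

```python
import numpy as np

def channel_relevance(activations):
    # Proxy relevance score: mean absolute activation per channel.
    # (Stand-in assumption; the paper defines its own data-driven
    #  feature relevance score computed during training.)
    # activations shape: (batch, channels, height, width)
    return np.abs(activations).mean(axis=(0, 2, 3))

def prune_channels(weights, scores, prune_fraction):
    # Structured pruning: zero out entire filters (output channels)
    # with the lowest relevance scores, so no specialized sparse
    # hardware is needed to exploit the compression.
    n_prune = int(len(scores) * prune_fraction)
    idx = np.argsort(scores)[:n_prune]      # globally least important
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned, idx

# Gradual pruning while "training": prune a small fraction of channels
# at fixed step intervals instead of a separate prune/fine-tune stage.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 3, 3, 3))    # (out_ch, in_ch, kH, kW)
for step in range(1, 101):
    # ... one training step would go here ...
    if step % 25 == 0:                      # fixed pruning interval
        acts = rng.normal(size=(8, 16, 8, 8))   # dummy feature maps
        scores = channel_relevance(acts)
        weights, removed = prune_channels(weights, scores, 0.1)
```

Because whole channels are zeroed rather than individual weights, the pruned filters can later be physically removed from the layer, which is what yields the FLOP reductions reported in the abstract.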
