Entropy Induced Pruning Framework for Convolutional Neural Networks

08/13/2022
by Yiheng Lu, et al.

Structured pruning techniques have achieved great compression performance on convolutional neural networks for the image classification task. However, the majority of existing methods are weight-oriented, and their pruning results may be unsatisfactory when the original model is trained poorly; that is, a fully trained model is required to provide useful weight information. This can be time-consuming, and the pruning results are sensitive to the updating process of the model parameters. In this paper, we propose a metric named Average Filter Information Entropy (AFIE) to measure the importance of each filter. It is calculated in three major steps: low-rank decomposition of the "input-output" matrix of each convolutional layer, normalization of the obtained eigenvalues, and calculation of filter importance based on information entropy. Leveraging AFIE, the proposed framework yields a stable importance evaluation of each filter regardless of whether the original model is fully trained. We implement AFIE on AlexNet, VGG-16, and ResNet-50, and test them on MNIST, CIFAR-10, and ImageNet, respectively. The experimental results are encouraging: we observe, surprisingly, that even when the original model is trained for only one epoch, the importance evaluation of each filter remains identical to that obtained when the model is fully trained. This indicates that the proposed pruning strategy can be applied effectively at the early stage of training the original model.
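The abstract does not give the exact formulation, so the following is only a minimal NumPy sketch of the three steps, under the assumption that the "input-output" matrix is obtained by flattening the layer's weight tensor and that the spectrum comes from an SVD; the function name afie_score and the final averaging over filters are illustrative, not the paper's implementation.

    import numpy as np

    def afie_score(conv_weight, eps=1e-12):
        # conv_weight: assumed shape (out_channels, in_channels, k, k)
        out_ch = conv_weight.shape[0]

        # Step 1: low-rank decomposition of the flattened "input-output" matrix.
        mat = conv_weight.reshape(out_ch, -1)
        singular_values = np.linalg.svd(mat, compute_uv=False)
        eigvals = singular_values ** 2  # eigenvalues of mat @ mat.T

        # Step 2: normalize the eigenvalues into a probability distribution.
        probs = eigvals / (eigvals.sum() + eps)

        # Step 3: information entropy of the spectrum, averaged over the filters.
        entropy = -np.sum(probs * np.log(probs + eps))
        return entropy / out_ch

Under this reading, every filter in a layer receives the same layer-level score, and layers whose weight spectra carry less entropy would be pruned more aggressively; how the paper maps the score to per-filter pruning decisions may differ.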


