Filter Pruning based on Information Capacity and Independence

03/07/2023
by Xiaolong Tang, et al.

Filter pruning has been widely used to compress and accelerate convolutional neural networks (CNNs). However, most existing methods are still challenged by heavy computational cost and biased filter selection. Moreover, most filter-evaluation designs lack interpretability due to the absence of appropriate theoretical guidance. In this paper, we propose a novel filter pruning method that evaluates filters in an interpretable, multi-perspective, and data-free manner. We introduce information capacity, a metric that represents the amount of information contained in a filter. Based on the interpretability and validity of information entropy, we adopt entropy as a quantitative index of information quantity. We also show experimentally that the entropy of a feature map correlates clearly with the entropy of the corresponding filter, which lets us propose an interpretable, data-free scheme to measure the information capacity of a filter. Furthermore, we introduce information independence, a second metric that represents the correlation among different filters. The least important filters, namely those with low information capacity and low information independence, are then pruned. We evaluate our method on two benchmarks using multiple representative CNN architectures, including VGG-16 and ResNet. On CIFAR-10, we reduce 71.9% of floating-point operations (FLOPs) and 69.4% of parameters with only a 0.28% accuracy decrease; on the second benchmark, we likewise reduce FLOPs substantially and cut 68.6% of parameters with only a small accuracy decrease, outperforming the state of the art.
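The abstract describes the two criteria only at a high level. As a rough illustration of how such a score could be computed, the NumPy sketch below estimates each filter's information capacity as the Shannon entropy of a histogram of its weights (the data-free proxy that the reported feature-map/filter correlation justifies) and its information independence as one minus its maximum absolute Pearson correlation with any other filter in the layer, then keeps the highest-scoring filters. The histogram estimator, the correlation measure, the mixing weight `alpha`, and all function names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def filter_entropy(w, bins=32):
    """Histogram-based Shannon entropy of one filter's weights.

    A data-free proxy for the paper's 'information capacity': per the
    abstract, filter entropy correlates with feature-map entropy, so the
    weights alone can stand in for the data-dependent quantity.
    """
    hist, _ = np.histogram(w.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def independence_scores(weights):
    """1 - max |Pearson correlation| with any other filter in the layer.

    One plausible reading of 'information independence'; the abstract does
    not specify the redundancy measure.
    """
    flat = weights.reshape(weights.shape[0], -1)
    flat = (flat - flat.mean(1, keepdims=True)) / (flat.std(1, keepdims=True) + 1e-8)
    corr = np.abs(flat @ flat.T) / flat.shape[1]   # |Pearson| between filters
    np.fill_diagonal(corr, 0.0)
    return 1.0 - corr.max(axis=1)

def prune_mask(weights, prune_ratio=0.5, alpha=0.5):
    """Boolean mask of filters to keep; lowest combined scores are pruned."""
    cap = np.array([filter_entropy(w) for w in weights])
    ind = independence_scores(weights)
    # Normalise each criterion to [0, 1] before mixing; alpha is a guess.
    cap = (cap - cap.min()) / (np.ptp(cap) + 1e-8)
    ind = (ind - ind.min()) / (np.ptp(ind) + 1e-8)
    score = alpha * cap + (1.0 - alpha) * ind
    n_keep = int(round(len(score) * (1.0 - prune_ratio)))
    mask = np.zeros(len(score), dtype=bool)
    mask[np.argsort(score)[::-1][:n_keep]] = True
    return mask

# Example: a random conv layer with 64 filters of shape (16, 3, 3).
layer = np.random.randn(64, 16, 3, 3)
print(prune_mask(layer, prune_ratio=0.7).sum(), "filters kept")  # -> 19
```

In a real pipeline the mask would be computed per layer, the pruned channels removed from adjacent layers as well, and the network fine-tuned to recover accuracy.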

Related research

01/30/2021
Deep Model Compression based on the Training History
Deep Convolutional Neural Networks (DCNNs) have shown promising results ...

02/22/2022
HRel: Filter Pruning based on High Relevance between Activation Maps and Class Labels
This paper proposes an Information Bottleneck theory based filter prunin...

01/22/2020
Pruning CNN's with linear filter ensembles
Despite the promising results of convolutional neural networks (CNNs), a...

06/19/2017
An Entropy-based Pruning Method for CNN Compression
This paper aims to simultaneously accelerate and compress off-the-shelf ...

06/16/2023
Towards Better Orthogonality Regularization with Disentangled Norm in Training Deep CNNs
Orthogonality regularization has been developed to prevent deep CNNs fro...

10/29/2022
A pruning method based on the dissimilarity of angle among channels and filters
Convolutional Neural Network (CNN) is more and more widely used in vario...

11/26/2018
Leveraging Filter Correlations for Deep Model Compression
We present a filter correlation based model compression approach for dee...
