Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression

12/11/2018
by Yuchao Li, et al.

Compressing convolutional neural networks (CNNs) has received ever-increasing research attention. However, most existing CNN compression methods do not interpret the network's inherent structure to identify its implicit redundancy. In this paper, we investigate CNN compression from a novel interpretable perspective. The relationship between the input feature maps and the 2D kernels is revealed in a theoretical framework, based on which a kernel sparsity and entropy (KSE) indicator is proposed to quantify feature-map importance in a feature-agnostic manner and thereby guide model compression. Kernel clustering is then conducted based on the KSE indicator to accomplish high-precision CNN compression. KSE compresses all layers simultaneously and efficiently, which is significantly faster than previous data-driven feature-map pruning methods. We comprehensively evaluate the compression and speedup of the proposed method on CIFAR-10, SVHN, and ImageNet 2012. Our method demonstrates superior performance over previous ones; in particular, it achieves a 4.7× FLOPs reduction and 2.9× compression on ResNet-50 with only a 0.35% Top-5 accuracy drop, outperforming state-of-the-art methods.
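The abstract does not give the KSE formulas themselves, but the core idea can be sketched: score each input feature map by the sparsity and the diversity (entropy) of the 2D kernels that operate on it, then prune or cluster kernels accordingly. Below is a minimal, illustrative sketch in NumPy; the function name kse_indicator, the L1-norm sparsity, the k-nearest-neighbor density used for the entropy term, and the combination rule are all assumptions made for illustration, not necessarily the paper's exact definitions.

```python
# Illustrative KSE-style importance score for the input feature maps of a
# conv layer. All formulas below are assumptions for this sketch, not the
# paper's definitions.
import numpy as np

def kse_indicator(weights, k=5, alpha=1.0):
    """Return one importance score per input channel.

    weights: conv weights of shape (out_channels, in_channels, kh, kw).
    Channels with low scores would be candidates for pruning or heavier
    kernel clustering.
    """
    n, c, kh, kw = weights.shape
    k = min(k, n - 1)                      # guard for layers with few output channels
    scores = np.empty(c)
    for ci in range(c):
        # All 2D kernels that consume input feature map ci, flattened.
        kernels = weights[:, ci].reshape(n, -1)

        # Sparsity term: total L1 mass of these kernels (assumed metric).
        sparsity = np.abs(kernels).sum()

        # Entropy term: a k-nearest-neighbor density estimate over the kernels,
        # turned into a distribution -- one plausible reading of "kernel entropy".
        dists = np.linalg.norm(kernels[:, None, :] - kernels[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)    # ignore self-distance
        knn = np.sort(dists, axis=1)[:, :k]
        density = 1.0 / (knn.mean(axis=1) + 1e-12)
        p = density / density.sum()
        entropy = -(p * np.log(p)).sum()

        # Assumed combination rule; alpha balances the two terms.
        scores[ci] = np.sqrt(sparsity + alpha * entropy)
    return scores

# Example: 64 output channels, 32 input feature maps, 3x3 kernels.
w = np.random.randn(64, 32, 3, 3)
print(kse_indicator(w).shape)              # (32,) -- one score per input feature map
```

Note that the scores depend only on the layer's weights, which matches the abstract's "feature-agnostic" claim: no input data or activations are needed, which is why the method can score every layer at once rather than running data through the network as data-driven pruning methods do.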

Related research

05/24/2021 · Towards Compact CNNs via Collaborative Compression
Channel pruning and tensor decomposition have received extensive attenti...

11/06/2018 · Synaptic Strength For Convolutional Neural Network
Convolutional Neural Networks (CNNs) are both computation and memory inte...

12/10/2018 · Reliable Identification of Redundant Kernels for Convolutional Neural Network Compression
To compress deep convolutional neural networks (CNNs) with large memory ...

03/02/2017 · Towards CNN Map Compression for camera relocalisation
This paper presents a study on the use of Convolutional Neural Networks ...

03/15/2018 · Exploring Linear Relationship in Feature Map Subspace for ConvNets Compression
While the research on convolutional neural networks (CNNs) is progressin...

02/03/2019 · MICIK: MIning Cross-Layer Inherent Similarity Knowledge for Deep Model Compression
State-of-the-art deep model compression methods exploit the low-rank app...
