Reliable Identification of Redundant Kernels for Convolutional Neural Network Compression

12/10/2018
by   Wei Wang, et al.
To compress deep convolutional neural networks (CNNs) with large memory footprints and long inference times, this paper proposes a novel pruning criterion based on the layer-wise Ln-norm of feature maps. Unlike existing pruning criteria, which are mainly based on the L1-norm of convolution kernels, the proposed method uses the Ln-norm of the output feature maps after the non-linear activations, where n is a variable that increases from 1 at the first convolution layer to infinity at the last convolution layer. By accurately identifying unimportant convolution kernels, the proposed method achieves a good balance between model size and inference accuracy. Experiments on ImageNet and a successful application in a railway surveillance system show that the proposed method outperforms existing kernel-norm-based methods and is generally applicable to any deep neural network with convolution operations.
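The idea can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it scores each kernel of a convolution layer by the Ln-norm of its post-activation feature map and flags the lowest-scoring kernels as pruning candidates. The schedule for n across layers, the function names, and the pruning ratio are assumptions made for illustration only.

```python
import torch

def norm_order(layer_idx, num_layers):
    # Illustrative schedule (assumption): n grows from 1 at the first
    # convolution layer toward infinity at the last one.
    if layer_idx == num_layers - 1:
        return float("inf")
    return 1.0 + layer_idx

def kernel_importance(feature_maps, n):
    # feature_maps: post-activation outputs of one layer, shape (batch, channels, H, W).
    # Importance of each kernel = Ln-norm of its output feature map, averaged over the batch.
    flat = feature_maps.flatten(start_dim=2)               # (batch, channels, H*W)
    norms = torch.linalg.vector_norm(flat, ord=n, dim=2)   # (batch, channels)
    return norms.mean(dim=0)                               # (channels,)

def prunable_kernels(feature_maps, n, prune_ratio=0.3):
    # Kernels whose feature-map Ln-norm is smallest are treated as redundant.
    scores = kernel_importance(feature_maps, n)
    k = int(prune_ratio * scores.numel())
    return torch.argsort(scores)[:k]                       # indices of pruning candidates
```

As a usage example, for the outputs of layer i one would call `prunable_kernels(activations, norm_order(i, num_layers))` and remove the returned kernel indices before fine-tuning the pruned network.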

