Prune the Convolutional Neural Networks with Sparse Shrink

08/08/2017
by Xin Li, et al.

Nowadays, it is still difficult to adapt Convolutional Neural Network (CNN) based models for deployment on embedded devices. The heavy computation and large memory footprint of CNN models are the main burden in real applications. In this paper, we propose a "Sparse Shrink" algorithm to prune an existing CNN model. By analyzing the importance of each channel via sparse reconstruction, the algorithm is able to prune redundant feature maps accordingly. The resulting pruned model thus directly saves computational resources. We have evaluated our algorithm on CIFAR-100. As shown in our experiments, we can reduce 56.77% of the parameters with only a minor decrease in accuracy. These results demonstrate the effectiveness of our "Sparse Shrink" algorithm.
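To make the idea of channel scoring via sparse reconstruction concrete, below is a minimal Python sketch. It reconstructs each channel's pooled response from the remaining channels with an L1-penalized regression and treats channels with low reconstruction residual as redundant. The function name `score_channels`, the pooled-activation input, the leave-one-out formulation, and the Lasso penalty `alpha` are illustrative assumptions, not necessarily the paper's exact formulation.

```python
# Hypothetical sketch: rank the channels of one conv layer by sparse reconstruction.
# `activations` has shape (num_samples, num_channels): per-channel responses
# (e.g. globally average-pooled feature maps) collected on a calibration set.
import numpy as np
from sklearn.linear_model import Lasso


def score_channels(activations, alpha=0.01):
    """Return one importance score per channel.

    Each channel's pooled response is reconstructed from the remaining
    channels with an L1-penalized regression. Channels whose responses are
    poorly explained by the others (high residual) are kept, while channels
    that are nearly linear combinations of the rest are pruning candidates.
    """
    num_channels = activations.shape[1]
    scores = np.zeros(num_channels)
    for c in range(num_channels):
        target = activations[:, c]
        others = np.delete(activations, c, axis=1)
        model = Lasso(alpha=alpha, max_iter=5000)
        model.fit(others, target)
        residual = target - model.predict(others)
        scores[c] = np.mean(residual ** 2)  # high residual => less redundant
    return scores


# Usage (illustrative): keep the channels with the largest scores, prune the rest.
# activations = collect_pooled_activations(model, layer, calibration_loader)
# keep = np.argsort(score_channels(activations))[-num_keep:]
```

The design choice here is simply that a channel which can be sparsely reconstructed from its neighbors carries little extra information, so removing it changes the layer's output the least; the actual pruning then rewires the layer to drop the corresponding filters.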
