UCP: Uniform Channel Pruning for Deep Convolutional Neural Networks Compression and Acceleration

10/03/2020
by Jingfei Chang, et al.

To apply deep CNNs to mobile terminals and portable devices, many researchers have recently worked on compressing and accelerating deep convolutional neural networks. To this end, we propose a novel uniform channel pruning (UCP) method to prune deep CNNs, in which modified squeeze-and-excitation blocks (MSEB) are used to measure the importance of the channels in the convolutional layers. Unimportant channels, together with the convolutional kernels related to them, are pruned directly, which greatly reduces storage cost and computation. There are two types of residual blocks in ResNet. For ResNet with bottleneck blocks, we use the pruning method for traditional CNNs to trim the 3x3 convolutional layer in the middle of each block. For ResNet with basic residual blocks, we propose an approach that prunes all residual blocks in the same stage consistently, so that the compact network structure remains dimensionally correct. Because the network loses considerable information after pruning, and the larger the pruning amplitude, the more information is lost, we do not fine-tune but instead retrain from scratch to restore the accuracy of the pruned network. Finally, we verify our method on CIFAR-10, CIFAR-100 and ILSVRC-2012 for image classification. The results indicate that when the pruning rate is small, the compact network retrained from scratch outperforms the original network; even when the pruning amplitude is large, the accuracy is maintained or decreases only slightly. On CIFAR-100, when the parameters and FLOPs of VGG-19 are reduced by up to 82%, the accuracy even improves by 0.54% after retraining.
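The channel-scoring and pruning step can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes an SE-style gate (here called ChannelScorer) whose gate activations averaged over a batch serve as the channel-importance score, and a hypothetical helper prune_conv_channels that keeps only the highest-scoring output channels of a convolution; the reduction ratio and pruning rate are placeholder values.

```python
# Minimal sketch of SE-style channel scoring and channel pruning.
# ChannelScorer and prune_conv_channels are illustrative names, not the paper's API.
import torch
import torch.nn as nn


class ChannelScorer(nn.Module):
    """SE-style block: global average pool -> two FC layers -> sigmoid gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel gates in (0, 1)
        squeezed = x.mean(dim=(2, 3))   # global average pooling
        return self.fc(squeezed)        # shape (N, C)


def prune_conv_channels(conv: nn.Conv2d, scores: torch.Tensor,
                        prune_rate: float) -> nn.Conv2d:
    """Keep only the output channels with the highest importance scores."""
    n_keep = max(1, int(conv.out_channels * (1.0 - prune_rate)))
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned


# Usage: score channels on a batch, then drop e.g. half of them.
conv = nn.Conv2d(3, 64, 3, padding=1)
scorer = ChannelScorer(64)
x = torch.randn(8, 3, 32, 32)
scores = scorer(conv(x)).mean(dim=0)    # average gates over the batch
small_conv = prune_conv_channels(conv, scores, prune_rate=0.5)
print(small_conv)                       # Conv2d(3, 32, kernel_size=(3, 3), ...)
```

After pruning, the corresponding input channels of the following layer would also need to be trimmed, and the compact network would then be retrained from scratch rather than fine-tuned, as described above.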

