Dynamic Channel Pruning: Feature Boosting and Suppression

10/12/2018
by Xitong Gao et al.

Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we exploit the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods, which permanently remove channels, it preserves the full network structure and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making the method readily applicable to many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can accelerate VGG-16 by 5× and improve the speed of ResNet-18 by 2×, both with less than 0.6% top-5 accuracy loss.
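
To make the mechanism concrete, below is a minimal PyTorch-style sketch of a convolution augmented with an FBS-like auxiliary gate. The module name FBSConv2d, the saliency predictor (global average pooling followed by a single fully connected layer), and the density parameter are illustrative assumptions, not the authors' exact implementation; consult the paper for the precise formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FBSConv2d(nn.Module):
    """Sketch of a convolution with feature boosting and suppression (FBS).

    An auxiliary connection predicts a saliency score per output channel
    from a cheap summary of the input; only the top-k channels are kept
    (boosted by their predicted saliency) and the rest are suppressed to
    zero, so their computation could be skipped at run-time.
    The exact predictor and hyper-parameters here are assumptions.
    """

    def __init__(self, in_channels, out_channels, kernel_size, density=0.5, **kw):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, **kw)
        # Auxiliary connection: global pooling + one FC layer -> channel saliencies.
        self.saliency = nn.Linear(in_channels, out_channels)
        # Number of output channels kept active for each input.
        self.k = max(1, int(round(density * out_channels)))

    def forward(self, x):
        # Predict per-output-channel saliencies from the pooled input.
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)           # (N, C_in)
        scores = F.relu(self.saliency(pooled))                    # (N, C_out)

        # Winner-take-all gate: keep the k most salient channels, zero the rest.
        kth = torch.topk(scores, self.k, dim=1).values[:, -1:]    # k-th largest score
        gate = scores * (scores >= kth).float()                   # boosted or suppressed

        # In this dense sketch the full convolution still runs; an optimized
        # implementation would skip the suppressed output channels entirely.
        y = self.conv(x)
        return y * gate.unsqueeze(-1).unsqueeze(-1)
```

In this sketch the suppressed channels are merely zeroed; the speed-ups reported in the paper come from actually skipping the corresponding computations in both the current layer and the layer that consumes its output.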

Related research:

- Gator: Customizable Channel Pruning of Neural Networks with Gating (05/30/2022)
- Pruning with Compensation: Efficient Channel Pruning for Deep Convolutional Neural Networks (08/31/2021)
- Dynamic Neural Network Channel Execution for Efficient Training (05/15/2019)
- The Power of Sparsity in Convolutional Neural Networks (02/21/2017)
- Soft Masking for Cost-Constrained Channel Pruning (11/04/2022)
- Optimal channel selection with discrete QCQP (02/24/2022)
- Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers (02/01/2018)
