
Dynamic Channel Pruning: Feature Boosting and Suppression

by Xitong Gao et al.
University of Cambridge

Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we exploit the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods, which permanently remove channels, it preserves the full network structure and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making FBS readily applicable to many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can accelerate VGG-16 by 5× and improve the speed of ResNet-18 by 2×, both with less than 0.6% top-5 accuracy loss.
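The core mechanism sketched in the abstract, a lightweight auxiliary predictor that scores a layer's output channels from the pooled input and zeroes all but the top-k, can be illustrated in plain NumPy. This is a minimal sketch, not the paper's implementation: the predictor form (global average pooling followed by a linear map and ReLU), the channel counts, and the number of retained channels `k` are illustrative assumptions.

```python
import numpy as np

def fbs_gate(x, w_saliency, k):
    """Sketch of feature boosting and suppression (FBS) channel gating.

    x          : input feature map of shape (C, H, W).
    w_saliency : (C_out, C) weights of a small auxiliary predictor
                 applied to the globally pooled input channels
                 (illustrative; not the paper's trained parameters).
    k          : number of output channels to keep active.

    Returns per-output-channel gates of shape (C_out,): the k most
    salient channels keep their (amplified) scores, the rest are set
    to zero, so their convolutions could be skipped entirely.
    """
    pooled = x.mean(axis=(1, 2))                      # global average pool -> (C,)
    saliency = np.maximum(w_saliency @ pooled, 0.0)   # ReLU keeps scores non-negative
    gates = np.zeros_like(saliency)
    top_k = np.argsort(saliency)[-k:]                 # indices of the k largest scores
    gates[top_k] = saliency[top_k]                    # boost winners, suppress the rest
    return gates

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))       # toy input: 8 channels, 4x4 spatial
w = rng.standard_normal((16, 8))         # toy predictor: 16 output channels
g = fbs_gate(x, w, k=4)
print(int((g > 0).sum()))                # at most 4 channels remain active
```

In a real FBS layer these gates multiply the corresponding output channels, and because the zeroed channels are known before the convolution runs, their computation can be skipped, which is where the reported speedups come from.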
