Dynamic Structure Pruning for Compressing CNNs

by Jun-Hyung Park, et al.

Structure pruning is an effective method to compress and accelerate neural networks. While filter and channel pruning are preferable to other structure pruning methods in terms of realistic acceleration and hardware compatibility, pruning methods with a finer granularity, such as intra-channel pruning, are expected to yield more compact and computationally efficient networks. Typical intra-channel pruning methods use a static, hand-crafted pruning granularity because of the large search space, which leaves room for improvement in their pruning performance. In this work, we introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning. In contrast to existing intra-channel pruning methods, the proposed method automatically optimizes dynamic pruning granularities in each layer while training deep neural networks. To achieve this, we propose a differentiable group learning method designed to efficiently learn a pruning granularity based on gradient-based learning of filter groups. The experimental results show that dynamic structure pruning achieves state-of-the-art pruning performance and better realistic acceleration on a GPU compared with channel pruning. In particular, it reduces the FLOPs of ResNet50 by 71.85% without accuracy degradation on the ImageNet dataset. Our code is available at https://github.com/irishev/DSP.
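To make the idea of intra-channel pruning granularity concrete, the sketch below prunes a convolutional weight tensor at the group level: each filter's input channels are split into groups, and the lowest-magnitude groups are zeroed per filter. This is a minimal, illustrative NumPy example only; the paper's actual method learns the group assignments with gradients (differentiable group learning) rather than using the fixed, magnitude-based grouping assumed here, and the function name `intra_channel_prune` is hypothetical.

```python
import numpy as np

def intra_channel_prune(weight, num_groups, sparsity):
    """Prune a conv weight of shape (out_ch, in_ch, kH, kW) at an
    intra-channel granularity: split each filter's input channels into
    num_groups groups and zero the lowest-L1-norm groups per filter.

    Illustrative only: groups are fixed and scored by magnitude here,
    whereas dynamic structure pruning learns the grouping end to end.
    """
    out_ch, in_ch, kh, kw = weight.shape
    assert in_ch % num_groups == 0, "in_ch must be divisible by num_groups"
    group_size = in_ch // num_groups

    # View each filter as (num_groups, group_size, kH, kW) slices.
    g = weight.copy().reshape(out_ch, num_groups, group_size, kh, kw)

    # L1 importance score of each (filter, group) slice.
    scores = np.abs(g).sum(axis=(2, 3, 4))          # (out_ch, num_groups)

    # Zero the k least important groups in every filter.
    k = int(num_groups * sparsity)
    if k > 0:
        idx = np.argsort(scores, axis=1)[:, :k]     # indices of weakest groups
        for f in range(out_ch):
            g[f, idx[f]] = 0.0

    return g.reshape(out_ch, in_ch, kh, kw)

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16, 3, 3))              # toy conv layer
w_pruned = intra_channel_prune(w, num_groups=4, sparsity=0.5)
print(f"achieved sparsity: {np.mean(w_pruned == 0):.2f}")  # prints 0.50
```

Note the contrast with channel pruning: a whole-channel pruner could only zero entire input channels across all filters, while the group granularity above removes different channel groups in different filters, which is what allows finer-grained methods to reach higher sparsity at the same accuracy.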



