Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks

09/18/2019
by Zhonghui You, et al.

Filter pruning is one of the most effective ways to accelerate and compress convolutional neural networks (CNNs). In this work, we propose a global filter pruning algorithm called Gate Decorator, which transforms a vanilla CNN module by multiplying its output by channel-wise scaling factors, i.e. gates. When a scaling factor is set to zero, it is equivalent to removing the corresponding filter. We use Taylor expansion to estimate the change in the loss function caused by setting a scaling factor to zero, and use this estimate for global filter importance ranking. We then prune the network by removing the unimportant filters. After pruning, we merge all the scaling factors back into their original modules, so no special operations or structures are introduced. Moreover, we propose an iterative pruning framework called Tick-Tock to improve pruning accuracy. Extensive experiments demonstrate the effectiveness of our approaches. For example, we achieve a state-of-the-art pruning ratio on ResNet-56 by reducing 70% of FLOPs without noticeable loss in accuracy. For ResNet-50 on ImageNet, our pruned model with a 40% FLOPs reduction outperforms the baseline in top-1 accuracy. Various datasets are used, including CIFAR-10, CIFAR-100, CUB-200, ImageNet ILSVRC-12 and PASCAL VOC 2011. Code is available at github.com/youzhonghui/gate-decorator-pruning
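The gate and the Taylor-based importance estimate described above can be sketched in a few lines of PyTorch. The snippet below is only a minimal illustration of the idea, not the authors' implementation (see the linked repository for that); the names GateLayer and taylor_importance, and the assumption that gates have already been inserted after each convolution, are illustrative.

```python
# Minimal sketch of the gate idea: a channel-wise scaling factor ("gate")
# multiplied onto a module's output, plus a first-order Taylor importance
# score |g * dL/dg| accumulated over a few batches.
# Names here (GateLayer, taylor_importance) are illustrative, not the
# official API of github.com/youzhonghui/gate-decorator-pruning.
import torch
import torch.nn as nn

class GateLayer(nn.Module):
    """Multiplies a (N, C, H, W) feature map by per-channel gates."""
    def __init__(self, num_channels: int):
        super().__init__()
        # Gates start at 1 so the decorated network is initially
        # identical to the original one.
        self.gate = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate.view(1, -1, 1, 1)

def taylor_importance(gates, data_loader, model, loss_fn):
    """First-order Taylor estimate of the loss change when a gate is
    zeroed: score_c ~= |g_c * dL/dg_c|, accumulated over batches."""
    scores = [torch.zeros_like(g.gate) for g in gates]
    model.eval()
    for inputs, targets in data_loader:
        model.zero_grad()
        loss_fn(model(inputs), targets).backward()
        for s, g in zip(scores, gates):
            s += (g.gate * g.gate.grad).abs().detach()
    # The per-layer scores share one scale, so concatenating them gives a
    # single global ranking in which filters from all layers compete.
    return scores
```

After ranking, the lowest-scoring channels across all layers are removed, and the surviving scaling factors can be folded back into the weights of the preceding layer, as the abstract notes, so the pruned network needs no extra operations at inference time.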

