Network Pruning via Feature Shift Minimization

07/06/2022
by   Yuanzhi Duan, et al.

Channel pruning is widely used to reduce the complexity of deep network models. Recent pruning methods usually identify which parts of the network to discard by proposing a channel importance criterion. However, recent studies have shown that these criteria do not work well in all conditions. In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates the feature shift by combining the information of both features and filters. Specifically, we first investigate the compression efficiency of several prevalent pruning methods at different layer depths and then introduce the concept of feature shift. We then propose an approximation method to estimate the magnitude of the feature shift, since it is difficult to compute directly. In addition, we present a distribution-optimization algorithm to compensate for the accuracy loss and improve network compression efficiency. The proposed method yields state-of-the-art performance on various benchmark networks and datasets, verified by extensive experiments. The code is available at <https://github.com/lscgx/FSM>.
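To make the channel-pruning setting described above concrete, the sketch below scores each output channel of a Conv-BN pair by combining filter-side information (filter weight norms) with feature-side information (BatchNorm statistics) and drops the lowest-scoring channels. This is a simplified, hypothetical stand-in for the paper's FSM criterion, not the authors' implementation; the function names, the score formula, and the pruning ratio are all illustrative assumptions. The actual method is available at the linked repository.

```python
# Illustrative sketch only: the score below is a hypothetical proxy that mixes
# filter magnitudes with feature statistics; it is NOT the FSM criterion from
# the paper (see https://github.com/lscgx/FSM for the real implementation).
import torch
import torch.nn as nn


def channel_scores(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """Score each output channel of a Conv-BN pair (hypothetical criterion)."""
    # Filter-side information: L2 norm of each 3D filter.
    filter_norm = conv.weight.detach().flatten(1).norm(dim=1)
    # Feature-side information: statistics tracked by the BatchNorm layer.
    feature_stat = bn.running_mean.abs() + bn.running_var.sqrt()
    return filter_norm * feature_stat


def select_channels(scores: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Return the indices of channels to keep, dropping the lowest-scoring ones."""
    n_keep = max(1, int(scores.numel() * (1.0 - prune_ratio)))
    return scores.topk(n_keep).indices.sort().values


if __name__ == "__main__":
    conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
    bn = nn.BatchNorm2d(32)
    keep = select_channels(channel_scores(conv, bn), prune_ratio=0.5)
    print(f"keeping {keep.numel()} of 32 channels:", keep.tolist())
```

In a full pruning pipeline, the kept indices would be used to slice the convolution and BatchNorm parameters (and the input channels of the following layer) before fine-tuning the compressed network.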


