Channel Pruning for Accelerating Very Deep Neural Networks

07/19/2017
by   Yihui He, et al.

In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances the compatibility with various architectures. Our pruned VGG-16 achieves state-of-the-art results with a 5x speed-up and only a 0.3% increase of error. More importantly, our method is able to accelerate modern networks like ResNet and Xception, and suffers only 1.4% accuracy loss under 2x speed-up. Code has been made publicly available.
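The two-step algorithm described above -- LASSO-based channel selection followed by least-squares weight reconstruction -- can be sketched on a toy single layer. This is a minimal illustration of the idea, not the authors' released code: the dimensions, the synthetic data, and the plain coordinate-descent LASSO solver are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): N sampled spatial
# positions, c input channels, k x k kernel, n output filters.
N, c, k, n = 400, 8, 3, 16
X = rng.normal(size=(N, c, k * k))          # input patches, split per channel
W = rng.normal(size=(n, c, k * k))          # trained conv weights
W[:, [2, 5], :] *= 1e-3                     # make two channels nearly redundant

Z = np.einsum('ncd,mcd->ncm', X, W)         # per-channel output contributions
Y = Z.sum(axis=1)                           # original layer output, (N, n)

# Step 1: channel selection -- LASSO on channel coefficients beta,
# minimizing ||Y - sum_i beta_i Z_i||^2 / 2 + lam * ||beta||_1,
# solved here by coordinate descent with soft-thresholding.
D = Z.transpose(1, 0, 2).reshape(c, -1).T   # (N*n, c): column i = channel i
y = Y.ravel()
lam = 500.0                                 # sparsity strength (assumed value)
beta = np.zeros(c)
col_sq = (D ** 2).sum(axis=0)
for _ in range(100):
    for j in range(c):
        resid = y - D @ beta + D[:, j] * beta[j]
        rho = D[:, j] @ resid
        beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
keep = np.flatnonzero(np.abs(beta) > 1e-8)  # surviving channels

# Step 2: least-squares reconstruction of the weights on the kept channels.
Xk = X[:, keep, :].reshape(N, -1)           # (N, c' * k*k)
W_new, *_ = np.linalg.lstsq(Xk, Y, rcond=None)
rel_err = np.linalg.norm(Y - Xk @ W_new) / np.linalg.norm(Y)
print(f"kept {len(keep)}/{c} channels, relative reconstruction error {rel_err:.4f}")
```

In the full method this pair of steps is applied layer by layer, with the reconstruction target taken from the original network's outputs so that pruning error does not accumulate across layers.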


