OICSR: Out-In-Channel Sparsity Regularization for Compact Deep Neural Networks

05/28/2019
by Jiashi Li, et al.

Channel pruning can significantly accelerate and compress deep neural networks. Many channel pruning works utilize structured sparsity regularization to zero out all the weights in certain channels and automatically obtain a structure-sparse network during the training stage. However, these methods apply structured sparsity regularization to each layer separately, omitting the correlations between consecutive layers. In this paper, we first combine one out-channel in the current layer and the corresponding in-channel in the next layer into a single regularization group, namely an out-in-channel. Our proposed Out-In-Channel Sparsity Regularization (OICSR) considers correlations between successive layers to further retain the predictive power of the compact network. Training with OICSR thoroughly transfers discriminative features into a fraction of out-in-channels. Correspondingly, OICSR measures channel importance based on statistics computed from two consecutive layers rather than an individual layer. Finally, a global greedy pruning algorithm is designed to remove redundant out-in-channels in an iterative way. Our method is comprehensively evaluated with various CNN architectures, including CifarNet, AlexNet, ResNet, DenseNet and PreActSeNet, on the CIFAR-10, CIFAR-100 and ImageNet-1K datasets. Notably, on ImageNet-1K, we reduce 37.2% FLOPs on ResNet-50 while outperforming the original model by 0.22% top-1 accuracy.

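The core idea described above, grouping the c-th out-channel of layer l with the c-th in-channel of layer l+1 under one group-sparsity penalty, can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the authors' released implementation; the function name oicsr_penalty and the strength lam are hypothetical, and the penalty is written as a standard group-lasso (L2,1) norm over the concatenated out-in-channel weights, whose per-group L2 norms double as channel-importance statistics drawn from both layers.

```python
# Minimal sketch (not the authors' released code) of an out-in-channel group
# sparsity term, assuming PyTorch and two consecutive layers whose weights are
# shaped (out_channels, in_channels, kH, kW) as for standard convolutions.
import torch


def oicsr_penalty(weight_cur: torch.Tensor, weight_next: torch.Tensor) -> torch.Tensor:
    """Group-lasso penalty over out-in-channel groups.

    Group c couples the c-th out-channel of the current layer with the
    c-th in-channel of the next layer, so both are driven to zero together.
    """
    num_channels = weight_cur.shape[0]
    assert weight_next.shape[1] == num_channels, "layers must be consecutive"

    # Flatten each out-channel of the current layer: (C, in * kH * kW)
    out_groups = weight_cur.reshape(num_channels, -1)
    # Flatten each in-channel of the next layer: (C, out * kH * kW)
    in_groups = weight_next.transpose(0, 1).reshape(num_channels, -1)

    # Concatenate the two parts of each out-in-channel group and take the
    # L2 norm per group; the sum over groups is the L2,1 regularizer.
    groups = torch.cat([out_groups, in_groups], dim=1)
    group_norms = groups.norm(p=2, dim=1)  # also usable as channel-importance scores
    return group_norms.sum()


# Example: add the penalty for one 3x3 -> 3x3 conv pair to the training loss.
if __name__ == "__main__":
    w1 = torch.randn(64, 64, 3, 3, requires_grad=True)   # current layer
    w2 = torch.randn(128, 64, 3, 3, requires_grad=True)  # next layer
    lam = 1e-4  # regularization strength (hypothetical value)
    loss = lam * oicsr_penalty(w1, w2)
    loss.backward()
```

In practice such a term would be summed over all consecutive layer pairs and added to the task loss; after training, the groups with the smallest norms are the natural candidates for the iterative global greedy pruning step described in the abstract.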
Related research

05/22/2020 · PruneNet: Channel Pruning via Global Importance
Channel pruning is one of the predominant approaches for accelerating de...

08/12/2016 · Learning Structured Sparsity in Deep Neural Networks
High demand for computation resources severely hinders deployment of lar...

01/09/2019 · How Compact?: Assessing Compactness of Representations through Layer-Wise Pruning
Various forms of representations may arise in the many layers embedded i...

06/28/2022 · Deep Neural Networks pruning via the Structured Perspective Regularization
In Machine Learning, Artificial Neural Networks (ANNs) are a very powerf...

09/10/2019 · VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks
Improving weight sparsity is a common strategy for producing light-weigh...

07/14/2023 · Learning Sparse Neural Networks with Identity Layers
The sparsity of Deep Neural Networks is well investigated to maximize th...

06/28/2020 · Layer Sparsity in Neural Networks
Sparsity has become popular in machine learning, because it can save com...
