Receding Neuron Importances for Structured Pruning

04/13/2022
by Mihai Suteu, et al.

Structured pruning efficiently compresses networks by identifying and removing unimportant neurons. While this can be achieved elegantly by applying sparsity-inducing regularisation to BatchNorm parameters, an L1 penalty shrinks all scaling factors rather than only those of superfluous neurons. To tackle this issue, we introduce a simple BatchNorm variation with bounded scaling parameters, on which we base a novel regularisation term that suppresses only neurons of low importance. Under our method, the weights of unnecessary neurons effectively recede, producing a polarised, bimodal distribution of importances. We show that networks trained this way can be pruned to a larger extent and with less deterioration. We one-shot prune VGG and ResNet architectures at different ratios on the CIFAR and ImageNet datasets. On VGG-style networks, our method significantly outperforms existing approaches, particularly under severe pruning regimes.
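The abstract does not spell out the exact parameterisation, but the core idea (bounded BatchNorm scaling factors plus a regulariser that acts only on low-importance neurons) can be sketched in PyTorch. This is a minimal illustrative sketch, not the authors' implementation: the sigmoid bounding, the threshold tau, and the names BoundedBatchNorm2d and receding_penalty are assumptions made for the example.

```python
# Illustrative sketch only: the sigmoid bounding and the threshold-gated
# penalty below are assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn


class BoundedBatchNorm2d(nn.Module):
    """BatchNorm2d variant whose per-channel scaling factors are bounded to (0, 1)."""

    def __init__(self, num_features: int):
        super().__init__()
        # Standard BatchNorm statistics, without its own affine scaling.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Unconstrained parameter; a sigmoid maps it into (0, 1).
        self.theta = nn.Parameter(torch.zeros(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def scale(self) -> torch.Tensor:
        # Bounded scaling factor, read as the channel's importance.
        return torch.sigmoid(self.theta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.scale().view(1, -1, 1, 1)
        b = self.bias.view(1, -1, 1, 1)
        return self.bn(x) * s + b


def receding_penalty(scales: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Penalise only channels whose importance is below tau, so high-importance
    channels are left untouched and the distribution polarises into two modes."""
    low = scales[scales < tau]
    return low.sum()
```

During training, one would add lambda * receding_penalty(layer.scale()) for every such layer to the task loss; after convergence, channels whose scale has receded toward zero can be pruned one-shot with little deterioration, which is the behaviour the abstract describes.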

Related research

11/18/2016 · NoiseOut: A Simple Way to Prune Neural Networks
Neural networks are usually over-parameterized with significant redundan...

07/08/2022 · SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
The leap in performance in state-of-the-art computer vision methods is a...

02/07/2021 · SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Deep neural networks include millions of learnable parameters, making th...

03/09/2022 · Data-Efficient Structured Pruning via Submodular Optimization
Structured pruning is an effective approach for compressing large pre-tr...

06/23/2020 · NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks
Deciding the amount of neurons during the design of a deep neural networ...

03/22/2019 · Towards Optimal Structured CNN Pruning via Generative Adversarial Learning
Structured pruning of filters or neurons has received increased focus fo...

06/16/2023 · Magnificent Minified Models
This paper concerns itself with the task of taking a large trained neura...
