SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance

07/08/2022
by   Edouard Yvinec, et al.

The leap in performance of state-of-the-art computer vision methods is attributed to the development of deep neural networks. However, this performance often comes at a computational cost that can hinder deployment. To alleviate this limitation, structured pruning, which consists of removing channels, neurons, or filters, is a well-known technique commonly applied to produce more compact models. In most cases, the computations to remove are selected based on a relative importance criterion. At the same time, the need for explainable predictive models has risen tremendously, motivating the development of robust attribution methods that highlight the relative importance of pixels in an input image or feature map. In this work, we discuss the limitations of existing pruning heuristics, including magnitude- and gradient-based methods. Drawing inspiration from attribution methods, we design a novel integrated-gradient pruning criterion, in which the relevance of each neuron is defined as the integral of the gradient variation along a path toward that neuron's removal. Furthermore, we propose an entwined DNN pruning and fine-tuning flowchart to better preserve DNN accuracy while removing parameters. Through extensive validation on several datasets, architectures, and pruning scenarios, we show that the proposed method, dubbed SInGE, significantly outperforms existing state-of-the-art DNN pruning methods.
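
The integrated-gradient criterion lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' implementation: for each output neuron of a single `nn.Linear` layer, it approximates the integral of the loss gradient along the straight path from the neuron's weights to zero, using a Riemann sum with `steps` interpolation points. The function name, the toy data, and the choice of cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def integrated_gradient_scores(layer: nn.Linear, inputs: torch.Tensor,
                               targets: torch.Tensor,
                               steps: int = 16) -> torch.Tensor:
    """Per-neuron relevance: Riemann sum of |dL/dw . w| along the path w -> 0.

    Illustrative sketch of an integrated-gradient pruning score, not the
    exact SInGE formulation.
    """
    w0 = layer.weight.detach().clone()       # original weights
    scores = torch.zeros(w0.shape[0])        # one score per output neuron
    for k in range(1, steps + 1):
        alpha = k / steps                    # point on the removal path
        layer.weight.data = alpha * w0       # interpolate toward removal
        layer.weight.grad = None
        loss = F.cross_entropy(layer(inputs), targets)
        loss.backward()
        # accumulate the gradient-times-weight term of the path integral
        scores += (layer.weight.grad * w0).abs().sum(dim=1) / steps
    layer.weight.data = w0                   # restore original weights
    return scores

# Usage: score a toy classifier layer and flag the least relevant neuron.
layer = nn.Linear(32, 10)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
scores = integrated_gradient_scores(layer, x, y)
print("prune candidate:", scores.argmin().item())
```

In this sketch, a low score means the loss gradient stays small along the entire removal path, so zeroing that neuron should perturb the network little; a magnitude or single-point gradient criterion, by contrast, only inspects the endpoint of that path.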

