Selective Brain Damage: Measuring the Disparate Impact of Model Pruning

11/13/2019
by Sara Hooker, et al.

Neural network pruning techniques have demonstrated that it is possible to remove the majority of weights in a network with surprisingly little degradation to test set accuracy. However, this aggregate measure of performance conceals significant differences in how individual classes and images are impacted by pruning. We find that certain classes, and certain examples we term pruning identified exemplars (PIEs), are systematically more impacted by the introduction of sparsity. Removing PIE images from the test set greatly improves top-1 accuracy for both pruned and non-pruned models. These hard-to-generalize-to images tend to be mislabelled, of lower image quality, depict multiple objects, or require fine-grained classification. These findings shed light on previously unknown trade-offs, and suggest that a high degree of caution should be exercised before pruning is used in sensitive domains.
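A minimal sketch of how exemplars like PIEs could be flagged in practice, assuming we already have per-image class predictions from a population of dense (non-pruned) models and a population of pruned models; the function names and the toy arrays below are illustrative, not the paper's code:

```python
import numpy as np

def modal_prediction(preds):
    """Most frequent predicted class per example.

    preds: array of shape (n_models, n_examples), each entry a model's
    predicted class index for a test image.
    """
    modal = []
    for example_preds in preds.T:
        values, counts = np.unique(example_preds, return_counts=True)
        modal.append(values[np.argmax(counts)])
    return np.array(modal)

def find_pies(dense_preds, pruned_preds):
    """Flag examples where the modal prediction of the pruned model
    population disagrees with that of the dense population."""
    return modal_prediction(dense_preds) != modal_prediction(pruned_preds)

# Toy example: 3 dense models, 3 pruned models, 4 test images.
dense = np.array([[0, 1, 2, 3],
                  [0, 1, 2, 3],
                  [0, 1, 2, 0]])
pruned = np.array([[0, 1, 9, 3],
                   [0, 1, 9, 3],
                   [0, 2, 9, 3]])
is_pie = find_pies(dense, pruned)  # only image 2 flips class under pruning
```

Comparing modal predictions across model populations, rather than a single dense/pruned pair, helps separate systematic effects of sparsity from run-to-run training noise.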


Related research:

- Cross-Channel Intragroup Sparsity Neural Network (10/26/2019)
- Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm (06/07/2022)
- The Incredible Shrinking Neural Network: New Perspectives on Learning Representations Through The Lens of Pruning (01/16/2017)
- Structured Pattern Pruning Using Regularization (09/18/2021)
- Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks (12/20/2021)
- The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks (03/09/2022)
- Fast ConvNets Using Group-wise Brain Damage (06/08/2015)
