Dissecting Pruned Neural Networks

06/29/2019
by Jonathan Frankle, et al.

Pruning is a standard technique for removing unnecessary structure from a neural network to reduce its storage footprint, computational demands, or energy consumption. Pruning can reduce the parameter counts of many state-of-the-art neural networks by an order of magnitude without compromising accuracy, meaning these networks contain a vast amount of unnecessary structure. In this paper, we study the relationship between pruning and interpretability. Namely, we consider the effect of removing unnecessary structure on the number of hidden units that learn disentangled representations of human-recognizable concepts, as identified by network dissection. We aim to evaluate how the interpretability of pruned neural networks changes as they are compressed. We find that pruning has no detrimental effect on this measure of interpretability until so few parameters remain that accuracy begins to drop. ResNet-50 models trained on ImageNet maintain the same number of interpretable concepts and units until more than 90% of parameters have been pruned.
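The abstract does not spell out the pruning procedure, but the standard unstructured magnitude-pruning operation it alludes to is easy to sketch. The following is a minimal NumPy illustration, not the authors' implementation; the function name magnitude_prune is hypothetical, and the 90% sparsity in the example simply mirrors the regime the abstract reports.

    import numpy as np

    def magnitude_prune(weights, sparsity):
        # Zero out the smallest-magnitude fraction `sparsity` of the weights.
        # Plain one-shot unstructured magnitude pruning; real pipelines
        # typically prune iteratively and fine-tune between rounds.
        flat = np.abs(weights).ravel()
        k = int(sparsity * flat.size)
        if k == 0:
            return weights.copy()
        threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
        return weights * (np.abs(weights) > threshold)

    # Example: remove 90% of the weights in a random layer, roughly the
    # sparsity at which the abstract reports accuracy starting to drop.
    w = np.random.randn(256, 256)
    w_pruned = magnitude_prune(w, 0.9)
    print(f"sparsity: {(w_pruned == 0).mean():.1%}")  # ~90.0%

Network dissection, the interpretability measure used in the paper, is then applied to the surviving hidden units to count how many still align with human-recognizable concepts.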


research
02/10/2021

Pruning of Convolutional Neural Networks Using Ising Energy Model

Pruning is one of the major methods to compress deep neural networks. In...
research
12/11/2021

CHAMP: Coherent Hardware-Aware Magnitude Pruning of Integrated Photonic Neural Networks

We propose a novel hardware-aware magnitude pruning technique for cohere...
research
02/03/2020

Automatic Pruning for Quantized Neural Networks

Neural network quantization and pruning are two techniques commonly used...
research
08/09/2019

Group Pruning using a Bounded-Lp norm for Group Gating and Regularization

Deep neural networks achieve state-of-the-art results on several tasks w...
research
01/17/2021

KCP: Kernel Cluster Pruning for Dense Labeling Neural Networks

Pruning has become a promising technique used to compress and accelerate...
research
07/27/2021

Experiments on Properties of Hidden Structures of Sparse Neural Networks

Sparsity in the structure of Neural Networks can lead to less energy con...
research
08/24/2020

Efficient Design of Neural Networks with Random Weights

Single layer feedforward networks with random weights are known for thei...
