Detecting Dead Weights and Units in Neural Networks

06/15/2018
by Utku Evci, et al.

Deep neural networks are highly over-parameterized, and their size can be reduced significantly after training without any loss in performance. This phenomenon appears across a wide range of architectures trained on diverse problems. Weight/channel pruning, distillation, quantization, and matrix factorization are among the main methods for removing this redundancy to obtain smaller and faster models. This work starts with a short introductory chapter that motivates the pruning idea and introduces the necessary notation. In the second chapter, we compare various saliency scores in the context of parameter pruning. Using the insights from this comparison, and noting the problems that parameter-level pruning brings, we argue that pruning units rather than individual parameters may be a better idea. We propose a set of definitions to quantify and analyze units that fail to learn or to produce useful information, together with an efficient method for detecting such dead units, which we then use to select units to prune. Unit-wise pruning yields a 5x reduction in model size on MNIST.
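The abstract does not spell out the paper's exact dead-unit criterion, so the following is only a minimal sketch of one natural formalization: a unit is flagged as dead when its activations are nearly constant over a batch of inputs, since such a unit carries almost no information about the input. The function name find_dead_units, the variance test, and the threshold eps are illustrative assumptions, not the authors' method.

import numpy as np

def find_dead_units(activations, eps=1e-6):
    """Flag units whose activations are (near-)constant across inputs.

    activations: array of shape (num_examples, num_units), e.g. the
    post-ReLU outputs of one layer collected over a validation batch.
    Returns the indices of units whose activation variance across the
    batch falls below eps; these are candidates for pruning.
    """
    variances = activations.var(axis=0)
    return np.where(variances < eps)[0]

# Example: units 1 and 3 output a constant regardless of the input.
acts = np.random.rand(128, 4)
acts[:, 1] = 0.0   # a ReLU unit that never fires
acts[:, 3] = 0.7   # a unit saturated at a constant value
print(find_dead_units(acts))   # -> [1 3]

Since a constant unit output can be absorbed into the biases of the next layer, pruning units selected this way need not change the network's function, which is one reason unit-level (structured) pruning is attractive over parameter-level sparsity.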

