Neuron Pruning for Compressing Deep Networks using Maxout Architectures

07/21/2017
by Fernando Moya Rueda, et al.

This paper presents an efficient and robust approach for reducing the size of deep neural networks by pruning entire neurons. It exploits maxout units, which combine several neurons into a more complex convex function, and it uses a local relevance measurement that ranks neurons by their activations on the training set to select candidates for pruning. In addition, a parameter-reduction comparison between neuron pruning and weight pruning is given. The proposed neuron pruning is shown empirically to reduce the number of parameters dramatically. The evaluation is performed on two tasks, MNIST handwritten digit recognition and LFW face verification, using a LeNet-5 and a VGG16 network architecture. The network size is reduced by up to 74% and 61%, respectively, without degrading the networks' performance. The main advantage of neuron pruning is its direct influence on the size of the network architecture. Furthermore, it is shown that neuron pruning can be combined with subsequent weight pruning, reducing the sizes of the LeNet-5 and the VGG16 by up to 92% and 80%, respectively.
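For readers who want to experiment with the two ingredients named in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation: a maxout unit that takes the element-wise maximum over k linear "pieces" per output neuron, and a relevance score that ranks neurons by their mean absolute activation over the training set. The score is one plausible instantiation of the paper's local relevance measurement; the exact definition, and the names Maxout, neuron_relevance, and the keep ratio, are illustrative assumptions here.

```python
import torch
import torch.nn as nn

class Maxout(nn.Module):
    """Maxout unit: each output neuron is the max over k linear pieces,
    i.e. a piecewise-linear convex function of the input."""
    def __init__(self, in_features, out_features, k=2):
        super().__init__()
        self.out_features = out_features
        self.k = k
        # One linear map producing all k pieces for every output neuron.
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.linear(x)                          # (batch, out_features * k)
        z = z.view(-1, self.out_features, self.k)   # group the k pieces per neuron
        return z.max(dim=2).values                  # element-wise max over pieces

def neuron_relevance(layer, loader, device="cpu"):
    """Mean absolute activation per output neuron over the training set
    (an assumed stand-in for the paper's local relevance measure).
    Low-scoring neurons are candidates for pruning."""
    total, count = None, 0
    with torch.no_grad():
        for x, _ in loader:
            a = layer(x.to(device))                 # (batch, out_features)
            s = a.abs().sum(dim=0)
            total = s if total is None else total + s
            count += x.shape[0]
    return total / count

# Example usage (hypothetical sizes and keep ratio): score the neurons of one
# maxout layer, then keep only the highest-ranked half when rebuilding it.
# layer = Maxout(784, 256, k=2)
# scores = neuron_relevance(layer, train_loader)
# keep = scores.argsort(descending=True)[: int(0.5 * scores.numel())]
```

Pruning whole neurons this way shrinks the actual layer dimensions, which is why the abstract stresses its direct influence on the architecture size; weight pruning, by contrast, only sparsifies the existing weight matrices.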


Related research

03/29/2017
Towards thinner convolutional neural networks through Gradually Global Pruning
Deep network pruning is an effective method to reduce the storage and co...

06/23/2020
NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks
Deciding the amount of neurons during the design of a deep neural networ...

10/16/2021
Neural Network Pruning Through Constrained Reinforcement Learning
Network pruning reduces the size of neural networks by removing (pruning...

07/03/2003
Generation of Explicit Knowledge from Empirical Data through Pruning of Trainable Neural Networks
This paper presents a generalized technology of extraction of explicit k...

07/12/2016
Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures
State-of-the-art neural networks are getting deeper and wider. While the...

06/28/2020
ESPN: Extremely Sparse Pruned Networks
Deep neural networks are often highly overparameterized, prohibiting the...

06/15/2018
Detecting Dead Weights and Units in Neural Networks
Deep Neural Networks are highly over-parameterized and the size of the n...
