Automated Pruning for Deep Neural Network Compression

12/05/2017
by   Franco Manessi, et al.

In this work we present a method to improve the pruning step of the current state-of-the-art methodology for compressing neural networks. The novelty of the proposed pruning technique lies in its differentiability, which allows pruning to be performed during the backpropagation phase of network training. This enables end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits reaching a higher compression rate, reducing the number of weights of the pruned network by a further 14% with respect to the current state-of-the-art. Furthermore, we believe that this is the first study to analyze the generalization capabilities, in transfer learning tasks, of the features extracted by a pruned network. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality as those learned by the corresponding non-compressed network on a set of different recognition tasks.
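The core idea — a pruning function that is differentiable, so a threshold can be learned jointly with the weights via backpropagation — can be sketched as a steep sigmoid gate on the weight magnitudes. This is a minimal illustration of the general concept, not the paper's exact function family; the function name, the sigmoid form, and the sharpness parameter `beta` are assumptions for the example.

```python
import numpy as np

def soft_prune(w, t, beta=50.0):
    """Differentiable pruning gate (illustrative sketch).

    Weights whose magnitude falls below the threshold t are smoothly
    driven toward zero; weights well above t pass through almost
    unchanged. Because the gate is a sigmoid rather than a hard step,
    gradients flow to both w and t, so t can be learned by
    backpropagation. beta controls how closely the sigmoid
    approximates a hard threshold.
    """
    gate = 1.0 / (1.0 + np.exp(-beta * (np.abs(w) - t)))
    return w * gate

# A weight far below the threshold is suppressed toward zero,
# while a weight far above it is preserved.
pruned = soft_prune(np.array([0.001, 1.0]), t=0.1)
```

As `beta` grows, the gate approaches the hard magnitude-threshold pruning used in prior work, while remaining differentiable for any finite `beta`.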


