Pruning Deep Neural Networks from a Sparsity Perspective

02/11/2023
by Enmao Diao, et al.

In recent years, deep network pruning has attracted significant attention as a way to enable the rapid deployment of AI on small devices with computation and memory constraints. Pruning is often achieved by dropping redundant weights, neurons, or layers of a deep network while attempting to retain comparable test performance. Many deep pruning algorithms have been proposed with impressive empirical success. However, existing approaches lack a quantifiable measure of the compressibility of a sub-network at each pruning iteration and thus may under-prune or over-prune the model. In this work, we propose the PQ Index (PQI) to measure the potential compressibility of deep neural networks and use it to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm. Our extensive experiments corroborate the hypothesis that, for a generic pruning procedure, PQI first decreases while a large model is being effectively regularized, then increases as its compressibility reaches a limit that appears to correspond to the onset of underfitting, and finally decreases again when the model collapses and its performance deteriorates significantly. Our experiments also demonstrate that the proposed adaptive pruning algorithm, with a proper choice of hyper-parameters, outperforms iterative pruning algorithms such as lottery ticket-based methods in both compression efficiency and robustness.
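The abstract does not spell out how PQI is computed. In the paper, PQI is a norm-ratio sparsity measure: for 0 < p < q, I_{p,q}(w) = 1 - d^(1/q - 1/p) * ||w||_p / ||w||_q, where d is the number of parameters; it equals 0 for a vector of equal magnitudes and approaches 1 for a one-hot vector. Below is a minimal NumPy sketch of this measure; the function name, the defaults p = 0.5 and q = 1.0, and the epsilon guard are illustrative choices, not taken from the authors' code.

```python
import numpy as np

def pq_index(w, p=0.5, q=1.0, eps=1e-12):
    """PQ Index of a weight vector:
    I_{p,q}(w) = 1 - d^(1/q - 1/p) * ||w||_p / ||w||_q, with 0 < p < q.
    Values near 0 indicate a dense (incompressible) vector; values near 1
    indicate a highly sparse (compressible) one.
    """
    w = np.abs(np.ravel(w)) + eps          # magnitudes; eps avoids zero entries
    d = w.size
    norm_p = np.sum(w ** p) ** (1.0 / p)   # ell_p quasi-norm for p < 1
    norm_q = np.sum(w ** q) ** (1.0 / q)
    return 1.0 - d ** (1.0 / q - 1.0 / p) * norm_p / norm_q

# Sanity checks: a constant vector is maximally dense,
# a one-hot vector is maximally sparse.
dense = np.ones(1000)
sparse = np.zeros(1000)
sparse[0] = 1.0
print(pq_index(dense))   # ~0.0
print(pq_index(sparse))  # close to 1.0
```

In SAP, a statistic of this kind is tracked across pruning iterations and used to set the pruning ratio adaptively; the exact retention rule and its hyper-parameters are given in the paper.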


