Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning

09/29/2022
by Manas Gupta, et al.

Pruning neural networks became popular in the last decade after it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have been proposed since then, each claiming to be better than its predecessors. Many state-of-the-art (SOTA) techniques today rely on complex pruning methodologies that use importance scores, obtain feedback through back-propagation, or apply heuristics-based pruning rules, among others. We question this pattern of introducing complexity in order to achieve better pruning results. We benchmark these SOTA techniques against Global Magnitude Pruning (Global MP), a naive pruning baseline, to evaluate whether complexity is really needed to achieve higher performance. Global MP ranks weights in order of their magnitudes and prunes the smallest ones. Hence, in its vanilla form, it is one of the simplest pruning techniques. Surprisingly, we find that vanilla Global MP outperforms all the other SOTA techniques and achieves a new SOTA result. It also achieves good performance on FLOPs sparsification, which we find is further enhanced when pruning is conducted in a gradual fashion. We also find that Global MP generalizes across tasks, datasets, and models with superior performance. Moreover, a common issue that many pruning algorithms run into at high sparsity rates, namely layer-collapse, can be easily fixed in Global MP by setting a minimum threshold of weights to be retained in each layer. Lastly, unlike many other SOTA techniques, Global MP does not require any additional algorithm-specific hyper-parameters and is very straightforward to tune and implement. We showcase our findings on various models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1 and FastGRNN) and multiple datasets (CIFAR-10, ImageNet and HAR-2). Code is available at https://github.com/manasgupta-1/GlobalMP.
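Since the abstract describes the whole procedure, a short sketch helps make it concrete. The PyTorch snippet below is a minimal illustration of one-shot Global MP with an optional per-layer retention floor as a guard against layer-collapse; it is not the authors' implementation, and the names `global_magnitude_prune` and `min_keep_frac` are made up for this example.

```python
import torch
import torch.nn as nn


def global_magnitude_prune(model: nn.Module, sparsity: float,
                           min_keep_frac: float = 0.0) -> None:
    """One-shot Global MP sketch: zero out the globally smallest-magnitude weights.

    sparsity      -- fraction of all prunable weights to remove, e.g. 0.95.
    min_keep_frac -- minimum fraction of weights retained in every layer,
                     a simple guard against layer-collapse at high sparsity.
    """
    prunable = [m for m in model.modules() if isinstance(m, (nn.Conv2d, nn.Linear))]

    # Rank every weight in the network together and compute one global threshold.
    all_weights = torch.cat([m.weight.detach().abs().flatten() for m in prunable])
    k = int(sparsity * all_weights.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(all_weights, k).values

    for m in prunable:
        w = m.weight.detach().abs()
        mask = (w > threshold).float()

        # Layer-collapse guard: if the global threshold would leave too few
        # weights in this layer, keep its largest-magnitude weights instead.
        min_keep = int(min_keep_frac * w.numel())
        if int(mask.sum()) < min_keep:
            keep_idx = torch.topk(w.flatten(), min_keep).indices
            mask = torch.zeros(w.numel(), device=w.device)
            mask[keep_idx] = 1.0
            mask = mask.view_as(w)

        # Apply the pruning mask in place.
        m.weight.data.mul_(mask)


# Example: remove 95% of weights globally, but keep at least 2% in each layer.
# global_magnitude_prune(model, sparsity=0.95, min_keep_frac=0.02)
```

Gradual pruning, which the abstract reports further improves FLOPs sparsification, can be approximated by calling such a routine repeatedly with an increasing sparsity target between fine-tuning steps.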
