Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning

by Huan Wang et al.

The state of neural network pruning has long been noted to be unclear and even confusing, largely due to "a lack of standardized benchmarks and metrics" [3]. To standardize benchmarks, we first need to answer: what kind of comparison setup is considered fair? Unfortunately, this basic yet crucial question has barely been clarified in the community. Meanwhile, we observe that several papers have used (severely) sub-optimal hyper-parameters in pruning experiments, and the reasons behind these choices are also elusive. Such sub-optimal hyper-parameters further distort the benchmarks, rendering the state of neural network pruning even more obscure. Two mysteries in pruning epitomize this confusing state: the performance-boosting effect of a larger finetuning learning rate, and the argument that inheriting pretrained weights in filter pruning has no value. In this work, we attempt to explain the confusing state of network pruning by demystifying these two mysteries. Specifically, (1) we first clarify the fairness principle in pruning experiments and summarize the widely used comparison setups; (2) we then unveil the two pruning mysteries and point out the central role of network trainability, which has not been well recognized so far; (3) finally, we conclude the paper and give concrete suggestions on how to calibrate pruning benchmarks in the future. Code: https://github.com/mingsun-tse/why-the-state-of-pruning-so-confusing.
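For readers unfamiliar with the setting in which both mysteries arise, the following is a minimal sketch of magnitude-based (L1-norm) filter pruning, the standard filter-pruning baseline: filters of a convolutional layer are ranked by their L1 norm, the smallest ones are removed, and the remaining (inherited) weights are then finetuned. The function name, toy shapes, and pruning ratio here are illustrative, not taken from the paper.

```python
import numpy as np

def l1_filter_prune(weights, prune_ratio):
    """Rank conv filters by L1 norm; return sorted indices of filters to keep.

    weights: array of shape (out_channels, in_channels, kH, kW)
    prune_ratio: fraction of filters to remove, 0 <= prune_ratio < 1
    """
    n_filters = weights.shape[0]
    n_keep = n_filters - int(n_filters * prune_ratio)
    # The classic magnitude criterion: L1 norm of each filter's weights
    scores = np.abs(weights).reshape(n_filters, -1).sum(axis=1)
    # Keep the filters with the largest scores
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return keep

# Toy example: 4 filters, prune 50%
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3, 3, 3))
w[1] *= 0.01  # make filters 1 and 3 clearly small in magnitude
w[3] *= 0.01
kept = l1_filter_prune(w, 0.5)
print(kept)  # filters 0 and 2 survive
```

In a real pipeline, the surviving filters' pretrained weights are copied into a smaller network and finetuned; the learning rate chosen for that finetuning stage is precisely the hyper-parameter whose outsized effect the paper investigates.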


