Revisiting Loss Modelling for Unstructured Pruning

06/22/2020
by César Laurent et al.

By removing parameters from deep neural networks, unstructured pruning methods aim at cutting down memory footprint and computational cost, while maintaining prediction accuracy. In order to tackle this otherwise intractable problem, many of these methods model the loss landscape using first or second order Taylor expansions to identify which parameters can be discarded. We revisit loss modelling for unstructured pruning: we show the importance of ensuring locality of the pruning steps. We systematically compare first and second order Taylor expansions and empirically show that both can reach similar levels of performance. Finally, we show that better preserving the original network function does not necessarily transfer to better performing networks after fine-tuning, suggesting that only considering the impact of pruning on the loss might not be a sufficient objective to design good pruning criteria.
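To make the criterion the abstract refers to concrete, the sketch below shows how first and second order Taylor expansions of the loss yield per-weight saliency scores. This is a generic OBD-style illustration assuming a PyTorch model, not the authors' exact method: the function name taylor_saliency and the single-sample Hutchinson estimate of the Hessian diagonal are illustrative choices made here.

```python
# A minimal sketch (PyTorch assumed) of Taylor-expansion pruning scores.
# Removing weight w_i corresponds to the perturbation dw_i = -w_i, so a
# Taylor expansion of the loss L around the current weights gives:
#   first order:   |dL| ~= |g_i * w_i|
#   second order:  |dL| ~= |g_i * w_i + 0.5 * H_ii * w_i^2|  (diagonal approx.)
# with g the gradient and H_ii an estimate of the Hessian diagonal.
import torch

def taylor_saliency(model, loss, order=1):
    """Return one saliency tensor per parameter; small scores = prunable."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=(order == 2))
    scores = []
    for p, g in zip(params, grads):
        if order == 1:
            scores.append((g * p).abs().detach())
        else:
            # Single-sample Hutchinson estimate of diag(H): E[z * (H z)],
            # with Rademacher noise z (entries +1/-1).
            z = torch.randint_like(p, 2).mul_(2).sub_(1)
            hz = torch.autograd.grad(g, p, grad_outputs=z, retain_graph=True)[0]
            h_diag = z * hz
            scores.append((g * p + 0.5 * h_diag * p * p).abs().detach())
    return scores
```

Weights with the smallest scores are the candidates for removal. Note that both expansions are only valid near the current weights, which is the locality concern the abstract raises: in practice this argues for pruning in small steps rather than removing many weights at once.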


Related Research

03/26/2023 · Exploring the Performance of Pruning Methods in Neural Networks: An Empirical Study of the Lottery Ticket Hypothesis
In this paper, we explore the performance of different pruning methods i...

09/30/2021 · Prune Your Model Before Distill It
Unstructured pruning reduces a significant amount of weights of neural n...

08/12/2023 · Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?
Pruning is a widely used technique for reducing the size of deep neural ...

07/08/2022 · Pruning Early Exit Networks
Deep learning models that perform well often have high computational cos...

09/24/2020 · A Gradient Flow Framework For Analyzing Network Pruning
Recent network pruning methods focus on pruning models early-on in train...

10/19/2021 · SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning
Pruning neural networks reduces inference time and memory costs. On stan...

08/31/2020 · Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach
Overparameterization and overfitting are common concerns when designing ...
