Comparing Rewinding and Fine-tuning in Neural Network Pruning

03/05/2020
by Alex Renda, et al.

Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al. (2019)) rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques.
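For concreteness, below is a minimal sketch of how the three retraining techniques differ, written against a toy PyTorch setup. The model, synthetic data, magnitude-pruning scheme, rewind point `t`, and learning rate schedule are all illustrative assumptions, not the authors' released code.

```python
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup: a small classifier on synthetic data (illustrative only).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))


def magnitude_masks(net, sparsity=0.5):
    """Per-tensor masks keeping the largest-magnitude weights (assumed scheme)."""
    masks = {}
    for name, p in net.named_parameters():
        if p.dim() > 1:  # prune weight matrices, leave biases dense
            k = int(p.numel() * sparsity)
            threshold = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > threshold).float()
    return masks


def apply_masks(net, masks):
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, p in net.named_parameters():
            if name in masks:
                p.mul_(masks[name])


def train(net, lr_at, steps, masks=None):
    """Plain SGD; `lr_at(step)` gives the learning rate at each step."""
    opt = torch.optim.SGD(net.parameters(), lr=lr_at(0))
    loss_fn = nn.CrossEntropyLoss()
    if masks is not None:
        apply_masks(net, masks)
    for step in range(steps):
        for group in opt.param_groups:
            group["lr"] = lr_at(step)
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()
        if masks is not None:
            apply_masks(net, masks)  # keep pruned weights at zero
    return net


def original_lr(step):
    # Stand-in for the original training schedule: high LR, then a decay.
    return 0.1 if step < 200 else 0.01


# Train to completion, snapshotting the weights at rewind point t.
t, total = 100, 300
train(model, original_lr, t)
weights_at_t = copy.deepcopy(model.state_dict())
train(model, lambda s: original_lr(t + s), total - t)
masks = magnitude_masks(model)

# 1) Fine-tuning: retrain the final weights at a small constant learning rate.
ft = copy.deepcopy(model)
train(ft, lambda s: 0.01, total - t, masks)

# 2) Weight rewinding: restore the weights from step t, then replay the
#    original schedule from step t onward.
wr = copy.deepcopy(model)
wr.load_state_dict(weights_at_t)
train(wr, lambda s: original_lr(t + s), total - t, masks)

# 3) Learning rate rewinding: keep the final weights, but retrain with the
#    same replayed schedule as weight rewinding.
lrr = copy.deepcopy(model)
train(lrr, lambda s: original_lr(t + s), total - t, masks)
```

Note that all three variants retrain for the same number of steps; the only differences are the starting weights (final values vs. the snapshot from step t) and the learning rate schedule (small constant vs. the original schedule replayed from step t).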


Related research

05/07/2021 · Network Pruning That Matters: A Case Study on Retraining Variants
Network pruning is an effective method to reduce the computational expen...

09/20/2021 · Reproducibility Study: Comparing Rewinding and Fine-tuning in Neural Network Pruning
Scope of reproducibility: We are reproducing Comparing Rewinding and Fin...

03/19/2021 · Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning
We report, for the first time, on the cascade weight shedding phenomenon...

02/14/2020 · Layer-wise Pruning and Auto-tuning of Layer-wise Learning Rates in Fine-tuning of Deep Networks
Existing fine-tuning methods use a single learning rate over all layers....

07/08/2021 · Weight Reparametrization for Budget-Aware Network Pruning
Pruning seeks to design lightweight architectures by removing redundant ...

02/19/2021 · Lottery Ticket Implies Accuracy Degradation, Is It a Desirable Phenomenon?
In deep model compression, the recent finding "Lottery Ticket Hypothesis...

12/24/2022 · Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning
Most existing pruning works are resource-intensive, requiring retraining...
