Cyclical Pruning for Sparse Neural Networks

02/02/2022
by Suraj Srinivas, et al.

Current methods for pruning neural network weights iteratively apply magnitude-based pruning to the model weights and re-train the resulting model to recover lost accuracy. In this work, we show that such strategies do not allow for the recovery of erroneously pruned weights. To enable weight recovery, we propose a simple strategy called cyclical pruning, which requires the pruning schedule to be periodic and allows weights pruned erroneously in one cycle to recover in subsequent ones. Experimental results on both linear models and large-scale deep neural networks show that cyclical pruning outperforms existing pruning algorithms, especially at high sparsity ratios. Our approach is easy to tune and can be readily incorporated into existing pruning pipelines to boost performance.
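The abstract only outlines the idea, so the sketch below is a rough illustration rather than the authors' implementation. It assumes a PyTorch-style training loop in which a magnitude-pruning mask is recomputed from a sparsity level that ramps up and then resets each cycle, and in which gradient updates are not masked, so weights zeroed in one cycle can become nonzero again in the next. The helper names (`cyclical_sparsity`, `magnitude_mask`), the linear ramp schedule, and all hyperparameters are illustrative assumptions.

```python
import torch


def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask that zeroes out the `sparsity` fraction of weights
    with the smallest magnitude (assumed per-tensor criterion)."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()


def cyclical_sparsity(step: int, steps_per_cycle: int, final_sparsity: float) -> float:
    """Assumed periodic schedule: sparsity ramps linearly from 0 to
    `final_sparsity` within each cycle, then resets to 0 so that
    previously pruned weights get a chance to regrow."""
    phase = (step % steps_per_cycle) / steps_per_cycle
    return final_sparsity * phase


def train_with_cyclical_pruning(model, loss_fn, data_loader, num_steps,
                                steps_per_cycle=1000, final_sparsity=0.9, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    step = 0
    for inputs, targets in data_loader:
        if step >= num_steps:
            break
        # Re-prune at the current sparsity level of the cycle.
        sparsity = cyclical_sparsity(step, steps_per_cycle, final_sparsity)
        for p in model.parameters():
            if p.dim() > 1:  # prune weight matrices / conv kernels only
                p.data.mul_(magnitude_mask(p.data, sparsity))
        opt.zero_grad()
        loss_fn(model(inputs), targets).backward()
        # Gradients are NOT masked, so a weight zeroed above can move away
        # from zero here; this is what lets erroneously pruned weights recover.
        opt.step()
        step += 1
    # Apply the final mask once training ends to obtain the sparse model.
    for p in model.parameters():
        if p.dim() > 1:
            p.data.mul_(magnitude_mask(p.data, final_sparsity))
    return model
```

In contrast to standard iterative magnitude pruning, nothing here holds the mask fixed between pruning steps; the combination of a periodic sparsity schedule and unmasked gradient updates is what allows a weight removed early in one cycle to re-enter the network in a later one.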
