When to Prune? A Policy towards Early Structural Pruning

10/22/2021
by Maying Shen et al.

Pruning enables appealing reductions in network memory footprint and time complexity. Conventional post-training pruning techniques target efficient inference but overlook the heavy computational cost of training. Recent work on pruning at initialization hints that pruning can also reduce training cost, but it suffers noticeable performance degradation. We attempt to combine the benefits of both directions and propose a policy that prunes as early as possible during training without hurting performance. Instead of pruning at initialization, our method runs dense training for a few epochs to quickly shape the architecture, while continually evaluating dominant sub-networks via neuron importance ranking. This unveils dominant sub-networks whose structures become stable, allowing conventional pruning to be pushed earlier into training. To detect this point, we further introduce an Early Pruning Indicator (EPI) that relies on sub-network architectural similarity and quickly triggers pruning once the sub-network's architecture stabilizes. Through extensive experiments on ImageNet, we show that EPI quickly identifies the early training epochs suitable for pruning, offering the same efficacy as an otherwise “oracle” grid search that scans over epochs and requires orders of magnitude more compute. Our method yields a 1.4% top-1 accuracy boost over state-of-the-art pruning counterparts and cuts GPU training cost by 2.4×, hence offering a new efficiency-accuracy frontier for network pruning during training.
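The abstract does not spell out how EPI is computed, but the mechanism lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch sketch, not the paper's exact formulation: each epoch, the neurons that would survive pruning form a keep-mask, and pruning is triggered once consecutive masks agree closely. The IoU similarity measure and the threshold/patience hyperparameters are assumptions chosen for illustration.

import torch

def subnetwork_mask(scores: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Keep-mask over neurons: True for the top (1 - prune_ratio) fraction by importance."""
    k = max(1, int(scores.numel() * (1.0 - prune_ratio)))
    threshold = scores.flatten().topk(k).values.min()
    return scores >= threshold

def mask_iou(a: torch.Tensor, b: torch.Tensor) -> float:
    """Intersection-over-union of two boolean keep-masks (assumed similarity measure)."""
    inter = (a & b).sum().item()
    union = (a | b).sum().item()
    return inter / max(union, 1)

class EarlyPruningIndicator:
    """Signals when the dominant sub-network has stabilized enough to prune.

    threshold and patience are illustrative hyperparameters, not values from the paper.
    """
    def __init__(self, prune_ratio=0.5, threshold=0.95, patience=2):
        self.prune_ratio = prune_ratio
        self.threshold = threshold
        self.patience = patience
        self._prev_mask = None
        self._stable_epochs = 0

    def should_prune(self, importance_scores: torch.Tensor) -> bool:
        # Derive the current dominant sub-network from the importance ranking.
        mask = subnetwork_mask(importance_scores, self.prune_ratio)
        if self._prev_mask is not None:
            sim = mask_iou(mask, self._prev_mask)
            # Count consecutive epochs with near-identical sub-network structure.
            self._stable_epochs = self._stable_epochs + 1 if sim >= self.threshold else 0
        self._prev_mask = mask
        return self._stable_epochs >= self.patience

In a training loop, one would call should_prune once per epoch with the current per-neuron importance scores, then prune to the dominant sub-network and continue sparse training the first time it returns True.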

Related research

06/19/2021 · Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Works on lottery ticket hypothesis (LTH) and single-shot network pruning...

07/08/2022 · Pruning Early Exit Networks
Deep learning models that perform well often have high computational cos...

01/07/2021 · Max-Affine Spline Insights Into Deep Network Pruning
In this paper, we study the importance of pruning in Deep Networks (DNs)...

01/25/2023 · When Layers Play the Lottery, all Tickets Win at Initialization
Pruning is a standard technique for reducing the computational cost of d...

06/21/2022 · Winning the Lottery Ahead of Time: Efficient Early Network Pruning
Pruning, the task of sparsifying deep neural networks, received increasi...

06/16/2020 · Progressive Skeletonization: Trimming more fat from a network at initialization
Recent studies have shown that skeletonization (pruning parameters) of n...

06/22/2020 · Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning
As deep neural networks are growing in size and being increasingly deplo...
