Rapid Structural Pruning of Neural Networks with Set-based Task-Adaptive Meta-Pruning

06/22/2020
by   Minyoung Song, et al.

As deep neural networks grow in size and are increasingly deployed to resource-limited devices, there has been a recent surge of interest in network pruning methods, which aim to remove less important weights or activations from a given network. A common limitation of most existing pruning techniques is that they require the network to be pretrained at least once before pruning, so the reduction in memory and computation is realized only at inference time. However, reducing the training cost of neural networks through rapid structural pruning can be beneficial, either to minimize the monetary cost of cloud computing or to enable on-device learning on resource-limited hardware. Recently introduced random-weight pruning approaches eliminate the need for pretraining, but they often achieve suboptimal performance compared to conventional pruning techniques and do not allow for faster training, since they perform unstructured pruning. To overcome these limitations, we propose Set-based Task-Adaptive Meta Pruning (STAMP), which task-adaptively prunes a network pretrained on a large reference dataset by generating a pruning mask for it as a function of the target dataset. To ensure maximum performance on the target task, we meta-learn the mask generator over different subsets of the reference dataset, such that it generalizes well to unseen datasets within a few gradient steps of training. We validate STAMP against recent advanced pruning methods on benchmark datasets, where it not only obtains significantly improved compression rates over the baselines at similar accuracy, but also trains orders of magnitude faster.
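
The abstract describes a mask generator that maps a small set of target-task examples to a structural (channel-wise) pruning mask, which is then meta-trained over subsets of a reference dataset. The PyTorch sketch below illustrates only the set-conditioned mask generation idea in minimal form; all module names, sizes, and the soft sigmoid masking scheme are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a set-conditioned channel-mask generator in the spirit of
# STAMP. Architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn


class SetMaskGenerator(nn.Module):
    """Maps a set of target-task examples to per-channel keep probabilities."""

    def __init__(self, in_dim, num_channels, hidden=128):
        super().__init__()
        # Per-example encoder, applied independently to each set element.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Head that turns the pooled set embedding into channel logits.
        self.head = nn.Linear(hidden, num_channels)

    def forward(self, task_set):
        # task_set: (set_size, in_dim), e.g. flattened target-task examples.
        h = self.encoder(task_set).mean(dim=0)   # permutation-invariant pooling
        return torch.sigmoid(self.head(h))       # (num_channels,) keep probabilities


class MaskedConvBlock(nn.Module):
    """Conv layer whose output channels are gated by a generated mask."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x, mask):
        # Broadcast the (out_ch,) mask over spatial dimensions; channels whose
        # mask is driven to zero can be physically removed, giving the
        # structural speed-ups the paper targets.
        return self.conv(x) * mask.view(1, -1, 1, 1)


if __name__ == "__main__":
    gen = SetMaskGenerator(in_dim=3 * 32 * 32, num_channels=64)
    block = MaskedConvBlock(in_ch=3, out_ch=64)

    task_set = torch.randn(16, 3 * 32 * 32)      # a small sample of the target dataset
    mask = gen(task_set)                          # dataset-conditioned channel mask
    out = block(torch.randn(8, 3, 32, 32), mask)
    print(out.shape, "kept channels:", int((mask > 0.5).sum()))
```

In the setting described by the abstract, such a generator would be meta-trained over many sampled subsets of the reference dataset so that, on a new target dataset, a usable mask emerges within a few gradient steps; that outer meta-training loop is omitted from the sketch above.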
