Pruning from Scratch

09/27/2019
by Yulong Wang, et al.

Network pruning is an important research field aiming at reducing the computational cost of neural networks. Conventional approaches follow a fixed paradigm: first train a large, redundant network, then determine which units (e.g., channels) are less important and can therefore be removed. In this work, we find that pre-training an over-parameterized model is not necessary for obtaining the target pruned structure; in fact, a fully trained over-parameterized model reduces the search space for the pruned structure. We empirically show that more diverse pruned structures can be obtained directly from randomly initialized weights, including potential models with better performance. We therefore propose a novel network pruning pipeline that allows pruning from scratch. In experiments compressing classification models on the CIFAR10 and ImageNet datasets, our approach not only greatly reduces the pre-training burden of traditional pruning methods but also achieves similar or even higher accuracy under the same computation budgets. Our results encourage the community to rethink the effectiveness of existing techniques for network pruning.
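As a rough illustration of the idea, the sketch below searches a channel structure directly on randomly initialized, frozen weights by learning one scalar gate per output channel under an L1 sparsity penalty; channels whose gates collapse toward zero are dropped, and the surviving structure would then be trained from scratch. This is a minimal sketch only: the gate parameterization, penalty weight, threshold, and the `GatedConv` and `search_structure` names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of gate-based channel search on randomly initialized,
# frozen weights. Gate form, penalty, and threshold are assumptions,
# not the paper's exact method.
import torch
import torch.nn as nn

class GatedConv(nn.Module):
    """Conv layer whose output channels are scaled by learnable gates."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.gate = nn.Parameter(torch.ones(out_ch))  # one gate per channel

    def forward(self, x):
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

def search_structure(model, loader, sparsity=1e-3, steps=500):
    """Optimize only the gates; the random weights stay frozen.

    `model` is assumed to be a classifier built from GatedConv blocks
    that maps a batch of images to logits.
    """
    for p in model.parameters():
        p.requires_grad_(False)
    gates = [m.gate for m in model.modules() if isinstance(m, GatedConv)]
    for g in gates:
        g.requires_grad_(True)
    opt = torch.optim.Adam(gates, lr=0.01)
    crit = nn.CrossEntropyLoss()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        # Task loss plus an L1 penalty pushing unneeded gates toward zero.
        loss = crit(model(x), y) + sparsity * sum(g.abs().sum() for g in gates)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Channels whose gates stay near zero are pruned; the survivors
    # define the target structure, which is then trained from scratch.
    return [(g.abs() > 0.05).sum().item() for g in gates]
```

In the full pipeline a computation budget would also constrain how many channels survive in each layer; in this sketch a fixed gate threshold stands in for that budget-aware step.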


Related Research

10/11/2018 · Rethinking the Value of Network Pruning
Network pruning is widely used for reducing the heavy computational cost...

01/24/2019 · Really should we pruning after model be totally trained? Pruning based on a small amount of training
Pre-training of models in pruning algorithms plays an important role in ...

01/26/2019 · PruneTrain: Gradual Structured Pruning from Scratch for Faster Neural Network Training
Model pruning is a popular mechanism to make a network more efficient fo...

10/10/2018 · Pruning neural networks: is it time to nip it in the bud?
Pruning is a popular technique for compressing a neural network: a large...

09/25/2010 · Pattern Classification using Simplified Neural Networks
In recent years, many neural network models have been proposed for patte...

12/02/2020 · An Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution
We propose an efficient once-for-all budgeted pruning framework (OFARPru...

02/19/2020 · Knapsack Pruning with Inner Distillation
Neural network pruning reduces the computational cost of an over-paramet...
