Should we really prune only after the model is fully trained? Pruning based on a small amount of training

01/24/2019
by Li Yue, et al.

Pre-training plays an important role in the decision-making of pruning algorithms, yet we find that extensive pre-training is not necessary for pruning. Based on this observation, we propose a pruning algorithm, Incremental Pruning based on Less Training (IPLT). Under the same simple pruning strategy, IPLT achieves a compression effect competitive with traditional pruning algorithms that rely on a large amount of pre-training. While preserving accuracy, IPLT achieves 8x-9x compression for VGG-19 on CIFAR-10 and requires only a few epochs of pre-training. For VGG-19 on CIFAR-10 it delivers not only 10x test-time acceleration but also about 10x training acceleration. Current research focuses mainly on compressing and accelerating models at the deployment stage, whereas compression and acceleration during the training stage have received little attention. We propose a new pruning algorithm that compresses and accelerates the model during training itself, and considering how much pre-training a pruning algorithm actually requires is itself a novel question. Our results suggest that extensive pre-training may not be necessary for pruning algorithms.
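The abstract does not spell out IPLT's pruning criterion or schedule, so the following is only a minimal sketch of the general idea it describes: start pruning after a brief warm-up rather than after full convergence, and tighten the pruning ratio incrementally as training proceeds. It assumes simple L1-magnitude filter pruning; the network, data, hyperparameters, and helper names (`SmallConvNet`, `prune_filters_by_magnitude`, `train_with_incremental_pruning`) are illustrative stand-ins for VGG-19 on CIFAR-10, not the paper's implementation.

```python
# Hedged sketch of an IPLT-style flow: prune during training after a short
# warm-up, increasing the pruning ratio step by step. The exact criterion and
# schedule are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallConvNet(nn.Module):
    """Toy CNN standing in for VGG-19."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 16x16 -> 8x8
        return self.fc(x.flatten(1))


def prune_filters_by_magnitude(conv: nn.Conv2d, ratio: float) -> None:
    """Zero out the lowest-L1-norm output filters (soft pruning via masking)."""
    with torch.no_grad():
        norms = conv.weight.abs().sum(dim=(1, 2, 3))  # one norm per filter
        k = int(ratio * norms.numel())
        if k == 0:
            return
        idx = norms.argsort()[:k]  # weakest filters
        conv.weight[idx] = 0.0
        if conv.bias is not None:
            conv.bias[idx] = 0.0


def train_with_incremental_pruning(model, loader, warmup_epochs=2,
                                   total_epochs=10, final_ratio=0.5):
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for epoch in range(total_epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        # After only a short warm-up, start pruning and raise the ratio
        # gradually instead of waiting for the model to fully converge.
        if epoch >= warmup_epochs:
            frac = (epoch - warmup_epochs + 1) / (total_epochs - warmup_epochs)
            for m in model.modules():
                if isinstance(m, nn.Conv2d):
                    prune_filters_by_magnitude(m, final_ratio * frac)


if __name__ == "__main__":
    # Synthetic CIFAR-10-shaped batches just to keep the sketch runnable.
    data = [(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
            for _ in range(4)]
    train_with_incremental_pruning(SmallConvNet(), data)
```

A full implementation would keep explicit masks so pruned filters stay zero under momentum updates and would eventually remove them from the architecture to realize the reported test-time and training speedups; here the masking is simply re-applied each epoch to keep the sketch short.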

