Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning

12/24/2022
by Dan Liu et al.

Most existing pruning methods are resource-intensive, requiring retraining or fine-tuning of the pruned models to recover accuracy. We propose a retraining-free pruning method based on hyperspherical learning and a loss penalty term. The proposed penalty term pushes some of the model weights far from zero, while the remaining weights are pushed near zero and can be safely pruned with no retraining and a negligible accuracy drop. In addition, our method can instantly recover the accuracy of a pruned model by replacing the pruned values with their mean value. Our method obtains state-of-the-art results in retraining-free pruning and is evaluated on ResNet-18/50 and MobileNetV2 with the ImageNet dataset. For example, one can easily obtain a 50% pruned ResNet-18 model with only a 0.47% accuracy drop. With fine-tuning, the experimental results show that our method significantly boosts the accuracy of pruned models compared with existing works: the accuracy of a 70% pruned MobileNetV2 model (excluding the first convolutional layer) drops only 3.5%, far less than the 7% ∼ 10% accuracy drop of conventional methods.
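The core idea of the abstract — prune near-zero weights, then optionally "recover" accuracy by replacing the pruned entries with their mean — can be sketched in a few lines. The sketch below is a simplified illustration under assumptions of my own: it uses plain magnitude pruning on a NumPy array, and the function name `prune_with_recovery` and the `sparsity` parameter are hypothetical, not from the paper (which relies on hyperspherical learning and a loss penalty to shape the weight distribution before pruning).

```python
import numpy as np

def prune_with_recovery(weights, sparsity=0.5):
    """Illustrative sketch (not the paper's exact method):
    zero out the smallest-magnitude fraction of weights, then build a
    'recovered' copy where pruned entries hold the mean of the pruned values.
    """
    # Threshold at the `sparsity` quantile of absolute weight values.
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) > threshold

    # Standard magnitude pruning: small weights set to zero.
    pruned = weights * mask

    # Recovery step as described in the abstract: replace pruned values
    # with their mean instead of zero.
    recovered = pruned.copy()
    pruned_vals = weights[~mask]
    if pruned_vals.size > 0:
        recovered[~mask] = pruned_vals.mean()
    return pruned, recovered
```

If the penalty term has successfully pushed the to-be-pruned weights into a tight cluster near zero, their mean is a cheap one-number summary, which is why this recovery needs no retraining.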
