Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning

04/08/2023
by Shanglin Zhou, et al.

Network pruning is a widely used technique to reduce the computation cost and model size of deep neural networks. However, the typical three-stage pipeline (training, pruning, and retraining) significantly increases the overall training time. In this paper, we develop a systematic weight-pruning optimization approach based on Surrogate Lagrangian Relaxation (SLR), tailored to overcome the difficulties caused by the discrete nature of the weight-pruning problem. We prove that our method ensures fast convergence for the model-compression problem, and the convergence of SLR is further accelerated by quadratic penalties. Model parameters obtained by SLR during the training phase are much closer to their optimal values than those obtained by other state-of-the-art methods. We evaluate our method on image-classification tasks on CIFAR-10 and ImageNet with state-of-the-art models, including MLP-Mixer, Swin Transformer, VGG-16, ResNet-18, ResNet-50, ResNet-110, and MobileNetV2, and on object-detection and segmentation tasks on the COCO and KITTI benchmarks and the TuSimple lane-detection dataset with a variety of models. Experimental results demonstrate that our SLR-based weight-pruning approach achieves a higher compression rate than state-of-the-art methods under the same accuracy requirement, and higher accuracy under the same compression-rate requirement. On classification tasks, our SLR approach converges to the desired accuracy 3× faster on both datasets; on object-detection and segmentation tasks, it converges 2× faster. Furthermore, SLR achieves high model accuracy even at the hard-pruning stage without retraining, reducing the traditional three-stage pruning pipeline to a two-stage process. Given a limited budget of retraining epochs, our approach quickly recovers the model's accuracy.
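To make the formulation behind the abstract concrete, the sketch below illustrates in PyTorch the kind of constrained problem being relaxed: a task loss minimized subject to a per-layer sparsity constraint, with an auxiliary sparse variable, Lagrangian multipliers, and a quadratic penalty coupling the two. The names (`project_to_sparse`, `slr_step`, `penalty_loss`, `rho`, `step_size`) and the multiplier update rule are illustrative assumptions, not the paper's exact SLR algorithm or its convergence conditions.

```python
# Minimal sketch of an SLR/ADMM-style splitting for weight pruning:
#   minimize  loss(W)  subject to  W in S  (at most a fixed fraction of nonzeros),
# handled via an auxiliary variable Z in S, multipliers lam, and a quadratic penalty.
# All names and the stepsize rule are assumptions for illustration only.
import torch

def project_to_sparse(weight: torch.Tensor, prune_ratio: float) -> torch.Tensor:
    """Euclidean projection onto the sparsity constraint: zero out the
    smallest-magnitude `prune_ratio` fraction of entries."""
    num_prune = int(weight.numel() * prune_ratio)
    if num_prune == 0:
        return weight.clone()
    # k-th smallest absolute value defines the pruning threshold.
    threshold = weight.abs().flatten().kthvalue(num_prune).values
    mask = (weight.abs() > threshold).to(weight.dtype)
    return weight * mask

def penalty_loss(weight, z, lam, rho):
    """Augmented-Lagrangian terms added to the task loss when updating W:
    <lam, W - Z> + (rho / 2) * ||W - Z||^2."""
    diff = weight - z
    return (lam * diff).sum() + 0.5 * rho * (diff ** 2).sum()

def slr_step(weight, lam, rho, prune_ratio, step_size):
    """One surrogate update of the auxiliary variable Z and the multipliers
    for the relaxed constraint W = Z (with Z restricted to the sparse set)."""
    with torch.no_grad():
        # Z-update: project the multiplier-shifted weights onto the sparse set.
        z = project_to_sparse(weight + lam / rho, prune_ratio)
        # Surrogate (sub)gradient step on the multipliers along the violation W - Z.
        lam = lam + step_size * (weight - z)
    return z, lam
```

In such a scheme, the dense weights W would be updated by minimizing the task loss plus `penalty_loss` with SGD, while `slr_step` periodically refreshes Z and the multipliers; hard pruning then amounts to replacing W with its projection, which is why little or no retraining is needed once the penalty has pulled W close to the sparse set.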

Related research

12/18/2020 · A Surrogate Lagrangian Relaxation-based Model Compression for Deep Neural Networks
12/02/2020 · An Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution
02/23/2020 · Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks
10/17/2018 · Progressive Weight Pruning of Deep Neural Networks using ADMM
02/05/2021 · GNN-RL Compression: Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning
04/30/2021 · Post-training deep neural network pruning via layer-wise calibration
11/20/2020 · Continuous Pruning of Deep Convolutional Networks Using Selective Weight Decay
