SR-init: An Interpretable Layer Pruning Method

03/14/2023
by Hui Tang et al.

Despite the popularization of deep neural networks (DNNs) in many fields, it is still challenging to deploy state-of-the-art models to resource-constrained devices due to high computational overhead. Model pruning provides a feasible solution to this challenge; however, the interpretability of existing pruning criteria is often overlooked. To counter this issue, we propose a novel layer pruning method built on stochastic re-initialization. Our SR-init method is inspired by the discovery that the accuracy drop caused by stochastically re-initializing the parameters of a layer differs from layer to layer. On the basis of this observation, we derive a layer pruning criterion: layers that are insensitive to stochastic re-initialization (i.e., show a low accuracy drop) contribute less to the model and can be pruned with acceptable loss. We then experimentally verify the interpretability of SR-init via feature visualization. The visual explanation demonstrates that SR-init is theoretically feasible, so we compare it with state-of-the-art methods to further evaluate its practicality. For ResNet56 on CIFAR-10 and CIFAR-100, SR-init achieves a large reduction in parameters (63.98%) with a negligible drop in top-1 accuracy (-0.56%), and in a further experiment it achieves a 15.59% reduction with only a 0.6% drop in top-1 accuracy. Our code is available at https://github.com/huitang-zjut/SRinit.
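The criterion described above lends itself to a simple probing loop: re-initialize one candidate layer at a time, measure the resulting accuracy drop on held-out data, and flag the layers whose drop is small as prunable. Below is a minimal PyTorch sketch of that idea, not the authors' released implementation (see the repository linked above); the `blocks` mapping, the `evaluate` helper, and the `threshold` value are illustrative assumptions.

```python
# Minimal sketch of an SR-init-style sensitivity probe (assumptions noted below).
import copy
import torch
import torch.nn as nn


@torch.no_grad()
def evaluate(model: nn.Module, loader, device: str = "cpu") -> float:
    """Top-1 accuracy of `model` on `loader` (assumed helper, not from the paper)."""
    model.eval().to(device)
    correct, total = 0, 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        preds = model(inputs).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.numel()
    return correct / max(total, 1)


def sr_init_sensitivity(model: nn.Module, blocks: dict, loader, device: str = "cpu") -> dict:
    """For each named block, stochastically re-initialize its parameters on a copy
    of the model and record the drop in top-1 accuracy relative to the baseline.
    `blocks` maps a label to a dotted submodule path inside `model`; how candidate
    layers are addressed is an assumption of this sketch."""
    baseline = evaluate(model, loader, device)
    drops = {}
    for name, path in blocks.items():
        probe = copy.deepcopy(model)
        module = probe.get_submodule(path)
        # Stochastic re-initialization: reset every parameterized layer in the block.
        for m in module.modules():
            if hasattr(m, "reset_parameters"):
                m.reset_parameters()
        drops[name] = baseline - evaluate(probe, loader, device)
    return drops


def select_prunable(drops: dict, threshold: float = 0.02) -> list:
    """Blocks with an accuracy drop below `threshold` are treated as insensitive
    and become pruning candidates; the threshold is an illustrative hyperparameter."""
    return [name for name, drop in sorted(drops.items(), key=lambda kv: kv[1]) if drop < threshold]
```

For a ResNet-style model, `blocks` might map names such as "layer3.2" to the corresponding residual blocks; the least sensitive blocks returned by `select_prunable` would then be removed before fine-tuning.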

Related research:

- DropPruning for Model Compression (12/05/2018)
- Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks (08/21/2023)
- Pruning Neural Networks Without Any Data by Iteratively Conserving Synaptic Flow (06/09/2020)
- Holistic Filter Pruning for Efficient Deep Neural Networks (09/17/2020)
- Importance Estimation for Neural Network Pruning (06/25/2019)
- Network Pruning via Feature Shift Minimization (07/06/2022)
- Efficient Finite Initialization for Tensorized Neural Networks (09/11/2023)
