Importance Estimation for Neural Network Pruning

06/25/2019
by Pavlo Molchanov, et al.

Structural pruning of neural network parameters reduces computation, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first and second-order Taylor expansions to approximate a filter's contribution. Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods leads to an improvement over the state of the art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPs reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet. Code is available at https://github.com/NVlabs/Taylor_pruning.
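As a rough illustration only (not the authors' released implementation, and with the grouping of weights into filters chosen here as an assumption), the first-order Taylor criterion from the abstract can be sketched in PyTorch as scoring each convolutional filter by the squared sum of gradient-times-weight over its parameters, after a normal backward pass:

```python
# Sketch of a first-order Taylor importance score for conv filters.
# Assumes loss.backward() has already populated .grad on the weights.
import torch
import torch.nn as nn

def filter_importance_first_order(model: nn.Module) -> dict:
    """Return {(layer_name, filter_index): importance} for every Conv2d layer."""
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            w = module.weight        # shape: (out_channels, in_channels, kH, kW)
            g = module.weight.grad
            # First-order Taylor estimate of the loss change from removing an
            # output filter: (sum over that filter's weights of grad * weight)^2.
            per_filter = (g * w).flatten(1).sum(dim=1).pow(2)
            for idx, s in enumerate(per_filter.tolist()):
                scores[(name, idx)] = s
    return scores

# Usage sketch: accumulate scores over several minibatches, then iteratively
# remove the globally lowest-scoring filters and fine-tune, as in the paper.
# loss = criterion(model(x), y); loss.backward()
# scores = filter_importance_first_order(model)
```

Because the score is a squared scalar per filter, it is comparable across layers without per-layer sensitivity analysis, which is the property the abstract highlights.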
