Learned Threshold Pruning

02/28/2020
by Kambiz Azarian, et al.

This paper presents a novel differentiable method for unstructured weight pruning of deep neural networks. Our learned-threshold pruning (LTP) method enjoys a number of important advantages. First, it learns per-layer thresholds via gradient descent, unlike conventional methods where thresholds must be specified by hand as inputs. Making thresholds trainable also makes LTP computationally efficient and hence scalable to deeper networks; for example, LTP takes fewer than 30 epochs to prune most networks on ImageNet. This is in contrast to methods that search for per-layer thresholds via a computationally intensive iterative prune-and-fine-tune process. Additionally, with a novel differentiable L_0 regularization, LTP operates effectively on architectures with batch normalization. This matters because L_1 and L_2 penalties lose their regularizing effect in such networks: batch normalization makes a layer's output invariant to the scale of its weights, so shrinking weight magnitudes no longer changes the function the network computes. Finally, LTP generates a trail of progressively sparser networks from which the desired pruned network can be picked based on sparsity and performance requirements. These features allow LTP to achieve state-of-the-art compression rates on ImageNet networks such as AlexNet (26.4× compression with 79.1% Top-5 accuracy) and ResNet50 (9.1× compression with 92.0% Top-5 accuracy). We also show that LTP effectively prunes newer architectures such as EfficientNet, MobileNetV2, and MixNet.
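To make the core idea concrete, the PyTorch sketch below shows one way a per-layer threshold can be learned by gradient descent: a sigmoid acts as a differentiable surrogate for the hard mask 1[w^2 > tau], and summing the same sigmoid over all weights gives a differentiable L_0 surrogate that can be penalized in the loss. This is a minimal illustration under stated assumptions; the class, the temperature hyperparameter, and all names and initial values are illustrative, not the authors' exact formulation.

import torch
import torch.nn as nn

class SoftThresholdPruning(nn.Module):
    """Minimal sketch of learned-threshold soft pruning for one layer.

    A per-layer threshold tau is a trainable parameter. Weights whose
    squared magnitude falls below tau are softly zeroed via a sigmoid,
    keeping the operation differentiable so tau can be learned jointly
    with the weights. Illustrative assumption, not the paper's exact code.
    """

    def __init__(self, weight_shape, init_tau=1e-3, temperature=1e-4):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(weight_shape))
        # Learned per-layer pruning threshold (assumed initial value).
        self.tau = nn.Parameter(torch.tensor(init_tau))
        # Temperature controlling how sharp the soft mask is (assumed).
        self.T = temperature

    def soft_mask(self):
        # Differentiable surrogate for the hard mask 1[w^2 > tau].
        return torch.sigmoid((self.weight ** 2 - self.tau) / self.T)

    def pruned_weight(self):
        # Softly pruned weights used during training.
        return self.weight * self.soft_mask()

    def l0_penalty(self):
        # Differentiable L_0 surrogate: expected count of surviving weights.
        return self.soft_mask().sum()

# A training loss would combine the task loss with the sparsity penalty, e.g.:
#   loss = task_loss + lambda_l0 * sum(layer.l0_penalty() for layer in layers)
# At inference time a hard threshold (w^2 > tau) would replace the soft mask.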

Related research

Differentiable Mask Pruning for Neural Networks (09/10/2019)
DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation (04/05/2020)
Differentiable Pruning Method for Neural Networks (04/24/2019)
Automated Pruning for Deep Neural Network Compression (12/05/2017)
DiffRate: Differentiable Compression Rate for Efficient Vision Transformers (05/29/2023)
Soft Threshold Weight Reparameterization for Learnable Sparsity (02/08/2020)
Operation-Aware Soft Channel Pruning using Differentiable Masks (07/08/2020)
