AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

04/14/2023
by Abhisek Kundu, et al.

Sparse training is emerging as a promising avenue for reducing the computational cost of training neural networks. Several recent studies have proposed pruning methods using learnable thresholds to efficiently explore the non-uniform distribution of sparsity inherent within the models. In this paper, we propose Gradient Annealing (GA), where gradients of masked weights are scaled down in a non-linear manner. GA provides an elegant trade-off between sparsity and accuracy without the need for additional sparsity-inducing regularization. We integrated GA with the latest learnable pruning methods to create an automated sparse training algorithm called AutoSparse, which achieves better accuracy and/or training/inference FLOPS reduction than existing learnable pruning methods for sparse ResNet50 and MobileNetV1 on ImageNet-1K: AutoSparse achieves a (2x, 7x) reduction in (training, inference) FLOPS for ResNet50 on ImageNet at 80% sparsity. Finally, AutoSparse outperforms the sparse-to-sparse SotA method MEST (uniform sparsity) for 80% sparse MobileNetV1 with similar accuracy, where MEST uses 12% more training FLOPS and 50% more inference FLOPS.
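The abstract describes Gradient Annealing only at a high level (gradients of masked weights are scaled down non-linearly so pruned weights can still recover). As a rough illustration, the sketch below shows how such an annealed straight-through gradient could be wired into a threshold-based pruning step in PyTorch. The class name AnnealedPruneFn, the cosine decay schedule, the fixed threshold, and all constants are assumptions made here for illustration, not the paper's actual implementation (in the paper the threshold itself is learnable).

    import math
    import torch

    class AnnealedPruneFn(torch.autograd.Function):
        """Threshold pruning whose backward pass scales the gradients of
        masked (pruned) weights by a factor alpha; annealing alpha toward 0
        over training approximates the Gradient Annealing idea (assumed sketch)."""

        @staticmethod
        def forward(ctx, weight, threshold, alpha):
            # Keep weights whose magnitude exceeds the threshold, zero the rest.
            mask = (weight.abs() > threshold).to(weight.dtype)
            ctx.save_for_backward(mask)
            ctx.alpha = alpha
            return weight * mask

        @staticmethod
        def backward(ctx, grad_out):
            (mask,) = ctx.saved_tensors
            # Surviving weights receive the full gradient; masked weights receive
            # an alpha-scaled gradient, so they can still be revived early on.
            grad_weight = grad_out * (mask + ctx.alpha * (1.0 - mask))
            return grad_weight, None, None

    def annealed_alpha(step, total_steps, alpha0=1.0):
        # Hypothetical non-linear (cosine) decay of alpha from alpha0 to 0.
        return alpha0 * 0.5 * (1.0 + math.cos(math.pi * min(step / total_steps, 1.0)))

    # Usage: prune a weight tensor with a fixed threshold (learnable in the paper).
    w = torch.randn(256, 256, requires_grad=True)
    w_sparse = AnnealedPruneFn.apply(w, 0.05, annealed_alpha(step=100, total_steps=1000))
    w_sparse.sum().backward()   # w.grad is full for kept weights, alpha-scaled for pruned ones

In this sketch, alpha = 1 reproduces a plain straight-through estimator and alpha = 0 freezes pruned weights entirely; annealing between the two is what provides the sparsity/accuracy trade-off the abstract refers to.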


research · 02/08/2020
Soft Threshold Weight Reparameterization for Learnable Sparsity
Sparsity in Deep Neural Networks (DNNs) is studied extensively with the ...

research · 10/01/2021
Powerpropagation: A sparsity inducing weight reparameterisation
The training of sparse neural networks is becoming an increasingly impor...

research · 09/15/2022
Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask
Sparsity has become one of the promising methods to compress and acceler...

research · 12/02/2022
Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training?
Turning the weights to zero when training a neural network helps in redu...

research · 08/24/2020
Hierarchical Adaptive Lasso: Learning Sparse Neural Networks with Shrinkage via Single Stage Training
Deep neural networks achieve state-of-the-art performance in a variety o...

research · 06/07/2021
Top-KAST: Top-K Always Sparse Training
Sparse neural networks are becoming increasingly important as the field ...

research · 07/14/2023
Learning Sparse Neural Networks with Identity Layers
The sparsity of Deep Neural Networks is well investigated to maximize th...
