Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey

05/17/2022
by Paul Wimmer, et al.

State-of-the-art deep learning models have parameter counts that reach into the billions. Training, storing and transferring such models is energy and time consuming, and thus costly, and a large part of these costs is incurred during training. Model compression lowers storage and transfer costs, and can further make training more efficient by decreasing the number of computations in the forward and/or backward pass. Compressing networks during training while maintaining high performance is therefore an important research topic. This work surveys methods that reduce the number of trained weights in deep learning models throughout training. Most of the presented methods set network parameters to zero, which is called pruning; the pruning approaches are categorized into pruning at initialization, lottery tickets, and dynamic sparse training. Moreover, we discuss methods that freeze parts of a network at its random initialization. Freezing weights reduces the number of trainable parameters, which shrinks both the number of gradient computations and the dimensionality of the model's optimization space. In this survey we first propose dimensionality reduced training as an underlying mathematical model that covers both pruning and freezing during training. Afterwards, we present and discuss different dimensionality reduced training methods.
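To make the two mechanisms concrete, the sketch below (a minimal PyTorch illustration, not the paper's reference implementation; the layer size, the 90% sparsity level, and the magnitude-based mask criterion are arbitrary assumptions) contrasts pruning, where a fixed binary mask keeps removed weights at exactly zero, with freezing, where weights stay at their random initialization and receive no gradients. In both cases only a reduced set of parameters is trained, so the dimensionality of the optimization space shrinks.

```python
# Minimal illustrative sketch: pruning via a fixed binary mask vs. freezing
# weights at their random initialization. Assumes PyTorch; sizes, the 90%
# sparsity level and the magnitude criterion are arbitrary illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

# --- Pruning: train only the weights that survive a binary mask. ---
pruned = nn.Linear(256, 256)
sparsity = 0.9
scores = pruned.weight.detach().abs().flatten()
threshold = scores.kthvalue(int(sparsity * scores.numel())).values
mask = (pruned.weight.detach().abs() > threshold).float()

def enforce_mask():
    # Keep pruned entries exactly zero (re-applied after every optimizer step).
    with torch.no_grad():
        pruned.weight.mul_(mask)

# --- Freezing: weights stay at their random init and get no gradients. ---
frozen = nn.Linear(256, 256)
frozen.weight.requires_grad_(False)   # weight is never updated
# (the bias still trains, so a small trainable part remains)

# One toy training step: only the reduced parameter set is updated.
trainable = [p for p in list(pruned.parameters()) + list(frozen.parameters())
             if p.requires_grad]
opt = torch.optim.SGD(trainable, lr=0.1)

x = torch.randn(32, 256)
enforce_mask()
loss = frozen(pruned(x)).pow(2).mean()   # toy objective for illustration
loss.backward()
opt.step()
enforce_mask()

print("pruned weight sparsity:", (pruned.weight == 0).float().mean().item())
```

In the survey's taxonomy, the mask above could be fixed before training (pruning at initialization), found by retraining a subnetwork of a converged model (lottery tickets), or updated while training proceeds (dynamic sparse training); the freezing example corresponds to methods that keep part of the network at its random initialization.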
