Layer Freezing & Data Sieving: Missing Pieces of a Generic Framework for Sparse Training

09/22/2022
by Geng Yuan, et al.

Recently, sparse training has emerged as a promising paradigm for efficient deep learning on edge devices. Current research mainly devotes its efforts to reducing training costs by further increasing model sparsity. However, increasing sparsity is not always ideal, since it inevitably introduces severe accuracy degradation at extremely high sparsity levels. This paper explores other possible directions to effectively and efficiently reduce sparse training costs while preserving accuracy. To this end, we investigate two techniques, namely layer freezing and data sieving. First, the layer freezing approach has shown its success in dense model training and fine-tuning, yet it has never been adopted in the sparse training domain. However, the unique characteristics of sparse training may hinder the incorporation of layer freezing techniques. Therefore, we analyze the feasibility and potential of using the layer freezing technique in sparse training and find that it can save considerable training costs. Second, we propose a data sieving method for dataset-efficient training, which further reduces training costs by ensuring that only a partial dataset is used throughout the entire training process. We show that both techniques can be well incorporated into the sparse training algorithm to form a generic framework, which we dub SpFDE. Our extensive experiments demonstrate that SpFDE can significantly reduce training costs while preserving accuracy along three dimensions: weight sparsity, layer freezing, and data sieving.
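
The abstract describes layer freezing and data sieving only at a high level. The sketch below is a minimal PyTorch-style illustration of how the two ideas could wrap an ordinary training loop; it is not the authors' SpFDE implementation. The freeze-a-layer-every-few-epochs schedule, the loss-based sieving criterion, the 50% keep ratio, and the helper names (freeze_front_layers, sieve_dataset) are all assumptions made for illustration, and the weight-sparsity component of sparse training is omitted for brevity.

```python
# Minimal sketch (not the authors' SpFDE algorithm): progressive layer freezing
# plus data sieving around a standard PyTorch training loop. All schedules,
# thresholds, and helper names below are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# Toy model with several front-to-back "layers" that can be frozen in order.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
full_set = TensorDataset(torch.randn(1024, 32), torch.randint(0, 10, (1024,)))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def freeze_front_layers(model, num_frozen):
    """Layer freezing: disable gradients for the first `num_frozen` children,
    which skips both their backward computation and their weight updates."""
    for i, child in enumerate(model.children()):
        requires_grad = i >= num_frozen
        for p in child.parameters():
            p.requires_grad_(requires_grad)

def sieve_dataset(model, dataset, keep_ratio=0.5):
    """Data sieving (illustrative criterion): keep only the `keep_ratio`
    fraction of samples with the highest loss, i.e. the least-learned ones."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in DataLoader(dataset, batch_size=256):
            losses.append(nn.functional.cross_entropy(model(x), y, reduction="none"))
    losses = torch.cat(losses)
    keep = torch.topk(losses, int(keep_ratio * len(dataset))).indices.tolist()
    model.train()
    return Subset(dataset, keep)

train_set = full_set
for epoch in range(10):
    # Assumed schedule: every 3 epochs, freeze one more front layer and
    # re-sieve the training subset from the full dataset.
    if epoch > 0 and epoch % 3 == 0:
        freeze_front_layers(model, num_frozen=epoch // 3)
        train_set = sieve_dataset(model, full_set, keep_ratio=0.5)
    for x, y in DataLoader(train_set, batch_size=128, shuffle=True):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

In this sketch, freezing saves cost by cutting backward passes and updates for the frozen front layers, while sieving shrinks the per-epoch data volume; because the subset is rebuilt from the full dataset at each milestone, previously discarded samples can re-enter if the sieving criterion later favors them.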
