LayerPipe: Accelerating Deep Neural Network Training by Intra-Layer and Inter-Layer Gradient Pipelining and Multiprocessor Scheduling

08/14/2021
by   Nanda K. Unnikrishnan, et al.

The time required to train a neural network increases with its size, complexity, and depth. Training model parameters by backpropagation inherently creates feedback loops. These loops hinder efficient pipelining and scheduling of tasks within a layer and between consecutive layers. Prior approaches, such as PipeDream, exploit delayed gradients to achieve inter-layer pipelining. However, these approaches treat the entire backpropagation of a layer as a single task; this increases computation time and leads to processor underutilization. This paper presents novel optimization approaches in which the gradient computations with respect to the weights and the activation functions are considered independently and can therefore be computed in parallel. This is referred to as intra-layer optimization. Additionally, the gradient computation with respect to the activation function is further divided into two parts that are distributed to two consecutive layers. This leads to balanced scheduling, where the computation time of each layer is the same. This is referred to as inter-layer optimization. The proposed system, referred to as LayerPipe, reduces the number of clock cycles required for training while maximizing processor utilization with minimal inter-processor communication overhead. LayerPipe achieves an average speedup of 25% and upwards of 80% compared to PipeDream.
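
The intra-layer idea can be illustrated with a minimal sketch. For a fully connected layer y = x W, the weight gradient (x^T dY) and the activation gradient (dY W^T) each depend only on the incoming gradient and the cached forward tensors, so they can be issued as two independent tasks. The code below is illustrative only: it assumes a plain NumPy linear layer and a Python thread pool, and the function name backward_linear_intra_layer is hypothetical; the paper itself targets multiprocessor scheduling of these tasks rather than thread-level parallelism.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backward_linear_intra_layer(x, w, grad_out, pool):
    """Backward pass of a fully connected layer (y = x @ w).

    The weight gradient (dW) and the activation gradient (dX) are
    independent given the cached input x and the incoming gradient,
    so they are submitted as two separate tasks, mirroring the
    intra-layer parallelism described in the abstract (sketch only).
    """
    dw_task = pool.submit(lambda: x.T @ grad_out)   # gradient w.r.t. weights
    dx_task = pool.submit(lambda: grad_out @ w.T)   # gradient passed to the previous layer
    return dw_task.result(), dx_task.result()

# Toy usage: batch of 32, layer mapping 64 features to 16
rng = np.random.default_rng(0)
x = rng.standard_normal((32, 64))
w = rng.standard_normal((64, 16))
grad_out = rng.standard_normal((32, 16))

with ThreadPoolExecutor(max_workers=2) as pool:
    dw, dx = backward_linear_intra_layer(x, w, grad_out, pool)
print(dw.shape, dx.shape)  # (64, 16) (32, 64)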

