Decoupled Parallel Backpropagation with Convergence Guarantee

04/27/2018
by Zhouyuan Huo, et al.

The backpropagation algorithm is indispensable for training feedforward neural networks. It requires propagating error gradients sequentially from the output layer back to the input layer. This backward locking prevents network layers from being updated in parallel and from fully leveraging available computing resources. Several algorithms have recently been proposed to break the backward locking, but their performance degrades severely when networks are deep. In this paper, we propose a decoupled parallel backpropagation algorithm for deep learning optimization with a convergence guarantee. First, we decouple the backpropagation algorithm using delayed gradients and show that the backward locking is removed when the network is split into multiple modules. Then, we apply decoupled parallel backpropagation to two stochastic methods and prove that our method converges to critical points of the non-convex problem. Finally, we train deep convolutional neural networks on benchmark datasets. The experimental results not only confirm our theoretical analysis, but also demonstrate that the proposed method achieves significant speedup without loss of accuracy.
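The key idea in the abstract is that, once the network is split into modules, a lower module can run its backward pass with an error signal that is delayed by a few iterations instead of waiting for the current one, which removes the backward locking. The following is a minimal NumPy sketch of that idea for a two-module network; the toy regression task, the one-step delay, and all variable names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the paper's code): a network split into two modules,
    # where the lower module updates with the error signal the upper module
    # produced one step earlier. In a real implementation the two backward
    # passes could therefore run in parallel on different iterations.
    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_hid, d_out, lr = 8, 16, 1, 0.05
    W1 = rng.standard_normal((d_hid, d_in)) * 0.1   # module 1 (lower half)
    W2 = rng.standard_normal((d_out, d_hid)) * 0.1  # module 2 (upper half)

    delayed = None  # (input, hidden activation, error signal) saved from step t-1

    for step in range(200):
        x = rng.standard_normal(d_in)
        target = np.array([x.sum()])            # toy regression target

        # Forward pass through both modules.
        h = np.maximum(W1 @ x, 0.0)             # module 1 output (ReLU)
        y = W2 @ h                              # module 2 output

        # Module 2 backward: uses the current activations (no delay).
        dy = 2.0 * (y - target)                 # dL/dy for squared error
        dW2 = np.outer(dy, h)
        dh = W2.T @ dy                          # error signal passed to module 1

        # Module 1 backward: uses the *delayed* error signal from step t-1,
        # together with the input and activation it saved at that step.
        if delayed is not None:
            x_old, h_old, dh_old = delayed
            dW1 = np.outer(dh_old * (h_old > 0), x_old)   # backprop through ReLU
            W1 -= lr * dW1

        W2 -= lr * dW2
        delayed = (x, h, dh)                    # stash for the next iteration

With K modules the same pattern gives module k a delay of K - k steps, which is the staleness the paper's convergence analysis accounts for.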


research · 07/12/2018
Training Neural Networks Using Features Replay
Training a neural network using backpropagation algorithm requires passi...

research · 06/21/2019
Fully Decoupled Neural Network Learning Using Delayed Gradients
Using the back-propagation (BP) to train neural networks requires a sequ...

research · 03/31/2022
Stochastic Backpropagation: A Memory Efficient Strategy for Training Video Models
We propose a memory efficient method, named Stochastic Backpropagation (...

research · 05/03/2018
Local Critic Training for Model-Parallel Learning of Deep Neural Networks
This paper proposes a novel approach to train deep neural networks in a ...

research · 04/23/2021
GuideBP: Guiding Backpropagation Through Weaker Pathways of Parallel Logits
Convolutional neural networks often generate multiple logits and use sim...

research · 09/05/2019
Diversely Stale Parameters for Efficient Training of CNNs
The backpropagation algorithm is the most popular algorithm training neu...

research · 06/04/2018
Backdrop: Stochastic Backpropagation
We introduce backdrop, a flexible and simple-to-implement method, intuit...
