Layer-Parallel Training of Residual Networks with Auxiliary-Variable Networks

12/10/2021
by Qi Sun, et al.

Gradient-based methods for the distributed training of residual networks (ResNets) typically require a forward pass of the input data, followed by back-propagating the error gradient to update model parameters, which becomes time-consuming as the network goes deeper. To break this algorithmic locking and exploit synchronous module parallelism in both the forward and backward modes, auxiliary-variable methods have attracted much interest lately, but they suffer from significant communication overhead and a lack of data augmentation. In this work, a novel joint learning framework for training realistic ResNets across multiple compute devices is established by trading off the storage and recomputation of external auxiliary variables. More specifically, the input data of each independent processor is generated from its low-capacity auxiliary network (AuxNet), which permits the use of data augmentation and realizes forward unlocking. The backward passes are then executed in parallel, each with a local loss function that originates from the penalty or augmented Lagrangian (AL) methods. Finally, the proposed AuxNet is employed to reproduce the updated auxiliary variables through an end-to-end training process. We demonstrate the effectiveness of our methods on ResNets and WideResNets across the CIFAR-10, CIFAR-100, and ImageNet datasets, achieving a speedup over the traditional layer-serial training method while maintaining comparable testing accuracy.
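The penalty formulation behind this family of methods can be illustrated with a minimal sketch: a deep residual map is split into K stages, auxiliary variables stand in for each stage's output, and a quadratic penalty on the stage mismatch decouples the stages so their parameter updates could run on separate devices. The toy NumPy example below is an assumption-laden illustration of that idea (the stage function, penalty weight `rho`, finite-difference gradients, and all variable names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, n = 3, 4, 32   # number of stages, feature width, batch size
rho = 1.0            # penalty weight on the stage-mismatch terms

x = rng.normal(size=(n, d))   # input batch
y = np.roll(x, 1, axis=1)     # toy regression target

W = [0.1 * rng.normal(size=(d, d)) for _ in range(K)]   # stage weights
# Auxiliary variables: u[0] = x is fixed; u[1..K-1] are free unknowns
# approximating the outputs of stages 0..K-2.
u = [x] + [x.copy() for _ in range(K - 1)]

def stage(Wk, v):
    """One residual stage: v + tanh(v @ Wk)."""
    return v + np.tanh(v @ Wk)

def total_loss():
    # Penalized objective: task loss at the last stage plus
    # rho * sum of squared mismatches between consecutive stages.
    mismatch = sum(np.mean((stage(W[k], u[k]) - u[k + 1]) ** 2)
                   for k in range(K - 1))
    fit = np.mean((stage(W[-1], u[-1]) - y) ** 2)
    return fit + rho * mismatch

def num_grad(f, arr, eps=1e-5):
    """Central finite-difference gradient (keeps the sketch dependency-free)."""
    g = np.zeros_like(arr)
    for i in np.ndindex(arr.shape):
        old = arr[i]
        arr[i] = old + eps
        fp = f()
        arr[i] = old - eps
        fm = f()
        arr[i] = old
        g[i] = (fp - fm) / (2 * eps)
    return g

loss0 = total_loss()
lr = 0.1
for _ in range(50):
    # Each stage's gradient touches only its own (u[k], u[k+1]) pair,
    # so the K weight updates below could be computed on K devices in
    # parallel; only the auxiliary variables need to be exchanged.
    gW = [num_grad(total_loss, W[k]) for k in range(K)]
    gU = [num_grad(total_loss, u[k]) for k in range(1, K)]
    for k in range(K):
        W[k] -= lr * gW[k]
    for k in range(1, K):
        u[k] -= lr * gU[k - 1]

loss1 = total_loss()
print(loss0, loss1)
```

After training, the penalized objective should have decreased, and the auxiliary variables should track the stage outputs up to a penalty-controlled mismatch. The paper's AuxNet idea replaces the stored auxiliary variables with low-capacity networks that regenerate them on demand, which is what permits data augmentation; that part is not modeled in this sketch.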

