Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences

03/07/2022
by Gordon Euhyun Moon, et al.

Parallelizing Gated Recurrent Unit (GRU) networks is a challenging task because the training procedure of a GRU is inherently sequential. Prior efforts to parallelize GRU training have largely focused on conventional strategies such as data-parallel and model-parallel algorithms. However, when sequences are very long, these approaches still suffer from limited scalability in training time. In this paper, we present a novel parallel-in-time training scheme for GRU networks based on a multigrid-reduction-in-time (MGRIT) solver. MGRIT partitions a sequence into multiple shorter sub-sequences and trains the sub-sequences on different processors in parallel. The key to achieving speedup is a hierarchical correction of the hidden state that accelerates end-to-end communication in both the forward and backward propagation phases of gradient descent. Experimental results on the HMDB51 dataset, where each video is an image sequence, demonstrate that the new parallel training scheme achieves up to a 6.5× speedup over a serial approach. Because the efficiency of our parallelization strategy scales with sequence length, our parallel GRU algorithm yields greater performance improvements as sequences grow longer.
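To make the mechanism in the abstract concrete, below is a minimal NumPy sketch of the forward pass only, assuming a standard GRU cell and the simplest two-level form of the solver: two-level MGRIT with F-relaxation is known to coincide with the parareal iteration, and that reduced form is what the sketch implements. The choice of coarse propagator (one GRU step driven by each sub-sequence's first input), all function names, and the problem sizes are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def make_weights(nx, nh):
    # Small random weights as a stand-in for trained GRU parameters.
    g = lambda *s: rng.normal(scale=0.1, size=s)
    return {"Wz": g(nh, nx), "Wr": g(nh, nx), "Wh": g(nh, nx),
            "Uz": g(nh, nh), "Ur": g(nh, nh), "Uh": g(nh, nh)}

def gru_step(h, x, W):
    # Standard GRU cell: update gate z, reset gate r, candidate state.
    z = sigmoid(W["Wz"] @ x + W["Uz"] @ h)
    r = sigmoid(W["Wr"] @ x + W["Ur"] @ h)
    hbar = np.tanh(W["Wh"] @ x + W["Uh"] @ (r * h))
    return (1.0 - z) * h + z * hbar

def fine_prop(h, xs, W):
    # Serial GRU over one sub-sequence: the expensive fine propagator.
    for x in xs:
        h = gru_step(h, x, W)
    return h

def coarse_prop(h, x0, W):
    # Cheap coarse propagator: a single GRU step driven by the
    # sub-sequence's first input (an illustrative assumption; the
    # paper derives its coarse operator from the multigrid hierarchy).
    return gru_step(h, x0, W)

def parallel_in_time_forward(xs, h0, W, m, iters=None):
    # Two-level MGRIT-style (parareal-form) forward pass.
    # xs: (T, nx) inputs with T divisible by the coarsening factor m.
    # Returns hidden states at the chunk boundaries t = 0, m, 2m, ...
    chunks = [xs[c * m:(c + 1) * m] for c in range(len(xs) // m)]
    n = len(chunks)
    iters = n if iters is None else iters  # exact after n iterations
    # Initial coarse sweep: serial, but only n steps instead of T.
    H = [h0]
    for c in range(n):
        H.append(coarse_prop(H[-1], chunks[c][0], W))
    for _ in range(iters):
        # Fine sweeps over all chunks: independent of one another, so
        # they would run on different processors in a real implementation.
        F = [fine_prop(H[c], chunks[c], W) for c in range(n)]
        G = [coarse_prop(H[c], chunks[c][0], W) for c in range(n)]
        # Serial coarse correction of the chunk-boundary states.
        Hn = [h0]
        for c in range(n):
            Hn.append(coarse_prop(Hn[-1], chunks[c][0], W) + F[c] - G[c])
        H = Hn
    return H

if __name__ == "__main__":
    T, nx, nh, m = 64, 8, 16, 8
    xs = rng.normal(size=(T, nx))
    W = make_weights(nx, nh)
    h0 = np.zeros(nh)
    H = parallel_in_time_forward(xs, h0, W, m)
    # The final chunk-boundary state should match the serial GRU.
    print(np.linalg.norm(H[-1] - fine_prop(h0, xs, W)))

The structure mirrors the abstract's description: the per-chunk fine sweeps carry the bulk of the work and are independent across processors, while the serial coarse sweep that corrects the chunk-boundary hidden states is only as long as the number of chunks. As the sequence grows, the parallel fraction of the work dominates, which is consistent with the paper's observation that efficiency improves with sequence length.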


