Modern Distributed Data-Parallel Large-Scale Pre-training Strategies For NLP models

06/13/2022
by   Hao Bai, et al.

Distributed deep learning is becoming increasingly popular due to the growing demand for computing resources to train deep learning models with ever larger numbers of parameters. Unlike traditional training approaches, data-parallel training lets multiple compute nodes train a large deep learning model simultaneously in order to improve training efficiency. In this paper, we present and qualitatively compare six strategies for data-parallel training of the 100M-parameter language model GPT-2 using PyTorch: Single GPU, Single Parameter Server, Distributed Parameter Server, Horovod, Distributed Parameter Server with the Apex mixed-precision strategy, and Horovod with the Apex mixed-precision strategy. We also analyze the quantitative experimental results of each strategy. We conclude that the Distributed Parameter Server with Apex mixed-precision strategy performs best for single-node training, while Horovod with Apex is the most robust approach for both single-node and multi-node training.
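As a concrete illustration of the last of these strategies, the sketch below combines Horovod data parallelism with Apex mixed precision in a PyTorch training loop. This is a minimal sketch, not the authors' training script: the GPT-2 model is instantiated from Hugging Face transformers with its default (~124M-parameter) configuration, and the learning rate, batch of random token IDs, and step count are placeholder assumptions.

```python
# Minimal sketch: Horovod data-parallel training of GPT-2 with Apex mixed precision.
# Assumes horovod, apex, and transformers are installed; the random-token batch
# below stands in for a real tokenized corpus.
import torch
import horovod.torch as hvd
from apex import amp
from transformers import GPT2Config, GPT2LMHeadModel

hvd.init()                                    # one worker process per GPU
torch.cuda.set_device(hvd.local_rank())

model = GPT2LMHeadModel(GPT2Config()).cuda()  # default GPT-2 config (~124M params)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4 * hvd.size())

# Apex wraps the model and optimizer for FP16 mixed-precision compute.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

# Horovod averages gradients across workers with allreduce and syncs initial state.
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for step in range(10):                        # placeholder training loop
    batch = torch.randint(0, 50257, (4, 128)).cuda()
    loss = model(batch, labels=batch).loss    # language-modeling loss
    optimizer.zero_grad()
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()                # loss scaling guards against FP16 underflow
    optimizer.step()
```

Launched with, for example, `horovodrun -np 4 python train.py`, each process drives one GPU while Horovod handles gradient averaging across them.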


