Megatron-LM: Training Multi-Billion Parameter Language Models Using GPU Model Parallelism

09/17/2019
by Mohammad Shoeybi, et al.

Recent work in unsupervised language modeling demonstrates that training large neural language models advances the state of the art in Natural Language Processing applications. However, for very large models, memory constraints limit the size of models that can be practically trained. Model parallelism allows us to train larger models, because the parameters can be split across multiple processors. In this work, we implement a simple, efficient intra-layer model parallel approach that enables training state-of-the-art transformer language models with billions of parameters. Our approach does not require a new compiler or library changes, is orthogonal and complementary to pipeline model parallelism, and can be fully implemented with the insertion of a few communication operations in native PyTorch. We illustrate this approach by converging an 8.3 billion parameter transformer language model using 512 GPUs, making it the largest transformer model ever trained at 24x the size of BERT and 5.6x the size of GPT-2. We sustain up to 15.1 PetaFLOPs across the entire application at 76% scaling efficiency relative to a strong single-processor baseline that sustains 39 TeraFLOPs, which is 30% of peak FLOPs; converging the model requires 12 ZettaFLOPs over 9.2 days. Transferring this language model achieves state-of-the-art (SOTA) results on the WikiText103 (perplexity of 10.8 compared to the SOTA of 16.4) and LAMBADA (accuracy of 66.5% compared to the SOTA of 63.2%) datasets. We release training and evaluation code, as well as the weights of our smaller portable model, for reproducibility.
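As a concrete illustration of the claim that the approach needs only a few communication operations in native PyTorch, below is a minimal sketch of an intra-layer (tensor) parallel transformer MLP block. The class names and simplified forward passes are illustrative assumptions, not the released Megatron-LM implementation: the first linear layer is split along its output (column) dimension, the second along its input (row) dimension, and a single all-reduce sums the partial outputs across GPUs.

```python
# Minimal sketch of intra-layer (tensor) model parallelism for a transformer
# MLP block, assuming torch.distributed is already initialized with one
# process per GPU and hidden sizes divisible by world_size. Names are
# illustrative, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.distributed as dist


class ColumnParallelLinear(nn.Module):
    """First GEMM: the weight is split along its output (column) dimension,
    so each rank computes a disjoint slice of the intermediate activations."""

    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        self.local = nn.Linear(in_features, out_features // world_size)

    def forward(self, x):
        # Forward pass is purely local; the backward pass would all-reduce
        # the input gradient across ranks (omitted in this sketch).
        return self.local(x)


class RowParallelLinear(nn.Module):
    """Second GEMM: the weight is split along its input (row) dimension;
    partial outputs are summed across ranks with one all-reduce."""

    def __init__(self, in_features, out_features, world_size):
        super().__init__()
        # Bias omitted here so the all-reduced sum is not double-counted.
        self.local = nn.Linear(in_features // world_size, out_features, bias=False)

    def forward(self, x):
        y = self.local(x)
        # The "few communication operations": one all-reduce per MLP block
        # in the forward pass.
        dist.all_reduce(y, op=dist.ReduceOp.SUM)
        return y


class ParallelMLP(nn.Module):
    """Transformer MLP with its two GEMMs sharded across model-parallel GPUs."""

    def __init__(self, hidden_size, world_size):
        super().__init__()
        self.fc1 = ColumnParallelLinear(hidden_size, 4 * hidden_size, world_size)
        self.fc2 = RowParallelLinear(4 * hidden_size, hidden_size, world_size)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))
```

Run under torch.distributed with one process per GPU: each rank holds 1/world_size of every weight matrix, and the only cross-GPU traffic per MLP block in the forward pass is the single all-reduce in RowParallelLinear, which is what keeps the approach implementable without compiler or library changes.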
