Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training

05/25/2023
by Shengwei Li, et al.

Deep learning is experiencing a rise in foundation models that are expected to lead in various fields. Their massive number of parameters necessitates the use of tensor model parallelism (TMP) in foundation model training. However, TMP requires frequent communication operations, which significantly reduce training efficiency. In this paper, we present Oases, an automated TMP method with overlapped communication to accelerate foundation model training. Oases introduces a fine-grained training schedule that maximizes the overlap between communication and computation operations that have data dependences. Additionally, we design the Oases planner, which searches for the best model parallel strategy to achieve further acceleration. Unlike existing methods, the Oases planner is specifically tailored to model the cost of overlapped communication-computation operations. We evaluate Oases on various model settings and training environments, and compare Oases to four state-of-the-art implementations. Experimental results demonstrate that Oases achieves speedups of 1.01–1.48X over the fastest baseline, and speedups of up to 1.9X over Megatron-LM.
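To illustrate the kind of communication-computation overlap the abstract describes, the following is a minimal PyTorch sketch, not Oases's actual implementation: a row-parallel linear layer whose per-chunk all-reduce is launched asynchronously so that it can run concurrently with the matmul of the next input chunk. The function name `overlapped_row_parallel_linear` and the chunking granularity are illustrative assumptions.

```python
# Minimal sketch of overlapping TMP communication with computation.
# Assumes torch.distributed has been initialized (e.g. via init_process_group)
# and that each rank holds a shard of the weight along the input dimension.
import torch
import torch.distributed as dist

def overlapped_row_parallel_linear(x, weight_shard, num_chunks=4):
    """Row-parallel matmul with overlapped all-reduce.

    x            : local activation shard, shape [batch, in_features_local]
    weight_shard : local weight shard,     shape [in_features_local, out_features]
    The all-reduce of each chunk's partial output is launched with
    async_op=True, so it overlaps with the matmul of the following chunk.
    """
    chunks = x.chunk(num_chunks, dim=0)          # split along the batch dimension
    partial_outputs, handles = [], []
    for chunk in chunks:
        y = chunk @ weight_shard                 # local partial result for this chunk
        # Launch the all-reduce without blocking; the next chunk's matmul
        # proceeds while the collective is in flight.
        handle = dist.all_reduce(y, op=dist.ReduceOp.SUM, async_op=True)
        partial_outputs.append(y)
        handles.append(handle)
    for handle in handles:                       # wait for all reductions to finish
        handle.wait()
    return torch.cat(partial_outputs, dim=0)
```

Splitting the batch into more chunks exposes more overlap but shrinks each matmul, which lowers per-kernel efficiency; modeling this trade-off for overlapped operations is the kind of cost estimation the Oases planner is described as performing.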

