ATP: Adaptive Tensor Parallelism for Foundation Models

01/20/2023
by Shenggan Cheng, et al.

Foundation models have impressive performance and generalization capabilities across a wide range of applications, but their increasing size introduces great challenges for training. Tensor parallelism is a critical technique used in almost all foundation model training and has a significant impact on overall training performance. However, current tensor parallelism implementations in machine learning frameworks miss optimization opportunities when fitting various interconnection topologies. In this work, we present ATP, an adaptive tensor parallelism framework for foundation models that automatically selects the optimal parallel strategy on different interconnects. We propose column- and row-first tensor parallelism based on 2D device meshes and use them to construct a search space. Combined with a hierarchical communication matrix, ATP identifies the optimal strategy in this search space. We also propose chunk-based overlapping to reduce communication overhead. Our evaluations show that ATP consistently outperforms state-of-the-art approaches across model sizes and interconnects, achieving end-to-end training performance improvements of up to 37-64% on specific interconnects. Based on our theoretical model, the communication overhead of ATP decreases as the system scales, indicating a qualitative leap forward.
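The column- and row-first strategies described above operate on a 2D device mesh and build on the two classic ways of sharding a linear layer. The following minimal NumPy sketch (not the paper's code; the device count, shapes, and names are illustrative assumptions) simulates both shardings on a single machine and checks that each reproduces the dense result, making the associated communication primitive explicit: gathering output slices for column sharding versus summing partial results for row sharding.

```python
# Minimal single-process sketch of column- vs row-sharded linear layers,
# the building blocks behind column-/row-first tensor parallelism.
import numpy as np

rng = np.random.default_rng(0)
num_devices = 4                       # toy "mesh" of 4 simulated devices
X = rng.standard_normal((8, 16))      # activations: [batch, in_features]
W = rng.standard_normal((16, 32))     # weights:     [in_features, out_features]
Y_ref = X @ W                         # single-device reference result

# Column sharding: each device holds a slice of W's output columns.
# Local matmuls produce disjoint output slices; an all-gather
# (emulated here by concatenation) reassembles the full activation.
W_cols = np.split(W, num_devices, axis=1)
Y_col = np.concatenate([X @ w for w in W_cols], axis=1)

# Row sharding: each device holds a slice of W's input rows and the matching
# slice of X's features. Local matmuls produce partial sums of the full
# output; an all-reduce (emulated here by summation) combines them.
W_rows = np.split(W, num_devices, axis=0)
X_cols = np.split(X, num_devices, axis=1)
Y_row = sum(x @ w for x, w in zip(X_cols, W_rows))

assert np.allclose(Y_col, Y_ref) and np.allclose(Y_row, Y_ref)
print("column- and row-sharded layouts reproduce the dense result")
```

In a real 2D-mesh setting, the choice of which mesh axis carries the gather and which carries the reduction determines the communication volume on each interconnect level, which is the kind of trade-off ATP's search space is designed to explore.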


Related research

05/25/2023
Automated Tensor Model Parallelism with Overlapped Communication for Efficient Foundation Model Training
Deep learning is experiencing a rise in foundation models that are expec...

02/01/2023
TAP: Accelerating Large-Scale DNN Training Through Tensor Automatic Parallelisation
Model parallelism has become necessary to train large neural networks. H...

06/28/2019
Category-Theoretic Foundations of "STCLang: State Thread Composition as a Foundation for Monadic Dataflow Parallelism"
This manuscript gives a category-theoretic foundation to the composition...

07/05/2023
Improving Automatic Parallel Training via Balanced Memory Workload Optimization
Transformer models have emerged as the leading approach for achieving st...

05/30/2021
2.5-dimensional distributed model training
Data parallelism does a good job in speeding up the training. However, w...

07/31/2023
UniAP: Unifying Inter- and Intra-Layer Automatic Parallelism by Mixed Integer Quadratic Programming
Deep learning models have demonstrated impressive performance in various...

11/10/2021
Amazon SageMaker Model Parallelism: A General and Flexible Framework for Large Model Training
With deep learning models rapidly growing in size, systems-level solutio...
