MCR-DL: Mix-and-Match Communication Runtime for Deep Learning

03/15/2023
by Quentin Anthony, et al.

In recent years, the training requirements of many state-of-the-art Deep Learning (DL) models have scaled beyond the compute and memory capabilities of a single processor, making distribution across processors unavoidable. Training such massive models efficiently requires advanced parallelism strategies, and these distributed DL parallelism strategies in turn rely on a varied mixture of collective and point-to-point communication operations across a broad range of message sizes and scales. Examples of models using advanced parallelism strategies include Deep Learning Recommendation Models (DLRM) and Mixture-of-Experts (MoE). The performance of communication libraries varies widely across different communication operations, scales, and message sizes. We propose MCR-DL: an extensible DL communication framework that supports all point-to-point and collective operations while enabling users to dynamically mix-and-match communication backends for a given operation without deadlocks. MCR-DL also comes packaged with a tuning suite for dynamically selecting the best communication backend for a given input tensor. We select DeepSpeed-MoE and DLRM as candidate DL models and demonstrate a 31% improvement in DeepSpeed-MoE throughput on 256 V100 GPUs on the Lassen HPC system. Further, we achieve a 20% throughput improvement in a dense Megatron-DeepSpeed model and a 25% improvement in DLRM on 32 A100 GPUs on the Theta-GPU HPC system.
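
The mix-and-match idea from the abstract can be pictured with a small sketch. The code below is a hypothetical illustration, not MCR-DL's actual interface: it keeps one PyTorch process group per communication backend and routes each all-reduce to NCCL or Gloo depending on message size, the kind of per-tensor decision the abstract attributes to MCR-DL's tuning suite. The class name, threshold value, and backend pairing are invented for illustration.

import torch
import torch.distributed as dist

# Hypothetical sketch: one process group per backend, chosen per message size.
class MixedCommunicator:
    def __init__(self, allreduce_threshold_bytes=512 * 1024):
        # A real tuner would derive this crossover point per operation,
        # message size, and scale; here it is a fixed illustrative constant.
        self.allreduce_threshold_bytes = allreduce_threshold_bytes
        self.nccl_group = dist.new_group(backend="nccl")
        self.gloo_group = dist.new_group(backend="gloo")

    def allreduce(self, tensor):
        nbytes = tensor.numel() * tensor.element_size()
        # Latency-bound small messages and bandwidth-bound large messages
        # often favor different backends; pick the process group accordingly.
        group = (self.gloo_group if nbytes < self.allreduce_threshold_bytes
                 else self.nccl_group)
        dist.all_reduce(tensor, group=group)
        return tensor

if __name__ == "__main__":
    # Assumes a standard launcher (e.g. torchrun) sets rank/world-size env vars
    # and that each rank owns one GPU, as NCCL requires.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    comm = MixedCommunicator()
    grads = torch.ones(1 << 20, device="cuda")  # ~4 MB fp32: routed to NCCL here
    comm.allreduce(grads)

In the framework described by the paper, the per-operation backend choice comes from its tuning suite rather than a hand-picked threshold, and the stated contribution is making such mixed backend usage deadlock-free across all point-to-point and collective operations.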

research
04/05/2021

GPU Domain Specialization via Composable On-Package Architecture

As GPUs scale their low precision matrix math throughput to boost deep l...
research
06/30/2020

Efficient Communication Acceleration for Next-Gen Scale-up Deep Learning Training Platforms

Deep Learning (DL) training platforms are built by interconnecting multi...
research
03/31/2022

Efficient and Eventually Consistent Collective Operations

Collective operations are common features of parallel programming models...
research
09/03/2023

Saturn: An Optimized Data System for Large Model Deep Learning Workloads

Large language models such as GPT-3 and ChatGPT have transformed deep le...
research
05/28/2020

Brief Announcement: On the Limits of Parallelizing Convolutional Neural Networks on GPUs

GPUs are currently the platform of choice for training neural networks. ...
research
08/08/2019

TensorDIMM: A Practical Near-Memory Processing Architecture for Embeddings and Tensor Operations in Deep Learning

Recent studies from several hyperscalers pinpoint embedding layers as...
