GSPMD: General and Scalable Parallelization for ML Computation Graphs

05/10/2021
by Yuanzhong Xu, et al.

We present GSPMD, an automatic, compiler-based parallelization system for common machine learning computation graphs. It allows users to write programs in the same way as for a single device, then give hints through a few annotations on how to distribute tensors, based on which GSPMD will parallelize the computation. Its representation of partitioning is simple yet general, allowing it to express different or mixed paradigms of parallelism on a wide variety of models. GSPMD infers the partitioning for every operator in the graph based on limited user annotations, making it convenient to scale up existing single-device programs. It solves several technical challenges for production usage, such as static shape constraints, uneven partitioning, exchange of halo data, and nested operator partitioning. These techniques allow GSPMD to achieve 50% to 62% compute utilization on 128 to 2048 Cloud TPUv3 cores for models with up to one trillion parameters. GSPMD produces a single program for all devices, which adjusts its behavior based on a run-time partition ID and uses collective operators for cross-device communication. This property makes the system itself scalable: compilation time stays constant as the number of devices increases.
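As a concrete illustration of this annotation workflow, here is a minimal sketch in JAX, whose jit compiler lowers sharding annotations to XLA's GSPMD pass. The mesh shape, axis names, and tensor sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange the available devices into a logical 2D mesh. The (4, 2) shape
# and the axis names "data" and "model" are illustrative and assume 8 devices.
mesh = Mesh(np.array(jax.devices()).reshape(4, 2), axis_names=("data", "model"))

# Written exactly as single-device code: a plain matrix multiply.
@jax.jit
def layer(x, w):
    return jnp.dot(x, w)

# Hints on the inputs only: shard x's batch dimension across "data" and
# w's output dimension across "model". The compiler propagates a sharding
# to every other operator in the graph.
x = jax.device_put(jnp.ones((128, 512)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((512, 256)), NamedSharding(mesh, P(None, "model")))

y = layer(x, w)  # one SPMD program runs on all devices; cross-device
                 # communication is inserted as collectives where needed
```

Because every device runs the same compiled program and selects its shard through a run-time partition ID, compilation happens once regardless of device count, which is the scalability property the abstract describes.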

