Automatic Discovery of Composite SPMD Partitioning Strategies in PartIR

10/07/2022
by Sami Alabed, et al.

Large neural network models are commonly trained through a combination of advanced parallelism strategies in a single program, multiple data (SPMD) paradigm. For example, training large transformer models requires combining data, model, and pipeline partitioning with optimizer sharding techniques. However, identifying efficient combinations for many model architectures and accelerator systems requires significant manual analysis. In this work, we present an automatic partitioner that identifies these combinations through a goal-oriented search. Our key finding is that a Monte Carlo Tree Search-based partitioner, which incorporates partition-specific compiler analyses directly into the search and is guided by these goals, matches expert-level strategies for various models.
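To make the search concrete, here is a minimal sketch of a Monte Carlo Tree Search over a toy space of SPMD partitioning actions. It is not the PartIR implementation: the action set, the cost function standing in for the paper's partition-specific compiler analyses, and all names (PartitionState, Node, mcts) are illustrative assumptions.

```python
# Minimal MCTS sketch over partitioning actions (illustrative only, not PartIR).
import math, random

# Hypothetical per-layer-group partitioning decisions.
ACTIONS = ["data_parallel", "model_parallel", "optimizer_shard", "no_op"]

class PartitionState:
    """Tracks which partitioning decisions were applied to a toy 4-layer model."""
    def __init__(self, applied=()):
        self.applied = tuple(applied)

    def is_terminal(self):
        return len(self.applied) == 4          # one decision per layer group

    def step(self, action):
        return PartitionState(self.applied + (action,))

    def cost(self):
        # Toy stand-in for a compiler cost analysis: reward mixing strategies,
        # penalize leaving layer groups unpartitioned. Lower is better.
        unique = len(set(a for a in self.applied if a != "no_op"))
        unpartitioned = self.applied.count("no_op")
        return 10.0 - 2.0 * unique + 1.5 * unpartitioned

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

    def ucb_child(self, c=1.4):
        # Upper-confidence-bound selection among expanded children.
        return max(self.children.values(),
                   key=lambda n: n.value / (n.visits + 1e-9)
                   + c * math.sqrt(math.log(self.visits + 1) / (n.visits + 1e-9)))

def rollout(state):
    # Random playout to a complete strategy; negate cost so MCTS maximizes reward.
    while not state.is_terminal():
        state = state.step(random.choice(ACTIONS))
    return -state.cost()

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB while the node is fully expanded.
        while node.children and len(node.children) == len(ACTIONS):
            node = node.ucb_child()
        # 2. Expansion: add one untried action.
        if not node.state.is_terminal():
            action = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[action] = Node(node.state.step(action), parent=node)
            node = node.children[action]
        # 3. Simulation and 4. Backpropagation.
        reward = rollout(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Read off the most-visited action sequence as the discovered strategy.
    plan, node = [], root
    while node.children:
        action, node = max(node.children.items(), key=lambda kv: kv[1].visits)
        plan.append(action)
    return plan

if __name__ == "__main__":
    print("Discovered strategy:", mcts(PartitionState()))
```

In the paper's setting, the rollout reward would come from compiler analyses of the partially partitioned program rather than the toy cost used here.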

Related research

Automap: Towards Ergonomic Automated Parallelism for ML Models (12/06/2021)
The rapid rise in demand for training large neural network architectures...

Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism (11/25/2022)
Transformer models have achieved state-of-the-art performance on various...

Improving Automatic Parallel Training via Balanced Memory Workload Optimization (07/05/2023)
Transformer models have emerged as the leading approach for achieving st...

Automatic Graph Partitioning for Very Large-scale Deep Learning (03/30/2021)
This work proposes RaNNC (Rapid Neural Network Connector) as middleware ...

Multi-objective Optimization by Learning Space Partitions (10/07/2021)
In contrast to single-objective optimization (SOO), multi-objective opti...

Auto-Parallelizing Large Models with Rhino: A Systematic Approach on Production AI Platform (02/16/2023)
We present Rhino, a system for accelerating tensor programs with automat...
