Memory-efficient array redistribution through portable collective communication

12/02/2021
by Norman A. Rink, et al.

Modern large-scale deep learning workloads highlight the need for parallel execution across many devices in order to fit model data into hardware accelerator memories. In these settings, array redistribution may be required during a computation, but it can also become a bottleneck if not done efficiently. In this paper we address the problem of redistributing multi-dimensional array data in SPMD computations, the most prevalent form of parallelism in deep learning. We present a type-directed approach to synthesizing array redistributions as sequences of MPI-style collective operations. We prove formally that our synthesized redistributions are memory-efficient and perform no excessive data transfers. Array redistribution via collective operations has also been implemented in the XLA SPMD partitioner, a production-grade tool for partitioning programs across accelerator systems. We evaluate our approach against the XLA implementation and find that it delivers a geometric mean speedup of 1.22×, with maximum speedups as high as 5.7×, while offering provable memory guarantees, making our system particularly appealing for large-scale models.
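To make the setting concrete, the sketch below expresses one common redistribution, moving the sharded dimension of a 2-D array from rows to columns, as a single all-to-all collective in JAX. This is our own illustrative example of the kind of MPI-style collective the paper composes, not the paper's type-directed synthesizer; the shapes and device count are assumptions chosen for readability.

```python
# A minimal sketch (not the paper's algorithm): reshard a 2-D array from
# row shards to column shards with one all-to-all. Run on CPU with several
# simulated devices via e.g.
#   XLA_FLAGS=--xla_force_host_platform_device_count=4
import jax
import jax.numpy as jnp

n = jax.local_device_count()          # number of participating devices
rows = cols = 4 * n                   # global array shape (4n, 4n)

x = jnp.arange(rows * cols, dtype=jnp.float32).reshape(rows, cols)
row_shards = x.reshape(n, 4, cols)    # shard i = rows [4i, 4i+4), all columns

def reshard(block):
    # Each device splits its (4, 4n) block into n column chunks of width 4,
    # sends chunk j to device j, and concatenates the received chunks along
    # the row axis. Afterwards device i holds all rows of columns [4i, 4i+4).
    return jax.lax.all_to_all(block, 'i', split_axis=1, concat_axis=0, tiled=True)

col_shards = jax.pmap(reshard, axis_name='i')(row_shards)

# Each output shard matches the corresponding column slice of the global array.
for i in range(n):
    assert jnp.allclose(col_shards[i], x[:, 4 * i : 4 * (i + 1)])
```

Note that the exchange touches each element exactly once and needs only a receive buffer of the same size as the local shard; memory-efficiency guarantees of this flavor are what the paper proves for its synthesized redistribution sequences.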

Related research

- 11/02/2018 · CapsAcc: An Efficient Hardware Accelerator for CapsuleNets with Data Reuse
- 04/27/2020 · FlexSA: Flexible Systolic Array Architecture for Efficient Pruned DNN Model Training
- 02/18/2019 · A parallel Fortran framework for neural networks and deep learning
- 08/19/2020 · Synthesizing Optimal Collective Algorithms
- 03/31/2022 · Efficient and Eventually Consistent Collective Operations
- 11/28/2022 · RAMP: A Flat Nanosecond Optical Network and MPI Operations for Distributed Deep Learning Systems
- 03/22/2021 · hep_tables: Heterogeneous Array Programming for HEP
