Memory-efficient array redistribution through portable collective communication

12/02/2021
by Norman A. Rink, et al.

Modern large-scale deep learning workloads highlight the need for parallel execution across many devices in order to fit model data into hardware accelerator memories. In these settings, array redistribution may be required during a computation, but can also become a bottleneck if not done efficiently. In this paper we address the problem of redistributing multi-dimensional array data in SPMD computations, the most prevalent form of parallelism in deep learning. We present a type-directed approach to synthesizing array redistributions as sequences of MPI-style collective operations. We prove formally that our synthesized redistributions are memory-efficient and perform no excessive data transfers. Array redistribution for SPMD computations using collective operations has also been implemented in the context of the XLA SPMD partitioner, a production-grade tool for partitioning programs across accelerator systems. We evaluate our approach against the XLA implementation and find that our approach delivers a geometric mean speedup of 1.22×, with maximum speedups as high as 5.7×, while offering provable memory guarantees, making our system particularly appealing for large-scale models.
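To make the array-redistribution problem concrete, the sketch below (not the paper's synthesis algorithm, just an illustration of the setting) uses JAX to reshard a 2-D array between two SPMD layouts. XLA's SPMD partitioner lowers the second device_put into collective communication; the paper is concerned with synthesizing such collective sequences with provable memory efficiency. The 4×2 mesh shape and the host-platform flag are assumptions for a local CPU run.

```python
# Minimal sketch of array redistribution in an SPMD setting, assuming
# 8 local devices (e.g. run with
#   XLA_FLAGS=--xla_force_host_platform_device_count=8
# on CPU). This only demonstrates the problem the paper addresses; it
# does not implement the paper's type-directed synthesis.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices()[:8]).reshape(4, 2)
mesh = Mesh(devices, axis_names=("x", "y"))

# Source layout: rows of the array sharded over the 'x' mesh axis.
a = jnp.arange(16 * 16.0).reshape(16, 16)
a = jax.device_put(a, NamedSharding(mesh, P("x", None)))

# Target layout: columns sharded over the 'y' mesh axis. Moving from
# the source to the target layout is an array redistribution; done
# naively it can materialize large intermediate buffers, which is the
# inefficiency the paper's synthesized collectives avoid.
b = jax.device_put(a, NamedSharding(mesh, P(None, "y")))
print(b.sharding)
```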
