Pathways: Asynchronous Distributed Dataflow for ML

03/23/2022
by Paul Barham, et al.

We present the design of a new large-scale orchestration layer for accelerators. Our system, Pathways, is explicitly designed to enable exploration of new systems and ML research ideas, while retaining state-of-the-art performance for current models. Pathways uses a sharded dataflow graph of asynchronous operators that consume and produce futures, and efficiently gang-schedules heterogeneous parallel computations on thousands of accelerators while coordinating data transfers over their dedicated interconnects. Pathways makes use of a novel asynchronous distributed dataflow design that lets the control plane execute in parallel despite dependencies in the data plane. This design, with careful engineering, allows Pathways to adopt a single-controller model that makes it easier to express complex new parallelism patterns. We demonstrate that Pathways can achieve performance parity (~100% accelerator utilization) with state-of-the-art systems when running SPMD computations over 2048 TPUs, while also delivering throughput comparable to the SPMD case for Transformer models that are pipelined across 16 stages, or sharded across two islands of accelerators connected over a data center network.
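The core idea, a control plane that keeps dispatching operators while the data plane is still resolving their input dependencies, can be illustrated with a minimal Python sketch built on standard-library futures. This is a conceptual stand-in, not Pathways' actual API: the `dispatch` helper, the thread pool in place of accelerator workers, and the two toy pipeline stages are all hypothetical.

```python
import concurrent.futures as cf

# A single "controller" enqueues operators whose inputs and outputs are
# futures, so scheduling (control plane) runs ahead of execution (data plane).
# Use enough workers that a task blocked on its inputs never starves the
# task producing those inputs.
executor = cf.ThreadPoolExecutor(max_workers=4)

def dispatch(fn, *input_futures):
    """Enqueue an operator immediately; it waits on its input futures only
    when it actually runs on a worker, not when it is scheduled."""
    return executor.submit(lambda: fn(*[f.result() for f in input_futures]))

def stage1(x):
    return x * 2   # placeholder for a sharded accelerator computation

def stage2(y):
    return y + 1   # placeholder for the next pipeline stage

root = executor.submit(lambda: 21)   # source value, already a future
h1 = dispatch(stage1, root)          # issued before `root` resolves
h2 = dispatch(stage2, h1)            # issued before `h1` resolves

print(h2.result())  # 43: the data plane catches up with the control plane
```

In Pathways the analogous pattern lets the single controller enqueue gang-scheduled computations on remote accelerators before their inputs are available, hiding dispatch latency behind the data-plane work already in flight.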

