Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads

05/24/2021
by   John Thorpe, et al.

A graph neural network (GNN) enables deep learning on structured graph data. GNN training faces two major obstacles: 1) it relies on high-end servers with many GPUs, which are expensive to purchase and maintain, and 2) the limited memory on GPUs cannot scale to today's billion-edge graphs. This paper presents Dorylus: a distributed system for training GNNs. Uniquely, Dorylus can take advantage of serverless computing to increase scalability at a low cost. The key insight guiding our design is computation separation, which makes it possible to construct a deep, bounded-asynchronous pipeline in which graph-parallel and tensor-parallel tasks fully overlap, effectively hiding the network latency incurred by Lambdas. With the help of thousands of Lambda threads, Dorylus scales GNN training to billion-edge graphs. Currently, for large graphs, CPU servers offer better performance-per-dollar than GPU servers. Adding Lambdas on top of CPU servers offers up to 2.75x more performance-per-dollar than training with CPU servers alone. Concretely, Dorylus is 1.22x faster and 4.83x cheaper than GPU servers for massive sparse graphs, and up to 3.8x faster and 10.7x cheaper than existing sampling-based systems.
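To make the computation-separation idea concrete, below is a minimal, self-contained Python sketch, not Dorylus's actual implementation. It splits one training stage into a graph-parallel side (sparse neighbor aggregation, which stays on CPU servers) and a tensor-parallel side (dense per-vertex transforms, which the real system ships to AWS Lambda threads). The names (gather, apply_vertex), the QUEUE_DEPTH staleness bound, chunk sizes, and the simulated latencies are all illustrative assumptions; the bounded queue is what provides bounded asynchrony, letting the two stages overlap so the producer can run ahead of the consumer by at most a fixed number of chunks.

    # Sketch of computation separation with a bounded-asynchronous pipeline.
    # The graph-parallel stage may run at most QUEUE_DEPTH chunks ahead of
    # the tensor-parallel stage, so the two overlap and the (simulated)
    # Lambda invocation latency is hidden behind graph work.

    import queue
    import threading
    import time
    import numpy as np

    QUEUE_DEPTH = 4     # staleness bound: max in-flight chunks (assumed value)
    NUM_CHUNKS = 16     # the graph is partitioned into vertex chunks
    FEATURE_DIM = 8

    def gather(chunk_id, rng):
        """Graph-parallel stage: sparse neighbor aggregation for one chunk."""
        time.sleep(0.01)                      # stand-in for sparse SpMM work
        return rng.random((32, FEATURE_DIM))  # aggregated vertex features

    def apply_vertex(agg, weights):
        """Tensor-parallel stage: dense transform, as a Lambda would run it."""
        time.sleep(0.02)                      # stand-in for invocation latency
        return np.maximum(agg @ weights, 0.0) # linear layer + ReLU

    def graph_side(out_q):
        rng = np.random.default_rng(0)
        for cid in range(NUM_CHUNKS):
            out_q.put((cid, gather(cid, rng)))  # blocks at QUEUE_DEPTH in flight
        out_q.put(None)                         # end-of-epoch sentinel

    def tensor_side(in_q, weights, results):
        while (item := in_q.get()) is not None:
            cid, agg = item
            results[cid] = apply_vertex(agg, weights)

    if __name__ == "__main__":
        pipe = queue.Queue(maxsize=QUEUE_DEPTH)  # bounded => bounded asynchrony
        weights = np.random.default_rng(1).random((FEATURE_DIM, FEATURE_DIM))
        results = {}
        producer = threading.Thread(target=graph_side, args=(pipe,))
        consumer = threading.Thread(target=tensor_side, args=(pipe, weights, results))
        start = time.time()
        producer.start(); consumer.start()
        producer.join(); consumer.join()
        print(f"processed {len(results)} chunks in {time.time() - start:.2f}s")

With an unbounded queue this would degenerate into fully asynchronous training (arbitrarily stale parameters); with QUEUE_DEPTH = 0 it would serialize the two stages. The bound in between is what lets the pipeline overlap work while keeping staleness, and thus accuracy impact, controlled.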


