Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads

05/24/2021
by John Thorpe, et al.

A graph neural network (GNN) enables deep learning on structured graph data. GNN training faces two major obstacles: 1) it relies on high-end servers with many GPUs, which are expensive to purchase and maintain, and 2) limited GPU memory cannot scale to today's billion-edge graphs. This paper presents Dorylus: a distributed system for training GNNs. Uniquely, Dorylus can take advantage of serverless computing to increase scalability at a low cost. The key insight guiding our design is computation separation, which makes it possible to construct a deep, bounded-asynchronous pipeline in which graph-parallel and tensor-parallel tasks fully overlap, effectively hiding the network latency incurred by Lambdas. With the help of thousands of Lambda threads, Dorylus scales GNN training to billion-edge graphs. Currently, for large graphs, CPU servers offer the best performance per dollar over GPU servers, and adding Lambdas on top of CPU servers yields up to 2.75x more performance per dollar than training with CPU servers alone. Concretely, Dorylus is 1.22x faster and 4.83x cheaper than GPU servers for massive sparse graphs, and up to 3.8x faster and 10.7x cheaper than existing sampling-based systems.
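The abstract compresses the core mechanism, so a sketch may help. Below is a minimal, illustrative Python rendering of computation separation, not the authors' code: sparse graph-parallel aggregation stays on the CPU servers, dense tensor-parallel work is dispatched to serverless threads, and a bounded queue of in-flight chunks keeps the pipeline asynchronous enough to hide invocation latency. All names here (graph_gather, lambda_dense_op, STALENESS_BOUND) and the thread-pool stand-in for AWS Lambda are assumptions made for illustration.

```python
# Illustrative sketch of Dorylus-style computation separation (hypothetical
# names throughout; a ThreadPoolExecutor stands in for AWS Lambda threads).
from concurrent.futures import ThreadPoolExecutor
from queue import Queue
import numpy as np

STALENESS_BOUND = 4                                 # max in-flight chunks (bounded asynchrony)
lambda_pool = ThreadPoolExecutor(max_workers=64)    # stand-in for serverless threads

def graph_gather(adj_chunk, features):
    """Graph-parallel task (runs on the CPU server): sparse neighbor aggregation."""
    return adj_chunk @ features                     # one partition of A_hat @ H

def lambda_dense_op(agg, weights):
    """Tensor-parallel task (shipped to a serverless thread): dense compute."""
    return np.maximum(agg @ weights, 0)             # ReLU(A_hat H W)

def pipelined_layer(adj_chunks, features, weights):
    """Overlap graph and tensor tasks, never running more than
    STALENESS_BOUND chunks ahead of the slowest outstanding result."""
    inflight = Queue(maxsize=STALENESS_BOUND)
    outputs = []
    for chunk in adj_chunks:
        agg = graph_gather(chunk, features)                         # CPU stage
        inflight.put(lambda_pool.submit(lambda_dense_op, agg, weights))
        if inflight.full():                                         # enforce the bound
            outputs.append(inflight.get().result())
    while not inflight.empty():                                     # drain the pipeline
        outputs.append(inflight.get().result())
    return np.vstack(outputs)
```

In the real system the futures would be network calls to Lambda functions rather than local threads, and the bound on in-flight work is what limits how far the asynchronous pipeline can run ahead while graph and tensor tasks overlap.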


Related Research

12/31/2021 · Distributed Hybrid CPU and GPU training for Graph Neural Networks on Billion-Scale Graphs
Graph neural networks (GNN) have shown great success in learning from gr...

04/18/2022 · Characterizing and Understanding Distributed GNN Training on GPUs
Graph neural network (GNN) has been demonstrated to be a powerful model ...

01/20/2021 · PyTorch-Direct: Enabling GPU Centric Data Access for Very Large Graph Neural Network Training with Irregular Accesses
With the increasing adoption of graph neural networks (GNNs) in the mach...

06/26/2020 · Hybrid Models for Learning to Branch
A recent Graph Neural Network (GNN) approach for learning to branch has ...

02/28/2019 · Speeding up Deep Learning with Transient Servers
Distributed training frameworks, like TensorFlow, have been proposed as ...

02/20/2019 · Competitive Concurrent Distributed Scheduling
We introduce a new scheduling problem in distributed computing that we c...

08/26/2020 · FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems
Graph neural networks (GNNs) are gaining increasing popularity as a prom...