Characterizing and Understanding Distributed GNN Training on GPUs

04/18/2022
by Haiyang Lin et al.

Graph neural networks (GNNs) have been demonstrated to be a powerful model in many domains for their effectiveness in learning over graphs. To scale GNN training to large graphs, a widely adopted approach is distributed training, which accelerates training by using multiple computing nodes. Maximizing performance is essential, yet the execution of distributed GNN training remains only preliminarily understood. In this work, we provide an in-depth analysis of distributed GNN training on GPUs, revealing several significant observations and providing useful guidelines for both software and hardware optimization.
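To make the setting concrete, below is a minimal sketch of data-parallel GNN training across multiple GPUs using PyTorch's DistributedDataParallel. It is an illustrative skeleton only, not the systems or workloads analyzed in the paper: the one-layer GCN, the random graph partition, and the training hyperparameters are all assumptions for illustration.

```python
"""Minimal sketch of data-parallel distributed GNN training on GPUs.
All model and graph details are illustrative assumptions, not the
workloads measured in the paper. Launch with, for example:
    torchrun --nproc_per_node=<num_gpus> train_gnn.py
"""
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class GCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbors, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: sparse normalized adjacency matrix; x: node feature matrix.
        return self.linear(torch.sparse.mm(adj, x))


def main():
    # torchrun sets RANK / LOCAL_RANK / WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    device = torch.device(f"cuda:{local_rank}")

    # Toy random graph partition per rank (assumption: a real system would
    # load this rank's partition of the graph and sample mini-batches).
    num_nodes, in_dim, num_classes = 1024, 64, 8
    x = torch.randn(num_nodes, in_dim, device=device)
    labels = torch.randint(0, num_classes, (num_nodes,), device=device)
    edges = torch.randint(0, num_nodes, (2, 8192), device=device)
    vals = torch.ones(edges.shape[1], device=device) / num_nodes
    adj = torch.sparse_coo_tensor(edges, vals, (num_nodes, num_nodes)).coalesce()

    model = DDP(GCNLayer(in_dim, num_classes).to(device), device_ids=[local_rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        opt.zero_grad()
        loss = loss_fn(model(adj, x), labels)
        loss.backward()  # DDP all-reduces gradients across GPUs here
        opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Note that this sketch covers only gradient synchronization; real distributed GNN training additionally involves graph partitioning and communication of remote neighbor features, which are omitted here for brevity.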

Related research

11/10/2022 · A Comprehensive Survey on Distributed Training of Graph Neural Networks
Graph neural networks (GNNs) have been demonstrated to be a powerful alg...

11/01/2022 · Distributed Graph Neural Network Training: A Survey
Graph neural networks (GNNs) are a type of deep learning models that lea...

08/09/2022 · Characterizing and Understanding HGNNs on GPUs
Heterogeneous graph neural networks (HGNNs) deliver powerful capacity in...

05/24/2021 · Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads
A graph neural network (GNN) enables deep learning on structured graph d...

04/21/2021 · GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy
Graph neural networks (GNNs) have been demonstrated as a powerful tool f...

08/02/2023 · Tango: rethinking quantization for graph neural network training on GPUs
Graph Neural Networks (GNNs) are becoming increasingly popular due to th...

05/18/2023 · Quiver: Supporting GPUs for Low-Latency, High-Throughput GNN Serving with Workload Awareness
Systems for serving inference requests on graph neural networks (GNN) mu...
