ABC: Aggregation before Communication, a Communication Reduction Framework for Distributed Graph Neural Network Training and Effective Partition

12/11/2022
by Junwei Su, et al.

Graph Neural Networks (GNNs) are a family of neural models tailored for graph-structured data and have shown superior performance in learning representations for such data. However, training GNNs on large graphs remains challenging, and a promising direction is distributed GNN training, which partitions the input graph and distributes the workload across multiple machines. The key bottleneck of existing distributed GNN training frameworks is the cross-machine communication induced by the data dependencies in the graph and the aggregation operator of GNNs. In this paper, we study the communication complexity of distributed GNN training and propose a simple lossless communication reduction method, termed Aggregation before Communication (ABC). The ABC method exploits the permutation-invariant property of GNN layers and leads to a paradigm in which vertex-cut partitioning provably admits better communication performance than the currently popular edge-cut paradigm. In addition, we show that the new partition paradigm is particularly well suited to dynamic graphs, where it is infeasible to control edge placement due to the unknown stochasticity of the graph-changing process.
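To make the reduction concrete, here is a minimal sketch of the idea for a sum aggregator, a permutation-invariant and associative operator. The two-machine setup and all names are illustrative assumptions rather than the paper's implementation: the remote machine pre-aggregates the neighbor features it owns and ships a single partial aggregate instead of one vector per remote neighbor, and because the operator is permutation-invariant the result is identical, i.e. the reduction is lossless.

```python
# Illustrative sketch of "Aggregation before Communication" (ABC) for a
# sum-aggregation GNN layer. The two-machine layout and all names here
# are hypothetical, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM = 4

# Two "machines", each owning the feature vectors of its local vertices.
machine_features = {
    0: {0: rng.normal(size=FEAT_DIM), 1: rng.normal(size=FEAT_DIM)},
    1: {2: rng.normal(size=FEAT_DIM),
        3: rng.normal(size=FEAT_DIM),
        4: rng.normal(size=FEAT_DIM)},
}

# Target vertex 0 lives on machine 0; its neighbors span both machines.
neighbors = {0: [1], 1: [2, 3, 4]}  # machine id -> neighbor vertex ids

# Naive (edge-cut style) scheme: ship every remote neighbor feature
# individually, i.e. |remote neighbors| vectors cross the network.
naive_msgs = [machine_features[1][v] for v in neighbors[1]]  # 3 vectors sent
naive_agg = sum(naive_msgs) + sum(machine_features[0][v] for v in neighbors[0])

# ABC scheme: because sum is permutation-invariant (commutative and
# associative), machine 1 can pre-aggregate its local neighbors and send
# a single partial aggregate, shrinking traffic to 1 vector per partition.
partial_remote = sum(machine_features[1][v] for v in neighbors[1])  # 1 vector sent
abc_agg = partial_remote + sum(machine_features[0][v] for v in neighbors[0])

# The reduction is lossless: both schemes yield the same aggregate.
assert np.allclose(naive_agg, abc_agg)
print("vectors shipped across machines: naive =", len(naive_msgs), "| ABC = 1")
```

Under this scheme the communication cost per target vertex scales with the number of partitions touching its neighborhood rather than with its remote degree, which is why vertex-cut placement becomes favorable.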

Related research

11/01/2022
Distributed Graph Neural Network Training: A Survey
Graph neural networks (GNNs) are a type of deep learning models that lea...

08/29/2023
An Experimental Comparison of Partitioning Strategies for Distributed Graph Neural Network Training
Recently, graph neural networks (GNNs) have gained much attention as a g...

05/05/2021
Scalable Graph Neural Network Training: The Case for Sampling
Graph Neural Networks (GNNs) are a new and increasingly popular family o...

02/25/2023
RETEXO: Scalable Neural Network Training over Distributed Graphs
Graph neural networks offer a promising approach to supervised learning ...

05/04/2023
Communication-Efficient Graph Neural Networks with Probabilistic Neighborhood Expansion Analysis and Caching
Training and inference with graph neural networks (GNNs) on massive grap...

05/17/2023
Simplifying Distributed Neural Network Training on Massive Graphs: Randomized Partitions Improve Model Aggregation
Distributed training of GNNs enables learning on massive graphs (e.g., s...

12/16/2022
Learnable Commutative Monoids for Graph Neural Networks
Graph neural networks (GNNs) have been shown to be highly sensitive to t...
