
Scalable Graph Neural Network Training: The Case for Sampling

by Marco Serafini, et al.

Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures for learning on graphs. Training them efficiently is challenging due to the irregular nature of graph data. The problem becomes even more challenging when scaling to large graphs that exceed the capacity of a single device. Standard approaches to distributed DNN training, such as data and model parallelism, do not directly apply to GNNs. Instead, two different approaches have emerged in the literature: whole-graph and sample-based training. In this paper, we review and compare the two approaches. Scalability is challenging with both, but we make the case that research should focus on sample-based training, since it is the more promising approach. Finally, we review recent systems supporting sample-based training.
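To make the contrast concrete: whole-graph training computes each GNN layer over the entire graph at once, while sample-based training builds small mini-batches by sampling a bounded neighborhood around a set of seed nodes. The sketch below shows the core of multi-hop neighbor sampling on a toy adjacency-list graph; the function name, graph representation, and fanout parameter are illustrative assumptions, not code from any particular system discussed in the paper.

```python
import random

def sample_neighbors(adj, seeds, fanout, num_hops, seed=0):
    """Sample-based mini-batch construction (illustrative sketch).

    Starting from the seed nodes, sample at most `fanout` neighbors
    per node at each hop, so the mini-batch size is bounded regardless
    of the full graph size. `adj` maps a node to its neighbor list.
    Returns one list of (src, dst) edges per hop.
    """
    rng = random.Random(seed)
    blocks = []
    frontier = list(seeds)
    for _ in range(num_hops):
        edges = []
        next_frontier = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            # Cap the sample at the fanout to bound memory and compute.
            for u in rng.sample(nbrs, min(fanout, len(nbrs))):
                edges.append((u, v))
                next_frontier.add(u)
        blocks.append(edges)
        frontier = list(next_frontier)
    return blocks

# Toy graph: node -> neighbors.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
blocks = sample_neighbors(adj, seeds=[0], fanout=2, num_hops=2)
```

Each hop's edge list can then feed one GNN layer, with the innermost sampled nodes providing the input features. The fanout bound is what keeps per-batch cost independent of the full graph, at the price of a stochastic approximation of each node's true neighborhood aggregation.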



