Scaling Graph Neural Networks with Approximate PageRank

07/03/2020
by Aleksandar Bojchevski, et al.

Graph neural networks (GNNs) have emerged as a powerful approach to many network mining tasks. However, learning on large graphs remains a challenge: many recently proposed scalable GNN approaches rely on an expensive message-passing procedure to propagate information through the graph. We present the PPRGo model, which uses an efficient approximation of information diffusion in GNNs, yielding significant speed gains while maintaining state-of-the-art prediction performance. In addition to being faster, PPRGo is inherently scalable and can be trivially parallelized for large datasets like those found in industry settings. We demonstrate that PPRGo outperforms baselines in both distributed and single-machine training environments on a number of commonly used academic graphs. To better analyze the scalability of large-scale graph learning methods, we introduce a novel benchmark graph with 12.4 million nodes, 173 million edges, and 2.8 million node features. We show that training PPRGo from scratch and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph. We discuss the practical application of PPRGo to solve large-scale node classification problems at Google.

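To make the core mechanism concrete, below is a minimal NumPy sketch of the two ingredients the abstract alludes to: a push-style approximation of personalized PageRank (in the spirit of Andersen et al., 2006) and inference that replaces iterative message passing with a top-k PPR-weighted combination of per-node MLP logits. This is an illustration under our own assumptions, not the authors' reference implementation; the names approx_ppr and pprgo_predict, the parameter topk, and the toy weights are ours.

```python
import numpy as np

def approx_ppr(adj, source, alpha=0.25, eps=1e-4):
    """Approximate the personalized PageRank vector of `source` with the
    push algorithm (Andersen et al., 2006). Returns a sparse dict
    {node: score}; only nodes with non-negligible residual are touched,
    so the cost does not grow with the size of the whole graph."""
    p, r = {}, {source: 1.0}   # settled mass / residual mass
    queue = [source]
    while queue:
        u = queue.pop()
        deg = max(len(adj[u]), 1)
        ru = r.get(u, 0.0)
        if ru < eps * deg:                  # residual too small to push
            continue
        r[u] = 0.0
        p[u] = p.get(u, 0.0) + alpha * ru   # keep an alpha fraction here
        share = (1.0 - alpha) * ru / deg    # dangling nodes drop this mass
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            queue.append(v)
    return p

def pprgo_predict(X, W1, W2, adj, nodes, topk=32, alpha=0.25, eps=1e-4):
    """PPRGo-style inference: per-node MLP logits, aggregated with the
    top-k approximate PPR weights instead of message passing."""
    H = np.maximum(X @ W1, 0.0) @ W2        # two-layer MLP logits (ReLU)
    out = np.zeros((len(nodes), W2.shape[1]))
    for i, v in enumerate(nodes):
        pi = approx_ppr(adj, v, alpha, eps)
        top = sorted(pi.items(), key=lambda kv: -kv[1])[:topk]
        norm = sum(w for _, w in top)
        for u, w in top:                    # PPR-weighted combination
            out[i] += (w / norm) * H[u]
    return out

# Toy usage on a 4-node path graph with random features.
rng = np.random.default_rng(0)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
X = rng.normal(size=(4, 8))
W1, W2 = 0.1 * rng.normal(size=(8, 16)), 0.1 * rng.normal(size=(16, 3))
print(pprgo_predict(X, W1, W2, adj, nodes=[0, 3], topk=2))
```

The point of this structure is that each node's prediction depends only on its own top-k PPR neighbors, so training batches and inference are embarrassingly parallel across nodes, which is what makes the distributed setting described in the abstract straightforward.
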
Related research

11/08/2022 · Improving Graph Neural Networks at Scale: Combining Approximate PageRank and CoreRank
Graph Neural Networks (GNNs) have achieved great successes in many learn...

11/11/2022 · A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs
We introduce a new benchmark dataset, Placenta, for node classification ...

09/07/2023 · Filtration Surfaces for Dynamic Graph Classification
Existing approaches for classifying dynamic graphs either lift graph ker...

10/12/2021 · Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation
Consistency training is a popular method to improve deep learning models...

11/11/2022 · DistGNN-MB: Distributed Large-Scale Graph Neural Network Training on x86 via Minibatch Sampling
Training Graph Neural Networks, on graphs containing billions of vertice...

05/15/2019 · Can Graph Neural Networks Go "Online"? An Analysis of Pretraining and Inference
Large-scale graph data in real-world applications is often not static bu...

05/05/2021 · Scalable Graph Neural Network Training: The Case for Sampling
Graph Neural Networks (GNNs) are a new and increasingly popular family o...
