Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs

06/11/2021
by Jialin Dong, et al.

Graph neural networks (GNNs) are powerful tools for learning from graph data and are widely used in applications such as social network recommendation, fraud detection, and graph search. The graphs in these applications are typically large, often containing hundreds of millions of nodes. Training GNN models efficiently on such graphs remains a major challenge. Although a number of sampling-based methods have been proposed to enable mini-batch training on large graphs, these methods have not been shown to work on truly industry-scale graphs, which require GPUs or mixed CPU-GPU training. State-of-the-art sampling-based methods are usually not optimized for these real-world hardware setups, in which data movement between CPUs and GPUs is a bottleneck. To address this issue, we propose Global Neighbor Sampling, which targets training GNNs on giant graphs, specifically under mixed CPU-GPU training. The algorithm periodically samples a global cache of nodes shared by all mini-batches and stores their features on the GPUs. This global cache enables in-GPU importance sampling of mini-batches, which drastically reduces the number of nodes in a mini-batch, especially in the input layer; this cuts both the data copied between CPU and GPU and the mini-batch computation, without compromising the training convergence rate or model accuracy. We provide a highly efficient implementation of this method and show that it outperforms an efficient node-wise neighbor sampling baseline by a factor of 2X-4X on giant graphs. It also outperforms an efficient implementation of LADIES with small layers by a factor of 2X-14X while achieving much higher accuracy than LADIES. We further analyze the proposed algorithm theoretically and show that, with a cache of appropriate size, it enjoys a convergence rate comparable to that of the underlying node-wise sampling method.
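The core mechanism described in the abstract is the combination of a periodically refreshed, GPU-resident node cache with neighbor sampling that prefers cached nodes. The sketch below illustrates that idea on a toy graph in plain Python/NumPy; it is not the authors' implementation, and the names (refresh_cache, sample_minibatch), the degree-proportional cache distribution, and the fanout handling are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph stored as an adjacency list: adj[v] is an array of v's neighbors.
num_nodes = 1000
adj = [rng.choice(num_nodes, size=rng.integers(5, 50), replace=False)
       for _ in range(num_nodes)]
degrees = np.array([len(nbrs) for nbrs in adj], dtype=float)

def refresh_cache(cache_size):
    """Periodically draw a global set of cached nodes, biased toward high degree.

    In a real mixed CPU-GPU setup, the features of these nodes would be copied
    to GPU memory once per refresh (the degree-proportional choice here is a
    hypothetical policy, not the paper's exact sampling distribution).
    """
    probs = degrees / degrees.sum()
    return set(rng.choice(num_nodes, size=cache_size, replace=False, p=probs).tolist())

def sample_minibatch(seed_nodes, cache, fanout):
    """Sample input nodes for one mini-batch, preferring cached neighbors.

    Cached neighbors are kept essentially for free; only a few uncached
    neighbors are drawn, which shrinks the input layer and therefore the
    CPU-to-GPU feature copy.
    """
    input_nodes = set()
    for v in seed_nodes:
        cached = [u for u in adj[v] if u in cache]
        uncached = [u for u in adj[v] if u not in cache]
        k = min(max(fanout - len(cached), 0), len(uncached))
        input_nodes.update(cached)
        if k > 0:
            input_nodes.update(rng.choice(uncached, size=k, replace=False).tolist())
    return input_nodes

cache = refresh_cache(cache_size=100)
for step in range(200):
    if step % 50 == 0:                      # refresh the global cache periodically
        cache = refresh_cache(cache_size=100)
    seeds = rng.choice(num_nodes, size=32, replace=False)
    input_nodes = sample_minibatch(seeds, cache, fanout=10)
    # ...gather features: cached nodes are read from GPU memory; only the
    #    uncached input nodes need to be copied from CPU...
```

Under these assumptions, each cache refresh pays one bulk CPU-to-GPU transfer, and every subsequent mini-batch copies features only for its few uncached input nodes, which is what makes the approach attractive for mixed CPU-GPU training.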


