BiFeat: Supercharge GNN Training via Graph Feature Quantization

07/29/2022
by Yuxin Ma, et al.

Graph Neural Networks (GNNs) are a promising approach for applications with non-Euclidean data. However, training GNNs on large-scale graphs with hundreds of millions of nodes is both resource- and time-consuming. Unlike DNNs, GNNs usually have larger memory footprints, so GPU memory capacity and PCIe bandwidth are the main resource bottlenecks in GNN training. To address this problem, we present BiFeat: a graph feature quantization methodology that accelerates GNN training by significantly reducing the memory footprint and PCIe bandwidth requirement, so that GNNs can take full advantage of GPU computing capabilities. Our key insight is that, unlike DNNs, GNNs are less prone to the information loss of input features caused by quantization. We identify the main factors affecting accuracy in graph feature quantization and theoretically prove that BiFeat training converges to a network whose loss is within ϵ of the optimal loss of the uncompressed network. We perform an extensive evaluation of BiFeat using several popular GNN models and datasets, including GraphSAGE on MAG240M, the largest public graph dataset. The results demonstrate that BiFeat achieves a compression ratio of more than 30× and improves GNN training speed by 200%-320%. In particular, BiFeat achieves a record by training GraphSAGE on MAG240M within one hour using only four GPUs.
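The abstract gives no implementation details, but the core idea of lossy input-feature quantization is easy to illustrate. The PyTorch sketch below is a minimal, hypothetical example and not BiFeat's actual algorithm: the function names (quantize_features, dequantize_features) and the per-column scalar scheme are our assumptions. The point it shows is the one the abstract relies on: node features can be stored and moved over PCIe as low-bit integer codes and dequantized just before the GNN layers consume them.

```python
import torch

def quantize_features(feats: torch.Tensor, num_bits: int = 1):
    # Per-column scalar quantization: map each feature column from
    # [min, max] onto 2**num_bits uniform levels.
    levels = 2 ** num_bits - 1
    fmin = feats.min(dim=0, keepdim=True).values
    frange = feats.max(dim=0, keepdim=True).values - fmin
    scale = frange.clamp(min=1e-8) / levels
    codes = ((feats - fmin) / scale).round().to(torch.uint8)
    # codes (plus the tiny fmin/scale tensors) are what would be kept
    # in memory and shipped over PCIe instead of float32 features.
    return codes, fmin, scale

def dequantize_features(codes: torch.Tensor,
                        fmin: torch.Tensor,
                        scale: torch.Tensor) -> torch.Tensor:
    # Reconstruct approximate float features (e.g., on the GPU)
    # right before they are fed to the GNN.
    return codes.float() * scale + fmin

feats = torch.randn(100_000, 128)              # node feature matrix (float32)
codes, fmin, scale = quantize_features(feats)  # 1-bit codes, held in uint8 here
approx = dequantize_features(codes, fmin, scale)
err = (feats - approx).abs().mean()            # information loss the GNN must absorb
```

With true bit-packing, 1-bit codes shrink a float32 feature matrix by 32×, consistent with the >30× compression ratio reported above; the uint8 container in this sketch only realizes 4× and is kept for clarity.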


Related research

01/20/2021 · PyTorch-Direct: Enabling GPU Centric Data Access for Very Large Graph Neural Network Training with Irregular Accesses

07/09/2020 · SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization

12/16/2021 · BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing

05/10/2021 · Accelerating Large Scale Real-Time GNN Inference using Channel Pruning

08/02/2023 · Tango: rethinking quantization for graph neural network training on GPUs

03/02/2023 · Boosting Distributed Full-graph GNN Training with Asynchronous One-bit Communication

04/20/2023 · Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One
