Tango: rethinking quantization for graph neural network training on GPUs

08/02/2023
by Shiyang Chen, et al.

Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance on critical graph-related tasks. While quantization is widely used to accelerate GNN computation, quantized training faces unprecedented challenges: current quantized GNN training systems are often slower than their full-precision counterparts because (i) preserving accuracy under quantization incurs excessive overhead, and (ii) the optimization potential exposed by quantization is not adequately leveraged. This paper introduces Tango, which rethinks the challenges and opportunities of quantization for GNN training on GPUs and makes three contributions. First, we introduce efficient rules that maintain accuracy during quantized GNN training. Second, we design and implement quantization-aware primitives and inter-primitive optimizations that speed up GNN training. Third, we integrate Tango with the popular Deep Graph Library (DGL) and demonstrate its superior performance over state-of-the-art approaches across various GNN models and datasets.
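
The abstract does not spell out Tango's accuracy-preserving rules, but the general recipe in this line of work is to quantize tensors on the fly around compute-heavy primitives and dequantize the results afterwards. Below is a minimal, hedged sketch of that idea in PyTorch: symmetric per-tensor int8 quantization with stochastic rounding, a standard unbiased-rounding trick in quantized training. The helper names (`quantize_int8`, `dequantize`) and all parameter choices are illustrative assumptions, not Tango's actual implementation.

```python
import torch

def quantize_int8(x: torch.Tensor, stochastic: bool = True):
    """Symmetric per-tensor int8 quantization (illustrative sketch).

    With stochastic rounding, each value rounds up with probability equal
    to its fractional part, so the quantizer is unbiased in expectation.
    """
    # Per-tensor scale mapping the max magnitude to the int8 range.
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    y = x / scale
    if stochastic:
        # floor(y + U[0,1)) rounds n+f up to n+1 with probability f.
        y = torch.floor(y + torch.rand_like(y))
    else:
        y = torch.round(y)
    q = y.clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Map int8 codes back to floating point using the stored scale.
    return q.to(torch.float32) * scale

# Example: fake-quantize node features before an aggregation primitive.
feat = torch.randn(4, 8)
q, s = quantize_int8(feat)
feat_hat = dequantize(q, s)
# Per-element error is bounded by one quantization step (the scale).
print((feat - feat_hat).abs().max())
```

Stochastic rounding appears here because deterministic rounding introduces a small systematic bias that can accumulate over many training iterations; making the rounding unbiased in expectation is one common way quantized training systems keep accuracy close to full precision.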

Related research

08/29/2023
Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation
Graph Neural Network (GNN) training and inference involve significant ch...

07/29/2022
BiFeat: Supercharge GNN Training via Graph Feature Quantization
Graph Neural Networks (GNNs) is a promising approach for applications wi...

06/09/2022
Towards Understanding Graph Neural Networks: An Algorithm Unrolling Perspective
The graph neural network (GNN) has demonstrated its superior performance...

04/18/2022
Characterizing and Understanding Distributed GNN Training on GPUs
Graph neural network (GNN) has been demonstrated to be a powerful model ...

07/09/2020
SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization
With the increasing popularity of graph-based learning, Graph Neural Net...

08/19/2022
Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching
Recently, Graph Neural Networks (GNNs) have been receiving a spotlight a...

12/21/2019
Towards Efficient Training for Neural Network Quantization
Quantization reduces computation costs of neural networks but suffers fr...
