Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation

08/29/2023
by Shuang Wang, et al.

Graph Neural Network (GNN) training and inference face significant scalability challenges with respect to both model size and the number of layers, degrading the efficiency and accuracy of large, deep GNNs. We present an end-to-end solution that addresses these challenges for efficient GNNs in resource-constrained environments while avoiding the oversmoothing problem in deep GNNs. We introduce a quantization-based approach for all stages of the GNN, from message passing in training to node classification, compressing the model and enabling efficient processing. The proposed GNN quantizer learns quantization ranges and reduces the model size while maintaining comparable accuracy even under low-bit quantization. To scale with the number of layers, we devise a message propagation mechanism during training that controls layer-wise changes in the similarity between neighboring nodes. This objective is incorporated into a constrained Lagrangian function, and a differential multiplier method is used to iteratively find optimal embeddings. This mitigates oversmoothing and keeps the quantization error within a bound. Significant improvements are demonstrated over state-of-the-art quantization methods and deep GNN approaches for both full-precision and quantized models. The proposed quantizer achieves notable accuracy in INT2 configurations across all stages of the GNN, whereas existing quantization approaches fail to reach satisfactory accuracy. Finally, inference with INT2 and INT4 representations achieves speedups of 5.11× and 4.70×, respectively, over the full-precision counterpart.
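
To make the abstract's two core ideas concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a fake quantizer with a learnable clipping range (one common way, in the spirit of PACT/LSQ, to realize "learned quantization ranges"), and a propagation loop in which a Lagrange multiplier, updated by gradient ascent as in the basic differential multiplier method, keeps the layer-wise change in neighbor similarity near a target to counteract oversmoothing. All names, hyperparameters, and the dense-adjacency formulation are illustrative assumptions.

import torch
import torch.nn as nn


class LearnedRangeQuantizer(nn.Module):
    """Fake-quantizes a tensor to a low-bit symmetric signed grid with a
    learnable clipping range `alpha` (straight-through estimator for rounding)."""

    def __init__(self, bits: int = 2, init_range: float = 1.0):
        super().__init__()
        self.qmax = 2 ** (bits - 1) - 1          # e.g. 1 for INT2, 7 for INT4
        self.alpha = nn.Parameter(torch.tensor(float(init_range)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = self.alpha.abs() + 1e-8
        step = alpha / self.qmax
        x_clipped = torch.clamp(x, -alpha, alpha)
        x_quant = torch.round(x_clipped / step) * step
        # forward pass uses x_quant; gradients flow through x_clipped only
        return x_clipped + (x_quant - x_clipped).detach()


def neighbor_smoothness(h: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between embeddings of adjacent nodes (lower = smoother)."""
    src, dst = edge_index
    return ((h[src] - h[dst]) ** 2).sum(dim=1).mean()


def propagate_with_constraint(x, adj_norm, edge_index, quantizer,
                              num_layers=8, target=0.0,
                              lr_h=0.05, lr_lam=0.1, inner_steps=5):
    """Stacks quantized propagation layers; after each layer the embeddings are
    refined with a few steps of a differential multiplier method so that the
    layer-wise change in neighbor smoothness stays close to `target`
    (gradient descent on the embeddings, gradient ascent on the multiplier)."""
    h, lam = x, 0.0
    prev = neighbor_smoothness(h, edge_index).detach()
    for _ in range(num_layers):
        # low-bit message passing (dense adjacency for simplicity)
        h = quantizer(adj_norm @ h).detach().requires_grad_(True)
        for _ in range(inner_steps):
            constraint = neighbor_smoothness(h, edge_index) - prev - target
            (grad_h,) = torch.autograd.grad(lam * constraint, h)
            with torch.no_grad():
                h = (h - lr_h * grad_h).requires_grad_(True)  # descend on embeddings
            lam = lam + lr_lam * float(constraint)            # ascend on the multiplier
        prev = neighbor_smoothness(h, edge_index).detach()
    return h.detach(), lam


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 6, 4
    edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
    adj = torch.zeros(n, n)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = adj + adj.t() + torch.eye(n)                 # symmetrize, add self-loops
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    adj_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    h, lam = propagate_with_constraint(torch.randn(n, d), adj_norm, edge_index,
                                       LearnedRangeQuantizer(bits=2))
    print(h.shape, lam)

In this toy setup the multiplier grows only while the smoothness change exceeds the target, so the constraint is enforced softly rather than by hard projection; the paper's actual constraint and update rules may differ.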

Related research

- Tango: Rethinking Quantization for Graph Neural Network Training on GPUs (08/02/2023)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization (10/27/2021)
- A^2Q: Aggregation-Aware Quantization for Graph Neural Networks (02/01/2023)
- Degree-Quant: Quantization-Aware Training for Graph Neural Networks (08/11/2020)
- SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization (07/09/2020)
- BRIGHT – Graph Neural Networks in Real-Time Fraud Detection (05/25/2022)
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training (06/02/2023)
