VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization

10/27/2021
by Mucong Ding, et al.

Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution, which can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of the messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context in each layer, show unstable performance across tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem by combining the quantized representations with a low-rank version of the graph convolution matrix. We show, both theoretically and experimentally, that such a compact low-rank version of the gigantic convolution matrix is sufficient. In conjunction with VQ, we design a novel approximate message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various types of GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.
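To make the mechanism concrete, below is a minimal PyTorch sketch of the forward approximation the abstract describes, not the authors' implementation: each out-of-batch node is represented by its nearest codeword from a small learned codebook, so the mini-batch rows of the convolution matrix collapse into a compact low-rank matrix of shape (batch, K). The helper names (vq_assign, approx_message_passing) and the dense one-hot assignment matrix are illustrative simplifications; in practice the convolution matrix is sparse and the assignments are maintained incrementally.

```python
import torch
import torch.nn.functional as F

def vq_assign(h, codebook):
    """Assign each node representation to its nearest codeword (L2).

    h:        (n, d) node representations
    codebook: (K, d) learned reference vectors
    returns:  (n,)  index of the nearest codeword for each node
    """
    # ||h - v||^2 = ||h||^2 - 2 h.v + ||v||^2, minimized over codewords.
    dists = (h.pow(2).sum(1, keepdim=True)
             - 2.0 * h @ codebook.t()
             + codebook.pow(2).sum(1))
    return dists.argmin(dim=1)

def approx_message_passing(C_rows, h_batch, codebook, assign, in_batch):
    """One VQ-approximated graph-convolution aggregation for a mini-batch.

    C_rows:   (b, N) rows of the convolution matrix for in-batch nodes
              (dense here for clarity; sparse in any real implementation)
    h_batch:  (b, d) exact representations of the in-batch nodes, ordered
              as induced by the in_batch mask
    codebook: (K, d) quantized reference vectors
    assign:   (N,)  long tensor, codeword index of every node in the graph
    in_batch: (N,)  boolean mask selecting the mini-batch nodes
    """
    # Messages between in-batch nodes are computed exactly.
    exact = C_rows[:, in_batch] @ h_batch                  # (b, d)

    # Messages from out-of-batch nodes are routed through the codebook:
    # collapsing the out-of-batch columns of C onto the K codewords
    # yields a compact low-rank (b, K) convolution matrix.
    K = codebook.size(0)
    R = F.one_hot(assign, K).to(C_rows.dtype)              # (N, K)
    R[in_batch] = 0.0            # in-batch nodes already handled exactly
    C_low_rank = C_rows @ R                                # (b, K)
    approx = C_low_rank @ codebook                         # (b, d)
    return exact + approx
```

Note that this covers only the forward pass: VQ-GNN also updates the codebook and assignments as mini-batches are processed, and uses a nontrivial back-propagation rule to approximate gradients through the quantized messages, both of which the sketch omits.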


Related research

08/29/2023 · Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation
Graph Neural Network (GNN) training and inference involve significant ch...

02/02/2023 · LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence
The message passing-based graph neural networks (GNNs) have achieved gre...

02/13/2020 · ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description)
We describe an implementation of gradient boosting and neural guidance o...

08/04/2023 · VQGraph: Graph Vector-Quantization for Bridging GNNs and MLPs
Graph Neural Networks (GNNs) conduct message passing which aggregates lo...

11/10/2021 · Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction
Presently with technology node scaling, an accurate prediction model at ...

10/24/2022 · (LA)yer-neigh(BOR) Sampling: Defusing Neighborhood Explosion in GNNs
Graph Neural Networks have recently received a significant attention, ho...

12/30/2021 · Measuring and Sampling: A Metric-guided Subgraph Learning Framework for Graph Neural Network
Graph neural network (GNN) has shown convincing performance in learning ...
