
VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization

10/27/2021
by Mucong Ding, et al.

Most state-of-the-art Graph Neural Networks (GNNs) can be defined as a form of graph convolution that can be realized by message passing between direct neighbors or beyond. To scale such GNNs to large graphs, various neighbor-, layer-, or subgraph-sampling techniques have been proposed to alleviate the "neighbor explosion" problem by considering only a small subset of the messages passed to the nodes in a mini-batch. However, sampling-based methods are difficult to apply to GNNs that utilize many-hops-away or global context in each layer, show unstable performance across tasks and datasets, and do not speed up model inference. We propose a principled and fundamentally different approach, VQ-GNN, a universal framework to scale up any convolution-based GNN using Vector Quantization (VQ) without compromising performance. In contrast to sampling-based techniques, our approach can effectively preserve all the messages passed to a mini-batch of nodes by learning and updating a small number of quantized reference vectors of global node representations, using VQ within each GNN layer. Our framework avoids the "neighbor explosion" problem by combining quantized representations with a low-rank version of the graph convolution matrix. We show, both theoretically and experimentally, that such a compact low-rank version of the gigantic convolution matrix is sufficient. Together with VQ, we design a novel approximated message passing algorithm and a nontrivial back-propagation rule for our framework. Experiments on various GNN backbones demonstrate the scalability and competitive performance of our framework on large-graph node classification and link prediction benchmarks.
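To make the core mechanism concrete, the PyTorch sketch below illustrates one plausible reading of a VQ-GNN-style layer: messages among nodes inside the mini-batch are computed exactly, while messages from out-of-batch neighbors are approximated by a small codebook of quantized reference vectors, which plays the role of the low-rank convolution surrogate. This is not the authors' released code; all names (VQGraphLayer, num_codewords, adj_to_codes) are hypothetical, the exponential-moving-average codebook update is one common VQ choice, and the paper's nontrivial back-propagation rule through the quantized messages is omitted here.

import torch
import torch.nn as nn

class VQGraphLayer(nn.Module):
    # Hypothetical sketch of one GNN layer with vector quantization:
    # in-batch messages are exact; messages from out-of-batch neighbors
    # are approximated via a codebook of quantized reference vectors.
    def __init__(self, in_dim, out_dim, num_codewords=256, decay=0.99):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Codebook of quantized reference vectors for global node states.
        self.register_buffer("codebook", torch.randn(num_codewords, in_dim))
        self.decay = decay

    def assign(self, x):
        # Nearest-codeword index for each node representation.
        return torch.cdist(x, self.codebook).argmin(dim=1)

    @torch.no_grad()
    def update_codebook(self, x, codes):
        # Exponential-moving-average update: move each used codeword
        # toward the mean of the node representations assigned to it.
        for k in codes.unique():
            mean_k = x[codes == k].mean(dim=0)
            self.codebook[k].mul_(self.decay).add_((1 - self.decay) * mean_k)

    def forward(self, x_batch, adj_in, adj_to_codes):
        # x_batch:      (n, d)  representations of the mini-batch nodes
        # adj_in:       (n, n)  convolution weights among in-batch nodes
        # adj_to_codes: (n, K)  aggregated weights from in-batch nodes to
        #                       each codeword, standing in for all edges
        #                       that leave the mini-batch
        codes = self.assign(x_batch)
        self.update_codebook(x_batch, codes)
        msg = adj_in @ x_batch + adj_to_codes @ self.codebook
        return torch.relu(self.lin(msg))

In this sketch the (n, K) matrix adj_to_codes is what keeps the per-batch cost independent of the full graph size, which is how a quantized, low-rank convolution avoids the "neighbor explosion" that sampling methods only mitigate.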


Related research

02/02/2023
LMC: Fast Training of GNNs via Subgraph Sampling with Provable Convergence
The message passing-based graph neural networks (GNNs) have achieved gre...

10/02/2022
Gradient Gating for Deep Multi-Rate Learning on Graphs
We present Gradient Gating (G^2), a novel framework for improving the pe...

02/13/2020
ENIGMA Anonymous: Symbol-Independent Inference Guiding Machine (system description)
We describe an implementation of gradient boosting and neural guidance o...

12/02/2020
Deep Graph Neural Networks with Shallow Subgraph Samplers
While Graph Neural Networks (GNNs) are powerful models for learning repr...

11/28/2022
DGI: Easy and Efficient Inference for GNNs
While many systems have been developed to train Graph Neural Networks (G...

02/12/2021
Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks
Most graph neural networks (GNN) perform poorly in graphs where neighbor...

11/10/2021
Generalizable Cross-Graph Embedding for GNN-based Congestion Prediction
Presently with technology node scaling, an accurate prediction model at ...