GNNAutoScale: Scalable and Expressive Graph Neural Networks via Historical Embeddings

06/10/2021
by   Matthias Fey, et al.

We present GNNAutoScale (GAS), a framework for scaling arbitrary message-passing GNNs to large graphs. GAS prunes entire sub-trees of the computation graph by utilizing historical embeddings from prior training iterations, leading to constant GPU memory consumption with respect to input node size without dropping any data. While existing solutions weaken the expressive power of message passing due to sub-sampling of edges or non-trainable propagations, our approach provably maintains the expressive power of the original GNN. We achieve this by providing approximation error bounds for historical embeddings and show how to tighten them in practice. Empirically, we show that the practical realization of our framework, PyGAS, an easy-to-use extension for PyTorch Geometric, is both fast and memory-efficient, learns expressive node representations, closely matches the performance of its non-scaling counterparts, and reaches state-of-the-art performance on large-scale graphs.
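The core mechanism of the abstract, pruning out-of-batch sub-trees by substituting historical embeddings from earlier iterations, can be sketched as follows. This is a minimal NumPy illustration, not the PyGAS API: the `History` buffer, `gas_layer` function, and mean aggregation are simplified stand-ins (the real implementation keeps histories in CPU memory per layer and transfers them asynchronously).

```python
import numpy as np

class History:
    """Buffer of historical node embeddings from prior training
    iterations (a simplified sketch of the GAS idea)."""
    def __init__(self, num_nodes, dim):
        self.emb = np.zeros((num_nodes, dim))

    def pull(self, idx):
        return self.emb[idx]

    def push(self, idx, values):
        self.emb[idx] = values

def gas_layer(x_batch, batch_nodes, edges, weight, history):
    """One mean-aggregation message-passing layer over a mini-batch.
    In-batch neighbors contribute fresh embeddings; out-of-batch
    neighbors are approximated with historical embeddings, so the
    GPU never materializes the full neighborhood expansion."""
    pos = {n: i for i, n in enumerate(batch_nodes)}
    out = np.zeros((len(batch_nodes), weight.shape[1]))
    for i, v in enumerate(batch_nodes):
        msgs = []
        for u, w in edges:          # edges are (source, target) pairs
            if w != v:
                continue
            if u in pos:            # fresh, in-batch embedding
                msgs.append(x_batch[pos[u]])
            else:                   # prune sub-tree: use history instead
                msgs.append(history.pull(u))
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(x_batch.shape[1])
        out[i] = agg @ weight
    history.push(batch_nodes, out)  # refresh history for later batches
    return out
```

Because every out-of-batch neighbor is served from the history buffer rather than recursively expanded, memory per batch depends only on the batch size and feature dimension, which is the constant-memory property claimed above; the quality of the approximation is governed by how stale the historical embeddings are.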

Related research

- Improving the Expressive Power of Graph Neural Network with Tinhofer Algorithm (04/05/2021)
- Neural Trees for Learning on Graphs (05/15/2021)
- A unifying point of view on expressive power of GNNs (06/16/2021)
- Beyond 1-WL with Local Ego-Network Encodings (11/27/2022)
- Provably expressive temporal graph networks (09/29/2022)
- On the expressive power of message-passing neural networks as global feature map transformers (03/17/2022)
- Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs (11/11/2021)
