Very Deep Graph Neural Networks Via Noise Regularisation

by Jonathan Godwin, et al.

Graph Neural Networks (GNNs) perform learned message passing over an input graph, but conventional wisdom says that performing more than a handful of steps makes training difficult and does not yield improved performance. Here we show the contrary. We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results on two challenging molecular property prediction benchmarks, Open Catalyst 2020 IS2RE and QM9. Our approach depends crucially on a novel but simple regularisation method, which we call “Noisy Nodes”: we corrupt the input graph with noise and, when the task is graph property prediction, add an auxiliary node autoencoder loss. Our results show that this regularisation method allows the model to improve monotonically in performance as the number of message passing steps increases. Our work opens new opportunities for reaping the benefits of deep neural networks in the space of graph and other structured prediction problems.
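The idea described above can be sketched in a few lines: corrupt the node inputs with noise, run message passing on the corrupted graph, and combine the primary graph-level loss with an auxiliary per-node denoising loss. The sketch below is a minimal toy illustration under stated assumptions, not the paper's actual architecture; `gnn_forward`, `noisy_nodes_loss`, the mean-aggregation GNN, and the loss weighting are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_forward(node_feats, adj, num_steps=4):
    """Toy message passing: repeatedly average each node with its
    neighbours. A stand-in for the deep GNN, not the paper's model."""
    h = node_feats
    deg = adj.sum(axis=1, keepdims=True) + 1.0  # +1 for the self-loop
    for _ in range(num_steps):
        h = (h + adj @ h) / deg  # mean over self + neighbours
    return h

def noisy_nodes_loss(node_feats, adj, graph_target, noise_scale=0.1):
    """Noisy Nodes sketch: corrupt the inputs, predict the graph-level
    target from the corrupted graph, and add an auxiliary node-level
    loss that reconstructs the clean node features."""
    noise = rng.normal(0.0, noise_scale, size=node_feats.shape)
    corrupted = node_feats + noise
    h = gnn_forward(corrupted, adj)
    # Primary loss: mean-pooled graph readout vs. the graph target.
    graph_pred = h.mean(axis=0)
    primary = np.mean((graph_pred - graph_target) ** 2)
    # Auxiliary autoencoder loss: recover the uncorrupted features.
    aux = np.mean((h - node_feats) ** 2)
    return primary + aux  # equal weighting here is an assumption
```

The auxiliary denoising term gives every node a local training signal at every depth, which is one intuition for why it helps very deep message passing.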


