Trainability for Universal GNNs Through Surgical Randomness

12/08/2021
by Billy Joe Franks, et al.

Message passing neural networks (MPNNs) have provable limitations, which can be overcome by universal networks. However, universal networks are typically impractical. The only exception is random node initialization (RNI), a data augmentation method that results in provably universal networks. Unfortunately, RNI suffers from severe drawbacks such as slow convergence and high sensitivity to changes in hyperparameters. We transfer powerful techniques from the practical world of graph isomorphism testing to MPNNs, resolving these drawbacks. This culminates in individualization-refinement node initialization (IRNI). We replace the indiscriminate and haphazard randomness used in RNI with a surgical incision of only a few random bits at well-selected nodes. Our novel non-intrusive data augmentation scheme maintains the networks' universality while resolving the trainability issues. We formally prove the claimed universality and corroborate experimentally – on synthetic benchmark sets previously designed explicitly for that purpose – that IRNI overcomes the limitations of MPNNs. We also verify the practical efficacy of our approach on the standard benchmark data sets PROTEINS and NCI1.
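The contrast between the two augmentation schemes can be illustrated with a minimal sketch. This is not the authors' implementation: the function names are hypothetical, the node-selection step of individualization-refinement is not implemented here (the `selected` set is assumed to be given by an external IR procedure), and plain NumPy arrays stand in for GNN node-feature tensors. RNI appends random values to every node's features, while an IRNI-style scheme appends a few random bits only at the selected nodes:

```python
import numpy as np

def rni_augment(x, rng, k=4):
    """RNI-style: append k random features to EVERY node (indiscriminate)."""
    n = x.shape[0]
    return np.concatenate([x, rng.random((n, k))], axis=1)

def irni_augment(x, selected, rng, bits=1):
    """IRNI-style sketch: append `bits` random bits that are nonzero only
    at the well-selected nodes; all other nodes get zeros.
    `selected` is assumed to come from an individualization-refinement step
    (not implemented here)."""
    n = x.shape[0]
    extra = np.zeros((n, bits))
    idx = list(selected)
    extra[idx] = rng.integers(0, 2, size=(len(idx), bits))
    return np.concatenate([x, extra], axis=1)

# Toy usage: 5 nodes with 3 base features each.
rng = np.random.default_rng(0)
x = np.ones((5, 3))
x_rni = rni_augment(x, rng)          # shape (5, 7): every node randomized
x_irni = irni_augment(x, {1, 3}, rng)  # shape (5, 4): only nodes 1 and 3 randomized
```

The sketch makes the "surgical" aspect concrete: under IRNI the augmented channels are identically zero outside the selected nodes, so only a few random bits enter the network.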


