
A Survey on The Expressive Power of Graph Neural Networks
Graph neural networks (GNNs) are effective machine learning models for v...

Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results
While message passing based Graph Neural Networks (GNNs) have become inc...

Graph Neural Networks with Local Graph Parameters
Various recent proposals increase the distinguishing power of Graph Neur...

Distance Encoding: Design Provably More Powerful Graph Neural Networks for Structural Representation Learning
Learning structural representations of node sets from graph-structured d...

DegreeQuant: Quantization-Aware Training for Graph Neural Networks
Graph neural networks (GNNs) have demonstrated strong performance on a w...

Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels
While graph kernels (GKs) are easy to train and enjoy provable theoretic...

Graph Information Vanishing Phenomenon in Implicit Graph Neural Networks
One of the key problems of GNNs is how to describe the importance of nei...
The Surprising Power of Graph Neural Networks with Random Node Initialization
Graph neural networks (GNNs) are effective models for representation learning on graph-structured data. However, standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman (1-WL) graph isomorphism heuristic. This limitation motivated a large body of work, including higher-order GNNs, which are provably more powerful models. To date, higher-order invariant and equivariant networks are the only models with known universality results, but these results are practically hindered by prohibitive computational complexity. Thus, despite their limitations, standard GNNs are commonly used, due to their strong practical performance. In practice, GNNs have shown promising performance when enhanced with random node initialization (RNI), where the idea is to train and run the models with randomized initial node features. In this paper, we analyze the expressive power of GNNs with RNI, and pose the following question: are GNNs with RNI more expressive than GNNs? We prove that this is indeed the case, by showing that GNNs with RNI are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. We then empirically analyze the effect of RNI on GNNs, based on carefully constructed datasets. Our empirical findings support the superior performance of GNNs with RNI over standard GNNs. In fact, we demonstrate that the performance of GNNs with RNI is often comparable with or better than that of higher-order GNNs, while keeping the much lower memory requirements of standard GNNs. However, this improvement typically comes at the cost of slower model convergence. Somewhat surprisingly, we found that the convergence rate and the accuracy of the models can be improved by using only a partial random initialization regime.
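The core idea of RNI is simple to sketch in code: before the GNN's message-passing layers run, each node's feature vector is extended with freshly drawn random values. The snippet below is a minimal illustration, not the paper's implementation; the function name `add_rni_features` and the `partial` parameter (zeroing the random dimensions for a fraction of nodes, loosely modeling the "partial random initialization regime" mentioned above) are assumptions for exposition.

```python
import random

def add_rni_features(x, num_random=4, partial=1.0, seed=None):
    """Append random features to each node's feature vector (RNI sketch).

    x          -- list of per-node feature lists (num_nodes rows).
    num_random -- number of random dimensions appended per node.
    partial    -- fraction of nodes that receive random values; the rest
                  get zeros. This is an illustrative reading of the
                  "partial random initialization" idea, not the paper's
                  exact scheme.
    """
    rng = random.Random(seed)
    augmented = []
    for features in x:
        if rng.random() < partial:
            # Fresh random values are drawn at every forward pass,
            # during both training and inference.
            extra = [rng.gauss(0.0, 1.0) for _ in range(num_random)]
        else:
            extra = [0.0] * num_random
        augmented.append(list(features) + extra)
    return augmented

# Example: 3 nodes with 2 original features each, 2 random dims appended.
x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
x_aug = add_rni_features(x, num_random=2, partial=1.0, seed=0)
print(len(x_aug), len(x_aug[0]))  # 3 4
```

Because the random features are resampled on every run, the resulting model is only invariant in expectation; this randomness is precisely what lets otherwise 1-WL-bounded GNNs distinguish graphs that deterministic GNNs cannot.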