How does over-squashing affect the power of GNNs?

06/06/2023
by Francesco Di Giovanni, et al.

Graph Neural Networks (GNNs) are the state-of-the-art model for machine learning on graph-structured data. The most popular class of GNNs operates by exchanging information between adjacent nodes; these models are known as Message Passing Neural Networks (MPNNs). Given their widespread use, understanding the expressive power of MPNNs is a key question. However, existing results typically consider settings with uninformative node features. In this paper, we provide a rigorous analysis to determine which function classes of node features can be learned by an MPNN of a given capacity. We do so by measuring the level of pairwise interaction between nodes that MPNNs allow for. This measure provides a novel quantitative characterization of the so-called over-squashing effect, which is observed to occur when a large volume of messages is aggregated into fixed-size vectors. Using our measure, we prove that, to guarantee sufficient communication between pairs of nodes, the capacity of the MPNN must be large enough, depending on properties of the input graph structure, such as commute times. For many relevant scenarios, our analysis results in impossibility statements in practice, showing that over-squashing hinders the expressive power of MPNNs. We validate our theoretical findings through extensive controlled experiments and ablation studies.
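To make the mechanism behind over-squashing concrete, here is a minimal sketch of a single message-passing layer in NumPy. Every node sums its neighbours' feature vectors into one fixed-size aggregate before mixing it with its own state; the function name `mpnn_layer` and the weight matrices `w_self` and `w_neigh` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def mpnn_layer(adj, features, w_self, w_neigh):
    """One message-passing step (illustrative sketch).

    adj      : (n, n) binary adjacency matrix of the graph
    features : (n, d) node feature matrix
    w_self, w_neigh : (d, d) weight matrices (hypothetical names)
    """
    # Each row of `messages` sums the feature vectors of a node's
    # neighbours into a single fixed-size vector -- the point where
    # information from many nodes gets "squashed".
    messages = adj @ features
    # Combine a node's own state with the aggregated messages.
    return np.tanh(features @ w_self + messages @ w_neigh)

# A node's receptive field grows with every layer, but the aggregate
# stays d-dimensional, so distant pairwise interactions get compressed.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)  # a 3-node path graph
x = np.eye(3)                              # one-hot features, d = 3
w = 0.5 * np.eye(3)
h = mpnn_layer(adj, x, w, w)
```

After one layer the endpoint nodes of the path still carry no information about each other; only repeated layers (and hence deeper squashing) connect them, which is the regime the paper's commute-time bounds quantify.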


