Investigating the Interplay between Features and Structures in Graph Learning

08/18/2023
by   Daniele Castellana, et al.

The dichotomy between homophily and heterophily has long inspired research toward a better understanding of Deep Graph Networks' inductive bias. In particular, homophily was believed to strongly correlate with better node classification performance of message-passing methods. More recently, however, researchers have pointed out that this dichotomy is too simplistic, since node classification tasks can be constructed in which graphs are completely heterophilic yet performance remains high. Most of these works have also proposed new quantitative metrics to understand when a graph structure is useful, metrics which implicitly or explicitly assume a correlation between node features and target labels. Our work empirically investigates what happens when this strong assumption does not hold, by formalising two generative processes for node classification tasks that allow us to build and study ad-hoc problems. To quantitatively measure the influence of the node features on the target labels, we also use a metric we call Feature Informativeness. We construct six synthetic tasks and evaluate the performance of six models, including structure-agnostic ones. Our findings reveal that previously defined metrics are not adequate when we relax the above assumption. This workshop contribution presents novel research findings that could help advance our understanding of the field.
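To make the quantities discussed above concrete, below is a minimal Python sketch of two measurements: standard edge homophily (the fraction of edges whose endpoints share a label), and a mutual-information proxy for how informative node features are about labels. The proxy, its function names, and the toy data are illustrative assumptions; the paper's own Feature Informativeness metric and generative processes are defined in the full text and may differ.

```python
# Hedged sketch: edge homophily and a feature-informativeness proxy.
# The proxy below (mean normalized MI between binned features and labels)
# is an assumption for illustration, not the paper's exact definition.
import numpy as np
from sklearn.metrics import mutual_info_score

def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share the same label."""
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

def feature_informativeness_proxy(features, labels, n_bins=10):
    """Average normalized mutual information between each (binned) feature and the labels."""
    labels = np.asarray(labels)
    h_y = mutual_info_score(labels, labels)  # I(Y;Y) equals the label entropy
    scores = []
    for j in range(features.shape[1]):
        edges_j = np.histogram_bin_edges(features[:, j], bins=n_bins)
        binned = np.digitize(features[:, j], edges_j)
        scores.append(mutual_info_score(labels, binned) / h_y if h_y > 0 else 0.0)
    return float(np.mean(scores))

# Toy usage on a random graph with weak feature-label correlation
rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 3, size=n)
features = rng.normal(size=(n, 8)) + labels[:, None]
edges = [(rng.integers(n), rng.integers(n)) for _ in range(500)]
print("edge homophily:", edge_homophily(edges, labels))
print("feature informativeness proxy:", feature_informativeness_proxy(features, labels))
```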

