A Neural Collapse Perspective on Feature Evolution in Graph Neural Networks

07/04/2023
by Vignesh Kothapalli, et al.

Graph neural networks (GNNs) have become increasingly popular for classification tasks on graph-structured data. Yet, the interplay between graph topology and feature evolution in GNNs is not well understood. In this paper, we focus on node-wise classification, illustrated with community detection on stochastic block model graphs, and explore feature evolution through the lens of the "Neural Collapse" (NC) phenomenon. When instance-wise deep classifiers (e.g., for image classification) are trained beyond the zero-training-error point, NC manifests as a reduction in the within-class variability of the deepest features and an increased alignment of their class means with certain symmetric structures. We begin with an empirical study showing that a decrease in within-class variability also occurs in the node-wise classification setting, though not to the extent observed in the instance-wise case. We then study this distinction theoretically. Specifically, we show that even an "optimistic" mathematical model requires the graphs to obey a strict structural condition in order to possess a minimizer with exact collapse. Interestingly, this condition can also be satisfied by heterophilic graphs and relates to recent empirical studies on settings with improved GNN generalization. Furthermore, by studying the gradient dynamics of the theoretical model, we provide an explanation for the partial collapse observed empirically. Finally, we study the evolution of within- and between-class feature variability across the layers of a well-trained GNN and contrast this behavior with that of spectral methods.
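
To make the notion of within-class variability collapse concrete, here is a minimal sketch of the standard NC1 metric from the instance-wise Neural Collapse literature; the paper's exact layer-wise variant may differ in normalization. It assumes node features H and integer labels y are given as NumPy arrays, and the function name nc1_metric is ours, not from the paper.

```python
import numpy as np

def nc1_metric(H, y):
    """NC1 = tr(Sigma_W @ pinv(Sigma_B)) / K.

    H: (n, d) array of node features, y: (n,) integer class labels.
    Smaller values indicate stronger within-class collapse.
    """
    classes = np.unique(y)
    K = len(classes)
    mu_G = H.mean(axis=0)  # global feature mean
    d = H.shape[1]
    Sigma_W = np.zeros((d, d))  # within-class covariance
    Sigma_B = np.zeros((d, d))  # between-class covariance of class means
    for c in classes:
        Hc = H[y == c]
        mu_c = Hc.mean(axis=0)
        Sigma_W += (Hc - mu_c).T @ (Hc - mu_c) / len(H)
        Sigma_B += np.outer(mu_c - mu_G, mu_c - mu_G) / K
    return np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / K
```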

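The node-wise setting studied in the abstract can be sketched end to end: sample a two-community stochastic block model graph, propagate random node features with an untrained GCN-style aggregation, track NC1 per layer (reusing nc1_metric from the sketch above), and compare against a spectral baseline. All parameter values here (n = 200, p = 0.2, q = 0.05, 16-dimensional features, 5 propagation steps) are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-community SBM with illustrative parameters (not the paper's setup):
# intra-community edge probability p, inter-community probability q.
n, p, q = 200, 0.2, 0.05
y = np.repeat([0, 1], n // 2)
P = np.where(y[:, None] == y[None, :], p, q)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric adjacency, no self-loops

# Symmetrically normalized adjacency with self-loops, as in a GCN layer.
A_hat = A + np.eye(n)
deg = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(deg, deg))

# Track NC1 across repeated (untrained) graph-convolution steps.
H = rng.standard_normal((n, 16))
for layer in range(1, 6):
    H = S @ H
    print(f"layer {layer}: NC1 = {nc1_metric(H, y):.3f}")

# Spectral baseline: the sign pattern of the second-largest eigenvector
# of S approximately recovers the two communities.
vals, vecs = np.linalg.eigh(S)  # eigenvalues in ascending order
pred = (vecs[:, -2] > 0).astype(int)
acc = max((pred == y).mean(), ((1 - pred) == y).mean())
print(f"spectral 2-way accuracy: {acc:.2f}")
```

Setting q > p instead yields a heterophilic SBM, which is one simple way to probe the abstract's remark that the structural condition for exact collapse is also viable for heterophilic graphs.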

Related research

Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All? (06/02/2023)
Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth (05/10/2021)
Zero-One Laws of Graph Neural Networks (01/30/2023)
Graph Neural Tangent Kernel: Convergence on Large Graphs (01/25/2023)
Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs (12/18/2022)
Layer-wise training for self-supervised learning on graphs (09/04/2023)
Graph-Conditioned MLP for High-Dimensional Tabular Biomedical Data (11/11/2022)
