Graph Information Vanishing Phenomenon in Implicit Graph Neural Networks

03/02/2021
by Haifeng Li, et al.

One of the key problems in GNNs is how to describe the importance of neighbor nodes in the aggregation process when learning node representations. One class of GNNs addresses this problem by learning implicit weights that represent the importance of neighbor nodes; we call these implicit GNNs, with Graph Attention Networks (GAT) as a representative example. The basic idea of implicit GNNs is to introduce graph information with special properties, followed by Learnable Transformation Structures (LTS) that encode the importance of neighbor nodes in a data-driven way. In this paper, we argue that the LTS makes the special properties of the graph information disappear during the learning process, rendering that information unhelpful for learning node representations. We call this phenomenon Graph Information Vanishing (GIV). We also find that the LTS maps different graph information to highly similar results. To validate these two points, we design two sets of 70 randomized experiments over five implicit GNN methods and seven benchmark datasets, using a random permutation operator to disrupt the order of the graph information and replacing the graph information with random values. We find that randomization does not affect model performance in 93% of the cases, while the remaining roughly 7% incur an average accuracy loss of only 0.5%. Moreover, in 81% of the cases, the cosine similarity between the outputs that the LTS produces from different graph information exceeds 99%. These results provide evidence for the existence of GIV in implicit GNNs and imply that existing implicit GNN methods do not make good use of graph information. The relationship between graph information and the LTS should be rethought to ensure that graph information actually contributes to node representations.
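To make the experimental protocol concrete, the following is a minimal PyTorch sketch of the two randomization probes described above. It is not the authors' code: graph_info, lts, and all tensor shapes are hypothetical stand-ins for the per-edge graph information and the Learnable Transformation Structure inside a real implicit GNN, and a faithful replication would apply these probes inside a trained model such as GAT and compare downstream node-classification accuracy.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins: `graph_info` plays the role of per-edge graph
# information with special properties; `lts` plays the role of a Learnable
# Transformation Structure mapping it to neighbor-importance scores.
num_edges, dim = 1000, 8
graph_info = torch.rand(num_edges, dim)
lts = torch.nn.Linear(dim, 1)

# Probe 1: random permutation -- destroy the ordering of the graph
# information while keeping its value distribution intact.
permuted_info = graph_info[torch.randperm(num_edges)]

# Probe 2: replacement -- substitute random values of the same shape,
# so no genuine graph structure is present at all.
random_info = torch.rand_like(graph_info)

# The paper's diagnostic: if model accuracy is unchanged under both probes
# and the LTS outputs are nearly collinear, the graph information is
# effectively vanishing inside the LTS.
with torch.no_grad():
    out_orig = lts(graph_info).squeeze(-1)
    out_perm = lts(permuted_info).squeeze(-1)
    out_rand = lts(random_info).squeeze(-1)

print("cosine(original, permuted):", F.cosine_similarity(out_orig, out_perm, dim=0).item())
print("cosine(original, random):  ", F.cosine_similarity(out_orig, out_rand, dim=0).item())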

