Improving Attention Mechanism in Graph Neural Networks via Cardinality Preservation

07/04/2019
by Shuo Zhang, et al.

Graph Neural Networks (GNNs) are powerful tools for learning representations of graph-structured data. Most GNNs follow the message-passing scheme, in which a node's embedding is iteratively updated by aggregating information from its neighbors. To better express the influence of individual nodes, the attention mechanism has become a popular way to assign trainable weights to a node's neighbors during aggregation. However, although attention-based GNNs have achieved state-of-the-art results on several tasks, a clear understanding of their discriminative capacity is still missing. In this work, we present a theoretical analysis of the representational properties of GNNs that adopt the attention mechanism as an aggregator. Our analysis identifies all cases in which such GNNs always fail to distinguish distinct structures. The underlying reason is that existing attention-based aggregators fail to preserve the cardinality of the multiset of node feature vectors during aggregation, which limits their discriminative ability. To improve the performance of attention-based GNNs, we propose two cardinality-preserving modifications that can be applied to any kind of attention mechanism. We evaluate them within our GNN framework on benchmark datasets for graph classification. The results validate the improvements and show the competitive performance of our models.
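To see why cardinality matters, note that softmax-normalized attention weights sum to one, so the aggregation is a weighted mean: two neighborhoods built from the same feature vector with different multiplicities produce identical outputs. The following minimal PyTorch sketch (not the authors' code) illustrates this failure and one plausible cardinality-preserving fix that rescales the attended sum by the neighborhood size; the function names and the specific scaling choice are assumptions for illustration, not the paper's exact modifications.

```python
# Minimal sketch: why softmax attention loses multiset cardinality,
# and one hedged example of a cardinality-preserving variant.
import torch
import torch.nn.functional as F

def attention_aggregate(h_neighbors, scores):
    """Standard attention aggregation. Softmax weights sum to 1, so the
    result is a weighted mean and is blind to the neighborhood size."""
    alpha = F.softmax(scores, dim=0)                 # (N,) weights summing to 1
    return (alpha.unsqueeze(-1) * h_neighbors).sum(dim=0)

def cardinality_preserving_aggregate(h_neighbors, scores):
    """Illustrative cardinality-preserving variant (an assumption, not the
    paper's exact formulation): rescale the attended mean by |N|, so
    multisets that differ only in cardinality map to different outputs."""
    return h_neighbors.shape[0] * attention_aggregate(h_neighbors, scores)

# Two multisets with the same element but different cardinality:
h1 = torch.tensor([[1.0, 2.0]])                      # {x}
h2 = torch.tensor([[1.0, 2.0], [1.0, 2.0]])          # {x, x}
s1, s2 = torch.zeros(1), torch.zeros(2)

print(attention_aggregate(h1, s1), attention_aggregate(h2, s2))
# identical outputs: plain attention cannot distinguish the two multisets
print(cardinality_preserving_aggregate(h1, s1),
      cardinality_preserving_aggregate(h2, s2))
# distinct outputs: the cardinality information is retained
```

Running the sketch, the plain aggregator returns [1., 2.] for both neighborhoods, while the cardinality-preserving variant returns [1., 2.] and [2., 4.], making the two multisets distinguishable.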


