
Demystifying Graph Neural Network Explanations

by Anna Himmelhuber, et al.
Siemens AG

Graph neural networks (GNNs) are quickly becoming the standard approach for learning on graph-structured data across several domains, but they lack transparency in their decision-making. Several perturbation-based approaches have been developed to provide insights into the decision-making process of GNNs. As this is an early research area, the methods and data used to evaluate the generated explanations lack maturity. We explore these existing approaches and identify common pitfalls in three main areas: (1) the synthetic data generation process, (2) evaluation metrics, and (3) the final presentation of the explanation. For this purpose, we perform an empirical study to explore these pitfalls along with their unintended consequences and propose remedies to mitigate their effects.
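The perturbation-based explanation approaches the abstract refers to share a common idea: perturb parts of the input graph (e.g., delete edges) and score each part by how much the model's prediction changes. The sketch below is a generic illustration of that idea, not the paper's method: it uses a toy one-layer GCN-style forward pass with a sum readout, and the function names (`gcn_forward`, `edge_importance`) are hypothetical.

```python
import numpy as np

def gcn_forward(A, X, W):
    """Toy one-layer GCN-style pass with a sum readout (illustrative only)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    H = d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W  # normalized propagation
    return H.sum()                               # scalar "prediction" for the graph

def edge_importance(A, X, W):
    """Score each edge by the prediction change when that edge is deleted."""
    base = gcn_forward(A, X, W)
    scores = {}
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j]:
                A_pert = A.copy()
                A_pert[i, j] = A_pert[j, i] = 0  # perturbation: remove edge (i, j)
                scores[(i, j)] = abs(base - gcn_forward(A_pert, X, W))
    return scores

# Example: a 3-node star graph; edges with larger scores matter more to the output.
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
X = np.eye(3)
W = np.ones((3, 1))
print(edge_importance(A, X, W))
```

Real explainers such as GNNExplainer replace this exhaustive edge deletion with a learned continuous edge mask, but the evaluation question the paper raises is the same: how faithfully do such importance scores reflect the model's actual reasoning?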

