Demystifying Graph Neural Network Explanations

11/25/2021
by Anna Himmelhuber et al.

Graph neural networks (GNNs) are quickly becoming the standard approach for learning on graph-structured data across several domains, but they lack transparency in their decision-making. Several perturbation-based approaches have been developed to provide insights into the decision-making process of GNNs. As this is an early research area, the methods and data used to evaluate the generated explanations lack maturity. We explore these existing approaches and identify common pitfalls in three main areas: (1) the synthetic data generation process, (2) evaluation metrics, and (3) the final presentation of the explanation. For this purpose, we perform an empirical study to explore these pitfalls along with their unintended consequences and propose remedies to mitigate their effects.
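To make the evaluation setting concrete: on synthetic benchmarks, perturbation-based explainers typically output importance scores for edges, and an explanation is commonly scored by how well its top-ranked edges recover the planted ground-truth motif. The snippet below is a minimal, hypothetical sketch of such a precision/recall check in plain Python; the function and variable names are illustrative assumptions and do not reproduce the paper's actual evaluation protocol.

```python
# Hypothetical sketch: scoring an explainer's edge-importance scores against
# the ground-truth motif edges of a synthetic graph (e.g. a planted-motif
# benchmark). Names and the toy data are illustrative, not the paper's setup.

def explanation_precision_recall(edge_scores, ground_truth_edges, k):
    """Rank edges by importance score and compare the top-k set
    against the planted ground-truth motif edges.

    edge_scores: dict mapping an (u, v) edge tuple to the importance score
                 produced by a perturbation-based explainer.
    ground_truth_edges: set of (u, v) tuples forming the planted motif.
    k: number of top-ranked edges kept as the "explanation".
    """
    ranked = sorted(edge_scores, key=edge_scores.get, reverse=True)
    top_k = set(ranked[:k])

    true_positives = len(top_k & ground_truth_edges)
    precision = true_positives / max(len(top_k), 1)
    recall = true_positives / max(len(ground_truth_edges), 1)
    return precision, recall


if __name__ == "__main__":
    # Toy example: a 4-edge cycle motif planted in a larger graph.
    motif = {(0, 1), (1, 2), (2, 3), (3, 0)}
    scores = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1,
              (3, 0): 0.7, (4, 5): 0.6, (5, 6): 0.05}
    print(explanation_precision_recall(scores, motif, k=4))  # (0.75, 0.75)
```

A metric of this kind is only as trustworthy as the synthetic ground truth it compares against, which is exactly the first pitfall the abstract points to: if the data generation process leaks motif information through other signals, high scores need not mean faithful explanations.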


Related research

GANExplainer: GAN-based Graph Neural Networks Explainer (12/30/2022)
With the rapid deployment of graph neural networks (GNNs) based techniqu...

Combining Sub-Symbolic and Symbolic Methods for Explainability (12/03/2021)
Similarly to other connectionist models, Graph Neural Networks (GNNs) la...

Perturb More, Trap More: Understanding Behaviors of Graph Neural Networks (04/21/2020)
While graph neural networks (GNNs) have shown a great potential in vario...

Detection, Explanation and Filtering of Cyber Attacks Combining Symbolic and Sub-Symbolic Methods (12/23/2022)
Machine learning (ML) on graph-structured data has recently received dee...

Are Graph Neural Networks Miscalibrated? (05/07/2019)
Graph Neural Networks (GNNs) have proven to be successful in many classi...

Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity (06/07/2023)
Graph neural networks (GNNs) have various practical applications, such a...
