Towards Automated Evaluation of Explanations in Graph Neural Networks

06/22/2021
by   Vanya BK, et al.

Explaining Graph Neural Network predictions to end users of AI applications in easily understandable terms remains an unsolved problem. In particular, we lack well-developed methods for automatically evaluating explanations in ways that reflect how users actually consume them. Based on recent application trends and our own experience with real-world problems, we propose automated evaluation approaches for GNN explanations.
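
The abstract does not specify the authors' evaluation procedures, but a common automated check in the GNN explainability literature is fidelity: remove the graph structure an explanation marks as important and measure how much the model's prediction degrades. Below is a minimal sketch of such a metric, assuming a PyTorch-style model that takes node features x and an edge_index tensor (a PyTorch Geometric convention) together with an edge_mask of per-edge importance scores produced by some explainer. The function name and threshold are illustrative assumptions, not the paper's API.

```python
import torch

def fidelity_plus(model, x, edge_index, edge_mask, target_class, threshold=0.5):
    """Fidelity+ sketch: drop the edges the explanation marks as important
    and measure how much the predicted probability of the target class
    falls. A faithful explanation should cause a large drop.

    Assumed interfaces (illustrative, not from the paper):
      model(x, edge_index) -> class logits
      edge_index: LongTensor of shape [2, num_edges]
      edge_mask:  FloatTensor of per-edge importance scores in [0, 1]
    """
    model.eval()
    with torch.no_grad():
        # Prediction on the full graph.
        p_full = torch.softmax(model(x, edge_index).squeeze(0), dim=-1)[target_class]

        # Keep only the edges the explanation considers unimportant.
        keep = edge_mask < threshold
        p_pruned = torch.softmax(model(x, edge_index[:, keep]).squeeze(0), dim=-1)[target_class]

    return (p_full - p_pruned).item()
```

A higher score indicates the explanation identified edges the model genuinely relies on; scores near zero suggest the highlighted edges were not actually driving the prediction.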

Related research

08/19/2022 · Evaluating Explainability for Graph Neural Networks
As post hoc explanations are increasingly used to understand the behavio...

04/14/2023 · KS-GNNExplainer: Global Model Interpretation Through Instance Explanations On Histopathology images
Instance-level graph neural network explainers have proven beneficial fo...

05/25/2023 · Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies
Despite the increasing relevance of explainable AI, assessing the qualit...

11/19/2021 · Explaining GNN over Evolving Graphs using Information Flow
Graphs are ubiquitous in many applications, such as social networks, kno...

06/07/2022 · EiX-GNN: Concept-level eigencentrality explainer for graph neural networks
Explaining is a human knowledge transfer process regarding a phenomenon ...

08/17/2023 · Interpretable Graph Neural Networks for Tabular Data
Data in tabular format is frequently occurring in real-world application...

05/19/2022 · Minimal Explanations for Neural Network Predictions
Explaining neural network predictions is known to be a challenging probl...
