Quantifying the Intrinsic Usefulness of Attributional Explanations for Graph Neural Networks with Artificial Simulatability Studies

05/25/2023
by   Jonas Teufel, et al.

Despite the increasing relevance of explainable AI, assessing the quality of explanations remains a challenging issue. Because human-subject experiments are costly, various proxy metrics are often used to approximate explanation quality. One possible interpretation of an explanation's quality is its inherent value for teaching the underlying concept to a student. In this work, we extend artificial simulatability studies to the domain of graph neural networks. Instead of costly human trials, we use explanation-supervisable graph neural networks to perform simulatability studies that quantify the inherent usefulness of attributional graph explanations. We perform an extensive ablation study to investigate the conditions under which the proposed analyses are most meaningful. We additionally validate our method's applicability on real-world graph classification and regression datasets. We find that relevant explanations can significantly boost the sample efficiency of graph neural networks, and we analyze the robustness of this effect to noise and bias in the explanations. We believe that the notion of usefulness obtained from our proposed simulatability analysis provides a dimension of explanation quality that is largely orthogonal to the commonly used notion of faithfulness, and that it has great potential to expand the toolbox of explanation quality assessments, specifically for graph explanations.
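The core idea of a simulatability study, as described above, is to compare a "student" model that receives explanations alongside the data against one that does not, and to attribute any performance gap to the usefulness of the explanations. The following is a minimal, hypothetical sketch of that protocol on a synthetic tabular task (standing in for the paper's graph setting): only a few input features determine the label, the ground-truth attribution mask marks them, and two simple linear students are trained on a small sample, one with the mask applied and one without. All names and the toy setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: only the first 5 of 100 features determine the label.
d, k = 100, 5
n_train, n_test = 15, 2000

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = np.sign(X_train[:, :k].sum(axis=1))
y_test = np.sign(X_test[:, :k].sum(axis=1))

# Ground-truth "explanation": an attribution mask over the informative features.
mask = np.zeros(d)
mask[:k] = 1.0

def fit_predict(X_tr, y_tr, X_te):
    """A deliberately simple linear student: least-squares fit, sign readout."""
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return np.sign(X_te @ w)

# Student 1: trained on the raw input only.
acc_plain = (fit_predict(X_train, y_train, X_test) == y_test).mean()

# Student 2: trained on input weighted by the explanation mask.
acc_expl = (fit_predict(X_train * mask, y_train, X_test * mask) == y_test).mean()

print(f"student without explanations: {acc_plain:.2f}")
print(f"student with explanations:    {acc_expl:.2f}")
```

With so few training samples, the explanation-assisted student should reach noticeably higher test accuracy, which is the sample-efficiency gap the abstract uses as its measure of explanation usefulness. In the paper's setting the students are explanation-supervisable GNNs and the explanations are node/edge attributions, but the comparison logic is the same.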
