MEGAN: Multi-Explanation Graph Attention Network

11/23/2022
by Jonas Teufel et al.

Explainable artificial intelligence (XAI) methods are expected to improve trust during human-AI interaction, provide tools for model analysis, and extend human understanding of complex problems. Explanation-supervised training makes it possible to improve explanation quality by training self-explaining XAI models on ground-truth or human-generated explanations. However, existing explanation methods have limited expressiveness and interpretability because they generate only a single explanation in the form of node and edge importances. To that end, we propose the novel multi-explanation graph attention network (MEGAN). Our fully differentiable, attention-based model features multiple explanation channels, the number of which can be chosen independently of the task specification. We first validate our model on a synthetic graph regression dataset. We show that, in the special single-explanation case, our model significantly outperforms existing post-hoc and explanation-supervised baseline methods. Furthermore, we demonstrate significant advantages of using two explanations, both in quantitative explanation measures and in human interpretability. Finally, we demonstrate our model's capabilities on multiple real-world datasets. We find that our model produces sparse, high-fidelity explanations consistent with human intuition about those tasks while matching state-of-the-art graph neural networks in predictive performance, indicating that explanations and accuracy are not necessarily a trade-off.
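The core idea of the abstract — attention-derived importance scores split across several independent explanation channels — can be sketched roughly as follows. This is an illustrative toy in plain numpy, not MEGAN's actual architecture: the parameter shapes, the sigmoid edge attention, and the max-pooling from edge to node importance are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with 3-dim features, edges as (src, dst) pairs.
X = rng.normal(size=(4, 3))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

K = 2  # number of explanation channels, chosen independently of the task

# Hypothetical per-channel attention parameters (randomly initialised here;
# in a trained model these would be learned, possibly with explanation
# supervision).
W = rng.normal(size=(K, 3, 3))  # per-channel feature transform
a = rng.normal(size=(K, 3))     # per-channel attention vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Per-channel edge attention: edge_importance[k, e] in (0, 1).
edge_importance = np.zeros((K, len(edges)))
for k in range(K):
    H = X @ W[k]  # transformed node features for channel k
    for e, (i, j) in enumerate(edges):
        edge_importance[k, e] = sigmoid(a[k] @ (H[i] + H[j]))

# Per-channel node importance: pool attention over each node's incident
# edges, yielding one soft explanation mask per channel.
node_importance = np.zeros((K, X.shape[0]))
for k in range(K):
    for e, (i, j) in enumerate(edges):
        node_importance[k, i] = max(node_importance[k, i], edge_importance[k, e])
        node_importance[k, j] = max(node_importance[k, j], edge_importance[k, e])

print(node_importance.shape)  # (K, num_nodes)
```

The point of the multi-channel layout is that each channel yields its own node/edge mask, so a single prediction can be accompanied by several complementary explanations rather than one aggregate saliency map.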

