Semantic Interpretation and Validation of Graph Attention-based Explanations for GNN Models

08/08/2023
by Efimia Panagiotaki, et al.

In this work, we propose a methodology for investigating the application of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models, introducing semantically-informed perturbations and establishing a correlation between predicted feature-importance weights and model accuracy. Graph Deep Learning (GDL) has emerged as a promising field for tasks like scene interpretation, leveraging flexible graph structures to concisely describe complex features and relationships. Since traditional explainability methods from eXplainable AI (XAI) cannot be applied directly to such structures, graph-specific approaches are required. Attention mechanisms have proven effective at estimating the importance of input features in deep learning models and have therefore been employed to provide feature-based explanations for GNN predictions. Building on these insights, we extend existing attention-based graph-explainability methods by investigating the use of attention weights as importance indicators for semantically sorted feature sets. By analysing how the distribution of predicted attention weights correlates with model accuracy, we gain insight into the importance of each semantic feature set for the behaviour of the GNN model. We apply our methodology to a lidar pointcloud estimation model, successfully identifying the key semantic classes that contribute to enhanced performance and thereby generating reliable post-hoc semantic explanations.
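The core loop the abstract describes, aggregating learned attention weights over semantically grouped nodes, perturbing one semantic class at a time, and correlating the two signals, can be illustrated with a minimal sketch. The sketch assumes a PyTorch Geometric GATConv layer, whose return_attention_weights flag exposes the per-edge attention coefficients; the toy graph, the semantic class labels, the perturb_class helper, and the accuracy-drop figures are all illustrative placeholders rather than the authors' actual model or lidar data.

```python
# Minimal sketch (not the authors' implementation): per-class aggregation of
# GAT attention weights plus a semantically-informed perturbation check.
# Requires torch and torch_geometric; all data below are toy placeholders.
import torch
from torch_geometric.nn import GATConv

# Toy graph: 6 nodes with 4-dim features, each node tagged with a semantic
# class (e.g. 0 = road, 1 = vehicle, 2 = building in a lidar scene).
x = torch.randn(6, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])
node_class = torch.tensor([0, 0, 1, 1, 2, 2])

conv = GATConv(in_channels=4, out_channels=8, heads=1)
out, (att_edge_index, alpha) = conv(x, edge_index,
                                    return_attention_weights=True)

# Class importance: mean attention mass on edges whose source node belongs
# to each semantic class (self-loops added by GATConv are included).
src_class = node_class[att_edge_index[0]]
importance = torch.stack([alpha[src_class == c].mean() for c in range(3)])

def perturb_class(x, node_class, c):
    """Semantically-informed perturbation: zero the features of one class."""
    x_p = x.clone()
    x_p[node_class == c] = 0.0
    return x_p

# In the full methodology one would re-evaluate the trained model on each
# perturbed input and record the accuracy drop; the numbers here are
# illustrative stand-ins for those measurements.
accuracy_drop = torch.tensor([0.12, 0.05, 0.30])
r = torch.corrcoef(torch.stack([importance.detach(), accuracy_drop]))[0, 1]
print(f"attention-importance vs. accuracy-drop correlation: {r:.2f}")
```

A strong positive correlation between the per-class attention scores and the accuracy degradation under perturbation is what licenses reading the attention weights as post-hoc semantic explanations of the model's behaviour.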

