Exploring Explainability Methods for Graph Neural Networks

11/03/2022
by Harsh Patel, et al.

With the growing use of deep learning methods, and in particular graph neural networks, which encode intricate graph-structured relationships, across a variety of real-world tasks, there is an increasing need for explainability in such settings. In this paper, we demonstrate the applicability of popular explainability approaches to Graph Attention Networks (GATs) on a graph-based super-pixel image classification task. We assess the qualitative and quantitative performance of these techniques on three different datasets and describe our findings. The results shed fresh light on the notion of explainability in GNNs, particularly for GATs.
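For concreteness, the following is a minimal sketch, not the authors' released code, of a GAT-based super-pixel graph classifier in PyTorch Geometric, with the second layer's attention coefficients exposed as one simple edge-level explanation signal of the kind such explainability studies compare against post-hoc methods. The class name SuperpixelGAT, the layer sizes, and the mean-pool readout are illustrative assumptions, not details taken from the paper.

# A minimal sketch (illustrative only) of a GAT classifier for super-pixel
# graphs. Attention coefficients from the second GAT layer are returned as
# a simple per-edge explanation signal.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool


class SuperpixelGAT(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch, return_attention=False):
        x = F.elu(self.conv1(x, edge_index))
        # return_attention_weights=True makes GATConv also return
        # (edge_index, alpha): the per-edge attention coefficients.
        x, (att_edge_index, alpha) = self.conv2(
            x, edge_index, return_attention_weights=True)
        x = global_mean_pool(x, batch)  # graph-level readout over super-pixels
        out = self.lin(x)
        if return_attention:
            return out, (att_edge_index, alpha)
        return out

Post-hoc explainers such as GNNExplainer or gradient-based attribution can be applied to the same model; attention weights are only one candidate explanation among those typically examined.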


