Reimagining GNN Explanations with ideas from Tabular Data

06/23/2021
by Anjali Singh, et al.

Explainability techniques for Graph Neural Networks still have a long way to go compared to explanations available for both neural and decision tree-based models trained on tabular data. Using a task that straddles both graphs and tabular data, namely Entity Matching, we comment on key aspects of explainability that are missing in GNN model explanations.
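To make the contrast concrete, the sketch below shows the kind of attribute-level explanation that tabular models offer for entity matching: per-attribute similarity features feed a tree ensemble whose feature importances read directly as "which attributes drove the match decision". This is an illustrative assumption, not the paper's method; the feature names (name_sim, address_sim, phone_sim) and the synthetic data are hypothetical.

```python
# Minimal sketch, assuming entity matching is framed as a tabular task:
# each record pair is described by per-attribute similarity scores, and a
# tree model's feature importances give an attribute-level explanation.
# Feature names and data are illustrative, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["name_sim", "address_sim", "phone_sim"]

# Synthetic record-pair features: the match label is driven mostly by name_sim.
X = rng.random((500, 3))
score = 0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2]
y = (score + 0.05 * rng.standard_normal(500) > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Attribute-level importances: the kind of per-feature explanation that is
# straightforward for tabular models, and harder to recover from the edge or
# node masks produced by typical GNN explainers.
for name, imp in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```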

