Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment

01/07/2023
by   Tianxiang Zhao, et al.

Uncovering the rationales behind the predictions of graph neural networks (GNNs) has received increasing attention in recent years. Instance-level GNN explanation aims to discover the critical input elements, such as nodes or edges, that a target GNN relies upon for making predictions, thereby providing interpretations of the GNN's behavior. Although various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that preserves the original prediction. However, an inductive bias is deep-rooted in this framework: several distinct subgraphs can result in the same or similar outputs as the original graph. Consequently, these methods risk providing spurious explanations and failing to give consistent ones, and applying them to explain weakly-performing GNNs further amplifies these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective and identify two typical causes of spurious explanations: confounding effects of latent variables, such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a new explanation framework with an auxiliary alignment loss, which is theoretically proven to intrinsically optimize a more faithful explanation objective. Concretely, we explore this alignment loss from a set of different perspectives: anchor-based alignment, distributional alignment based on Gaussian mixture models, mutual-information-based alignment, and others. A comprehensive study is conducted on both the effectiveness of the new framework in terms of explanation faithfulness and consistency and the relative advantages of these variants.
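The abstract does not spell out the exact formulation of the alignment loss, but the general idea of aligning internal representations of the explanation subgraph with those of the original graph can be illustrated with a minimal PyTorch sketch. Everything below is an assumption-laden illustration, not the authors' released code: the anchor set, the combined objective, and the loss weights (lambda_sparse, lambda_align) are hypothetical choices standing in for one of the paper's variants (anchor-based alignment).

```python
# Hypothetical sketch: an anchor-based alignment term added to a standard
# prediction-preserving GNN explanation objective. Names and weights are
# illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn.functional as F

def anchor_alignment_loss(z_sub: torch.Tensor,
                          z_full: torch.Tensor,
                          anchors: torch.Tensor) -> torch.Tensor:
    """Pull the internal representation of the explanation subgraph (z_sub)
    toward that of the original graph (z_full), measured relative to a set
    of anchor embeddings (e.g., class prototypes in latent space).

    z_sub, z_full: (batch, d) graph-level embeddings from the same GNN layer.
    anchors:       (k, d) fixed reference embeddings.
    """
    # Distance of each embedding to every anchor -> (batch, k)
    d_sub = torch.cdist(z_sub, anchors)
    d_full = torch.cdist(z_full, anchors)
    # Matching the two distance profiles places the subgraph in the same
    # region of latent space as the full graph, instead of merely mapping
    # it to the same output label.
    return F.mse_loss(d_sub, d_full)

def explanation_loss(pred_sub, pred_full, edge_mask, z_sub, z_full, anchors,
                     lambda_sparse=0.01, lambda_align=1.0):
    """Combined objective: prediction preservation + sparsity + alignment."""
    pred_term = F.kl_div(pred_sub.log_softmax(-1),
                         pred_full.softmax(-1), reduction="batchmean")
    sparse_term = edge_mask.abs().mean()      # keep the explanation small
    align_term = anchor_alignment_loss(z_sub, z_full, anchors)
    return pred_term + lambda_sparse * sparse_term + lambda_align * align_term
```

The first two terms correspond to the common minimal-subgraph objective; the third is the auxiliary alignment term that distinguishes this framework. The distributional (Gaussian-mixture) and mutual-information variants mentioned in the abstract would replace the anchor-distance matching with a different similarity measure over the same pair of embeddings.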

Related research:

05/27/2022 · On Consistency in Graph Neural Network Interpretation
03/29/2022 · OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
01/04/2023 · CI-GNN: A Granger Causality-Inspired Graph Neural Network for Interpretable Brain Network-Based Psychiatric Diagnosis
02/01/2022 · MotifExplainer: a Motif-based Graph Neural Network Explainer
01/21/2022 · Deconfounding to Explanation Evaluation in Graph Neural Networks
12/18/2021 · Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows
03/25/2021 · Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
