Deconfounding to Explanation Evaluation in Graph Neural Networks

01/21/2022
by   Ying Xin, et al.

Explainability of graph neural networks (GNNs) aims to answer the question "Why did the GNN make a certain prediction?", which is crucial for interpreting model predictions. The feature attribution framework distributes a GNN's prediction over its input features (e.g., edges), identifying an influential subgraph as the explanation. When evaluating an explanation (i.e., subgraph importance), the standard approach is to audit the model prediction based solely on the subgraph. However, we argue that a distribution shift exists between the full graph and the subgraph, causing an out-of-distribution (OOD) problem. Furthermore, through an in-depth causal analysis, we find that the OOD effect acts as a confounder, which introduces spurious associations between subgraph importance and model prediction, making the evaluation less reliable. In this work, we propose Deconfounded Subgraph Evaluation (DSE), which assesses the causal effect of an explanatory subgraph on the model prediction. While the distribution shift is generally intractable, we employ the front-door adjustment and introduce a surrogate variable for the subgraphs. Specifically, we devise a generative model to produce plausible surrogates that conform to the data distribution, thus approaching an unbiased estimate of subgraph importance. Empirical results demonstrate the effectiveness of DSE in terms of explanation fidelity.
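To make the contrast concrete, here is a minimal sketch of the two evaluation strategies the abstract describes. The `gnn_predict` toy model, the triangle `MOTIF`, and the random edge-completion "generator" are all illustrative assumptions; the paper instead learns a generative model of plausible surrogates. The point is only the mechanism: the naive evaluation feeds the bare (possibly out-of-distribution) subgraph to the model, while the deconfounded estimate marginalizes the prediction over surrogate graphs that embed the subgraph in plausible contexts.

```python
import random

# Toy "GNN" (assumption): scores a graph, given as a list of edges, by the
# fraction of a target motif's edges it contains.
MOTIF = {(0, 1), (1, 2), (2, 0)}

def gnn_predict(edges):
    return len(set(edges) & MOTIF) / len(MOTIF)

def naive_importance(subgraph_edges):
    # Standard evaluation: audit the model on the subgraph alone.
    # The bare subgraph may lie off the data distribution the model saw.
    return gnn_predict(subgraph_edges)

def deconfounded_importance(subgraph_edges, full_edges, n_samples=100, seed=0):
    # Front-door-style estimate (sketch): average the model prediction over
    # surrogate graphs that complete the subgraph with plausible context.
    # Here the "generative model" is just random re-insertion of the
    # remaining edges of the full graph; DSE learns this distribution.
    rng = random.Random(seed)
    sub = set(subgraph_edges)
    complement = [e for e in full_edges if e not in sub]
    total = 0.0
    for _ in range(n_samples):
        kept = [e for e in complement if rng.random() < 0.5]
        total += gnn_predict(list(subgraph_edges) + kept)
    return total / n_samples
```

For example, evaluating the partial-motif subgraph `[(0, 1)]` inside a full graph that also contains the rest of the motif gives a naive score of 1/3, while the surrogate-averaged score reflects how the subgraph behaves in context rather than in isolation.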

