
CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

02/05/2021
by   Ana Lucic, et al.

Graph neural networks (GNNs) have shown increasing promise in real-world applications, which has led to growing interest in understanding their predictions. However, existing methods for explaining GNN predictions do not provide an opportunity for recourse: given a prediction for a particular instance, we want to understand how the prediction can be changed. We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs, i.e., the minimal perturbations to the input graph data such that the prediction changes. Using only edge deletions, we find that we are able to generate counterfactual examples for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than 3 edges on average, with at least 94% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal counterfactual examples.
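To make the counterfactual objective concrete, the sketch below searches for the smallest set of edge deletions that flips a model's prediction for a node. Everything here is a hypothetical illustration: `predict` is a toy majority-vote classifier standing in for a trained GNN, and the brute-force subset search stands in for CF-GNNExplainer's actual approach, which optimizes a differentiable edge mask.

```python
from itertools import combinations

def predict(adj, labels, node):
    # Toy stand-in for a trained GNN: classify a node by the majority
    # label among its neighbors, breaking ties toward the smaller label.
    # (Hypothetical model for illustration, not the paper's GNN.)
    votes = [labels[j] for j, e in enumerate(adj[node]) if e]
    if not votes:
        return labels[node]
    return min(set(votes), key=lambda v: (-votes.count(v), v))

def minimal_counterfactual(adj, labels, node):
    # Exhaustively search, in order of increasing size, for the smallest
    # set of incident edges whose deletion changes the prediction.
    # CF-GNNExplainer instead learns a perturbation mask over edges,
    # which scales to real GNNs; this brute force only illustrates the
    # "minimal perturbation that flips the prediction" objective.
    original = predict(adj, labels, node)
    neighbors = [j for j, e in enumerate(adj[node]) if e]
    for k in range(1, len(neighbors) + 1):
        for subset in combinations(neighbors, k):
            pruned = [row[:] for row in adj]
            for j in subset:
                pruned[node][j] = pruned[j][node] = 0
            if predict(pruned, labels, node) != original:
                return [(node, j) for j in subset]
    return None  # no counterfactual found by edge deletion alone
```

For example, on a star graph where node 0 has three neighbors labeled 1 and two labeled 0, the model predicts 1 for node 0, and deleting a single edge to a 1-labeled neighbor already flips the prediction, so the returned counterfactual contains one edge.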


Related research

07/08/2021 | Robust Counterfactual Explanations on Graph Neural Networks
Massive deployment of Graph Neural Networks (GNNs) in high-stake applica...

10/21/2022 | Global Counterfactual Explainer for Graph Neural Networks
Graph neural networks (GNNs) find applications in various domains such a...

05/19/2022 | Minimal Explanations for Neural Network Predictions
Explaining neural network predictions is known to be a challenging probl...

05/27/2022 | On Consistency in Graph Neural Network Interpretation
Uncovering rationales behind predictions of graph neural networks (GNNs)...

04/23/2022 | Reinforced Causal Explainer for Graph Neural Networks
Explainability is crucial for probing graph neural networks (GNNs), answ...

08/31/2022 | Formalising the Robustness of Counterfactual Explanations for Neural Networks
The use of counterfactual explanations (CFXs) is an increasingly popular...