CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

by Ana Lucic, et al.

Graph neural networks (GNNs) have shown increasing promise in real-world applications, which has led to growing interest in understanding their predictions. However, existing methods for explaining GNN predictions do not provide an opportunity for recourse: given a prediction for a particular instance, we want to understand how the prediction can be changed. We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs, i.e., the minimal perturbations to the input graph data such that the prediction changes. Using only edge deletions, we find that we are able to generate counterfactual examples for the majority of instances across three widely used datasets for GNN explanations, while removing fewer than 3 edges on average, with at least 94% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal counterfactual examples.
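To make the counterfactual objective concrete, here is a minimal sketch of the underlying idea: search for the smallest set of edge deletions that flips a model's prediction for a node. This toy uses exhaustive search over edge subsets and a hypothetical stand-in classifier; CF-GNNExplainer itself instead learns a differentiable perturbation mask over the adjacency matrix, so this is an illustration of the goal, not the paper's algorithm.

```python
from itertools import combinations

def toy_classifier(adj, node):
    # Stand-in for a trained GNN (purely illustrative):
    # label a node 1 if it has at least two neighbours, else 0.
    return 1 if sum(adj[node]) >= 2 else 0

def counterfactual_edges(adj, node, max_deletions=3):
    """Find a minimal set of edge deletions that flips the
    prediction for `node`. Exhaustive search, smallest sets first,
    so the first hit is a minimal counterfactual perturbation."""
    original = toy_classifier(adj, node)
    edges = [(i, j) for i in range(len(adj))
             for j in range(i + 1, len(adj)) if adj[i][j]]
    for k in range(1, max_deletions + 1):
        for subset in combinations(edges, k):
            # Copy the adjacency matrix and delete the chosen edges.
            perturbed = [row[:] for row in adj]
            for i, j in subset:
                perturbed[i][j] = perturbed[j][i] = 0
            if toy_classifier(perturbed, node) != original:
                return subset  # minimal perturbation found
    return None  # no counterfactual within the deletion budget

# Node 0 has neighbours 1 and 2; deleting one edge flips its label.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
print(counterfactual_edges(adj, 0))  # → ((0, 1),)
```

The "minimal" property comes from iterating over deletion-set sizes in increasing order, mirroring the paper's goal of perturbations that are as small as possible while still changing the prediction.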

Robust Counterfactual Explanations on Graph Neural Networks

Massive deployment of Graph Neural Networks (GNNs) in high-stake applica...

Global Counterfactual Explainer for Graph Neural Networks

Graph neural networks (GNNs) find applications in various domains such a...

Minimal Explanations for Neural Network Predictions

Explaining neural network predictions is known to be a challenging probl...

On Consistency in Graph Neural Network Interpretation

Uncovering rationales behind predictions of graph neural networks (GNNs)...

Reinforced Causal Explainer for Graph Neural Networks

Explainability is crucial for probing graph neural networks (GNNs), answ...

Formalising the Robustness of Counterfactual Explanations for Neural Networks

The use of counterfactual explanations (CFXs) is an increasingly popular...