Multi-objective Explanations of GNN Predictions

11/29/2021
by Yifei Liu, et al.

Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various high-stakes prediction tasks, but multiple layers of aggregation over irregularly structured graphs make GNNs less interpretable models. Prior methods either use simpler subgraphs to simulate the full model or use counterfactuals to identify the causes of a prediction. These two families of approaches pursue two distinct objectives, "simulatability" and "counterfactual relevance", but it is not clear how the two objectives jointly influence human understanding of an explanation. We design a user study to investigate such joint effects and use the findings to design a multi-objective optimization (MOO) algorithm that finds Pareto optimal explanations, well balanced between simulatability and counterfactual relevance. Since the target model can be any GNN variant and may not be accessible due to privacy concerns, we design a search algorithm that uses only zeroth-order information, without access to the architecture or parameters of the target model. Quantitative experiments on nine graphs from four applications demonstrate that the Pareto efficient explanations dominate single-objective baselines that use first-order continuous optimization or discrete combinatorial search. The explanations are further evaluated for robustness and sensitivity to show that they reveal convincing causes while remaining cautious about possible confounders. The diverse dominating counterfactuals can certify the feasibility of algorithmic recourse, which can potentially promote algorithmic fairness when humans participate in decision-making with GNNs.
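To make the two ingredients of the abstract concrete, here is a minimal sketch of (a) filtering candidate explanations to a Pareto front over the two objectives and (b) probing a black-box GNN with zeroth-order (forward-query-only) perturbations of a candidate edge mask. The function names (`pareto_front`, `zeroth_order_sensitivity`), the user-supplied `predict_fn`, and the edge-mask representation of an explanation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def pareto_front(candidates):
    """Keep only candidates that are not dominated on both objectives.

    Each candidate is a tuple (explanation, simulatability, cf_relevance),
    where higher values are better for both objectives.
    """
    front = []
    for i, (_, s_i, c_i) in enumerate(candidates):
        dominated = any(
            s_j >= s_i and c_j >= c_i and (s_j > s_i or c_j > c_i)
            for j, (_, s_j, c_j) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front


def zeroth_order_sensitivity(predict_fn, graph, edge_mask,
                             eps=0.05, n_samples=16, seed=0):
    """Estimate how much the black-box model's prediction shifts when a
    candidate explanation (an edge mask in [0, 1]) is randomly perturbed,
    using only forward queries -- no gradients, architecture, or
    parameters of the target model are needed."""
    rng = np.random.default_rng(seed)
    base = predict_fn(graph, edge_mask)
    shifts = []
    for _ in range(n_samples):
        noise = eps * rng.standard_normal(edge_mask.shape)
        perturbed = np.clip(edge_mask + noise, 0.0, 1.0)
        shifts.append(abs(predict_fn(graph, perturbed) - base))
    return float(np.mean(shifts))
```

A search procedure in this spirit would score each candidate mask on simulatability and counterfactual relevance via such black-box queries, then report the Pareto front rather than a single trade-off point.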

