Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation

10/20/2022
by Tien-Cuong Bui, et al.

Graph Neural Networks (GNNs) have become ubiquitous in numerous applications and systems, and explanations of their predictions are increasingly necessary, especially when they inform critical decisions. However, explaining GNNs is challenging due to the complexity of graph data and model execution. Post-hoc explanation approaches have been widely adopted because their architectures generalize across models, despite the additional computational cost they incur. Intrinsically interpretable models provide instant explanations but are usually model-specific and can explain only particular GNNs. We therefore propose SCALE, a novel GNN explanation framework that is both general and fast. Because constructing a single powerful explainer that examines the attributions of all interactions in an input graph is difficult, SCALE trains multiple specialty learners to explain a GNN. During training, a black-box GNN model guides the learners under an online knowledge distillation paradigm. In the explanation phase, predictions are explained by multiple explainers, each corresponding to a trained learner. Specifically, an edge-masking procedure and a random-walk-with-restart procedure provide structural explanations for graph-level and node-level predictions, respectively, while a feature attribution module provides both overall summaries and instance-level feature contributions. We compare SCALE with state-of-the-art baselines in quantitative and qualitative experiments to demonstrate its explanation correctness and execution performance, and we conduct a series of ablation studies to understand the strengths and weaknesses of the proposed framework.
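The training procedure described above, in which a black-box teacher GNN guides specialty learners via online knowledge distillation, can be pictured with a minimal sketch. This is not the authors' implementation: the model interfaces (`teacher(x, edge_index)`), the `temperature`, and the mixing weight `alpha` are illustrative assumptions. The sketch simply shows one joint step in which the teacher is trained on ground-truth labels while a learner matches the teacher's current soft predictions through a KL-divergence term.

```python
# Minimal sketch (assumed interfaces, not the SCALE codebase) of one online
# knowledge-distillation step: the teacher GNN and a specialty learner are
# updated in the same loop, with the learner mimicking the teacher's soft
# predictions in addition to fitting the labels.
import torch
import torch.nn.functional as F

def online_kd_step(teacher, learner, opt_teacher, opt_learner,
                   x, edge_index, y, temperature=2.0, alpha=0.5):
    # 1) Update the teacher on the ground-truth labels.
    teacher.train()
    teacher_logits = teacher(x, edge_index)
    teacher_loss = F.cross_entropy(teacher_logits, y)
    opt_teacher.zero_grad()
    teacher_loss.backward()
    opt_teacher.step()

    # 2) Update the learner: supervised loss plus KL divergence toward the
    #    teacher's current (detached) soft predictions.
    learner.train()
    learner_logits = learner(x, edge_index)
    kd_loss = F.kl_div(
        F.log_softmax(learner_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    learner_loss = alpha * F.cross_entropy(learner_logits, y) + (1 - alpha) * kd_loss
    opt_learner.zero_grad()
    learner_loss.backward()
    opt_learner.step()
    return teacher_loss.item(), learner_loss.item()
```

Per the abstract, the trained learners (rather than the black-box model itself) are then queried in the explanation phase, e.g., through edge masking for graph-level predictions or random walk with restart for node-level predictions.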

Related research

11/03/2022
INGREX: An Interactive Explanation Framework for Graph Neural Networks
Graph Neural Networks (GNNs) are widely used in many modern applications...

03/17/2023
Distill n' Explain: explaining graph neural networks using simple surrogates
Explaining node predictions in graph neural networks (GNNs) often boils ...

08/05/2022
PGX: A Multi-level GNN Explanation Framework Based on Separate Knowledge Distillation Processes
Graph Neural Networks (GNNs) are widely adopted in advanced AI systems d...

02/16/2022
Task-Agnostic Graph Explanations
Graph Neural Networks (GNNs) have emerged as powerful tools to encode gr...

10/31/2022
PAGE: Prototype-Based Model-Level Explanations for Graph Neural Networks
Aside from graph neural networks (GNNs) catching significant attention a...

02/02/2022
Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners
Post-hoc explanations for black box models have been studied extensively...

11/19/2021
Explaining GNN over Evolving Graphs using Information Flow
Graphs are ubiquitous in many applications, such as social networks, kno...
