GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks

01/17/2020
by Qiang Huang, et al.

Graph-structured data is ubiquitous across domains such as physics, chemistry, biology, computer vision, and social networks. Recently, graph neural networks (GNNs) have proven effective at representing such data, owing to their strong performance and generalization ability. A GNN is a deep learning model that learns a node representation by combining the features of a node with the structural/topological information of the graph. However, as with other deep models, explaining the predictions of a GNN is challenging because of the complex nonlinear transformations applied over many iterations. In this paper, we propose GraphLIME, a local interpretable model explanation method for graphs based on the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature-selection method. GraphLIME is a generic GNN-explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. More specifically, to explain a node we fit a nonlinear interpretable model on its N-hop neighborhood and then use HSIC Lasso to select the K most representative features as the explanation of its prediction. In experiments on two real-world datasets, the explanations produced by GraphLIME are more descriptive and of higher quality than those of existing explanation methods.
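The core building block described above, HSIC Lasso, selects features whose kernel similarity structure best reconstructs that of the target. A minimal sketch of the idea is below; it is not the authors' implementation, and the Gaussian kernel bandwidths and the regularization strength `lam` are illustrative assumptions. Each feature's centered, Frobenius-normalized Gram matrix is vectorized, and a non-negative Lasso regresses the target's Gram matrix onto them, so features receiving nonzero weights are the selected (most representative) ones.

```python
import numpy as np
from sklearn.linear_model import Lasso


def gaussian_kernel(x, sigma):
    """Gram matrix of a 1-D sample under a Gaussian kernel."""
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2))


def center_normalize(K):
    """Center the Gram matrix and scale it to unit Frobenius norm."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H
    return Kc / (np.linalg.norm(Kc) + 1e-12)


def hsic_lasso(X, y, lam=1e-5):
    """Return one non-negative importance weight per feature of X.

    Solves min_{beta >= 0} 0.5 * ||L - sum_j beta_j K_j||_F^2 + lam * ||beta||_1,
    where K_j and L are centered, normalized Gram matrices of feature j and y.
    """
    n, d = X.shape
    L = center_normalize(gaussian_kernel(y, np.std(y) + 1e-12)).ravel()
    Ks = np.column_stack([
        center_normalize(
            gaussian_kernel(X[:, j], np.std(X[:, j]) + 1e-12)
        ).ravel()
        for j in range(d)
    ])
    model = Lasso(alpha=lam, positive=True, fit_intercept=False)
    model.fit(Ks, L)
    return model.coef_
```

On synthetic data where the target depends nonlinearly on a single feature (e.g. `y = sin(2 * X[:, 0])` plus noise), the weight for that feature dominates the others, which is the behavior GraphLIME relies on when ranking a node's K most representative features.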


Related research

GNN Explainer: A Tool for Post-hoc Explanation of Graph Neural Networks (03/10/2019)
Graph Neural Networks (GNNs) are a powerful tool for machine learning on...

Structural Explanations for Graph Neural Networks using HSIC (02/04/2023)
Graph neural networks (GNNs) are a type of neural model that tackle grap...

Explaining Classifiers Trained on Raw Hierarchical Multiple-Instance Data (08/04/2022)
Learning from raw data input, thus limiting the need for feature enginee...

Contrastive Graph Neural Network Explanation (10/26/2020)
Graph Neural Networks achieve remarkable results on problems with struct...

Simplified Graph Convolution with Heterophily (02/08/2022)
Graph convolutional networks (GCNs) (Kipf & Welling, 2017) attempt to ...

Explaining GNN over Evolving Graphs using Information Flow (11/19/2021)
Graphs are ubiquitous in many applications, such as social networks, kno...

SlenderGNN: Accurate, Robust, and Interpretable GNN, and the Reasons for its Success (10/08/2022)
Can we design a GNN that is accurate and interpretable at the same time?...
