Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows

by Junchi Yu, et al.

As Graph Neural Networks (GNNs) are widely adopted in digital pathology, there is increasing attention to developing explanation models (explainers) of GNNs for improved transparency in clinical decisions. Existing explainers discover an explanatory subgraph relevant to the prediction. However, such a subgraph is insufficient to reveal all the critical biological substructures: if the prediction remains unchanged after removing that subgraph, other substructures must also carry predictive signal. Hence, an explanatory subgraph should be not only necessary for the prediction, but also sufficient to uncover the most predictive regions. Such an explanation requires measuring the information transferred from different input subgraphs to the predictive output, which we define as information flow. In this work, we address these key challenges and propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs. To evaluate the information flow within the GNN's prediction, we first propose a novel notion of predictiveness, named f-information, which is directional and incorporates the realistic capacity of the GNN model. Based on it, IFEXPLAINER generates the explanatory subgraph with maximal information flow to the prediction. Meanwhile, it minimizes the information flow from the input to the predictive result after removing the explanation. Thus, the produced explanation is necessarily important to the prediction and sufficient to reveal the most crucial substructures. We evaluate IFEXPLAINER on interpreting GNN predictions for breast cancer subtyping. Experimental results on the BRACS dataset show the superior performance of the proposed method.
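The selection principle in the abstract, rewarding information flow from the explanation subgraph to the prediction while penalizing flow from the remaining graph, can be sketched with a toy proxy. Everything below is an illustrative assumption: the stand-in "model", the node weights, the confidence-based flow proxy, and the sparsity penalty are not the paper's actual f-information estimator or optimization procedure.

```python
import itertools
import math

# Toy stand-in for a trained GNN: class probability from the sum of retained
# node weights. Purely illustrative -- not the paper's model or dataset.
NODE_WEIGHTS = [0.9, 0.1, 0.05, 0.8]  # hypothetical per-node signal strength

def model_prob(mask):
    """Confidence in the predicted class when only masked-in nodes remain."""
    kept = sum(w for w, m in zip(NODE_WEIGHTS, mask) if m)
    return 1.0 / (1.0 + math.exp(-(kept - 0.5)))  # logistic over retained signal

def flow(mask):
    """Crude proxy for information flow from a subgraph to the output:
    the predictive confidence that subgraph alone supports."""
    return model_prob(mask)

def objective(mask, lam=1.0, beta=0.15):
    # Reward flow from the explanation to the prediction, penalize flow
    # from the leftover graph, and add a sparsity penalty (beta per node).
    complement = tuple(1 - m for m in mask)
    return flow(mask) - lam * flow(complement) - beta * sum(mask)

# Exhaustively score all non-trivial node masks on this 4-node toy graph.
candidates = [m for m in itertools.product([0, 1], repeat=len(NODE_WEIGHTS))
              if 0 < sum(m) < len(NODE_WEIGHTS)]
best = max(candidates, key=objective)
print(best)  # -> (1, 0, 0, 1): the two high-signal nodes are selected
```

Without the complement term, a mask that merely preserves the prediction would suffice (necessity only); penalizing the complement's flow is what pushes the explanation toward also being sufficient, i.e. leaving little predictive signal behind.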


Deconfounding to Explanation Evaluation in Graph Neural Networks

Explainability of graph neural networks (GNNs) aims to answer “Why the G...

Robust Counterfactual Explanations on Graph Neural Networks

Massive deployment of Graph Neural Networks (GNNs) in high-stake applica...

Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis

Graph neural networks (GNNs) have been utilized to create multi-layer gr...

Towards Explanation for Unsupervised Graph-Level Representation Learning

Due to the superior performance of Graph Neural Networks (GNNs) in vario...

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks

With the ever-increasing popularity and applications of graph neural net...

Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment

Uncovering rationales behind predictions of graph neural networks (GNNs)...

On the Probability of Necessity and Sufficiency of Explaining Graph Neural Networks: A Lower Bound Optimization Approach

Explainability of Graph Neural Networks (GNNs) is critical to various GN...
