Discovering Invariant Rationales for Graph Neural Networks

01/30/2022
by Ying-Xin Wu, et al.

Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features – the rationale – that guides the model prediction. Unfortunately, the leading rationalization models often rely on data biases, especially shortcut features, to compose rationales and make predictions, without probing the critical and causal patterns. Moreover, such data biases easily change outside the training distribution. As a result, these models suffer from a severe drop in interpretability and predictive performance on out-of-distribution data. In this work, we propose a new strategy, discovering invariant rationales (DIR), to construct intrinsically interpretable GNNs. It conducts interventions on the training distribution to create multiple interventional distributions, and then identifies the causal rationales that remain invariant across these distributions while filtering out the unstable spurious patterns. Experiments on both synthetic and real-world datasets validate the superiority of DIR over the leading baselines in terms of interpretability and generalization on graph classification. Code and datasets are available at https://github.com/Wuyxin/DIR-GNN.
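To make the intervention-and-invariance idea concrete, below is a minimal PyTorch sketch, not the authors' implementation (which lives in the linked repository and differs in detail). It assumes a toy dense-adjacency setting; the module and function names (DenseGCNLayer, RationaleGenerator, GraphClassifier, dir_style_loss) are hypothetical. A rationale generator soft-splits each graph into a candidate causal part and its complement, interventions pair each causal part with complements drawn from other graphs in the batch, and the objective penalizes both the mean and the variance of the classification risk across these interventional distributions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One mean-aggregation graph-convolution step on a dense adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # adj: (B, N, N) dense adjacency, x: (B, N, in_dim) node features
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))


class RationaleGenerator(nn.Module):
    """Scores each edge and soft-splits the graph into a candidate causal part and its complement."""

    def __init__(self, dim):
        super().__init__()
        self.gnn = DenseGCNLayer(dim, dim)
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, adj, x):
        h = self.gnn(adj, x)                                   # (B, N, dim) node embeddings
        n = h.size(1)
        pair = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                          h.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        prob = torch.sigmoid(self.edge_mlp(pair).squeeze(-1))  # (B, N, N) edge keep-probabilities
        return prob * adj, (1.0 - prob) * adj                  # causal part, spurious complement


class GraphClassifier(nn.Module):
    """GNN encoder with mean-pooling readout and a linear prediction head."""

    def __init__(self, dim, n_classes):
        super().__init__()
        self.gnn = DenseGCNLayer(dim, dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, adj, x):
        return self.head(self.gnn(adj, x).mean(dim=1))


def dir_style_loss(generator, classifier, adj, x, y, var_weight=1.0):
    """Intervention-and-invariance objective in the spirit of DIR (a sketch, not the paper's exact loss)."""
    causal_adj, spurious_adj = generator(adj, x)
    perm = torch.randperm(adj.size(0))
    risks = []
    # Interventions: combine each causal subgraph with spurious parts from its own
    # graph and from a randomly permuted graph in the batch.
    for s_adj in (spurious_adj, spurious_adj[perm]):
        logits = classifier(causal_adj + s_adj.detach(), x)
        risks.append(F.cross_entropy(logits, y))
    risks = torch.stack(risks)
    # Invariance: keep both the mean risk and its variance across interventions low,
    # so the part of the graph that drives the prediction is stable under distribution change.
    return risks.mean() + var_weight * risks.var()


if __name__ == "__main__":
    # Toy batch: 8 random symmetric graphs with 10 nodes, 16-dim features, 2 classes.
    B, N, D = 8, 10, 16
    adj = (torch.rand(B, N, N) > 0.7).float()
    adj = ((adj + adj.transpose(1, 2)) > 0).float()
    x, y = torch.randn(B, N, D), torch.randint(0, 2, (B,))
    gen, clf = RationaleGenerator(D), GraphClassifier(D, 2)
    loss = dir_style_loss(gen, clf, adj, x, y)
    loss.backward()
    print(float(loss))
```

The variance-of-risks term is just one simple way to encourage invariance across the interventional distributions; the paper's actual objective and architecture differ, so treat this only as an illustration of the overall strategy.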


Related research

11/20/2021 · Generalizing Graph Neural Networks on Out-Of-Distribution Graphs
Graph Neural Networks (GNNs) are proposed without considering the agnost...

11/25/2022 · Interpreting Unfairness in Graph Neural Networks via Training Node Attribution
Graph Neural Networks (GNNs) have emerged as the leading paradigm for so...

03/27/2023 · Mind the Label Shift of Augmentation-based Graph OOD Generalization
Out-of-distribution (OOD) generalization is an important issue for Graph...

12/30/2021 · Deconfounded Training for Graph Neural Networks
Learning powerful representations is one central theme of graph neural n...

09/28/2022 · Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure
Most Graph Neural Networks (GNNs) predict the labels of unseen graphs by...

05/01/2023 · Discover and Cure: Concept-aware Mitigation of Spurious Correlation
Deep neural networks often rely on spurious correlations to make predict...

11/26/2022 · Distribution Free Prediction Sets for Node Classification
Graph Neural Networks (GNNs) are able to achieve high classification acc...
