GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks
Recently, Graph Neural Networks (GNNs) have significantly advanced the performance of machine learning tasks on graphs. However, this breakthrough raises a natural question: how does a GNN arrive at its decisions, and can we trust its predictions? In critical fields such as biomedicine, where wrong decisions can have severe consequences, interpreting the inner working mechanisms of GNNs before applying them is crucial. In this paper, we propose GNNInterpreter, a model-agnostic, model-level explanation method for GNNs that follow the message-passing scheme, to explain the high-level decision-making process of the GNN model. More specifically, using a continuous relaxation of graphs and the reparameterization trick, GNNInterpreter learns a probabilistic generative graph distribution that produces the graph most representative of the target prediction in the eyes of the GNN model. Compared with the only existing work on model-level GNN explanation, GNNInterpreter is more computationally efficient and more flexible in generating explanation graphs with different types of node features and edge features, without introducing another black box to explain the GNN and without requiring domain-specific knowledge. Additionally, experimental studies conducted on four different datasets demonstrate that the explanation graphs generated by GNNInterpreter match the desired graph pattern when the model is ideal and reveal potential model pitfalls when they exist.
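To make the core idea concrete, the following is a minimal, hypothetical sketch of explaining a target class by learning a relaxed graph distribution with the reparameterization trick. It assumes a frozen `gnn` that accepts a dense (soft) adjacency matrix and returns class logits; the names `gnn`, `num_nodes`, and `target_class` are illustrative assumptions, and node/edge features and adjacency symmetrization are omitted. This is a sketch of the general technique, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def explain_class(gnn, num_nodes, target_class, steps=1000, tau=0.5, lr=0.1):
    """Learn a Bernoulli edge distribution whose samples score highly
    for `target_class` under the (frozen) GNN. Hypothetical sketch."""
    # Continuous relaxation: one learnable logit per potential edge.
    edge_logits = torch.zeros(num_nodes, num_nodes, requires_grad=True)
    opt = torch.optim.Adam([edge_logits], lr=lr)
    for _ in range(steps):
        # Reparameterization trick: draw a soft adjacency matrix via a
        # binary Gumbel-Softmax (concrete) sample so gradients flow
        # through the sampling step.
        two_class = torch.stack([edge_logits, -edge_logits], dim=-1)
        adj_soft = F.gumbel_softmax(two_class, tau=tau, hard=False)[..., 0]
        # Maximize the GNN's logit for the target class on the relaxed graph.
        loss = -gnn(adj_soft)[target_class]  # assumes gnn(adj) -> class logits
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Discretize: keep edges with high learned probability.
    return (torch.sigmoid(edge_logits) > 0.5).float()
```

In this sketch, the learned `edge_logits` parameterize the probabilistic generative graph distribution, and the discretized output plays the role of the most representative explanation graph for the chosen class.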