Towards Explanation for Unsupervised Graph-Level Representation Learning

05/20/2022
by   Qinghua Zheng, et al.

Due to the superior performance of Graph Neural Networks (GNNs) in various domains, there is increasing interest in the GNN explanation problem: "which fraction of the input graph is most crucial to the model's decision?" Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored. The opaqueness of graph representations may lead to unexpected risks when they are deployed in high-stakes decision-making scenarios. In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB). We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the expressiveness and robustness of representations benefit the fidelity of explanatory subgraphs. Experimental results on both synthetic and real-world datasets demonstrate the superiority of our developed explainer and the validity of our theoretical analysis.
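For context, the classic Information Bottleneck seeks a compressed representation that stays predictive of a target. A plausible sketch of how such a principle adapts to unsupervised subgraph explanation (the exact USIB formulation is not given in this abstract, so the symbols below are illustrative assumptions): with G the input graph, G_s a candidate explanatory subgraph, and Z the learned unsupervised representation standing in for the missing label,

% Generic IB: compress X into T while preserving information about Y
% \min_{T} \; I(X; T) - \beta \, I(T; Y)
%
% Illustrative unsupervised-subgraph variant: the representation Z
% replaces the label as the relevance target (assumption)
\[
\max_{G_s \subseteq G} \; I(Z; G_s) - \beta \, I(G; G_s)
\]

Here I(·;·) denotes mutual information and β trades off informativeness of the subgraph about the representation against compression of the input graph.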
