ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions

by Bo Kang, et al.

Networks are powerful data structures, but are challenging to work with for conventional machine learning methods. Network Embedding (NE) methods attempt to resolve this by learning vector representations of the nodes for subsequent use in downstream machine learning tasks. Link Prediction (LP), one such downstream task, is an important use case and a popular benchmark for NE methods. Unfortunately, while NE methods perform exceedingly well at this task, they lack transparency compared to simpler LP approaches. We introduce ExplaiNE, an approach that offers counterfactual explanations for NE-based LP methods by identifying existing links in the network that explain the predicted links. ExplaiNE is applicable to a broad class of NE algorithms. An extensive empirical evaluation for the NE method 'Conditional Network Embedding' in particular demonstrates its accuracy and scalability.
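The counterfactual idea in the abstract, ranking existing links by how much their removal would change a predicted link's score, can be illustrated with a minimal sketch. This is not ExplaiNE's actual (gradient-based) method or the Conditional Network Embedding model; it uses a simple spectral embedding and brute-force edge deletion, with all function names (`embed`, `link_score`, `explain_link`) being illustrative assumptions:

```python
import numpy as np

def embed(A, d=2):
    # Toy spectral embedding: top-d eigenpairs of the symmetric adjacency matrix,
    # scaled by the square root of the eigenvalue magnitudes.
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(vals)[::-1][:d]  # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def link_score(A, i, j, d=2):
    # Predicted affinity of the (non-)link (i, j): dot product of embeddings.
    X = embed(A, d)
    return float(X[i] @ X[j])

def explain_link(A, i, j, d=2):
    """Rank existing edges by how much deleting each one lowers score(i, j).

    The top-ranked edges are the counterfactual explanation: links whose
    absence would most weaken the prediction.
    """
    base = link_score(A, i, j, d)
    drops = []
    for u, v in zip(*np.triu_indices_from(A, k=1)):
        if A[u, v]:                          # consider existing edges only
            B = A.copy()
            B[u, v] = B[v, u] = 0            # counterfactual: remove the edge
            drops.append(((int(u), int(v)), base - link_score(B, i, j, d)))
    return sorted(drops, key=lambda t: -t[1])

# Toy graph: a triangle 0-1-2 with a tail 2-3-4; explain the predicted link (0, 3).
A = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]:
    A[u, v] = A[v, u] = 1
ranking = explain_link(A, 0, 3)
```

Re-embedding the whole network for every candidate edge is quadratic in the edge count and only feasible on toy graphs; the point of a gradient-based approach like ExplaiNE's is to approximate these score changes without recomputing the embedding per edge.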
