Explaining Link Predictions in Knowledge Graph Embedding Models with Influential Examples

12/05/2022
by Adrianna Janik, et al.

We study the problem of explaining link predictions in Knowledge Graph Embedding (KGE) models. We propose an example-based approach that exploits the latent space representation of nodes and edges in a knowledge graph to explain predictions. We evaluate the importance of the identified triples by observing the progressive degradation of model performance as influential triples are removed. Our experiments demonstrate that this approach to generating explanations outperforms baselines on KGE models for two publicly available datasets.
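As a rough illustration of the idea, the sketch below (a minimal, hypothetical example, not the paper's actual method) ranks training triples by cosine similarity to a predicted triple in the shared embedding space and treats the top-ranked ones as candidate influential examples. The lookup tables ent_emb and rel_emb, the concatenation-based triple representation, and all function names are assumptions made for illustration only.

```python
import numpy as np

def triple_vector(triple, ent_emb, rel_emb):
    """Latent-space representation of a triple: concatenation of the
    head, relation, and tail embeddings (an illustrative choice)."""
    h, r, t = triple
    return np.concatenate([ent_emb[h], rel_emb[r], ent_emb[t]])

def influential_triples(target, train_triples, ent_emb, rel_emb, k=5):
    """Return the k training triples most similar to the target triple
    in embedding space, as candidate influential examples."""
    tv = triple_vector(target, ent_emb, rel_emb)
    tv /= np.linalg.norm(tv) + 1e-12
    scores = []
    for triple in train_triples:
        v = triple_vector(triple, ent_emb, rel_emb)
        scores.append(np.dot(tv, v / (np.linalg.norm(v) + 1e-12)))
    top = np.argsort(scores)[::-1][:k]
    return [train_triples[i] for i in top]

# Hypothetical usage: embeddings from a trained KGE model, triples given
# as (head_id, relation_id, tail_id) index tuples.
ent_emb = np.random.rand(100, 16)   # stand-in entity embeddings
rel_emb = np.random.rand(10, 16)    # stand-in relation embeddings
train = [(0, 1, 2), (3, 1, 4), (5, 2, 6)]
print(influential_triples((0, 1, 4), train, ent_emb, rel_emb, k=2))
```

Following the evaluation protocol described in the abstract, one would then remove the returned triples from the training set, retrain the KGE model, and compare link-prediction metrics before and after removal; a larger degradation indicates more influential examples.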

Related research

09/13/2022
Subsampling for Knowledge Graph Embedding Explained
In this article, we explain the recent advance of subsampling methods in...

10/14/2020
Explaining Creative Artifacts
Human creativity is often described as the mental process of combining a...

05/18/2021
Learning Embeddings from Knowledge Graphs With Numeric Edge Attributes
Numeric values associated to edges of a knowledge graph have been used t...

07/02/2019
Knowledge Graph Embedding for Ecotoxicological Effect Prediction
Exploring the effects a chemical compound has on a species takes a consi...

05/04/2022
Explainable Knowledge Graph Embedding: Inference Reconciliation for Knowledge Inferences Supporting Robot Actions
Learned knowledge graph representations supporting robots contain a weal...

09/25/2019
On Understanding Knowledge Graph Representation
Many methods have been developed to represent knowledge graph data, whic...

10/28/2021
Explaining Latent Representations with a Corpus of Examples
Modern machine learning models are complicated. Most of them rely on con...
