Inducing Interpretability in Knowledge Graph Embeddings

12/10/2017
by Chandrahas et al.

We study the problem of inducing interpretability in Knowledge Graph (KG) embeddings. Many vector space models have been proposed for this problem; however, most of them do not address the interpretability (semantics) of individual dimensions. In this work, we build on the Universal Schema (Riedel et al., 2013) and propose a method for inducing interpretability in KG embeddings using entity co-occurrence statistics. The proposed method significantly improves interpretability while maintaining comparable performance on other KG tasks.
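The abstract only names the ingredients (KG embeddings plus entity co-occurrence statistics), so below is a minimal, illustrative Python sketch of the general idea: scoring a dimension's interpretability by how often its top-ranked entities co-occur, in the spirit of topic-model coherence. The names (coherence_penalty, top_k) and the toy data are assumptions for illustration, not the paper's actual objective or code.

```python
import numpy as np

# Toy setup (assumed, not from the paper): E entities embedded in D
# dimensions, plus a symmetric entity-entity co-occurrence count matrix C.
rng = np.random.default_rng(0)
E, D = 100, 10
X = rng.normal(size=(E, D))           # entity embeddings (to be trained)
C = rng.poisson(1.0, size=(E, E))     # stand-in co-occurrence counts
C = np.triu(C, 1) + np.triu(C, 1).T   # symmetrize, zero the diagonal


def coherence_penalty(X, C, top_k=5):
    """Negative co-occurrence mass among each dimension's top-k entities.

    Lower is better: a dimension whose highest-scoring entities co-occur
    often in the corpus is treated as more interpretable.
    """
    penalty = 0.0
    for d in range(X.shape[1]):
        top = np.argsort(-X[:, d])[:top_k]    # top entities for dimension d
        penalty -= C[np.ix_(top, top)].sum()  # reward their co-occurrence
    return penalty


# A training loop would add this term, scaled by a hyperparameter, to the
# usual KG embedding loss, e.g. loss = kg_loss + lam * coherence_penalty(X, C).
print(coherence_penalty(X, C))
```

Note that the hard top-k selection above is non-differentiable; an actual training objective would use a smooth surrogate of this coherence term so it can be optimized jointly with the embedding loss.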
