Knowledge Base Completion: Baselines Strike Back

05/30/2017
by Rudolf Kadlec, et al.

Many papers have been published on the knowledge base completion task in the past few years. Most of them introduce novel architectures for relation learning and are evaluated on standard datasets such as FB15k and WN18. This paper shows that almost all models published on FB15k can be outperformed by an appropriately tuned baseline: our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural changes rather than hyper-parameter tuning or different training objectives. This should prompt future research to reconsider how the performance of models is evaluated and reported.
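For readers unfamiliar with the baseline, DistMult scores a triple (head, relation, tail) as a trilinear product of the head and tail entity embeddings with a diagonal relation embedding. The following is a minimal illustrative sketch in NumPy, not the authors' implementation; the embedding dimension, the random (untrained) vectors, and the function names are assumptions made purely for demonstration.

    import numpy as np

    # Illustrative DistMult scoring sketch (not the paper's code).
    # DistMult represents each entity and relation as a d-dimensional vector and
    # scores a triple (h, r, t) as sum_i e_h[i] * w_r[i] * e_t[i].

    rng = np.random.default_rng(0)
    dim = 100                                  # embedding size, assumed for illustration
    num_entities, num_relations = 14951, 1345  # FB15k has 14,951 entities and 1,345 relations

    # Random vectors stand in for trained embeddings in this sketch.
    entity_emb = rng.normal(scale=0.1, size=(num_entities, dim))
    relation_emb = rng.normal(scale=0.1, size=(num_relations, dim))

    def distmult_score(h: int, r: int, t: int) -> float:
        """Score of triple (h, r, t); higher means more plausible."""
        return float(np.sum(entity_emb[h] * relation_emb[r] * entity_emb[t]))

    def rank_tails(h: int, r: int) -> np.ndarray:
        """Rank all candidate tails for (h, r, ?), as in link-prediction evaluation."""
        scores = (entity_emb[h] * relation_emb[r]) @ entity_emb.T
        return np.argsort(-scores)  # entity indices sorted by descending score

In the link-prediction evaluation used on FB15k, every candidate tail (or head) entity is scored this way and the rank of the correct entity feeds metrics such as Hits@10 and mean rank, which is where hyper-parameter tuning of this simple model turns out to matter.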

Related research

05/02/2020  Knowledge Base Completion: Baseline strikes back (Again)
Knowledge Base Completion has been a very active area recently, where mu...

10/17/2018  On Evaluating Embedding Models for Knowledge Base Completion
Knowledge bases contribute to many artificial intelligence tasks, yet th...

04/19/2017  An Interpretable Knowledge Transfer Model for Knowledge Base Completion
Knowledge bases are important resources for a variety of natural languag...

11/15/2021  The Choice of Knowledge Base in Automated Claim Checking
Automated claim checking is the task of determining the veracity of a cl...

06/19/2018  Canonical Tensor Decomposition for Knowledge Base Completion
The problem of Knowledge Base Completion can be framed as a 3rd-order bi...

08/31/2022  Incorporating Task-specific Concept Knowledge into Script Learning
In this paper, we present Tetris, a new task of Goal-Oriented Script Com...
