Commonsense mining as knowledge base completion? A study on the impact of novelty

04/24/2018
by   Stanisław Jastrzębski, et al.

Commonsense knowledge bases such as ConceptNet represent knowledge in the form of relational triples. Inspired by recent work by Li et al., we analyse whether knowledge base completion models can be used to mine commonsense knowledge from raw text. We propose the novelty of predicted triples with respect to the training set as an important factor in interpreting results. We critically analyse the difficulty of mining novel commonsense knowledge, and show that a simple baseline method outperforms the previous state of the art at predicting novel triples.
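The abstract's notion of novelty with respect to the training set can be illustrated with a minimal sketch. This assumes the simplest possible definition, the fraction of predicted (head, relation, tail) triples that do not appear verbatim in the training set; the paper itself may use a graded or distance-based measure, so the function below is illustrative only.

```python
# Illustrative sketch (not the paper's exact metric): novelty of predicted
# triples measured as the fraction absent from the training set.

def novelty(predictions, train_triples):
    """Return the fraction of predicted triples not in the training set.

    Both arguments are iterables of (head, relation, tail) tuples.
    """
    train = set(train_triples)
    if not predictions:
        return 0.0
    novel = [t for t in predictions if t not in train]
    return len(novel) / len(predictions)

train = [("cat", "IsA", "animal"), ("knife", "UsedFor", "cutting")]
preds = [("cat", "IsA", "animal"), ("dog", "IsA", "animal")]
print(novelty(preds, train))  # 0.5: one of the two predictions is novel
```

Under this definition, a completion model that only re-ranks training triples scores 0.0 regardless of accuracy, which is why the paper argues novelty must be reported alongside standard completion metrics.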


Related research

02/08/2020 · Mining Commonsense Facts from the Physical World
Textual descriptions of the physical world implicitly mention commonsens...

09/02/2019 · Commonsense Knowledge Mining from Pretrained Models
Inferring commonsense knowledge is a key challenge in natural language p...

02/18/2022 · Selection Strategies for Commonsense Knowledge
Selection strategies are broadly used in first-order logic theorem provi...

10/17/2019 · BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge
We introduce a simple yet effective method of integrating contextual emb...

10/27/2020 · DualTKB: A Dual Learning Bridge between Text and Knowledge Base
In this work, we present a dual learning approach for unsupervised text ...

05/05/2021 · Commonsense Knowledge Base Construction in the Age of Big Data
Compiling commonsense knowledge is traditionally an AI topic approached ...

08/31/2022 · Incorporating Task-specific Concept Knowledge into Script Learning
In this paper, we present Tetris, a new task of Goal-Oriented Script Com...
