Mining Commonsense Facts from the Physical World

02/08/2020
by   Yanyan Zou, et al.

Textual descriptions of the physical world implicitly mention commonsense facts, while commonsense knowledge bases explicitly represent such facts as triples. Compared to the dramatically increasing volume of text data, the coverage of existing knowledge bases is far from complete. Most prior studies on populating knowledge bases focus on Freebase; automatically completing commonsense knowledge bases to improve their coverage remains under-explored. In this paper, we propose a new task of mining commonsense facts from raw text that describes the physical world. We build an effective new model that fuses information from both the text sequence and existing knowledge base resources. We then create two large annotated datasets, each with approximately 200k instances, for commonsense knowledge base completion. Empirical results demonstrate that our model significantly outperforms the baselines.
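The abstract contrasts implicit textual mentions with the explicit triple representation used by commonsense knowledge bases. A minimal sketch of ConceptNet-style (head, relation, tail) triples and the knowledge-base-completion view of the task; the relation names follow ConceptNet conventions, but the specific facts and the `complete` helper are illustrative assumptions, not the paper's model:

```python
from typing import NamedTuple


class Triple(NamedTuple):
    """A commonsense fact as a (head, relation, tail) triple."""
    head: str
    relation: str
    tail: str


# Hypothetical ConceptNet-style facts for illustration only;
# these are not drawn from the paper's datasets.
kb = [
    Triple("knife", "UsedFor", "cutting"),
    Triple("book", "AtLocation", "shelf"),
]


def complete(kb, head, relation):
    """Return tail entities for a (head, relation) query: the
    knowledge-base-completion view of commonsense fact mining."""
    return [t.tail for t in kb if t.head == head and t.relation == relation]


print(complete(kb, "knife", "UsedFor"))  # prints: ['cutting']
```

Mining facts from text then amounts to extracting such triples from sentences (e.g., "she sliced the bread with a knife" implies `Triple("knife", "UsedFor", "cutting")`) and adding the ones the knowledge base is missing.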


Related research

04/24/2018
Commonsense mining as knowledge base completion? A study on the impact of novelty
Commonsense knowledge bases such as ConceptNet represent knowledge in th...

03/03/2021
CogNet: Bridging Linguistic Knowledge, World Knowledge and Commonsense Knowledge
In this paper, we present CogNet, a knowledge base (KB) dedicated to int...

10/27/2020
DualTKB: A Dual Learning Bridge between Text and Knowledge Base
In this work, we present a dual learning approach for unsupervised text ...

07/01/2021
Essence of Factual Knowledge
Knowledge bases are collections of domain-specific and commonsense facts...

06/11/2020
A Probabilistic Model with Commonsense Constraints for Pattern-based Temporal Fact Extraction
Textual patterns (e.g., Country's president Person) are specified and/or...

02/18/2022
Selection Strategies for Commonsense Knowledge
Selection strategies are broadly used in first-order logic theorem provi...

10/17/2019
BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge
We introduce a simple yet effective method of integrating contextual emb...
