BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge

10/17/2019
by Jeff Da, et al.

We introduce a simple yet effective method of integrating contextual embeddings with commonsense graph embeddings, dubbed BERT Infused Graphs: Matching Over Other embeDdings (BIG MOOD). First, we introduce a preprocessing method that speeds up querying knowledge bases. Then, we develop a method of creating knowledge embeddings from each knowledge base. Next, we describe a method of aligning tokens between two misaligned tokenization schemes. Finally, we contribute a method of contextualizing BERT after combining it with the knowledge base embeddings. We also show BERT's tendency to correct lower-accuracy question types. Our model achieves higher accuracy than BERT, places fifth on the official leaderboard of the shared task, and scores highest among systems without any additional language model pretraining.
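
To make the token-alignment step concrete, the sketch below (an illustration under assumptions, not the authors' released code) maps BERT WordPiece tokens back to whitespace-level words, assuming subword continuations carry the standard "##" prefix:

    def align_wordpieces_to_words(wordpieces):
        """Map each WordPiece index to the index of the whitespace word it belongs to."""
        alignment = []
        word_idx = -1
        for piece in wordpieces:
            if not piece.startswith("##"):  # a new whitespace-level word starts here
                word_idx += 1
            alignment.append(word_idx)
        return alignment

    # Example: the words "commonsense graphs" split into four WordPieces
    print(align_wordpieces_to_words(["commons", "##ense", "graph", "##s"]))  # [0, 0, 1, 1]

With such a mapping, a single knowledge embedding per whitespace word can be broadcast to each of its WordPieces before the two embedding spaces are combined.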
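
The combination step itself can be sketched as a learned projection over the concatenation of the two embedding types; the dimensions and the single linear fusion layer here are illustrative assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class EmbeddingFusion(nn.Module):
        """Concatenate contextual token embeddings with aligned KB graph
        embeddings, then project back to the contextual hidden size."""

        def __init__(self, bert_dim=768, graph_dim=300):
            super().__init__()
            self.proj = nn.Linear(bert_dim + graph_dim, bert_dim)

        def forward(self, bert_emb, graph_emb):
            # bert_emb:  (batch, seq_len, bert_dim) contextual embeddings
            # graph_emb: (batch, seq_len, graph_dim) KB embeddings aligned
            #            to the same token positions
            fused = torch.cat([bert_emb, graph_emb], dim=-1)
            return self.proj(fused)

    fusion = EmbeddingFusion()
    out = fusion(torch.randn(2, 16, 768), torch.randn(2, 16, 300))
    print(out.shape)  # torch.Size([2, 16, 768])

Projecting back to the contextual hidden size keeps the fused representation compatible with further transformer layers, in the spirit of the abstract's "contextualizing BERT after combining with knowledge base embeddings."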


Related research

10/02/2019
Cracking the Contextual Commonsense Code: Understanding Commonsense Reasoning Aptitude of Deep Contextual Representations
Pretrained deep contextual representations have advanced the state-of-the-art ...

04/24/2018
Commonsense mining as knowledge base completion? A study on the impact of novelty
Commonsense knowledge bases such as ConceptNet represent knowledge in the ...

04/14/2021
Static Embeddings as Efficient Knowledge Bases?
Recent research investigates factual knowledge stored in large pretrained ...

02/08/2020
Mining Commonsense Facts from the Physical World
Textual descriptions of the physical world implicitly mention commonsense ...

02/18/2022
Selection Strategies for Commonsense Knowledge
Selection strategies are broadly used in first-order logic theorem proving ...

03/03/2021
CogNet: Bridging Linguistic Knowledge, World Knowledge and Commonsense Knowledge
In this paper, we present CogNet, a knowledge base (KB) dedicated to int...

06/27/2016
Lifted Rule Injection for Relation Embeddings
Methods based on representation learning currently hold the state-of-the-art ...
