Why Do Masked Neural Language Models Still Need Common Sense Knowledge?

11/08/2019
by Sunjae Kwon, et al.

Contextualized word representations are currently learned by intricate neural network models, such as masked neural language models (MNLMs). These new representations have significantly enhanced performance on question answering tasks that require reading a paragraph. However, identifying the detailed knowledge encoded in a pretrained MNLM is difficult because its numerous parameters are intermingled. This paper provides empirical but insightful analyses of pretrained MNLMs with respect to common sense knowledge. First, we propose a test that measures what types of common sense knowledge pretrained MNLMs understand. The test shows that MNLMs partially understand various types of common sense knowledge but do not accurately capture the semantic meaning of relations. In addition, by grouping question answering problems according to their difficulty, we observe that pretrained MNLM-based models remain vulnerable to problems that require common sense knowledge. Finally, we experimentally demonstrate that existing MNLM-based models can be improved by combining them with knowledge from an external common sense repository.
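To make the probing idea concrete, the following is a minimal sketch (not the authors' code) of a cloze-style test: a common sense triple is verbalized into a sentence with the tail word masked, and we check whether a pretrained MNLM ranks the correct word highly at the masked position. It assumes the Hugging Face transformers library and bert-base-uncased; the triple (bird, CapableOf, fly) is an illustrative example, not one drawn from the paper's test set.

```python
# Sketch of a cloze-style common sense probe for a masked LM.
# Assumes: pip install torch transformers
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Verbalize a (head, relation, tail) triple with the tail masked,
# e.g. (bird, CapableOf, fly) -> "A bird can [MASK]."
sentence = f"A bird can {tokenizer.mask_token}."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and rank the vocabulary there.
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```

Running such a probe over many triples, grouped by relation type, gives a rough measure of which kinds of common sense knowledge the model has absorbed during pretraining.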



Related Research

09/01/2022 · Why Do Neural Language Models Still Need Commonsense Knowledge to Handle Semantic Variations in Question Answering?
Many contextualized word representations are now learned by intricate ne...

10/11/2020 · Do Language Embeddings Capture Scales?
Pretrained Language Models (LMs) have been shown to possess significant ...

09/01/2021 · Does Knowledge Help General NLU? An Empirical Study
It is often observed in knowledge-centric tasks (e.g., common sense ques...

01/19/2022 · Evaluating Machine Common Sense via Cloze Testing
Language models (LMs) show state of the art performance for common sense...

06/08/2017 · Dynamic Integration of Background Knowledge in Neural NLU Systems
Common-sense or background knowledge is required to understand natural l...

11/18/2017 · Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates
Spatial understanding is a fundamental problem with wide-reaching real-w...

05/24/2020 · Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers
Following the major success of neural language models (LMs) such as BERT...
