Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

06/17/2021
by Boxi Cao, et al.

Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, suggesting that MLMs could potentially be a reliable knowledge source. In this paper, we conduct a rigorous study of the underlying prediction mechanisms of MLMs across different extraction paradigms. By investigating the behavior of MLMs, we find that the previously reported strong performance is mainly due to biased prompts that overfit dataset artifacts. Furthermore, incorporating illustrative cases and external contexts improves knowledge prediction mainly because of entity type guidance and gold answer leakage. Our findings shed light on the underlying prediction mechanisms of MLMs and strongly question the earlier conclusion that current MLMs can potentially serve as reliable factual knowledge bases.
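The extraction setting studied here is cloze-style probing: a relational fact is turned into a prompt with a masked slot, and the MLM's fill-in distribution is read off as its "knowledge". Below is a minimal sketch of this kind of probe using the Hugging Face transformers fill-mask pipeline; the model choice, template, and example fact are illustrative, not taken from the paper:

```python
from transformers import pipeline

# Sketch of cloze-style factual probing with a masked LM.
# Model, template, and fact are illustrative assumptions.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

prompt = "Dante was born in [MASK]."
for pred in fill_mask(prompt, top_k=5):
    # Each prediction carries the filled token and its probability.
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```

One failure mode the abstract names is gold answer leakage: when an external context is supplied to help the model, the answer is often already present in that context, so a correct prediction may reflect copying rather than knowledge stored in the MLM. A hedged sketch of one way such leakage could be flagged; the helper function and example context are hypothetical:

```python
def context_leaks_answer(context: str, gold_answer: str) -> bool:
    """Heuristic: if the retrieved context already contains the gold
    answer verbatim, a correct prediction may reflect copying from
    the context rather than factual knowledge stored in the MLM."""
    return gold_answer.lower() in context.lower()

# Hypothetical retrieved context for the query above.
context = "Dante Alighieri was an Italian poet born in Florence."
print(context_leaks_answer(context, "Florence"))  # True: answer leaked
```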

Related research

10/10/2021 · Language Models As or For Knowledge Bases
Pre-trained language models (LMs) have recently gained attention for the...

10/01/2020 · CoLAKE: Contextualized Language and Knowledge Embedding
With the emerging branch of incorporating factual knowledge into pre-tra...

08/20/2020 · Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries
Pretrained language models have been suggested as a possible alternative...

07/15/2019 · Myers-Briggs Personality Classification and Personality-Specific Language Generation Using Pre-trained Language Models
The Myers-Briggs Type Indicator (MBTI) is a popular personality metric t...

04/12/2022 · A Review on Language Models as Knowledge Bases
Recently, there has been a surge of interest in the NLP community on the...

05/12/2021 · How Reliable are Model Diagnostics?
In the pursuit of a deeper understanding of a model's behaviour, there i...