Language Models as Knowledge Bases?

09/03/2019
by Fabio Petroni et al.

Recent progress in pretraining language models on large textual corpora has led to a surge of improvements on downstream NLP tasks. Whilst learning linguistic knowledge, these models may also be storing relational knowledge present in the training data, and may be able to answer queries structured as "fill-in-the-blank" cloze statements. Language models have many advantages over structured knowledge bases: they require no schema engineering, allow practitioners to query an open class of relations, are easy to extend to more data, and require no human supervision to train. We present an in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models. We find that (i) without fine-tuning, BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge, (ii) BERT also does remarkably well on open-domain question answering against a supervised baseline, and (iii) certain types of factual knowledge are learned much more readily than others by standard language model pretraining approaches. The surprisingly strong ability of these models to recall factual knowledge without any fine-tuning demonstrates their potential as unsupervised open-domain QA systems. The code to reproduce our analysis is available at https://github.com/facebookresearch/LAMA.
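As a concrete illustration of the cloze-style probing the abstract describes, the sketch below queries a pretrained BERT for one relational fact through a fill-in-the-blank statement. It is a minimal sketch using the Hugging Face transformers library, not the LAMA codebase itself; the model name, the example statement, and the top-5 cutoff are illustrative assumptions rather than details from the paper.

# Minimal sketch of cloze-style factual probing with a pretrained BERT,
# in the spirit of the LAMA probe (not the official LAMA code).
# Requires: pip install torch transformers
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

# A relational fact rephrased as a fill-in-the-blank cloze statement.
statement = "The capital of France is [MASK]."
inputs = tokenizer(statement, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Find the [MASK] position and rank vocabulary tokens by predicted score.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0].item()
top_ids = logits[0, mask_pos].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
# A model that has stored this fact should rank "Paris" near the top.

The official probe at https://github.com/facebookresearch/LAMA applies this idea at scale, scoring predictions over thousands of such facts with rank-based metrics such as precision at 1.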
