Negated LAMA: Birds cannot fly

11/08/2019
by Nora Kassner et al.

Pretrained language models have achieved remarkable improvements in a broad range of natural language processing tasks, including question answering (QA). To analyze pretrained language model performance on QA, we extend the LAMA (Petroni et al., 2019) evaluation framework with a component focused on negation. We find that pretrained language models are equally prone to generate facts ("birds can fly") and their negations ("birds cannot fly"). This casts doubt on the claim that pretrained language models have adequately learned factual knowledge.
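To make the probing setup concrete, the sketch below (in Python, using the Hugging Face transformers fill-mask pipeline) queries a masked language model with a cloze statement and its negated counterpart and compares the top predictions. The model choice (bert-base-cased) and the exact templates are illustrative assumptions, not the paper's precise configuration.

# Minimal sketch of a LAMA-style negation probe with a masked language model.
# Model name and cloze templates are illustrative assumptions, not the paper's exact setup.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Probe a fact and its negation with the same cloze template.
for prompt in ["Birds can [MASK].", "Birds cannot [MASK]."]:
    predictions = fill_mask(prompt, top_k=5)
    tokens = [p["token_str"].strip() for p in predictions]
    print(f"{prompt:<22} -> {tokens}")

If the model had robustly learned the fact, "fly" should rank highly only for the positive prompt; the paper's finding is that negating the prompt barely changes the predictions.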


Related research

BERT-kNN: Adding a kNN Search Component to Pretrained Language Models for Better QA (05/02/2020)
Khandelwal et al. (2020) show that a k-nearest-neighbor (kNN) component ...

HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models (09/06/2023)
Large Language Models (LLMs) pretrained on massive corpora exhibit remar...

Parameter-Efficient Neural Question Answering Models via Graph-Enriched Document Representations (06/01/2021)
As the computational footprint of modern NLP systems grows, it becomes i...

Baby's CoThought: Leveraging Large Language Models for Enhanced Reasoning in Compact Models (08/03/2023)
Large Language Models (LLMs) demonstrate remarkable performance on a var...

Infusing Finetuning with Semantic Dependencies (12/10/2020)
For natural language processing systems, two kinds of evidence support t...

Calibrating Factual Knowledge in Pretrained Language Models (10/07/2022)
Previous literature has proved that Pretrained Language Models (PLMs) ca...
