Calibrating Factual Knowledge in Pretrained Language Models

10/07/2022
by Qingxiu Dong et al.

Previous literature has shown that Pretrained Language Models (PLMs) can store factual knowledge. However, we find that the facts stored in PLMs are not always correct. This motivates us to explore a fundamental question: how do we calibrate factual knowledge in PLMs without re-training from scratch? In this work, we propose CaliNet, a simple and lightweight method to achieve this goal. Specifically, we first detect whether a PLM has learned a fact correctly via a contrastive score between the correct fact and fake ones. If it has not, we use a lightweight method to add new parameters and adapt them on the specific factual texts. Experiments on the knowledge probing task show that the calibration is both effective and efficient. In addition, through closed-book question answering, we find that the calibrated PLM possesses knowledge generalization ability after fine-tuning. Beyond calibration performance, we further investigate and visualize the knowledge calibration mechanism.
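Below is a minimal sketch of the detection step, assuming a masked-LM formulation of the contrastive score: a fact counts as correctly stored only if the PLM assigns the true object a higher probability than plausible fake objects. The model choice, prompt template, example fact, and score interpretation are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of a contrastive knowledge check with a masked LM (assumption: the
# fact is expressed as a cloze prompt and the objects are single tokens).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-cased"  # any masked LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()


def object_probability(prompt: str, obj: str) -> float:
    """Probability of a single-token object at the [MASK] position."""
    obj_ids = tokenizer(obj, add_special_tokens=False)["input_ids"]
    assert len(obj_ids) == 1, "this sketch assumes single-token objects"
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits[0, mask_pos].softmax(dim=-1)[obj_ids[0]].item()


def contrastive_score(prompt: str, true_obj: str, fake_objs: list[str]) -> float:
    """Ratio of the true object's probability to the mean fake-object probability."""
    p_true = object_probability(prompt, true_obj)
    p_fake = sum(object_probability(prompt, o) for o in fake_objs) / len(fake_objs)
    return p_true / (p_fake + 1e-12)


# Hypothetical fact (France, capital, Paris) contrasted with fake objects.
prompt = f"The capital of France is {tokenizer.mask_token}."
score = contrastive_score(prompt, "Paris", ["London", "Berlin", "Rome"])
print(f"contrastive score = {score:.2f}")
# A score near or below 1 would flag the fact as not reliably stored,
# i.e. a candidate for calibration.
```

And a minimal sketch of the calibration step, assuming the lightweight new parameters take the form of a small trainable key-value memory run in parallel with a frozen feed-forward sublayer, so that only the added parameters are tuned on the corrective factual texts. Class names, dimensions, and the slot count below are hypothetical, not the released implementation.

```python
# Sketch: attach trainable "calibration slots" to a frozen FFN so the original
# PLM weights stay untouched while new parameters absorb the corrected facts.
import torch
import torch.nn as nn


class CalibrationSlots(nn.Module):
    """Extra key-value memory added in parallel with a frozen FFN."""

    def __init__(self, frozen_ffn: nn.Module, d_model: int, num_slots: int = 64):
        super().__init__()
        self.frozen_ffn = frozen_ffn
        for p in self.frozen_ffn.parameters():
            p.requires_grad = False            # original knowledge stays fixed
        self.keys = nn.Linear(d_model, num_slots, bias=False)    # new "keys"
        self.values = nn.Linear(num_slots, d_model, bias=False)  # new "values"

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Original FFN output plus the contribution of the new memory slots.
        return self.frozen_ffn(hidden) + self.values(torch.relu(self.keys(hidden)))


# Toy usage with a stand-in FFN; in practice the wrapped module would be the
# FFN of selected transformer layers, trained only on calibration data.
d_model = 16
ffn = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, d_model))
layer = CalibrationSlots(ffn, d_model)
print(layer(torch.randn(2, 5, d_model)).shape)  # torch.Size([2, 5, 16])
print([n for n, p in layer.named_parameters() if p.requires_grad])
# Only the new key/value slot parameters are trainable.
```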

