Language Models (Mostly) Know What They Know

07/11/2022
by Saurav Kadavath, et al.

We study whether language models can evaluate the validity of their own claims and predict which questions they will be able to answer correctly. We first show that larger models are well-calibrated on diverse multiple-choice and true/false questions when these are provided in the right format. Thus we can approach self-evaluation on open-ended sampling tasks by asking models to first propose answers, and then to evaluate the probability "P(True)" that their answers are correct. We find encouraging performance, calibration, and scaling for P(True) on a diverse array of tasks. Performance at self-evaluation further improves when we allow models to consider many of their own samples before predicting the validity of one specific possibility. Next, we investigate whether models can be trained to predict "P(IK)", the probability that "I know" the answer to a question, without reference to any particular proposed answer. Models perform well at predicting P(IK) and partially generalize across tasks, though they struggle with calibration of P(IK) on new tasks. The predicted P(IK) probabilities also increase appropriately in the presence of relevant source materials in the context, and in the presence of hints towards the solution of mathematical word problems. We hope these observations lay the groundwork for training more honest models, and for investigating how honesty generalizes to cases where models are trained on objectives other than the imitation of human writing.
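To make the P(True) procedure concrete, here is a minimal sketch of how one might score a model's own proposed answer. The True/False prompt wording follows the setup the abstract describes, but the exact template and the `logprob_fn` interface (a hypothetical stand-in for any API that returns next-token log-probabilities) are assumptions for illustration, not the paper's verbatim implementation.

```python
import math
from typing import Callable

# Hypothetical interface: given a prompt and a candidate continuation
# token, return the model's log-probability of that token. Any LM API
# exposing next-token log-probabilities could back this.
LogProbFn = Callable[[str, str], float]

def p_true(question: str, proposed_answer: str, logprob_fn: LogProbFn) -> float:
    """Ask the model a True/False question about its own proposed
    answer and read off the probability of the "True" option (a
    hedged reconstruction of the P(True) setup; the exact prompt
    wording is an assumption)."""
    prompt = (
        f"Question: {question}\n"
        f"Proposed Answer: {proposed_answer}\n"
        "Is the proposed answer:\n"
        " (A) True\n"
        " (B) False\n"
        "The proposed answer is:"
    )
    # Normalize over the two options so the result is a probability.
    p_a = math.exp(logprob_fn(prompt, " (A)"))
    p_b = math.exp(logprob_fn(prompt, " (B)"))
    return p_a / (p_a + p_b)
```

The same scheme extends to the many-samples variant the abstract mentions: one would list several brainstormed answers in the prompt before asking about one specific candidate.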

Related Research

How Can We Know When Language Models Know? (12/02/2020)
Recent works have shown that language models (LM) capture different type...

Teaching language models to support answers with verified quotes (03/21/2022)
Recent large language models often answer factual questions correctly. B...

Calibrate Before Use: Improving Few-Shot Performance of Language Models (02/19/2021)
GPT-3 can perform numerous tasks when provided a natural language prompt...

TruthfulQA: Measuring How Models Mimic Human Falsehoods (09/08/2021)
We propose a benchmark to measure whether a language model is truthful i...

Teaching Models to Express Their Uncertainty in Words (05/28/2022)
We show that a GPT-3 model can learn to express uncertainty about its ow...

What's the best place for an AI conference, Vancouver or ______: Why completing comparative questions is difficult (04/05/2021)
Although large neural language models (LMs) like BERT can be finetuned t...

An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs (05/21/2022)
Self-supervision based on the information extracted from large knowledge...