TruthfulQA: Measuring How Models Mimic Human Falsehoods

09/08/2021
by Stephanie Lin, et al.

We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. For example, the 6B-parameter GPT-J model was 17% less truthful than its 125M-parameter counterpart. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
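The headline metric is simply the fraction of the 817 questions on which a model's answer is judged truthful. A minimal sketch of that computation follows; the `truthfulness_rate` helper and the example labels are illustrative assumptions, not the paper's actual evaluation pipeline (which uses human judges and automated metrics to produce the per-question truthfulness labels).

```python
# Hypothetical sketch: computing a TruthfulQA-style truthfulness score,
# i.e. the fraction of questions answered truthfully. How each answer is
# labeled truthful (human judgment in the paper) is out of scope here.

def truthfulness_rate(labels):
    """Return the fraction of answers judged truthful.

    `labels` is a sequence of booleans, one per benchmark question.
    """
    if not labels:
        raise ValueError("no answers to score")
    return sum(labels) / len(labels)

# Illustrative numbers only: 474 of 817 questions truthful is about 58%,
# matching the best model's reported score.
labels = [True] * 474 + [False] * (817 - 474)
print(round(truthfulness_rate(labels), 2))  # 474/817 rounds to 0.58
```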


