DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models

09/07/2023
by Yung-Sung Chuang, et al.

Despite their impressive capabilities, large language models (LLMs) are prone to hallucinations, i.e., generating content that deviates from facts seen during pretraining. We propose a simple decoding strategy for reducing hallucinations with pretrained LLMs that requires neither conditioning on retrieved external knowledge nor additional fine-tuning. Our approach obtains the next-token distribution by contrasting the differences in logits obtained from projecting the later layers versus earlier layers to the vocabulary space, exploiting the fact that factual knowledge in LLMs has generally been shown to be localized to particular transformer layers. We find that this Decoding by Contrasting Layers (DoLa) approach is able to better surface factual knowledge and reduce the generation of incorrect facts. DoLa consistently improves truthfulness across multiple-choice tasks and open-ended generation tasks, for example improving the performance of LLaMA family models on TruthfulQA by 12-17% absolute points, demonstrating its potential in making LLMs reliably generate truthful facts.
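To make the mechanism concrete, here is a minimal PyTorch sketch of a single DoLa decoding step against a Hugging Face LLaMA-style causal LM. The layer selection by Jensen-Shannon divergence and the adaptive plausibility constraint follow the abstract's description; the attribute names (`model.model.norm`, `model.lm_head`), the candidate layer indices, and the `alpha` threshold are assumptions made for illustration, not the authors' released implementation.

```python
import math

import torch
import torch.nn.functional as F


@torch.no_grad()
def dola_next_token_logits(model, input_ids, candidate_layers, alpha=0.1):
    """One step of DoLa-style contrastive decoding (illustrative sketch).

    Assumes a Hugging Face LLaMA-style model where `model.model.norm` is the
    final RMSNorm and `model.lm_head` is the vocabulary head. The entries of
    `candidate_layers` index into `hidden_states` (0 = embeddings), so they
    should point at early/middle layers, not the final one.
    """
    out = model(input_ids, output_hidden_states=True)
    hidden = out.hidden_states  # tuple: embeddings + one entry per layer

    # "Mature" distribution: the model's ordinary final-layer logits.
    mature_logp = F.log_softmax(out.logits[:, -1, :], dim=-1)

    # "Premature" distributions: early-exit each candidate layer through the
    # final norm and the shared vocabulary head, then keep the layer whose
    # distribution diverges most (in Jensen-Shannon divergence) from the
    # mature layer's distribution.
    best_jsd, best_logp = -1.0, None
    for layer in candidate_layers:
        h = model.model.norm(hidden[layer][:, -1, :])
        logp = F.log_softmax(model.lm_head(h), dim=-1)
        # log of the mixture M = (P + Q) / 2, computed in log space.
        log_m = torch.logsumexp(torch.stack([mature_logp, logp]), dim=0) - math.log(2)
        jsd = 0.5 * (
            F.kl_div(log_m, mature_logp, log_target=True, reduction="batchmean")
            + F.kl_div(log_m, logp, log_target=True, reduction="batchmean")
        )
        if jsd.item() > best_jsd:
            best_jsd, best_logp = jsd.item(), logp

    # Adaptive plausibility constraint: only contrast tokens to which the
    # mature layer assigns at least `alpha` times its top probability.
    threshold = mature_logp.max(dim=-1, keepdim=True).values + math.log(alpha)
    contrast = mature_logp - best_logp  # log(q_mature / q_premature)
    return torch.where(
        mature_logp >= threshold,
        contrast,
        torch.full_like(contrast, float("-inf")),
    )
```

Greedily taking the argmax of the returned logits (or sampling from their softmax) yields the next token. The paper selects the premature layer dynamically per decoding step from a candidate bucket, which the loop above approximates in its simplest form.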

Related research

12/01/2020: Modifying Memories in Transformer Models
Large Transformer models have achieved impressive performance in many na...

05/24/2023: Mitigating Temporal Misalignment by Discarding Outdated Facts
While large language models are able to retain vast amounts of world kno...

03/06/2022: Leashing the Inner Demons: Self-Detoxification for Language Models
Language models (LMs) can reproduce (or amplify) toxic language seen dur...

10/07/2022: Calibrating Factual Knowledge in Pretrained Language Models
Previous literature has proved that Pretrained Language Models (PLMs) ca...

03/15/2023: SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Generative Large Language Models (LLMs) such as GPT-3 are capable of gen...

06/29/2021: Time-Aware Language Models as Temporal Knowledge Bases
Many facts come with an expiration date, from the name of the President ...

05/02/2023: The Benefits of Bad Advice: Autocontrastive Decoding across Model Layers
Applying language models to natural language processing tasks typically ...
