Trusting Your Evidence: Hallucinate Less with Context-aware Decoding

05/24/2023
by Weijia Shi, et al.

Language models (LMs) often fail to pay enough attention to the input context and generate text that is unfaithful or contains hallucinations. To mitigate this issue, we present context-aware decoding (CAD), which samples from a contrastive output distribution that amplifies the difference between the model's output probabilities with and without the context. Our experiments show that CAD, without additional training, significantly improves the faithfulness of different LM families, including OPT, GPT, LLaMA and FLAN-T5, on summarization tasks (e.g., a 14.3% gain in factuality metrics). Furthermore, CAD is particularly effective at overriding a model's prior knowledge when it contradicts the provided context, leading to substantial improvements on tasks where resolving the knowledge conflict is essential.
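The contrastive adjustment described above can be sketched directly at decoding time: score each candidate next token with the model both with and without the context, then upweight tokens whose probability increases when the context is present. Below is a minimal illustration using Hugging Face Transformers; the model name, prompts, decoding length, and alpha value are assumptions for demonstration, not the paper's exact setup.

```python
# Minimal sketch of context-aware decoding (CAD).
# Assumptions: an OPT checkpoint, greedy decoding, and alpha = 0.5.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"      # any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

context = "Context: The report states the bridge opened in 1891.\n"
query = "Question: When did the bridge open?\nAnswer:"
alpha = 0.5                            # strength of the contrastive adjustment

with_ctx = tokenizer(context + query, return_tensors="pt").input_ids
no_ctx = tokenizer(query, return_tensors="pt").input_ids

generated = []
with torch.no_grad():
    for _ in range(20):                # greedy decoding for 20 tokens
        logits_ctx = model(with_ctx).logits[:, -1, :]
        logits_plain = model(no_ctx).logits[:, -1, :]
        # CAD samples from a distribution proportional to
        # p(y | c, x) * (p(y | c, x) / p(y | x))**alpha, which in logit
        # space is (1 + alpha) * logits_with_context - alpha * logits_without.
        cad_logits = (1 + alpha) * logits_ctx - alpha * logits_plain
        next_id = cad_logits.argmax(dim=-1, keepdim=True)
        generated.append(next_id.item())
        # Append the chosen token to both branches so they stay in sync.
        with_ctx = torch.cat([with_ctx, next_id], dim=-1)
        no_ctx = torch.cat([no_ctx, next_id], dim=-1)

print(tokenizer.decode(generated))
```

Setting alpha to 0 recovers ordinary decoding; larger values push the output harder toward tokens that the context, rather than the model's prior, supports.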
