Factuality Enhanced Language Models for Open-Ended Text Generation

06/09/2022
by Nayeon Lee et al.

Pretrained language models (LMs) are susceptible to generating text with nonfactual information. In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation. We design the FactualityPrompts test set and metrics to measure the factuality of LM generations. Using this benchmark, we study the factual accuracy of LMs with parameter sizes ranging from 126M to 530B. Interestingly, we find that larger LMs are more factual than smaller ones, although a previous study suggests that larger LMs can be less truthful in terms of misconceptions. In addition, popular sampling algorithms (e.g., top-p) in open-ended text generation can harm factuality because of the "uniform randomness" introduced at every sampling step. We propose the factual-nucleus sampling algorithm, which dynamically adapts this randomness to improve the factuality of generation while maintaining quality. Furthermore, we analyze the inefficiencies of the standard training method in learning correct associations between entities from a factual text corpus (e.g., Wikipedia). We propose a factuality-enhanced training method that uses TopicPrefix for better awareness of facts and sentence completion as the training objective, which vastly reduces factual errors.
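For reference, factual-nucleus sampling shrinks the top-p nucleus as a sentence progresses, using p_t = max(omega, p * lambda^t) for the t-th token of the current sentence and resetting p at each sentence boundary. The PyTorch sketch below illustrates that filtering step; the function name, the default parameter values, and the full-stop-based reset in the commented loop are our illustrative choices, not the paper's exact implementation.

```python
import torch

def factual_nucleus_filter(logits: torch.Tensor, t: int,
                           p: float = 0.9, lam: float = 0.9,
                           omega: float = 0.3) -> torch.Tensor:
    """Top-p filtering with a nucleus that decays within a sentence.

    `t` is the token's position inside the current sentence (0 for the
    first token) and should be reset to 0 at every sentence boundary.
    """
    p_t = max(omega, p * lam ** t)  # decayed nucleus mass, floored at omega
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    cum_probs = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
    remove = cum_probs > p_t
    remove[1:] = remove[:-1].clone()  # shift right so the token that
    remove[0] = False                 # crosses p_t is still kept
    filtered = logits.clone()
    filtered[sorted_idx[remove]] = float("-inf")
    return filtered

# Sampling loop sketch: decay t within a sentence, reset after a full stop.
# t = 0
# for _ in range(max_new_tokens):
#     probs = torch.softmax(factual_nucleus_filter(next_token_logits, t), -1)
#     next_id = torch.multinomial(probs, 1).item()
#     t = 0 if next_id == full_stop_id else t + 1
```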
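The factuality-enhanced training method combines two ideas: TopicPrefix prepends the document's topic (for Wikipedia, the page title) to each sentence so that pronouns and partial names co-occur with the full entity, and the sentence-completion objective masks the loss on the early tokens of each sentence so training focuses on completing facts rather than memorizing sentence openings. A minimal sketch under those assumptions follows; the helper names are hypothetical, and the midpoint pivot is only one of the pivot choices the paper explores.

```python
def add_topic_prefix(doc_title: str, sentences: list[str]) -> list[str]:
    # Prepend the topic (e.g., the Wikipedia page title) to every sentence
    # so each training example carries the full entity name.
    return [f"{doc_title}: {s}" for s in sentences]

def sentence_completion_labels(token_ids: list[int],
                               ignore_index: int = -100) -> list[int]:
    # Apply the LM loss only after a per-sentence pivot by masking earlier
    # positions with ignore_index (the value skipped by common cross-entropy
    # implementations, e.g., PyTorch's). A midpoint pivot is shown here.
    pivot = len(token_ids) // 2
    return [ignore_index if i < pivot else tok
            for i, tok in enumerate(token_ids)]

# Example:
# add_topic_prefix("Barack Obama", ["He was born in Hawaii."])
# -> ["Barack Obama: He was born in Hawaii."]
```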

