Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation

10/19/2019
by Ran Tian, et al.

Neural conditional text generation systems have achieved significant progress in recent years, showing the ability to produce highly fluent text. However, the inherent lack of controllability in these systems allows them to hallucinate factually incorrect phrases that are unfaithful to the source, often making them unsuitable for real-world applications that require a high degree of precision. In this work, we propose a novel confidence-oriented decoder that assigns a confidence score to each target position. This score is learned during training using a variational Bayes objective, and can be leveraged at inference time through a calibration technique to promote more faithful generation. Experiments on a structured data-to-text dataset – WikiBio – show that our approach is more faithful to the source than existing state-of-the-art approaches, according to both automatic metrics and human evaluation.
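The abstract does not spell out how a per-position confidence score is used during decoding. As a purely illustrative sketch (the function name, the fixed threshold, and the copy-from-source fallback are assumptions for exposition, not the paper's actual calibration technique), one simple way to act on such a score is to trust the model's distribution when confidence is high and restrict generation to tokens found in the source record when it is low:

```python
def confident_decode(steps, source_tokens, threshold=0.5):
    """Greedy decoding where low-confidence positions fall back to
    copying from the source record. Each step is a pair of
    (token -> probability dict, confidence score in [0, 1]).
    Hypothetical mechanism for illustration only."""
    output = []
    for probs, confidence in steps:
        if confidence >= threshold:
            # Confident position: take the model's top token.
            token = max(probs, key=probs.get)
        else:
            # Low confidence: only allow tokens present in the source,
            # which blocks hallucinated (unsupported) phrases.
            candidates = {t: p for t, p in probs.items() if t in source_tokens}
            token = max(candidates, key=candidates.get) if candidates else "<unk>"
        output.append(token)
    return output

# Toy example: the second position is unconfident, so the decoder
# copies "Smith" from the source instead of emitting "Doe".
source = {"John", "Smith", "1975"}
steps = [
    ({"John": 0.9, "Jane": 0.1}, 0.95),
    ({"Doe": 0.6, "Smith": 0.4}, 0.30),
]
confident_decode(steps, source)  # -> ['John', 'Smith']
```

In the paper, the threshold analogue is chosen by calibration rather than fixed by hand; the sketch only shows the general shape of confidence-gated decoding.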


Related research

06/26/2020  Evaluation of Text Generation: A Survey
The paper surveys evaluation methods of natural language generation (NLG...

05/25/2022  R2D2: Robust Data-to-Text with Replacement Detection
Unfaithful text generation is a common problem for text generation syste...

10/24/2022  On the Effectiveness of Automated Metrics for Text Generation Systems
A major challenge in the field of Text Generation is evaluation because ...

11/27/2017  Neural Text Generation: A Practical Guide
Deep learning methods have recently achieved great empirical success on ...

08/08/2019  Key Fact as Pivot: A Two-Stage Model for Low Resource Table-to-Text Generation
Table-to-text generation aims to translate the structured data into the ...

11/29/2020  Latent Template Induction with Gumbel-CRFs
Learning to control the structure of sentences is a challenging problem ...

07/13/2023  Copy Is All You Need
The dominant text generation models compose the output by sequentially s...
