Controlling Hallucinations at Word Level in Data-to-Text Generation

02/04/2021
by Clément Rebuffel et al.

Data-to-Text Generation (DTG) is a subfield of Natural Language Generation that aims at transcribing structured data into natural language descriptions. The field has recently been boosted by neural generators, which exhibit great syntactic skill without the need for hand-crafted pipelines; on the other hand, the quality of the generated text reflects the quality of the training data, which in realistic settings offers only imperfectly aligned structure-text pairs. Consequently, state-of-the-art neural models include misleading statements, usually called hallucinations, in their outputs. Controlling this phenomenon is today a major challenge for DTG, and it is the problem addressed in this paper. Previous work deals with this issue at the instance level, using an alignment score for each table-reference pair. In contrast, we propose a finer-grained approach, arguing that hallucinations should rather be treated at the word level. Specifically, we propose a Multi-Branch Decoder able to leverage word-level labels to learn the relevant parts of each training instance. These labels are obtained through a simple and efficient scoring procedure based on co-occurrence analysis and dependency parsing. Extensive evaluations, via automated metrics and human judgment on the standard WikiBio benchmark, show the accuracy of our alignment labels and the effectiveness of the proposed Multi-Branch Decoder. Our model reduces and controls hallucinations while preserving fluency and coherence in the generated texts. Further experiments on a degraded version of ToTTo show that our model can be successfully used in very noisy settings.
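To make the labeling idea concrete, here is a minimal sketch of the word-level scoring step, assuming a spaCy dependency parser, a binary co-occurrence score against the table values, and an illustrative decay constant for propagating scores along parse edges; none of these choices are taken from the authors' code.

```python
# Illustrative word-level alignment labeling (not the authors' implementation).
# A reference word scores 1.0 if it co-occurs with a table value; scores are
# then smoothed along dependency edges so that words attached to a supported
# head are not flagged as hallucinated outright.
import spacy  # assumes the en_core_web_sm model is installed

nlp = spacy.load("en_core_web_sm")

def label_reference(table, reference):
    """Return (token, score) pairs; a score near 1.0 means table-supported."""
    table_vocab = {w.lower() for value in table.values() for w in value.split()}
    doc = nlp(reference)
    # Step 1: raw co-occurrence score per token.
    scores = [1.0 if tok.text.lower() in table_vocab else 0.0 for tok in doc]
    # Step 2: one pass of propagation along dependency edges; 0.5 is an
    # assumed decay factor, not a value from the paper.
    smoothed = [max(s, 0.5 * scores[tok.head.i]) for tok, s in zip(doc, scores)]
    return [(tok.text, round(s, 2)) for tok, s in zip(doc, smoothed)]

table = {"name": "Ada Lovelace", "occupation": "mathematician"}
print(label_reference(table, "Ada Lovelace was a famous English mathematician."))
```

Similarly, a minimal sketch of the multi-branch idea, assuming just two GRU branches whose hidden states are mixed by a per-word weight; the paper's actual branch set, architecture, and training objective differ, and the dimensions and weighting scheme here are illustrative:

```python
import torch
import torch.nn as nn

class MultiBranchDecoderCell(nn.Module):
    """One decoding step with separate 'content' and 'hallucination' branches.

    During training the mixing weight would come from the word-level label of
    the target token; at inference it is fixed by the user, which is what
    makes the amount of hallucination controllable.
    """
    def __init__(self, emb_dim, hid_dim, vocab_size):
        super().__init__()
        self.content_cell = nn.GRUCell(emb_dim, hid_dim)
        self.halluc_cell = nn.GRUCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, emb, h_content, h_halluc, w_content):
        h_c = self.content_cell(emb, h_content)
        h_h = self.halluc_cell(emb, h_halluc)
        w = w_content.unsqueeze(-1)          # (batch, 1), values in [0, 1]
        h = w * h_c + (1.0 - w) * h_h        # weighted branch combination
        return self.out(h), h_c, h_h

# Setting w_content close to 1 steers generation toward table-supported content.
cell = MultiBranchDecoderCell(emb_dim=32, hid_dim=64, vocab_size=1000)
emb, h0 = torch.randn(2, 32), torch.zeros(2, 64)
logits, h_c, h_h = cell(emb, h0, h0, torch.tensor([1.0, 0.5]))
```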

Related research

09/01/2017 · Order-Planning Neural Text Generation From Structured Data
Generating texts from structured data (e.g., a table) is important for v...

05/24/2023 · Large Language Models are Effective Table-to-Text Generators, Evaluators, and Feedback Providers
Large language models (LLMs) have shown remarkable ability on controllab...

03/30/2021 · Evaluating the Morphosyntactic Well-formedness of Generated Texts
Text generation systems are ubiquitous in natural language processing ap...

05/22/2022 · Diversity Enhanced Table-to-Text Generation via Type Control
Generating natural language statements to convey information from tabula...

12/20/2019 · A Hierarchical Model for Data-to-Text Generation
Transcribing structured data into natural language descriptions has emer...

09/08/2018 · Operations Guided Neural Networks for High Fidelity Data-To-Text Generation
Recent neural models for data-to-text generation are mostly based on dat...

12/21/2022 · Not Just Pretty Pictures: Text-to-Image Generators Enable Interpretable Interventions for Robust Representations
Neural image classifiers are known to undergo severe performance degrada...
