Tackling Hallucinations in Neural Chart Summarization

08/01/2023
by Saad Obaid ul Islam, et al.

Hallucinations in text generation occur when the system produces text that is not grounded in the input. In this work, we tackle the problem of hallucinations in neural chart summarization. Our analysis shows that the target side of chart summarization training datasets often contains information that is not grounded in the chart input, leading to hallucinations. We propose a natural language inference (NLI) based method to preprocess the training data and show through human evaluation that our method significantly reduces hallucinations. We further show that shortening long-distance dependencies in the input sequence and adding chart-related information, such as titles and legends, improves overall performance.
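To make the NLI-based preprocessing idea concrete, below is a minimal sketch that treats a linearized chart table as the premise and each target summary sentence as a hypothesis, keeping only sentences an off-the-shelf MNLI model classifies as entailed. The model choice, entailment threshold, linearization format, and example data are illustrative assumptions, not the authors' exact setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: an off-the-shelf MNLI model; the paper's exact model and
# threshold may differ.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def entailment_prob(premise: str, hypothesis: str) -> float:
    """Return the probability that `hypothesis` is entailed by `premise`."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return probs[2].item()


def filter_summary(chart_premise: str, summary_sentences: list[str],
                   threshold: float = 0.5) -> list[str]:
    """Keep only summary sentences entailed by the linearized chart data."""
    return [s for s in summary_sentences
            if entailment_prob(chart_premise, s) >= threshold]


# Hypothetical example: a linearized bar chart and a target summary whose
# second sentence adds information not supported by the chart.
premise = "Title: Annual sales | 2020: 10M | 2021: 12M | 2022: 15M"
summary = [
    "Sales grew from 10M in 2020 to 15M in 2022.",
    "The rise was driven by strong demand in Europe.",
]
print(filter_summary(premise, summary))
```

Under this sketch, the unsupported second sentence would be dropped from the training target, so the summarization model is never taught to produce statements it cannot derive from the chart.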

