Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG

09/15/2021
by Juraj Juraska, et al.

Ever since neural models were adopted for data-to-text language generation, they have relied on extrinsic components to improve their semantic accuracy, because on their own they do not reliably generate text that mentions all of the information provided in the input. In this paper, we propose a novel decoding method that extracts interpretable information from the cross-attention of encoder-decoder models and uses it to infer which attributes are mentioned in the generated text; this information is then used to rescore beam hypotheses. Using this decoding method with T5 and BART, we show on three datasets that it dramatically reduces semantic errors in the generated outputs while maintaining their state-of-the-art quality.


