A Good Sample is Hard to Find: Noise Injection Sampling and Self-Training for Neural Language Generation Models

11/08/2019
by Chris Kedzie, et al.

Deep neural networks (DNNs) are quickly becoming the de facto standard modeling method for many natural language generation (NLG) tasks. For such models to be truly useful, they must be capable of correctly generating utterances for novel meaning representations (MRs) at test time. In practice, even sophisticated DNNs with various forms of semantic control frequently fail to generate utterances faithful to the input MR. In this paper, we propose an architecture-agnostic self-training method that samples novel MR/utterance pairs to augment the original training data. Remarkably, after training on the augmented data, even simple encoder-decoder models with greedy decoding generate semantically correct utterances that are as good as state-of-the-art outputs in both automatic and human evaluations of quality.
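The abstract only outlines the procedure, so below is a minimal, self-contained Python sketch of one plausible reading of the loop: sample diverse utterances for unseen MRs, keep only those that a semantic check confirms as faithful, and retrain on the augmented set. Every name here (train, sample_utterances, parse_mr, novel_mrs) is a hypothetical stand-in, not the authors' code; in the paper the sampler would be a neural decoder with noise injected during decoding, and the filter an automatic semantic-correctness check.

import random

rng = random.Random(0)

# Toy "meaning representations": frozensets of attribute=value slots.
SEED_MRS = [
    frozenset({"name=Aromi", "food=Italian"}),
    frozenset({"name=Aromi", "area=riverside"}),
]

def train(pairs):
    # Stand-in for fitting an encoder-decoder NLG model on (MR, text) pairs.
    return {mr: text for mr, text in pairs}

def sample_utterances(model, mr, n=5):
    # Stand-in for noise-injection sampling. A real system would decode `mr`
    # with the model under noise injected into the decoder state; this toy
    # ignores `model` and instead reorders and randomly drops slots to mimic
    # diverse (and sometimes unfaithful) samples.
    slots = sorted(mr)
    samples = []
    for _ in range(n):
        rng.shuffle(slots)
        kept = [s for s in slots if rng.random() > 0.2]
        samples.append(" , ".join(kept))
    return samples

def parse_mr(utterance):
    # Stand-in for a semantic parser that recovers the MR an utterance expresses.
    return frozenset(s.strip() for s in utterance.split(",") if s.strip())

def novel_mrs(seed_mrs, n=4):
    # Compose unseen MRs by recombining slots observed in the seed data.
    slots = sorted({s for mr in seed_mrs for s in mr})
    return [frozenset(rng.sample(slots, 2)) for _ in range(n)]

# Self-training loop: sample, filter for semantic correctness, augment, retrain.
data = [(mr, " , ".join(sorted(mr))) for mr in SEED_MRS]
model = train(data)

for mr in novel_mrs(SEED_MRS):
    for utt in sample_utterances(model, mr):
        if parse_mr(utt) == mr:      # keep only samples faithful to the MR
            data.append((mr, utt))

model = train(data)                  # retrain on the augmented data
print(len(data), "training pairs after augmentation")

The property this sketch illustrates is the correctness filter: noisy sampling buys diversity, while the MR-matching check discards unfaithful samples, so the augmented corpus stays clean enough for a simple model with greedy decoding to learn from.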



Related research

11/10/2019 · Semantic Noise Matters for Neural Natural Language Generation
Neural natural language generation (NNLG) systems are known for their pa...

08/01/2016 · Crowd-sourcing NLG Data: Pictures Elicit Better Data
Recent advances in corpus-based Natural Language Generation (NLG) hold t...

12/18/2020 · Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training
Most recently, there has been significant interest in learning contextua...

09/30/2020 · Learning from Mistakes: Combining Ontologies via Self-Training for Dialogue Generation
Natural language generators (NLGs) for task-oriented dialogue typically ...

01/12/2021 · Transforming Multi-Conditioned Generation from Meaning Representation
In task-oriented conversation systems, natural language generation syste...

09/15/2021 · Attention Is Indeed All You Need: Semantically Attention-Guided Decoding for Data-to-Text NLG
Ever since neural models were adopted in data-to-text language generatio...

09/14/2018 · Characterizing Variation in Crowd-Sourced Data for Training Neural Language Generators to Produce Stylistically Varied Outputs
One of the biggest challenges of end-to-end language generation from mea...
