Neural Data-to-Text Generation with LM-based Text Augmentation

02/06/2021
by Ernie Chang, et al.

For many new application domains for data-to-text generation, the main obstacle in training neural models consists of a lack of training data. While usually large numbers of instances are available on the data side, often only very few text samples are available. To address this problem, we here propose a novel few-shot approach for this setting. Our approach automatically augments the data available for training by (i) generating new text samples based on replacing specific values by alternative ones from the same category, (ii) generating new text samples based on GPT-2, and (iii) proposing an automatic method for pairing the new text samples with data samples. As the text augmentation can introduce noise to the training data, we use cycle consistency as an objective, in order to make sure that a given data sample can be correctly reconstructed after having been formulated as text (and that text samples can be reconstructed from data). On both the E2E and WebNLG benchmarks, we show that this weakly supervised training paradigm is able to outperform fully supervised seq2seq models with less than 10% of the annotated data. Furthermore, by utilizing all annotated data, our model can boost the performance of a standard seq2seq model by over 5 BLEU points, establishing a new state-of-the-art on both datasets.
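
To make augmentation step (i) concrete, the short Python sketch below shows one plausible way to create new text samples by swapping a slot value for an alternative value from the same category, keeping the paired data sample consistent. The category inventory, slot names, and function name are illustrative assumptions, not the authors' released code.

    import random

    # Toy, E2E-style category inventory (assumed for illustration only).
    CATEGORY_VALUES = {
        "name": ["The Eagle", "Blue Spice", "The Mill"],
        "food": ["Italian", "Chinese", "French"],
        "area": ["city centre", "riverside"],
    }

    def augment_by_value_swap(text, slots, rng):
        """Replace each slot value realized in the text with a different value
        of the same category, yielding a new (text, data) pair."""
        new_text, new_slots = text, dict(slots)
        for slot, value in slots.items():
            if value not in new_text:
                continue  # value not realized verbatim in the text; skip it
            alternatives = [v for v in CATEGORY_VALUES.get(slot, []) if v != value]
            if not alternatives:
                continue
            replacement = rng.choice(alternatives)
            new_text = new_text.replace(value, replacement)
            new_slots[slot] = replacement
        return new_text, new_slots

    rng = random.Random(0)
    text = "The Eagle serves Italian food in the city centre."
    slots = {"name": "The Eagle", "food": "Italian", "area": "city centre"}
    print(augment_by_value_swap(text, slots, rng))

Because such swaps (and the GPT-2 generated samples of step (ii)) can introduce noisy text-data pairs, the paper's cycle consistency objective checks that the data sample can be reconstructed from the generated text, and vice versa, before the pair contributes fully to training.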

