On Training Instance Selection for Few-Shot Neural Text Generation

07/07/2021
by Ernie Chang, et al.

Large-scale pretrained language models have led to dramatic improvements in text generation. Impressive performance can be achieved by fine-tuning on only a small number of instances (the few-shot setting). Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances; little to no attention has been paid to selection strategies and how they affect model performance. In this work, we present a study of training instance selection in few-shot neural text generation. The selection decision is made based only on the unlabeled data, so as to identify the most worthwhile data points to annotate under a given labeling budget. Based on the intuition that the few-shot training instances should be diverse and representative of the entire data distribution, we propose a simple selection strategy with K-means clustering. We show that even with this naive clustering-based approach, the generation models consistently outperform random sampling on three text generation tasks: data-to-text generation, document summarization, and question generation. We hope that this work will draw more attention to this largely unexplored area.
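The abstract includes no code, but the strategy it describes is simple enough to sketch. Below is a minimal illustration, not the paper's implementation: it assumes TF-IDF vectors as a stand-in for whatever text representation the authors used, and the names (select_instances, unlabeled_texts, the budget k) are hypothetical. The idea is to cluster the unlabeled pool into k groups, where k is the labeling budget, and annotate the instance nearest each centroid, so the chosen set is diverse (one point per cluster) and representative (each point is central to its region).

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin_min

def select_instances(unlabeled_texts, k, seed=0):
    """Select k diverse, representative texts from the unlabeled pool."""
    # Represent each text as a TF-IDF vector (any sentence encoder
    # could be swapped in here; TF-IDF is an assumption of this sketch).
    X = TfidfVectorizer().fit_transform(unlabeled_texts)
    # Partition the pool into k clusters, one per instance in the
    # annotation budget, so the selection spans the data distribution.
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    # Take the instance nearest each centroid as that cluster's
    # representative, then return those texts for annotation.
    idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
    return [unlabeled_texts[i] for i in idx]

# Hypothetical usage: pick 100 points to annotate from a raw text pool.
# chosen = select_instances(raw_pool, k=100)
```

The key design choice in this reading is that the number of clusters equals the annotation budget, so each labeled example covers a distinct region of the unlabeled data rather than duplicating its neighbors, which is what random sampling tends to do on skewed pools.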


Related research

08/14/2021
The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation
We propose a shared task on training instance selection for few-shot neu...

01/14/2022
A Survey of Pretrained Language Models Based Text Generation
Text Generation aims to produce plausible and readable text in human lan...

05/19/2022
Self-augmented Data Selection for Few-shot Dialogue Generation
The natural language generation (NLG) module in task-oriented dialogue s...

06/07/2023
Increasing Diversity While Maintaining Accuracy: Text Data Generation with Large Language Models and Human Interventions
Large language models (LLMs) can be used to generate text data for train...

10/16/2021
Improving Compositional Generalization with Self-Training for Data-to-Text Generation
Data-to-text generation focuses on generating fluent natural language re...

02/06/2021
Neural Data-to-Text Generation with LM-based Text Augmentation
For many new application domains for data-to-text generation, the main o...

10/09/2022
ASDOT: Any-Shot Data-to-Text Generation with Pretrained Language Models
Data-to-text generation is challenging due to the great variety of the i...
