Selective Token Generation for Few-shot Natural Language Generation

09/17/2022
by DaeJin Jo, et al.

Natural language modeling with limited training data is a challenging problem, and many algorithms address it with large-scale pretrained language models (PLMs) because of their strong generalization ability. Among these approaches, additive learning, which places a task-specific adapter on top of a fixed large-scale PLM, is widely used in the few-shot setting. However, the added adapter can still easily disregard the knowledge of the PLM, especially in few-shot natural language generation (NLG), since the entire sequence is usually generated by the newly trained adapter alone. In this work, we therefore develop a novel additive learning algorithm based on reinforcement learning (RL) that selectively outputs language tokens from either the task-general PLM or the task-specific adapter during both training and inference. This token-level selection over the two generators lets the adapter handle only the task-relevant parts of sequence generation, which makes it more robust to overfitting and more stable in RL training. In addition, to obtain an adapter complementary to the PLM for each few-shot task, we employ a separate selection module that is trained simultaneously with RL. Experimental results on various few-shot NLG tasks, including question answering, data-to-text generation, and text summarization, demonstrate that the proposed selective token generation significantly outperforms previous PLM-based additive learning algorithms.
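To make the selection mechanism concrete, below is a minimal, self-contained PyTorch sketch of the decoding loop the abstract describes. Everything in it is an illustrative assumption rather than the authors' released code: ToyLM stands in for the frozen task-general PLM, the adapter and selector are single linear layers, and the final REINFORCE-style loss updates only the selector.

    # Illustrative sketch only; module names and sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, HIDDEN = 100, 32

    class ToyLM(nn.Module):
        # Stands in for the frozen, task-general PLM.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, HIDDEN)
            self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
            self.head = nn.Linear(HIDDEN, VOCAB)
        def forward(self, ids):
            h, _ = self.rnn(self.embed(ids))
            return h[:, -1], self.head(h[:, -1])  # last hidden state, next-token logits

    plm = ToyLM()
    for p in plm.parameters():          # the PLM stays fixed; only adapter/selector train
        p.requires_grad_(False)
    adapter = nn.Linear(HIDDEN, VOCAB)  # task-specific generator head
    selector = nn.Linear(HIDDEN, 1)     # Bernoulli policy: PLM vs. adapter

    def generate(ids, steps=16):
        # Sample a sequence token by token, recording the selector's
        # log-probabilities so it can be trained with REINFORCE.
        logps = []
        for _ in range(steps):
            hidden, plm_logits = plm(ids)
            p_adapter = torch.sigmoid(selector(hidden)).squeeze(-1)
            choice = torch.bernoulli(p_adapter)          # 1 = emit from the adapter
            logps.append(torch.where(choice.bool(),
                                     (p_adapter + 1e-8).log(),
                                     (1 - p_adapter + 1e-8).log()))
            logits = torch.where(choice.bool().unsqueeze(-1),
                                 adapter(hidden), plm_logits)
            next_id = torch.multinomial(F.softmax(logits, dim=-1), 1)
            ids = torch.cat([ids, next_id], dim=-1)
        return ids, torch.stack(logps, dim=1)

    ids, logps = generate(torch.randint(0, VOCAB, (2, 4)))
    reward = torch.rand(2)                        # placeholder for a task reward (e.g. ROUGE)
    loss = -(reward.unsqueeze(1) * logps).mean()  # REINFORCE-style update for the selector
    # (in the paper the adapter is also trained with RL; omitted here for brevity)
    loss.backward()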


Related research

12/22/2020 | Few-Shot Text Generation with Pattern-Exploiting Training
Providing pretrained language models with simple task descriptions or pr...

08/14/2021 | The SelectGen Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation
We propose a shared task on training instance selection for few-shot neu...

09/30/2022 | Out-of-Distribution Detection and Selective Generation for Conditional Language Models
Machine learning algorithms typically assume independent and identically...

08/27/2021 | ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models
Automatic construction of relevant Knowledge Bases (KBs) from text, and ...

03/27/2023 | ChatGPT as a Factual Inconsistency Evaluator for Abstractive Text Summarization
The performance of abstractive text summarization has been greatly boost...

09/20/2021 | Learning Natural Language Generation from Scratch
This paper introduces TRUncated ReinForcement Learning for Language (Tru...

09/08/2023 | LLMCad: Fast and Scalable On-device Large Language Model Inference
Generative tasks, such as text generation and question answering, hold a...
