Contrastive Learning with Adversarial Perturbations for Conditional Text Generation

12/14/2020
by Seanie Lee, et al.

Recently, sequence-to-sequence (seq2seq) models with the Transformer architecture have achieved remarkable performance on various conditional text generation tasks, such as machine translation. However, most of them are trained with teacher forcing, with the ground-truth token given at each time step, so the model is never exposed to its own incorrectly generated tokens during training. This hurts generalization to unseen inputs and is known as the "exposure bias" problem. In this work, we propose to mitigate this problem for conditional text generation by contrasting positive pairs with negative pairs, such that the model is exposed to various valid or incorrect perturbations of the inputs for improved generalization. However, training the model with a naive contrastive learning framework that uses random non-target sequences as negative examples is suboptimal, since they are easily distinguishable from the correct output, especially for models pretrained on large text corpora. Moreover, generating positive examples requires domain-specific augmentation heuristics that may not generalize across diverse domains. To tackle this problem, we propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models. Specifically, we generate negative examples by adding small perturbations to the input sequence so as to minimize its conditional likelihood, and positive examples by adding large perturbations while enforcing a high conditional likelihood. Such "hard" positive and negative pairs generated with our method guide the model to better distinguish correct outputs from incorrect ones. We empirically show that our proposed method significantly improves the generalization of seq2seq models on three text generation tasks: machine translation, text summarization, and question generation.
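
To make the perturbation idea concrete, below is a minimal sketch (not the authors' released code) of how such "hard" negative and positive examples could be built from source embeddings of a HuggingFace-style seq2seq model. The inputs_embeds/attention_mask/labels interface, the step sizes eps_neg and eps_pos, and the single corrective gradient step for the positive example are illustrative assumptions; the full method additionally trains with a contrastive loss that pulls positives toward, and pushes negatives away from, the correct output.

```python
import torch
import torch.nn.functional as F


def conditional_nll(model, src_embeds, src_mask, tgt_ids):
    # Token-level negative log-likelihood of the gold target given
    # (possibly perturbed) source embeddings.
    logits = model(inputs_embeds=src_embeds,
                   attention_mask=src_mask,
                   labels=tgt_ids).logits
    return F.cross_entropy(logits.view(-1, logits.size(-1)),
                           tgt_ids.view(-1),
                           ignore_index=-100)


def make_contrastive_pair(model, src_embeds, src_mask, tgt_ids,
                          eps_neg=1.0, eps_pos=5.0):
    """Return (negative, positive) perturbed source embeddings.

    Negative: a small step along the NLL gradient, so the gold target
              becomes less likely (hard negative).
    Positive: a large random perturbation, followed by one corrective
              step against the NLL gradient so the gold target remains
              highly likely (hard positive).
    """
    src_embeds = src_embeds.detach().requires_grad_(True)
    nll = conditional_nll(model, src_embeds, src_mask, tgt_ids)
    grad, = torch.autograd.grad(nll, src_embeds)
    grad_dir = grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # Hard negative: small perturbation that minimizes the conditional likelihood.
    neg_embeds = (src_embeds + eps_neg * grad_dir).detach()

    # Hard positive: large perturbation, then pull back toward high likelihood.
    pos_embeds = (src_embeds + eps_pos * torch.randn_like(src_embeds))
    pos_embeds = pos_embeds.detach().requires_grad_(True)
    pos_nll = conditional_nll(model, pos_embeds, src_mask, tgt_ids)
    pos_grad, = torch.autograd.grad(pos_nll, pos_embeds)
    pos_dir = pos_grad / (pos_grad.norm(dim=-1, keepdim=True) + 1e-12)
    pos_embeds = (pos_embeds - eps_pos * pos_dir).detach()

    return neg_embeds, pos_embeds
```

In training, the perturbed embeddings would be passed back through the model and combined with the usual cross-entropy objective and a contrastive term; that part is omitted in this sketch.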


Related research

CoNT: Contrastive Neural Text Generation (05/29/2022)
Recently, contrastive learning attracts increasing interests in neural t...

Neural Text Generation with Artificial Negative Examples (12/28/2020)
Neural text generation models conditioning on given input (e.g. machine ...

Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation (01/31/2023)
State-of-the-art neural text generation models are typically trained to ...

Improving Text Generation with Student-Forcing Optimal Transport (10/12/2020)
Neural language models are often trained with maximum likelihood estimat...

Momentum Calibration for Text Generation (12/08/2022)
The input and output of most text generation tasks can be transformed to...

Dutch Humor Detection by Generating Negative Examples (10/26/2020)
Detecting if a text is humorous is a hard task to do computationally, as...

Regression Transformer: Concurrent Conditional Generation and Regression by Blending Numerical and Textual Tokens (02/01/2022)
We report the Regression Transformer (RT), a method that abstracts regre...
