Mitigating the Learning Bias towards Repetition by Self-Contrastive Training for Open-Ended Generation

07/04/2023
by Jian Guan, et al.

Despite substantial progress across a wide range of generation tasks, pretrained language models (LMs) such as GPT2 still tend to produce repetitive text under maximization-based decoding algorithms in open-ended generation. We attribute their overestimation of token-level repetition probabilities to a learning bias: with the MLE loss, LMs pick up simple repetitive patterns faster than other patterns. We propose self-contrastive training, which penalizes the output of a premature checkpoint of the same model whenever it incorrectly predicts repetition; this is shown to mitigate repetition effectively while maintaining fluency on two datasets. Furthermore, we find that LMs rely on longer-range dependencies when predicting repetitive tokens than when predicting non-repetitive ones, which may underlie sentence-level repetition loops.
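To make the idea concrete, below is a minimal PyTorch sketch of one plausible instantiation of such a self-contrastive objective, not the authors' released code. It assumes Hugging Face-style causal LMs whose forward pass returns `.logits`, that `labels` are next-token targets aligned with the logits, and that the penalty takes an unlikelihood-style form; names such as `self_contrastive_loss`, `premature`, and `alpha` are illustrative.

```python
import torch
import torch.nn.functional as F

def self_contrastive_loss(model, premature, input_ids, labels, alpha=1.0):
    """MLE loss plus a penalty at positions where a frozen premature
    checkpoint incorrectly predicts a token repeated from the prefix."""
    logits = model(input_ids).logits                        # (B, T, V)
    with torch.no_grad():
        pre_logits = premature(input_ids).logits            # frozen earlier checkpoint

    log_probs = F.log_softmax(logits, dim=-1)

    # Standard MLE (cross-entropy) term on the gold next-token labels.
    mle = F.nll_loss(log_probs.transpose(1, 2), labels, reduction="mean")

    # Greedy prediction of the premature checkpoint at every position.
    pre_pred = pre_logits.argmax(dim=-1)                    # (B, T)

    # Penalize a position if the premature prediction (i) misses the gold
    # token and (ii) repeats a token already seen earlier in the sequence.
    B, T = labels.shape
    appeared = torch.zeros(B, T, dtype=torch.bool, device=labels.device)
    for t in range(1, T):
        appeared[:, t] = (input_ids[:, :t] == pre_pred[:, t:t + 1]).any(dim=1)
    penalize = (pre_pred != labels) & appeared              # (B, T)

    # Unlikelihood-style penalty: push down the current model's probability
    # of the premature checkpoint's repetitive prediction at those positions.
    p_wrong = log_probs.gather(-1, pre_pred.unsqueeze(-1)).squeeze(-1).exp()
    penalty = -torch.log(torch.clamp(1.0 - p_wrong, min=1e-6))
    penalty = (penalty * penalize.float()).sum() / penalize.sum().clamp(min=1)

    return mle + alpha * penalty
```

In this sketch the premature checkpoint is kept frozen and serves only to flag positions where the early-learned bias toward repetition fires, so the contrastive penalty targets exactly the tokens whose repetition probability the abstract says is overestimated, while the MLE term preserves fluency elsewhere.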


