A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives

02/25/2021
by Nils Rethmeier, et al.

Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various application tasks. These pretraining methods are frequently extended with recurrence, adversarial or linguistic property masking, and, more recently, with contrastive learning objectives. Contrastive self-supervised training objectives have enabled recent successes in image representation pretraining by learning to contrast input-input pairs of augmented images as either similar or dissimilar. In NLP, however, the automated creation of text input augmentations remains challenging because a single token can invert the meaning of a sentence. For this reason, some contrastive NLP pretraining methods contrast over input-label pairs, rather than over input-input pairs, using methods from metric learning and energy-based models. In this survey, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, few- or zero-shot learning, pretraining data efficiency, and specific NLP end-tasks. We introduce key contrastive learning concepts with lessons learned from prior research and structure works by applications and cross-field relations. Finally, we point to open challenges and future directions for contrastive NLP, to encourage bringing contrastive NLP pretraining closer to recent successes in image representation pretraining.
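The input-input contrastive objective mentioned in the abstract can be made concrete with a small example. Below is a minimal sketch, not taken from the survey itself, of an InfoNCE-style loss: two augmented views of the same input form a positive pair, and all other pairings in the batch serve as negatives. The batch size, embedding dimension, and temperature value are illustrative assumptions; the same loss applies to input-label contrast if the second set of embeddings encodes label descriptions instead of a second input view.

# Minimal sketch (illustrative, not the survey's implementation) of an
# InfoNCE-style contrastive loss over input-input pairs.
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.1):
    # z_a, z_b: (batch, dim) embeddings of two augmented views of the same inputs.
    # Row i of z_a and row i of z_b form a positive pair; all other rows are negatives.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature                    # scaled cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Illustrative usage with random tensors standing in for encoder outputs.
torch.manual_seed(0)
z_view1, z_view2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z_view1, z_view2)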

Related research

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling (09/04/2021)
Masked language modeling (MLM), a self-supervised pretraining objective,...

Long-Tail Zero and Few-Shot Learning via Contrastive Pretraining on and for Small Data (10/02/2020)
For natural language processing (NLP) tasks such as sentiment or topic c...

Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future (10/23/2022)
In this paper, we review adversarial pretraining of self-supervised deep...

Generate to Understand for Representation (06/14/2023)
In recent years, a significant number of high-quality pretrained models ...

Stabilizing Contrastive RL: Techniques for Offline Goal Reaching (06/06/2023)
In the same way that the computer vision (CV) and natural language proce...

Call for Papers – The BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus (01/27/2023)
We present the call for papers for the BabyLM Challenge: Sample-efficien...

Leveraging Natural Supervision for Language Representation Learning and Generation (07/21/2022)
Recent breakthroughs in Natural Language Processing (NLP) have been driv...
