Long-Tail Zero and Few-Shot Learning via Contrastive Pretraining on and for Small Data

10/02/2020
by Nils Rethmeier, et al.

For natural language processing (NLP) tasks such as sentiment or topic classification, currently prevailing approaches heavily rely on pretraining large self-supervised models on massive external data resources. However, this methodology is being critiqued for: exceptional compute and pretraining data requirements; diminishing returns on both large and small datasets; and, importantly, favourable evaluation settings that overestimate performance differences. The core belief behind the current methodology, coined `the bitter lesson' by R. Sutton, is that `compute scale-up beats data and compute-efficient algorithms', neglecting that progress in compute hardware scale-up is based almost entirely on the miniaturisation of resource consumption. We thus approach pretraining from a miniaturisation perspective, so as not to require massive external data sources and models, or learned translations from continuous input embeddings to discrete labels. To minimise overly favourable evaluation, we examine learning on a long-tailed, low-resource, multi-label text classification dataset with noisy, highly sparse labels and many rare concepts. To this end, we propose a novel `dataset-internal' contrastive autoencoding approach to self-supervised pretraining and demonstrate marked improvements in zero-shot, few-shot and solely supervised learning performance, even under an unfavourable low-resource scenario, and without defaulting to large-scale external datasets for self-supervision. We also find empirical evidence that zero- and few-shot learning markedly benefit from adding more `dataset-internal', self-supervised training signals, which is of practical importance when retrieving or computing on large external sources of such signals is infeasible.
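The abstract describes a `dataset-internal' contrastive objective in which text and (pseudo-)label embeddings are matched directly, rather than translating continuous embeddings into discrete label indices. The following is a minimal, hypothetical PyTorch sketch of such an objective, not the authors' implementation: a shared embedding table encodes both input tokens and labels or pseudo-labels drawn from the dataset itself, and a small matcher scores (text, label) pairs with a binary contrastive loss against sampled negatives. The class name, the GRU encoder, and all sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ContrastiveTextLabelMatcher(nn.Module):
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        # Shared embedding space for input tokens and (pseudo-)labels.
        self.embed = nn.Embedding(vocab_size, dim)
        # Any text encoder works here; a GRU keeps the sketch small.
        self.text_enc = nn.GRU(dim, dim, batch_first=True)
        # Matcher scores a (text, label) pair directly, avoiding a learned
        # translation from continuous embeddings to discrete label indices.
        self.matcher = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, text_ids, label_ids):
        # text_ids: (batch, seq_len); label_ids: (batch, n_labels) mixing
        # positives (labels/words from the same example) and sampled negatives.
        _, h = self.text_enc(self.embed(text_ids))
        text_vec = h.squeeze(0)                         # (batch, dim)
        label_vec = self.embed(label_ids)               # (batch, n_labels, dim)
        pairs = torch.cat(
            [text_vec.unsqueeze(1).expand_as(label_vec), label_vec], dim=-1
        )
        return self.matcher(pairs).squeeze(-1)          # (batch, n_labels) logits

# Pseudo-labels are drawn from the dataset itself (e.g. input words),
# so no external pretraining corpus is required for the contrastive signal.
model = ContrastiveTextLabelMatcher(vocab_size=10_000)
text = torch.randint(0, 10_000, (4, 32))        # token id sequences
labels = torch.randint(0, 10_000, (4, 8))       # candidate label/word ids
targets = torch.zeros(4, 8)
targets[:, :2] = 1.0                            # e.g. first two candidates are positives
loss = nn.functional.binary_cross_entropy_with_logits(model(text, labels), targets)
loss.backward()

Because unseen labels can be embedded and scored by the same matcher at test time, this kind of setup naturally supports the zero- and few-shot evaluation the abstract refers to.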



