Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)

10/11/2020
by Alex Warstadt, et al.

One reason pretraining on self-supervised linguistic tasks is effective is that it teaches models features that are helpful for language understanding. However, we want pretrained models to learn not only to represent linguistic features, but also to use those features preferentially during fine-tuning. With this goal in mind, we introduce a new English-language diagnostic set called MSGS (the Mixed Signals Generalization Set), which consists of 20 ambiguous binary classification tasks that we use to test whether a pretrained model prefers linguistic or surface generalizations during fine-tuning. We pretrain RoBERTa models from scratch on quantities of data ranging from 1M to 1B words and compare their performance on MSGS to the publicly available RoBERTa-base. We find that models can learn to represent linguistic features with little pretraining data, but require far more data to learn to prefer linguistic generalizations over surface ones. Eventually, with about 30B words of pretraining data, RoBERTa-base does demonstrate a linguistic bias with some regularity. We conclude that while self-supervised pretraining is an effective way to learn helpful inductive biases, there is likely room to improve the rate at which models learn which features matter.
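To make the experimental setup concrete, the sketch below (not the authors' code) shows the kind of fine-tuning probe the abstract describes: a RoBERTa classifier is trained on ambiguous examples where a linguistic feature and a surface feature always co-occur, then evaluated on disambiguating examples where the two features conflict, so its predictions reveal which generalization it adopted. The task definition and the sentences here are hypothetical placeholders, not actual MSGS items.

```python
# Minimal sketch of an ambiguous-generalization probe, assuming the
# HuggingFace `transformers` library and hypothetical example sentences.
import torch
from torch.optim import AdamW
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Ambiguous training data: label-1 sentences have BOTH a linguistic property
# (e.g. a progressive main verb) AND a surface property (e.g. the word "the").
train_texts = ["The dog is running.", "A cat sleeps."]   # hypothetical examples
train_labels = torch.tensor([1, 0])

# Disambiguating test data: the two features now conflict.
test_texts = ["A dog is running.", "The cat sleeps."]    # hypothetical examples
linguistic_labels = torch.tensor([1, 0])  # expected labels under the linguistic rule

optimizer = AdamW(model.parameters(), lr=1e-5)
model.train()
for _ in range(3):  # a few passes over the ambiguous data
    batch = tokenizer(train_texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=train_labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Agreement with `linguistic_labels` suggests a linguistic generalization;
# systematic disagreement suggests a surface one.
model.eval()
with torch.no_grad():
    batch = tokenizer(test_texts, padding=True, truncation=True, return_tensors="pt")
    preds = model(**batch).logits.argmax(dim=-1)
print("linguistic-consistent predictions:",
      (preds == linguistic_labels).float().mean().item())
```

In the paper's setting, this kind of probe is repeated across 20 tasks and across RoBERTa models pretrained on different amounts of data, which is how the bias toward linguistic versus surface generalizations is measured.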


