How does the task complexity of masked pretraining objectives affect downstream performance?

05/18/2023
by Atsuki Yamaguchi, et al.

Masked language modeling (MLM) is a widely used self-supervised pretraining objective in which a model predicts the original token that has been replaced with a mask, given the surrounding context. Although simpler and more computationally efficient pretraining objectives, e.g., predicting the first character of a masked token, have recently shown results comparable to MLM, no objective with a masking scheme actually outperforms it on downstream tasks. Motivated by the assumption that this lack of complexity plays a vital role in the degradation, we validate whether more complex masked objectives can achieve better results and investigate how much complexity they need to perform comparably to MLM. Our results on the GLUE, SQuAD, and Universal Dependencies benchmarks demonstrate that more complex objectives tend to show better downstream results, and that at least half of the complexity of MLM is needed to perform comparably to it. Finally, we discuss how a model should be pretrained with a masked objective from the perspective of task complexity.
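To make the complexity argument concrete, here is a minimal, self-contained Python sketch contrasting the label space of full-vocabulary MLM with that of a simpler first-character objective. The whitespace tokenization, toy sentence, and mask_tokens helper are hypothetical stand-ins for illustration, not the paper's actual setup.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Replace ~15% of tokens with [MASK]; return corrupted input and targets."""
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets.append(tok)   # position is predicted
        else:
            corrupted.append(tok)
            targets.append(None)  # position is not predicted
    return corrupted, targets

tokens = "the quick brown fox jumps over the lazy dog".split()
corrupted, targets = mask_tokens(tokens)

# MLM target: recover the full token, so the label space is the whole
# vocabulary (tens of thousands of classes in a typical subword vocab).
mlm_labels = [t for t in targets if t is not None]

# Simpler objective: predict only the first character of the masked token,
# shrinking the label space to roughly the alphabet size (~26 classes).
first_char_labels = [t[0] for t in targets if t is not None]

print(corrupted)          # ['[MASK]', 'quick', ..., '[MASK]']
print(mlm_labels)         # ['the', 'dog']
print(first_char_labels)  # ['t', 'd']
```

The gap between these two label spaces is one way to picture the "task complexity" axis the paper varies: both objectives share the same masking scheme, but the prediction target of the simpler one carries far less information per masked position.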


Related research

09/04/2021  Frustratingly Simple Pretraining Alternatives to Masked Language Modeling
Masked language modeling (MLM), a self-supervised pretraining objective,...

12/09/2022  Audiovisual Masked Autoencoders
Can we leverage the audiovisual information already present in video to ...

05/03/2022  Improving In-Context Few-Shot Learning via Self-Supervised Training
Self-supervised pretraining has made few-shot learning possible for many...

05/23/2023  Difference-Masking: Choosing What to Mask in Continued Pretraining
Self-supervised learning (SSL) and the objective of masking-and-predicti...

05/24/2023  Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Token dropping is a recently-proposed strategy to speed up the pretraini...

06/06/2021  Meta-learning for downstream aware and agnostic pretraining
Neural network pretraining is gaining attention due to its outstanding p...

04/18/2021  On the Influence of Masking Policies in Intermediate Pre-training
Current NLP models are predominantly trained through a pretrain-then-fin...
