Representation Deficiency in Masked Language Modeling

02/04/2023
by Yu Meng, et al.

Masked Language Modeling (MLM) has been one of the most prominent approaches for pretraining bidirectional text encoders due to its simplicity and effectiveness. One notable concern about MLM is that the special [MASK] symbol causes a discrepancy between pretraining data and downstream data, as it is present only in pretraining but not in fine-tuning. In this work, we offer a new perspective on the consequence of such a discrepancy: we demonstrate empirically and theoretically that MLM pretraining allocates some model dimensions exclusively for representing [MASK] tokens, resulting in a representation deficiency for real tokens and limiting the pretrained model's expressiveness when it is adapted to downstream data without [MASK] tokens. Motivated by the identified issue, we propose MAE-LM, which pretrains the Masked Autoencoder architecture with MLM where [MASK] tokens are excluded from the encoder. Empirically, we show that MAE-LM improves the utilization of model dimensions for real token representations, and MAE-LM consistently outperforms MLM-pretrained models across different pretraining settings and model sizes when fine-tuned on the GLUE and SQuAD benchmarks.
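The contrast the abstract draws between standard MLM inputs and the MAE-LM setup can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's implementation: the masking rate, the mask_token_id value, and the per-sequence gather of unmasked positions are assumptions made only for demonstration.

```python
import torch

def mlm_vs_maelm_inputs(input_ids, mask_prob=0.15, mask_token_id=103):
    """Illustrative sketch: contrast a standard MLM encoder input (with
    [MASK] symbols inserted) against a MAE-LM-style encoder input that
    drops masked positions entirely. Hyperparameters here are assumptions,
    not the paper's exact setup."""
    # Randomly choose positions to corrupt (standard MLM-style masking).
    mask = torch.rand(input_ids.shape) < mask_prob

    # Standard MLM: masked positions are replaced by the [MASK] symbol,
    # so the encoder must devote capacity to representing that symbol.
    mlm_input = input_ids.clone()
    mlm_input[mask] = mask_token_id

    # MAE-LM-style: masked positions are removed from the encoder input,
    # so only real tokens are encoded (one shortened sequence per example).
    maelm_inputs = [seq[~m] for seq, m in zip(input_ids, mask)]

    return mlm_input, maelm_inputs, mask


if __name__ == "__main__":
    torch.manual_seed(0)
    ids = torch.randint(1000, 2000, (2, 12))  # placeholder token ids
    mlm_in, maelm_in, mask = mlm_vs_maelm_inputs(ids)
    print("MLM encoder input (with [MASK]):", mlm_in[0].tolist())
    print("MAE-LM encoder input (real tokens only):", maelm_in[0].tolist())
```

In the MAE-LM case, prediction of the masked tokens would happen outside the encoder (e.g., in a lightweight decoder that reintroduces the masked positions), so the encoder's dimensions are used only for real tokens; that decoding stage is omitted here.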


Related research

03/24/2022 · Token Dropping for Efficient BERT Pretraining
Transformer-based models generally allocate the same amount of computati...

05/08/2023 · Token-level Fitting Issues of Seq2seq Models
Sequence-to-sequence (seq2seq) models have been widely used for natural ...

10/11/2021 · On a Benefit of Mask Language Modeling: Robustness to Simplicity Bias
Despite the success of pretrained masked language models (MLM), why MLM ...

12/10/2022 · Uniform Masking Prevails in Vision-Language Pretraining
Masked Language Modeling (MLM) has proven to be an essential component o...

03/24/2023 · Accelerating Vision-Language Pretraining with Free Language Modeling
The state of the arts in vision-language pretraining (VLP) achieves exem...

04/07/2022 · Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators
We present a new framework AMOS that pretrains text encoders with an Adv...

04/06/2022 · Fusing finetuned models for better pretraining
Pretrained models are the standard starting point for training. This app...
