Variance-reduced Language Pretraining via a Mask Proposal Network

08/12/2020
by Liang Chen, et al.

Self-supervised learning, a.k.a. pretraining, is important in natural language processing. Most pretraining methods first randomly mask some positions in a sentence and then train a model to recover the tokens at the masked positions. In this way, the model can be trained without human labeling, and massive data can be used to train models with billions of parameters. Consequently, optimization efficiency becomes critical. In this paper, we tackle the problem from the viewpoint of gradient variance reduction. In particular, we first present a principled gradient variance decomposition theorem, which shows that the variance of the stochastic gradient in language pretraining naturally decomposes into two terms: the variance arising from sampling the data in a batch, and the variance arising from sampling the mask. The second term is the key difference between self-supervised learning and supervised learning, and it is what makes pretraining slower. To reduce this second term, we leverage an importance sampling strategy, which samples the masks according to a proposal distribution instead of the uniform distribution. It can be shown that if the proposal distribution is proportional to the gradient norm, the variance of the sampling is reduced. To make this practical, we introduce a MAsk Proposal Network (MAPNet), which approximates the optimal mask proposal distribution and is trained end-to-end along with the model. Experimental results show that our model converges faster and achieves higher performance than the baseline BERT model.
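To make the importance-sampling idea concrete, below is a minimal PyTorch-style sketch of sampling mask positions from a learned proposal distribution and reweighting the masked-LM loss so its expectation stays close to the uniformly masked objective. This is not the paper's actual implementation: the function names, the use of per-position logits as the proposal, and the treatment of sampling without replacement are simplifying assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sample_masks_with_importance(proposal_logits, num_masks):
    """Sample mask positions from a proposal distribution (instead of uniformly)
    and return importance weights p(i)/q(i) to correct for the biased sampling.

    proposal_logits: (batch, seq_len) scores from a proposal network
                     (the role played by MAPNet in the paper).
    num_masks:       number of positions to mask per sentence.
    """
    batch, seq_len = proposal_logits.shape
    q = F.softmax(proposal_logits, dim=-1)        # proposal distribution q(i)
    p = torch.full_like(q, 1.0 / seq_len)         # uniform distribution p(i)

    # Draw mask positions according to q rather than uniformly.
    positions = torch.multinomial(q, num_masks, replacement=False)

    # Per-position importance weights p(i)/q(i). With sampling without
    # replacement this is the usual practical approximation for keeping the
    # expected gradient close to that of the uniformly masked objective.
    weights = torch.gather(p, 1, positions) / torch.gather(q, 1, positions)
    return positions, weights


def weighted_mlm_loss(token_logits, labels, positions, weights):
    """Cross-entropy on the sampled positions, scaled by importance weights."""
    # token_logits: (batch, seq_len, vocab); labels: (batch, seq_len)
    vocab = token_logits.size(-1)
    picked_logits = torch.gather(
        token_logits, 1, positions.unsqueeze(-1).expand(-1, -1, vocab))
    picked_labels = torch.gather(labels, 1, positions)
    per_pos = F.cross_entropy(
        picked_logits.transpose(1, 2), picked_labels, reduction="none")
    return (weights * per_pos).mean()
```

In this sketch, a proposal network that puts more probability mass on positions with large expected gradient norm would concentrate the masks where they reduce variance most, which is the intuition behind training MAPNet end-to-end with the language model.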


Related research

05/23/2023 · Difference-Masking: Choosing What to Mask in Continued Pretraining
Self-supervised learning (SSL) and the objective of masking-and-predicti...

01/11/2023 · NarrowBERT: Accelerating Masked Language Model Pretraining and Inference
Large-scale language model pretraining is a very successful form of self...

11/20/2015 · Variance Reduction in SGD by Distributed Importance Sampling
Humans are able to accelerate their learning by selecting training mater...

06/09/2023 · Exploring Effective Mask Sampling Modeling for Neural Image Compression
Image compression aims to reduce the information redundancy in images. M...

12/10/2022 · Uniform Masking Prevails in Vision-Language Pretraining
Masked Language Modeling (MLM) has proven to be an essential component o...

02/16/2021 · COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
We present COCO-LM, a new self-supervised learning framework that pretra...

09/16/2023 · RMP: A Random Mask Pretrain Framework for Motion Prediction
As the pretraining technique is growing in popularity, little work has b...
