METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

04/13/2022
by Payal Bajaj, et al.

We present an efficient method for pretraining large-scale autoencoding language models using training signals generated by an auxiliary model. Originating in ELECTRA, this training strategy has demonstrated sample efficiency for pretraining models at the scale of hundreds of millions of parameters. In this work, we conduct a comprehensive empirical study and propose a recipe, "Model generated dEnoising TRaining Objective" (METRO), which incorporates some of the best modeling techniques developed recently to speed up, stabilize, and enhance pretrained language models without compromising model effectiveness. The resulting models, METRO-LM, consisting of up to 5.4 billion parameters, achieve new state-of-the-art results on the GLUE, SuperGLUE, and SQuAD benchmarks. More importantly, METRO-LM models are efficient in that they often outperform previous large models with significantly smaller model sizes and lower pretraining cost.
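
The abstract does not spell out the objective, but the ELECTRA-style strategy it builds on is well documented: a small auxiliary generator fills in masked positions, and the main model is trained to detect which tokens were replaced. The sketch below illustrates that mechanism only; `generator`, `discriminator`, `metro_style_step`, and all hyperparameter values are illustrative placeholders, not the paper's actual API or full METRO recipe.

```python
# Minimal sketch of ELECTRA-style model-generated denoising pretraining,
# the family of objectives METRO builds on. Names and values are assumptions.
import torch
import torch.nn.functional as F

def metro_style_step(generator, discriminator, input_ids, mask_prob=0.15,
                     mask_token_id=103, disc_weight=50.0):
    """One pretraining step: the auxiliary generator corrupts masked positions,
    and the main (discriminator) model learns to spot the replaced tokens."""
    # 1) Randomly select a subset of positions to mask.
    mask = torch.rand_like(input_ids, dtype=torch.float) < mask_prob
    masked_ids = input_ids.masked_fill(mask, mask_token_id)

    # 2) Auxiliary generator predicts the masked tokens (standard MLM loss).
    gen_logits = generator(masked_ids)                      # [B, T, vocab]
    gen_loss = F.cross_entropy(gen_logits[mask], input_ids[mask])

    # 3) Sample replacements from the generator to build the corrupted input.
    with torch.no_grad():
        sampled = torch.distributions.Categorical(logits=gen_logits[mask]).sample()
    corrupted_ids = input_ids.clone()
    corrupted_ids[mask] = sampled

    # 4) Main model predicts, per token, whether it was replaced.
    is_replaced = (corrupted_ids != input_ids).float()
    disc_logits = discriminator(corrupted_ids)              # [B, T]
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, is_replaced)

    # Joint objective: MLM loss plus a heavily weighted detection loss
    # (ELECTRA uses a large weight because the per-token binary loss is small).
    return gen_loss + disc_weight * disc_loss
```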


