No Train No Gain: Revisiting Efficient Training Algorithms For Transformer-based Language Models

07/12/2023
by Jean Kaddour, et al.

The computation necessary for training Transformer-based language models has skyrocketed in recent years. This trend has motivated research on efficient training algorithms designed to improve training, validation, and downstream performance more quickly than standard training. In this work, we revisit three categories of such algorithms: dynamic architectures (layer stacking, layer dropping), batch selection (selective backprop, RHO loss), and efficient optimizers (Lion, Sophia). When pre-training BERT and T5 with a fixed computation budget using such methods, we find that their training, validation, and downstream gains vanish compared to a baseline with a fully-decayed learning rate. We define an evaluation protocol that enables computation to be done on arbitrary machines by mapping all computation time to a reference machine, which we call reference system time. We discuss the limitations of our proposed protocol and release our code to encourage rigorous research in efficient training procedures: https://github.com/JeanKaddour/NoTrainNoGain.
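The reference-system-time idea can be illustrated with a short sketch: instead of measuring wall-clock time on whatever hardware a run happens to use, each training step is charged the time that step would take on a fixed reference machine, and runs stop when a shared reference-time budget is spent. The sketch below is a hypothetical rendering of that idea, not code from the paper or its repository; the names (ReferenceClock, REFERENCE_SECONDS_PER_STEP) and the per-step costs are invented for illustration.

```python
from collections import defaultdict

# Hypothetical profile: seconds per step as measured once on the
# reference machine, keyed by step type (e.g., a full forward/backward
# pass vs. a cheaper pass used by a method like layer dropping).
# These numbers are made up for the example.
REFERENCE_SECONDS_PER_STEP = {
    "full_step": 1.00,     # standard forward + backward + update
    "dropped_step": 0.65,  # step with some layers dropped
}

class ReferenceClock:
    """Accumulates training cost in reference system time."""

    def __init__(self, profile):
        self.profile = profile
        self.elapsed = 0.0
        self.counts = defaultdict(int)

    def tick(self, step_type):
        # Charge the reference-machine cost for this step, regardless
        # of the hardware the run actually executed on.
        self.elapsed += self.profile[step_type]
        self.counts[step_type] += 1

    def budget_exhausted(self, budget_seconds):
        return self.elapsed >= budget_seconds

# Usage: stop training once the fixed reference-time budget is spent,
# so methods with different per-step costs are compared fairly.
clock = ReferenceClock(REFERENCE_SECONDS_PER_STEP)
while not clock.budget_exhausted(budget_seconds=3600.0):
    # ... run one real training step on whatever hardware is available ...
    clock.tick("full_step")

print(f"trained for {clock.elapsed:.0f}s of reference system time "
      f"over {sum(clock.counts.values())} steps")
```

Under this accounting, a method that takes cheaper steps (e.g., layer dropping) is allowed more steps within the same budget, which is what makes a fixed-budget comparison against the fully-decayed baseline meaningful.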


