Staged Training for Transformer Language Models

by Sheng Shen et al.

The current standard approach to scaling transformer language models trains each model size from a different random initialization. As an alternative, we consider a staged training setup that begins with a small model and incrementally increases the amount of compute used for training by applying a "growth operator" to increase the model's depth and width. By initializing each stage with the output of the previous one, the training process effectively re-uses the compute from prior stages and becomes more efficient. Our growth operators each take as input the entire training state (including model parameters, optimizer state, learning rate schedule, etc.) and output a new training state from which training continues. We identify two important properties of these growth operators: they must preserve both the loss and the "training dynamics" (the rate of decrease of the loss during training) after being applied. While the loss-preserving property has been discussed previously, to the best of our knowledge this work is the first to identify the importance of preserving the training dynamics. To determine the optimal stage schedule, we use the scaling laws of Kaplan et al. (2020) to derive a precise schedule that yields the greatest compute savings by starting a new stage when training efficiency begins to decrease. We empirically validate our growth operators and staged training for autoregressive language models, showing up to 22% compute savings compared to a strong baseline trained from scratch. Our code is available at
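As a concrete illustration of the loss-preserving property, the sketch below grows the hidden width of a two-layer feed-forward block in the Net2Net style (duplicate hidden units, then divide their outgoing weights by the replica count), so the grown block computes exactly the same function. This is an illustrative assumption, not the paper's exact operator: `grow_hidden_width` is a hypothetical helper, and the paper's operators additionally handle attention, layer norm, and the rest of the training state.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def grow_hidden_width(fc1: nn.Linear, fc2: nn.Linear, new_width: int):
    """Net2Net-style width growth: return new (fc1, fc2) with a wider
    hidden dimension while preserving the composed function exactly.

    Assumes the block computes fc2(act(fc1(x))) with an elementwise
    activation such as ReLU.
    """
    old_width = fc1.out_features
    assert new_width >= old_width and fc2.in_features == old_width

    # Map each new hidden unit to an old one: the first old_width units
    # map to themselves, extra units replicate randomly chosen old ones.
    mapping = torch.cat([
        torch.arange(old_width),
        torch.randint(0, old_width, (new_width - old_width,)),
    ])
    counts = torch.bincount(mapping, minlength=old_width).float()

    new_fc1 = nn.Linear(fc1.in_features, new_width)
    new_fc1.weight.copy_(fc1.weight[mapping])
    new_fc1.bias.copy_(fc1.bias[mapping])

    # Divide each replicated unit's outgoing weights by its replica count
    # so the sum over replicas reproduces the original pre-activation.
    new_fc2 = nn.Linear(new_width, fc2.out_features)
    new_fc2.weight.copy_(fc2.weight[:, mapping] / counts[mapping])
    new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2

if __name__ == "__main__":
    x = torch.randn(4, 16)
    fc1, fc2 = nn.Linear(16, 32), nn.Linear(32, 8)
    g1, g2 = grow_hidden_width(fc1, fc2, 64)
    # Loss preservation: the grown block matches the original exactly.
    assert torch.allclose(fc2(torch.relu(fc1(x))),
                          g2(torch.relu(g1(x))), atol=1e-5)
```

The check at the bottom demonstrates the loss-preserving property; preserving the training dynamics would additionally require transforming the optimizer state and learning-rate schedule, which this sketch omits.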

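The stage schedule can be sketched in the same hedged spirit. Assuming the compute-loss power law of Kaplan et al. (2020), L(C) = (C_c / C)^{alpha_C}, one possible trigger starts a new stage once the current model's loss falls measurably behind the compute-efficient frontier. The constants below are Kaplan et al.'s reported fits for their setup and would need refitting for any other configuration; `should_start_next_stage` is an illustrative name, not the paper's API.

```python
# Compute-loss power law L(C) = (C_c / C) ** alpha_C from Kaplan et al.
# (2020). These constants are their reported fits (compute in PF-days)
# and are setup-specific; refit them on pilot runs before relying on them.
C_C = 3.1e8
ALPHA_C = 0.050

def frontier_loss(compute_pf_days: float) -> float:
    """Loss attainable at this budget by a compute-optimal model."""
    return (C_C / compute_pf_days) ** ALPHA_C

def should_start_next_stage(current_loss: float,
                            compute_pf_days: float,
                            slack: float = 0.01) -> bool:
    """Hypothetical trigger: grow the model once its loss trails the
    compute-efficient frontier by more than `slack` (relative), i.e.,
    once the smaller model's training efficiency has started to decay."""
    return current_loss > (1.0 + slack) * frontier_loss(compute_pf_days)
```

The `slack` tolerance controls how long a stage runs past the frontier before growing; too small a value grows the model prematurely, while too large a value wastes compute in the decayed-efficiency regime.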

Related Papers

Training Compute-Optimal Large Language Models

We investigate the optimal model size and number of tokens for training ...

Scaling Laws for Neural Language Models

We study empirical scaling laws for language model performance on the cr...

Adaptive Fine-Tuning of Transformer-Based Language Models for Named Entity Recognition

The current standard approach for fine-tuning transformer-based language...

GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training

Changes in neural architectures have fostered significant breakthroughs ...

Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models

Despite their wide adoption, the underlying training and memorization dy...

Scaling Up Influence Functions

We address efficient calculation of influence functions for tracking pre...