Scaling Laws for Neural Language Models

01/23/2020 · Jared Kaplan et al. · Johns Hopkins University, OpenAI

We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.

1 Introduction

Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world's text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in language modeling, with state-of-the-art models radford2018improving ; 1810.04805 ; 1906.08237 ; DBLP:journals/corr/abs-1907-11692 ; 1910.10683 approaching human-level performance on many specific tasks wang2019superglue , including the composition of coherent multi-paragraph prompted text samples radford2019language .

One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these factors, focusing on the Transformer architecture OriginalTransformer ; liu2018generating . The high ceiling and low floor for performance on language tasks allows us to study trends over more than seven orders of magnitude in scale.

Throughout we will observe precise power-law scalings for performance as a function of training time, context length, dataset size, model size, and compute budget.

Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training (here we display predicted compute when using a sufficiently small batch size; see Figure 13 for comparison to the purely empirical data). For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two.

1.1 Summary

Our key findings for Transformer language models are as follows:

Performance depends strongly on scale, weakly on model shape:

Model performance depends most strongly on scale, which consists of three factors: the number of model parameters N (excluding embeddings), the size of the dataset D, and the amount of compute C used for training. Within reasonable limits, performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section 3)

Smooth power laws:

Performance has a power-law relationship with each of the three scale factors when not bottlenecked by the other two, with trends spanning more than six orders of magnitude (see Figure 1). We observe no signs of deviation from these trends on the upper end, though performance must flatten out eventually before reaching zero loss. (Section 3)

Universality of overfitting:

Performance improves predictably as long as we scale up N and D in tandem, but enters a regime of diminishing returns if either N or D is held fixed while the other increases. The performance penalty depends predictably on the ratio N^0.74/D, meaning that every time we increase the model size 8x, we only need to increase the data by roughly 5x to avoid a penalty. (Section 4)

Universality of training:

Training curves follow predictable power-laws whose parameters are roughly independent of the model size. By extrapolating the early part of a training curve, we can roughly predict the loss that would be achieved if we trained for much longer. (Section 5)

Transfer improves with test performance:

When we evaluate models on text with a different distribution than they were trained on, the results are strongly correlated to those on the training validation set with a roughly constant offset in the loss – in other words, transfer to a different distribution incurs a constant penalty but otherwise improves roughly in line with performance on the training set. (Section 3.2.2)

Sample efficiency:

Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps (Figure 2) and using fewer data points (Figure 4).

Figure 2: We show a series of language model training runs, with models ranging in size from 10^3 to 10^9 parameters (excluding embeddings).
Figure 3: As more compute becomes available, we can choose how much to allocate towards training larger models, using larger batches, and training for more steps. We illustrate this for a billion-fold increase in compute. For optimally compute-efficient training, most of the increase should go towards increased model size. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to increase parallelism through larger batch sizes, with only a very small increase in serial training time required.
Convergence is inefficient:

When working within a fixed compute budget C but without any other restrictions on the model size N or available data D, we attain optimal performance by training very large models and stopping significantly short of convergence (see Figure 3). Maximally compute-efficient training would therefore be far more sample efficient than one might expect based on training small models to convergence, with data requirements growing very slowly, roughly as D ∝ C^0.27, with training compute. (Section 6)

Optimal batch size:

The ideal batch size for training these models is roughly a power of the loss only, and continues to be determinable by measuring the gradient noise scale 1812.06162 ; it is roughly 1-2 million tokens at convergence for the largest models we can train. (Section 5.1)

Taken together, these results show that language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. We expect that larger language models will perform better and be more sample efficient than current models.

1.2 Summary of Scaling Laws

The test loss of a Transformer trained to autoregressively model language can be predicted using a power-law when performance is limited by only one of the number of non-embedding parameters N, the dataset size D, or the optimally allocated compute budget C_min (see Figure 1):

  1. For models with a limited number of parameters, trained to convergence on sufficiently large datasets:

    L(N) = (N_c/N)^α_N; α_N ∼ 0.076, N_c ∼ 8.8 × 10^13 (non-embedding parameters)    (1.1)

  2. For large models trained with a limited dataset with early stopping:

    L(D) = (D_c/D)^α_D; α_D ∼ 0.095, D_c ∼ 5.4 × 10^13 (tokens)    (1.2)

  3. When training with a limited amount of compute, a sufficiently large dataset, an optimally-sized model, and a sufficiently small batch size (making optimal use of compute; we also observe an empirical power-law trend with the training compute C (Figure 1) while training at fixed batch size, but it is the trend with C_min that should be used to make predictions, and the two are related by Equation (5.5)):

    L(C_min) = (C_c^min/C_min)^(α_C^min); α_C^min ∼ 0.050, C_c^min ∼ 3.1 × 10^8 (PF-days)    (1.3)

These relations hold across eight orders of magnitude in C_min, six orders of magnitude in N, and over two orders of magnitude in D. They depend very weakly on model shape and other Transformer hyperparameters (depth, width, number of self-attention heads), with specific numerical values associated with the WebText2 training set radford2019language . The power laws specify the degree of performance improvement expected as we scale up N, D, or C_min; for example, doubling the number of parameters yields a loss that is smaller by a factor 2^(−α_N) ≈ 0.95. The precise numerical values of N_c, C_c^min, and D_c depend on the vocabulary size and tokenization and hence do not have a fundamental meaning.
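To make the three limiting power laws concrete, here is a minimal Python sketch that evaluates them. The fitted constants are the approximate values quoted above; they are tied to the WebText2 vocabulary and tokenization, so treat them as illustrative rather than exact.

```python
# Minimal sketch: evaluate the limiting power-law fits L(N), L(D), L(C_min).
# Constants are the approximate WebText2 fits quoted above (illustrative only).

ALPHA_N, N_C = 0.076, 8.8e13   # non-embedding parameters
ALPHA_D, D_C = 0.095, 5.4e13   # tokens
ALPHA_C, C_C = 0.050, 3.1e8    # PF-days of optimally allocated compute


def loss_vs_params(n: float) -> float:
    """Loss in nats/token when limited only by model size N (Equation 1.1)."""
    return (N_C / n) ** ALPHA_N


def loss_vs_data(d: float) -> float:
    """Loss in nats/token when limited only by dataset size D (Equation 1.2)."""
    return (D_C / d) ** ALPHA_D


def loss_vs_compute(c_min: float) -> float:
    """Loss in nats/token when limited only by compute C_min in PF-days (Equation 1.3)."""
    return (C_C / c_min) ** ALPHA_C


if __name__ == "__main__":
    for n in (1e6, 1e8, 1e10):
        print(f"N = {n:.0e}: L(N) = {loss_vs_params(n):.2f} nats/token")
```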

The critical batch size, which determines the speed/efficiency tradeoff for data parallelism (1812.06162 ), also roughly obeys a power law in L:

B_crit(L) ≈ B*/L^(1/α_B); B* ∼ 2 × 10^8 tokens, α_B ∼ 0.21    (1.4)
Figure 4: Left: The early-stopped test loss L(N, D) varies predictably with the dataset size D and model size N according to Equation (1.5). Right: After an initial transient period, learning curves for all model sizes can be fit with Equation (1.6), which is parameterized in terms of S_min, the number of steps when training at large batch size (details in Section 5.1).

Equations (1.1) and (1.2) together suggest that as we increase the model size, we should increase the dataset size sublinearly according to D ∝ N^(α_N/α_D) ∼ N^0.74. In fact, we find that there is a single equation combining (1.1) and (1.2) that governs the simultaneous dependence on N and D and governs the degree of overfitting:

L(N, D) = [ (N_c/N)^(α_N/α_D) + D_c/D ]^α_D    (1.5)

with fits pictured on the left in figure 4. We conjecture that this functional form may also parameterize the trained log-likelihood for other generative modeling tasks.
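As a hedged sketch of how Equation (1.5) is used in practice, the function below combines the two limits and reports the overfitting penalty relative to infinite data. The exponents and scales are the same illustrative constants as above.

```python
# Sketch of the combined scaling law L(N, D) and the implied overfitting penalty.
# alpha_n, alpha_d, n_c, d_c are the illustrative WebText2 fits quoted in the text.

def loss_n_d(n: float, d: float,
             alpha_n: float = 0.076, alpha_d: float = 0.095,
             n_c: float = 8.8e13, d_c: float = 5.4e13) -> float:
    """Early-stopped test loss L(N, D) = [(N_c/N)^(alpha_N/alpha_D) + D_c/D]^alpha_D."""
    return ((n_c / n) ** (alpha_n / alpha_d) + d_c / d) ** alpha_d


def overfit_penalty(n: float, d: float) -> float:
    """delta_L = L(N, D) / L(N, inf) - 1: fractional loss increase from finite data."""
    infinite_data = loss_n_d(n, float("inf"))
    return loss_n_d(n, d) / infinite_data - 1.0


if __name__ == "__main__":
    print(overfit_penalty(n=1e9, d=2.2e10))  # roughly 1B params on roughly 22B tokens
```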

When training a given model for a finite number of parameter update steps S in the infinite data limit, after an initial transient period, the learning curves can be accurately fit by (see the right of Figure 4)

L(N, S_min) = (N_c/N)^α_N + (S_c/S_min)^α_S    (1.6)

where S_c ∼ 2.1 × 10^3 and α_S ∼ 0.76, and S_min(S) is the minimum possible number of optimization steps (parameter updates) estimated using Equation (5.4).

When training within a fixed compute budget C, but with no other constraints, Equation (1.6) leads to the prediction that the optimal model size N, optimal batch size B, optimal number of steps S, and dataset size D should grow as

N ∝ C^(α_C^min/α_N),  B ∝ C^(α_C^min/α_B),  S ∝ C^(α_C^min/α_S),  D = B · S    (1.7)

with

α_C^min = 1/(1/α_S + 1/α_B + 1/α_N) ≈ 0.054    (1.8)

which closely matches the empirically optimal results N ∝ C_min^0.73, B ∝ C_min^0.24, and S ∝ C_min^0.03. As the computational budget C increases, it should be spent primarily on larger models, without dramatic increases in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become increasingly sample efficient. In practice, researchers typically train smaller models for longer than would be maximally compute-efficient because of hardware constraints. Optimal performance depends on total compute as a power law (see Equation (1.3)).

We provide some basic theoretical motivation for Equation (1.5), an analysis of learning curve fits and their implications for training time, and a breakdown of our results per token. We also make some brief comparisons to LSTMs and recurrent Transformers DBLP:journals/corr/abs-1807-03819 .

1.3 Notation

We use the following notation:

  • L – the cross-entropy loss in nats. Typically it will be averaged over the tokens in a context, but in some cases we report the loss for specific tokens within the context.

  • N – the number of model parameters, excluding all vocabulary and positional embeddings

  • C ≈ 6NBS – an estimate of the total non-embedding training compute, where B is the batch size and S is the number of training steps (ie parameter updates). We quote numerical values in PF-days, where one PF-day = 10^15 × 24 × 3600 = 8.64 × 10^19 floating point operations.

  • D – the dataset size in tokens

  • B_crit – the critical batch size 1812.06162 , defined and discussed in Section 5.1. Training at the critical batch size provides a roughly optimal compromise between time and compute efficiency.

  • C_min – an estimate of the minimum amount of non-embedding compute to reach a given value of the loss. This is the training compute that would be used if the model were trained at a batch size much less than the critical batch size.

  • S_min – an estimate of the minimal number of training steps needed to reach a given value of the loss. This is also the number of training steps that would be used if the model were trained at a batch size much greater than the critical batch size.

  • α_N, α_D, α_C, α_C^min, α_B, α_S – power-law exponents for the scaling of the loss as L(X) ∝ 1/X^α_X, where X can be any of N, D, C, C_min, B, S.

2 Background and Methods

We train language models on WebText2, an extended version of the WebText radford2019language dataset, tokenized using byte-pair encoding BPE with a vocabulary size n_vocab = 50257. We optimize the autoregressive log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token context, which is also our principal performance metric. We record the loss on the WebText2 test distribution and on a selection of other text distributions. We primarily train decoder-only liu2018generating ; radford2018improving Transformer OriginalTransformer models, though we also train LSTM models and Universal Transformers DBLP:journals/corr/abs-1807-03819 for comparison.

2.1 Parameter and Compute Scaling of Transformers

Operation | Parameters | FLOPs per Token
Embed | (n_vocab + n_ctx) d_model | 4 d_model
Attention: QKV | n_layer d_model 3 d_attn | 2 n_layer d_model 3 d_attn
Attention: Mask | - | 2 n_layer n_ctx d_attn
Attention: Project | n_layer d_attn d_model | 2 n_layer d_attn d_model
Feedforward | n_layer 2 d_model d_ff | 2 n_layer 2 d_model d_ff
De-embed | - | 2 d_model n_vocab
Total (Non-Embedding) | N = 2 d_model n_layer (2 d_attn + d_ff) | C_forward = 2N + 2 n_layer n_ctx d_attn
Table 1: Parameter counts and compute (forward pass) estimates for a Transformer model. Sub-leading terms such as nonlinearities, biases, and layer normalization are omitted.

We parameterize the Transformer architecture using hyperparameters n_layer (number of layers), d_model (dimension of the residual stream), d_ff (dimension of the intermediate feed-forward layer), d_attn (dimension of the attention output), and n_heads (number of attention heads per layer). We include n_ctx tokens in the input context, with n_ctx = 1024 except where otherwise noted.

We use N to denote the model size, which we define as the number of non-embedding parameters

N ≈ 2 d_model n_layer (2 d_attn + d_ff) = 12 n_layer d_model^2    (2.1)

where the second equality assumes the standard d_attn = d_ff/4 = d_model, and where we have excluded biases and other sub-leading terms. Our models also have n_vocab d_model parameters in an embedding matrix, and use n_ctx d_model parameters for positional embeddings, but we do not include these when discussing the 'model size' N; we will see that this produces significantly cleaner scaling laws.

Evaluating a forward pass of the Transformer involves roughly

C_forward ≈ 2N + 2 n_layer n_ctx d_model    (2.2)

add-multiply operations, where the factor of two comes from the multiply-accumulate operation used in matrix multiplication. A more detailed per-operation parameter and compute count is included in Table 1.

For contexts and models with d_model > n_ctx/12, the context-dependent computational cost per token is a relatively small fraction of the total compute. Since we primarily study models where d_model ≫ n_ctx/12, we do not include context-dependent terms in our training compute estimate. Accounting for the backwards pass (approximately twice the compute of the forwards pass), we then define the estimated non-embedding compute as C ≈ 6N floating point operations per training token.
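The counting above is easy to mechanize. The sketch below implements the non-embedding parameter count, the per-token forward-pass estimate, and the C ≈ 6NBS training-compute estimate in PF-days; it mirrors the approximations in the text (biases and other sub-leading terms dropped) rather than an exact count.

```python
from dataclasses import dataclass


@dataclass
class TransformerShape:
    n_layer: int
    d_model: int
    d_ff: int
    d_attn: int
    n_ctx: int = 1024


def non_embedding_params(s: TransformerShape) -> float:
    """N ≈ 2 d_model n_layer (2 d_attn + d_ff); equals 12 n_layer d_model^2
    for the standard d_attn = d_ff / 4 = d_model."""
    return 2 * s.d_model * s.n_layer * (2 * s.d_attn + s.d_ff)


def forward_flops_per_token(s: TransformerShape) -> float:
    """C_forward ≈ 2N + 2 n_layer n_ctx d_model, keeping the context-dependent term."""
    return 2 * non_embedding_params(s) + 2 * s.n_layer * s.n_ctx * s.d_model


def training_pf_days(s: TransformerShape, batch_tokens: float, steps: float) -> float:
    """C ≈ 6 N B S (forward + backward, context term dropped); 1 PF-day = 8.64e19 FLOPs."""
    return 6 * non_embedding_params(s) * batch_tokens * steps / 8.64e19


if __name__ == "__main__":
    shape = TransformerShape(n_layer=12, d_model=768, d_ff=3072, d_attn=768)
    print(f"N ≈ {non_embedding_params(shape):.2e} non-embedding parameters")
    print(f"C ≈ {training_pf_days(shape, batch_tokens=2**19, steps=2.5e5):.2f} PF-days")
```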

2.2 Training Procedures

Unless otherwise noted, we train models with the Adam optimizer kingma2014adam for a fixed 2.5 × 10^5 steps with a batch size of 512 sequences of 1024 tokens. Due to memory constraints, our largest models (more than 1B parameters) were trained with Adafactor DBLP:journals/corr/abs-1804-04235 . We experimented with a variety of learning rates and schedules, as discussed in Appendix D.6. We found that results at convergence were largely independent of learning rate schedule. Unless otherwise noted, all training runs included in our data used a learning rate schedule with a 3000 step linear warmup followed by a cosine decay to zero.

2.3 Datasets

We train our models on an extended version of the WebText dataset described in radford2019language . The original WebText dataset was a web scrape of outbound links from Reddit through December 2017 which received at least 3 karma. In the second version, WebText2, we added outbound Reddit links from the period of January to October 2018, also with a minimum of 3 karma. The karma threshold served as a heuristic for whether people found the link interesting or useful. The text of the new links was extracted with the Newspaper3k python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and 1.62 × 10^10 words (as defined by wc). We then apply the reversible tokenizer described in radford2019language , which yields 2.29 × 10^10 tokens. We reserve 6.6 × 10^8 of these tokens for use as a test set, and we also test on similarly-prepared samples of Books Corpus Zhu_2015 , Common Crawl commoncrawl , English Wikipedia, and a collection of publicly-available Internet Books.

3 Empirical Results and Basic Power Laws

To characterize language model scaling we train a wide variety of models, varying a number of factors including:

  • Model size (ranging in size from 768 to 1.5 billion non-embedding parameters)

  • Dataset size (ranging from 22 million to 23 billion tokens)

  • Shape (including depth, width, attention heads, and feed-forward dimension)

  • Context length (1024 for most runs, though we also experiment with shorter contexts)

  • Batch size (2^19 tokens for most runs, but we also vary it to measure the critical batch size)

In this section we will display data along with empirically-motivated fits, deferring theoretical analysis to later sections.

3.1 Approximate Transformer Shape and Hyperparameter Independence

Figure 5: Performance depends very mildly on model shape when the total number of non-embedding parameters N is held fixed. The loss varies only a few percent over a wide range of shapes. Small differences in parameter counts are compensated for by using the fit to L(N) as a baseline. Aspect ratio in particular can vary by a factor of 40 while only slightly impacting performance; an (n_layer, d_model) = (6, 4288) model reaches a loss within 3% of the (48, 1600) model used in radford2019language .

Transformer performance depends very weakly on the shape parameters n_layer, n_heads, and d_ff when we hold the total non-embedding parameter count N fixed. To establish these results we trained models with fixed size while varying a single hyperparameter. This was simplest for the case of n_heads. When varying n_layer, we simultaneously varied d_model while keeping N ≈ 12 n_layer d_model^2 fixed. Similarly, to vary d_ff at fixed model size we also simultaneously varied the d_model parameter, as required by the parameter counts in Table 1. Independence of n_layer would follow if deeper Transformers effectively behave as ensembles of shallower models, as has been suggested for ResNets ResNetsEnsemblesShallow . The results are shown in Figure 5.

3.2 Performance with Non-Embedding Parameter Count

Figure 6: Left: When we include embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. Right: When we exclude embedding parameters, the performance of models with different depths converge to a single trend. Only models with fewer than 2 layers or with extreme depth-to-width ratios deviate significantly from the trend.

In Figure 6 we display the performance of a wide variety of models, ranging from small models with shape (n_layer, d_model) = (2, 128) through billion-parameter models, ranging in shape from (6, 4288) through (207, 768). Here we have trained to near convergence on the full WebText2 dataset and observe no overfitting (except possibly for the very largest models).

As shown in Figure 1, we find a steady trend with non-embedding parameter count N, which can be fit to the first term of Equation (1.5), so that

L(N) ≈ (N_c/N)^α_N    (3.1)

To observe these trends it is crucial to study performance as a function of N; if we instead use the total parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in recent work lan2019albert .

Although these models have been trained on the WebText2 dataset, their test loss on a variety of other datasets is also a power-law in N with nearly identical power, as shown in Figure 8.

3.2.1 Comparing to LSTMs and Universal Transformers

Figure 7:

In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter count N. The LSTMs were trained with the same dataset and context length. We see from these figures that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match the Transformer performance for later tokens. We present power-law relationships between performance and context position in Appendix D.5, where increasingly large powers for larger models suggest improved ability to quickly recognize patterns.

We also compare the performance of standard Transformers to recurrent Transformers DBLP:journals/corr/abs-1807-03819 in Figure 17 in the appendix. These models re-use parameters, and so perform slightly better as a function of N, at the cost of additional compute per-parameter.

3.2.2 Generalization Among Data Distributions

We have also tested our models on a set of additional text data distributions. The test loss on these datasets as a function of model size is shown in Figure 8; in all cases the models were trained only on the WebText2 dataset. We see that the loss on these other data distributions improves smoothly with model size, in direct parallel with the improvement on WebText2. We find that generalization depends almost exclusively on the in-distribution validation loss, and does not depend on the duration of training or proximity to convergence. We also observe no dependence on model depth (see Appendix D.8).

Figure 8: Left: Generalization performance to other data distributions improves smoothly with model size, with only a small and very slowly growing offset from the WebText2 training distribution. Right: Generalization performance depends only on training distribution performance, and not on the phase of training. We compare generalization of converged models (points) to that of a single large model (dashed curves) as it trains.

3.3 Performance with Dataset Size and Compute

We display empirical trends for the test loss as a function of dataset size D (in tokens) and training compute C in Figure 1.

For the trend with D we trained a model with (n_layer, d_model) = (36, 1280) on fixed subsets of the WebText2 dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be fit with a simple power-law

L(D) ≈ (D_c/D)^α_D    (3.2)

in the dataset size. The data and fit appear in Figure 1.

The total amount of non-embedding compute used during training can be estimated as C = 6NBS, where B is the batch size, S is the number of parameter updates, and the factor of 6 accounts for the forward and backward passes. Thus for a given value of C we can scan over all models with various N to find the model with the best performance on step S = C/(6NB). Note that in these results the batch size B remains fixed for all models, which means that these empirical results are not truly optimal. We will account for this in later sections using an adjusted C_min to produce cleaner trends.

The result appears as the heavy black line on the left-hand plot in Figure 1. It can be fit with

L(C) ≈ (C_c/C)^α_C    (3.3)

The figure also includes images of individual learning curves to clarify when individual models are optimal. We will study the optimal allocation of compute more closely later on. The data strongly suggests that sample efficiency improves with model size, and we also illustrate this directly in Figure 19 in the appendix.
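The fits in this section are straight lines in log-log space. A minimal sketch of such a fitting procedure (not the exact fitting code used for the paper) is:

```python
import numpy as np


def fit_power_law(x: np.ndarray, loss: np.ndarray) -> tuple[float, float]:
    """Fit L ≈ (x_c / x)^alpha by least squares in log-log space.
    x can be N, D, or C_min; returns (alpha, x_c)."""
    slope, intercept = np.polyfit(np.log(x), np.log(loss), 1)
    alpha = -slope                    # log L = alpha*log(x_c) - alpha*log(x)
    x_c = np.exp(intercept / alpha)
    return alpha, x_c


if __name__ == "__main__":
    # Synthetic example: recover alpha = 0.076 from noiseless fake data.
    n = np.logspace(6, 9, 10)
    loss = (8.8e13 / n) ** 0.076
    print(fit_power_law(n, loss))
```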

4 Charting the Infinite Data Limit and Overfitting

In Section 3 we found a number of basic scaling laws for language modeling performance. Here we will study the performance of a model of size N trained on a dataset with D tokens while varying N and D simultaneously. We will empirically demonstrate that the optimally trained test loss accords with the scaling law of Equation (1.5). This provides guidance on how much data we would need to train models of increasing size while keeping overfitting under control.

4.1 Proposed Equation

Figure 9: The early-stopped test loss L(N, D) depends predictably on the dataset size D and model size N according to Equation (1.5). Left: For large D, performance is a straight power law in N. For a smaller fixed D, performance stops improving as N increases and the model begins to overfit. (The reverse is also true, see Figure 4.) Right: The extent of overfitting depends predominantly on the ratio N^(α_N/α_D)/D, as predicted in Equation (4.3). The line is our fit to that equation.

We have chosen the parameterization (1.5) (repeated here for convenience):

L(N, D) = [ (N_c/N)^(α_N/α_D) + D_c/D ]^α_D    (4.1)

using three principles:

  1. Changes in vocabulary size or tokenization are expected to rescale the loss by an overall factor. The parameterization of L(N, D) (and all models of the loss) must naturally allow for such a rescaling.

  2. Fixing D and sending N → ∞, the overall loss should approach L(D). Conversely, fixing N and sending D → ∞, the loss must approach L(N).

  3. L(N, D) should be analytic at D = ∞, so that it has a series expansion in 1/D with integer powers. Theoretical support for this principle is significantly weaker than for the first two.

Our choice of L(N, D) satisfies the first requirement because we can rescale N_c and D_c with changes in the vocabulary. This also implies that the values of N_c and D_c have no fundamental meaning.

Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we expect that larger models should always perform better than smaller models. But with fixed finite D, we also do not expect any model to be capable of approaching the best possible loss (ie the entropy of text). Similarly, a model with fixed size will be capacity-limited. These considerations motivate our second principle. Note that knowledge of L(N) at infinite D and L(D) at infinite N fully determines all the parameters in L(N, D).

The third principle is more speculative. There is a simple and general reason one might expect overfitting to scale as 1/D at very large D. Overfitting should be related to the variance or the signal-to-noise ratio of the dataset 1710.03667 , and this scales as 1/D. This expectation should hold for any smooth loss function, since we expect to be able to expand the loss about the D → ∞ limit. However, this argument assumes that 1/D corrections dominate over other sources of variance, such as the finite batch size and other limits on the efficacy of optimization. Without empirical confirmation, we would not be very confident of its applicability.

Our third principle explains the asymmetry between the roles of N and D in Equation (1.5). Very similar symmetric expressions are possible (for example, one might have used an alternative functional form symmetric in the two terms), but they would not have a 1/D expansion with integer powers, and would require the introduction of an additional parameter.

In any case, we will see that our equation for fits the data well, which is the most important justification for our ansatz.

4.2 Results

We regularize all our models with 10% dropout, and by tracking test loss and stopping once it is no longer decreasing. The results are displayed in Figure 9, including a fit to the four parameters in Equation (1.5):

Table 2: Fits to L(N, D).

We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of 1024, to about 2 × 10^7 tokens. With such a small dataset, an epoch consists of only 40 parameter updates. Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happens very early in training (see Figure 16). Also note that the parameters differ very slightly from those obtained in Section 3, as here we are fitting the full L(N, D) rather than just L(N, ∞) or L(∞, D).

To chart the borderlands of the infinite data limit, we can directly study the extent of overfitting. For all but the largest models, we see no sign of overfitting when training with the full 22B token WebText2 dataset, so we can take it as representative of D = ∞. Thus we can compare finite D to the infinite data limit by defining

δL(N, D) ≡ L(N, D)/L(N, ∞) − 1    (4.2)

and studying it as a function of N and D. In fact, we see empirically that δL depends only on a specific combination of N and D, as shown in Figure 16. This follows from the scaling law of Equation (1.5), which implies

δL ≈ (1 + (N/N_c)^(α_N/α_D) · D_c/D)^α_D − 1    (4.3)

Note that at large D this formula also has a series expansion in powers of 1/D.

We estimate that the variation in the loss with different random seeds is roughly 0.02, which means that to avoid overfitting when training to within that threshold of convergence we require

D ≳ (5 × 10^3) N^0.74    (4.4)

With this relation, models smaller than 10^9 parameters can be trained with minimal overfitting on the 22B token WebText2 dataset, but our largest models will encounter some mild overfitting. More generally, this relation shows that dataset size may grow sub-linearly in model size while avoiding overfitting. Note however that this does not typically represent maximally compute-efficient training. We should also emphasize that we have not optimized regularization (eg the dropout probability) while varying dataset and model size.

5 Scaling Laws with Model Size and Training Time

In this section we will demonstrate that a simple scaling law provides a good description for the loss as a function of model size and training time. First we will explain how to use the results of 1812.06162 to define a universal training step S_min, which accounts for the fact that most of our models have not been trained at an optimal batch size. Then we will demonstrate that we can fit the model size and training time dependence of the loss using Equation (1.6). Later we will use these results to predict the optimal allocation of training compute between model size and training time, and then confirm that prediction.

5.1 Adjustment for Training at B_crit(L)

Figure 10: The critical batch size B_crit follows a power law in the loss as performance increases, and does not depend directly on the model size. We find that the critical batch size approximately doubles for every 13% decrease in loss. B_crit is measured empirically from the data shown in Figure 18, but it is also roughly predicted by the gradient noise scale, as in 1812.06162 .

A simple empirical theory for the batch size dependence of training was developed in 1812.06162 (see also 1811.03600 ; DBLP:journals/corr/abs-1907-04164 ). It was argued that there is a critical batch size B_crit for training; for B up to B_crit the batch size can be increased with very minimal degradation in compute-efficiency, whereas for B > B_crit increases in B result in diminishing returns. It was also argued that the gradient noise scale provides a simple prediction for B_crit, and that neither depends directly on model size except through the value of the loss that has been attained. These results can be used to predict how training time and compute will vary with the batch size. To utilize both training time and compute as effectively as possible, it is best to train with a batch size B ≈ B_crit. Training at B ≫ B_crit minimizes the number of training steps, while B ≪ B_crit minimizes the use of compute.

More specifically, it was demonstrated that for a wide variety of neural network tasks, the number of training steps S and the number of data examples processed E = BS satisfy the simple relation

(S/S_min − 1)(E/E_min − 1) = 1    (5.1)

when training to any fixed value of the loss L. Here S_min is the minimum number of steps necessary to reach L, while E_min is the minimum number of data examples that must be processed.

We demonstrate the relation (5.1) for Transformers in Figure 18 in the appendix. This relation defines the critical batch size

B_crit(L) ≡ E_min/S_min    (5.2)

which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal time/compute tradeoff, requiring 2 S_min training steps and processing E = 2 E_min data examples.

In Figure 10 we have plotted the critical batch size and gradient noise scale (although the critical batch size roughly matches the gradient noise scale, we use direct measurements of B_crit from Figures 18 and 10 for all our later analyses) as a function of training loss for two different models. We see that B_crit(L) is independent of model size, and only depends on the loss L. So the predictions of 1812.06162 continue to hold for Transformer language models. The critical batch size can be fit with a power-law in the loss

B_crit(L) ≈ B*/L^(1/α_B)    (5.3)

where B* ≈ 2 × 10^8 tokens and α_B ≈ 0.21.

We have chosen this parameterization for B_crit(L) because as the loss approaches its minimum value L_min, the gradient noise scale is expected to diverge, and we expect B_crit to track this noise scale. We do not know L_min, as we see no sign that our models are approaching it, but L_min > 0 since the entropy of natural language is non-zero. Since apparently L_min is much smaller than the values of L we have achieved, we used a parameterization where B_crit diverges as L → 0.

We will use B_crit to estimate the relation between the number of training steps S while training at batch size B tokens and the number of training steps while training at B ≫ B_crit. This is simply

S_min(S) ≡ S / (1 + B_crit(L)/B)    (minimum steps, at B ≫ B_crit)    (5.4)

for any given target value L for the loss. This also defines a critical value of the compute needed to train to L with a model of size N if we were to train at B ≪ B_crit(L). This is

C_min(C) ≡ C / (1 + B/B_crit(L))    (minimum compute, at B ≪ B_crit)    (5.5)

where C = 6NBS estimates the (non-embedding) compute used at batch size B.
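These adjustments are simple to apply to a logged training run. The sketch below uses the fitted B* and α_B quoted above; batch sizes are in tokens, and the constants are again illustrative.

```python
# Sketch: convert raw steps/compute into the batch-size-adjusted S_min and C_min.
# b_star and alpha_b are the illustrative fits for B_crit(L) quoted in the text.

def critical_batch_tokens(loss: float, b_star: float = 2e8, alpha_b: float = 0.21) -> float:
    """B_crit(L) ≈ B* / L^(1/alpha_B), in tokens (Equation 5.3)."""
    return b_star / loss ** (1.0 / alpha_b)


def adjusted_steps(steps: float, batch_tokens: float, loss: float) -> float:
    """S_min: steps that would have been needed at B >> B_crit (Equation 5.4)."""
    return steps / (1.0 + critical_batch_tokens(loss) / batch_tokens)


def adjusted_compute(compute: float, batch_tokens: float, loss: float) -> float:
    """C_min: compute that would have been needed at B << B_crit (Equation 5.5)."""
    return compute / (1.0 + batch_tokens / critical_batch_tokens(loss))


if __name__ == "__main__":
    # A hypothetical run: 2^19-token batches, 250k steps, final loss 3.0 nats.
    b, s, loss = 2**19, 2.5e5, 3.0
    print(f"B_crit ≈ {critical_batch_tokens(loss):.2e} tokens")
    print(f"S_min ≈ {adjusted_steps(s, b, loss):.2e} steps")
```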

5.2 Results for L(N, S_min) and Performance with Model Size and Compute

Figure 11: When we hold either total compute or number of training steps fixed, performance follows L(N, S) from Equation (5.6). Each value of compute budget has an associated optimal model size that maximizes performance. Mediocre fits at small S are unsurprising, as the power-law equation for the learning curves breaks down very early in training.

Now we will use S_min defined in Equation (5.4) to obtain a simple and universal fit for the dependence of the loss on model size and training time in the infinite data limit. We will fit the stable, Adam-optimized training runs using Equation (1.6), repeated here for convenience:

L(N, S_min) = (N_c/N)^α_N + (S_c/S_min)^α_S    (5.6)

for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit to the data with the parameters:

Table 3: Fits to L(N, S_min).

With these parameters, we obtain the learning curve fits in Figure 4. Though the fits are imperfect, we believe they are quite compelling given the simplicity of Equation (5.6).

The data and fits can be visualized in a different and more interesting way, as shown in Figure 11. There we study the test loss as a function of model size while fixing either the total non-embedding compute used in training, or the number of steps . For the fits we use Equation (5.5) and (5.4) along with the parameters above and Equation (5.6).

The power-law dependence of the loss on S_min reflects the interplay of optimizer dynamics and the loss landscape. Since the fits are best late in training, when the loss may be approximately quadratic, the power-law should provide information about the spectrum of the Hessian of the loss. Its universality suggests that the Hessian eigenvalue density is roughly independent of model size.

5.3 Lower Bound on Early Stopping Step

The results for L(N, S_min) can be used to derive a lower-bound (and rough estimate) of the step at which early stopping should occur when training is data limited. It is motivated by the idea that finite and infinite D learning curves for a given model will be very similar until we reach S_min ≈ S_stop. Thus overfitting should be proportional to the correction from simply ending training at S_stop. This will underestimate S_stop, because in reality the test loss will decrease more slowly when we have a finite D, and therefore we will require more training steps to reach the optimal test loss at finite D. This line of reasoning leads to the inequality

S_stop(N, D) ≳ S_c / [L(N, D) − L(N, ∞)]^(1/α_S)    (5.7)

where L(N, ∞) is the converged loss, evaluated with infinite available data. This inequality and its comparison to the empirical data is displayed in Figure 16 in the appendix. In that figure, the values of S_stop and L(N, D) are empirical (though S_stop is adjusted to mimic training at B ≫ B_crit), while L(N, ∞) is computed from the fit to L(N, D) evaluated at D = ∞.
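A short sketch of this bound follows; the exact closed form of Equation (5.7) is taken as an assumption consistent with the reasoning above, and the constants are the illustrative fits quoted earlier.

```python
# Sketch of the early-stopping lower bound implied by Section 5.3:
#   S_stop(N, D) >~ S_c / [L(N, D) - L(N, inf)]^(1/alpha_S)
# The exact form is assumed, not copied from the text; constants are illustrative fits.

ALPHA_N, ALPHA_D, ALPHA_S = 0.076, 0.095, 0.76
N_C, D_C, S_C = 8.8e13, 5.4e13, 2.1e3


def loss_n_d(n: float, d: float) -> float:
    """Early-stopped test loss from the combined scaling law (Equation 1.5)."""
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D


def early_stop_lower_bound(n: float, d: float) -> float:
    """Lower bound on the (batch-size-adjusted) early-stopping step."""
    gap = loss_n_d(n, d) - loss_n_d(n, float("inf"))
    return S_C / gap ** (1.0 / ALPHA_S)


if __name__ == "__main__":
    print(f"S_stop >~ {early_stop_lower_bound(n=1e8, d=1e9):.2e} adjusted steps")
```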

Figure 12: Left: Given a fixed compute budget, a particular model size is optimal, though somewhat larger or smaller models can be trained with minimal additional compute. Right: Models larger than the compute-efficient size require fewer steps to train, allowing for potentially faster training if sufficient additional parallelism is possible. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve, after initial transient effects.

6 Optimal Allocation of the Compute Budget

We displayed the empirical trend of performance as a function of the computation used during training in the top-right of Figure 1. However, this result involved training at a fixed batch size B, whereas we know that in fact we could train more efficiently by training at the batch size B_crit discussed in Section 5.1. (One might ask why we did not simply train at B_crit in the first place. The reason is that it depends not only on the model but also on the target value of the loss we wish to achieve, and so is a moving target.) Large and small values of the loss could have been achieved with fewer samples or fewer steps, respectively, and correcting for this inefficiency by standardizing to the critical batch size results in cleaner and more predictable trends.

Figure 13: When adjusting performance to simulate training far below the critical batch size, we find a somewhat altered power law for L(C_min) when compared with the fully empirical results. The conspicuous lump at the low-compute end marks the transition from 1-layer to 2-layer networks; we exclude 1-layer networks in the power-law fits. It is the L(C_min) trend that we expect to provide a reliable extrapolation for larger compute.

In this section we will adjust for this oversight. More importantly, we will use the results of Section 5 to determine the optimal allocation of compute between model size N and the quantity of data processed during training, namely 2 B_crit S_min. We will determine this allocation both empirically and theoretically, by using the equation for L(N, S_min), and we will demonstrate that these methods agree.

6.1 Optimal Performance and Allocations

Let us first study the loss as a function of the optimally allocated compute from Equation (5.5). The result is plotted in Figure 13, along with a power-law fit. We see that as compared to the compute plot of Figure 1, the new fit with C_min is somewhat improved.

Given L(C_min), it is natural to ask for the optimal model size N(C_min) that provides the minimal loss with a given quantity of training compute. The optimal model size is shown in Figure 14. We observe that N(C_min) can be fit very well with a power-law

N(C_min) ∝ C_min^0.73    (6.1)

In Figure 12, we show the effect of training models of sub-optimal sizes (see Appendix B.4).

By definition C_min ≡ 6 N B_crit S, and so we can use N(C_min) to extract further results. In particular, since prior fits show B_crit ∝ L^(−1/α_B) and L ∝ C_min^(−0.05), we can conclude that B_crit ∝ C_min^0.24. This leads us to conclude that the optimal number of steps will only grow very slowly with compute, as

S_min ∝ C_min^0.03    (6.2)

matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results may even be consistent with an exponent of zero.

Thus we conclude that as we scale up language modeling with an optimal allocation of computation, we should predominantly increase the model size N, while simultaneously scaling up the batch size via B ∝ B_crit, with negligible increase in the number of serial steps. Since compute-efficient training uses relatively few optimization steps, additional work on speeding up early training dynamics may be warranted.

Figure 14: Left: Each value of the compute budget C_min has an associated optimal model size N. Optimal model size grows very rapidly with C_min, increasing by 5x for each 10x increase in compute. The number of data examples processed makes up the remainder of the increase, growing relatively modestly by only 2x. Right: The batch-adjusted number of optimization steps also grows very slowly, if at all, meaning that most of the growth in data examples processed can be used for increased batch sizes.

6.2 Predictions from L(N, S_min)

The results for L(C_min) and the allocations can be predicted from the L(N, S_min) equation obtained in Section 5. Given our equation for L(N, S_min), we can substitute S_min = C_min/(6 N B_crit) and then find the minimum of the loss as a function of N, while fixing the training compute. We carry out this procedure in detail in Appendix B, where we also provide some additional predictions.

For the loss as a function of training compute, we predict that

L(C_min) = (C_c^min/C_min)^(α_C^min)    (6.3)

where

α_C^min ≡ 1/(1/α_S + 1/α_B + 1/α_N) ≈ 0.054    (6.4)

in excellent agreement with the exponent of Figure 13. We also predict that

N(C_min) ∝ C_min^(α_C^min/α_N) ≈ C_min^0.71    (6.5)

which also matches the scaling of Figure 14 to within a few percent. Our scaling laws provide a predictive framework for the performance of language modeling.

6.3 Contradictions and a Conjecture

Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for L(C_min) and L(D) due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits.

We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has non-zero entropy.

Indeed, the trends for compute-efficient training described in this section already contain an apparent contradiction. At scales several orders of magnitude above those documented here, the performance predicted by the L(C_min) scaling law decreases below what should be possible given the slow growth in training data with compute. This implies that our scaling laws must break down before this point, but we conjecture that the intersection point has a deeper meaning: it provides an estimate of the point at which Transformer language models reach maximal performance.

Since the amount of data used by compute-efficient training grows slowly with the compute budget, the performance predicted by L(C_min) eventually hits a lower bound set by the L(D) power law (see Figure 15). Let us work this out in more detail.

To keep overfitting under control, the results of Section 4 imply that we should scale the dataset size as

D ∝ N^0.74 ∝ C_min^0.54    (6.6)

where we have used the compute-efficient N(C_min) ∝ C_min^0.73 from Figure 14.

Let us compare this to the data requirements of compute-efficient training. If we train at the critical batch size (i.e. C = 2 C_min) and never re-use data during training, we find that data usage grows with compute as

D(C_min) = 2 C_min/(6 N(C_min)) ∝ C_min^0.26 (tokens)    (6.7)

This is the maximum rate at which the dataset size can productively grow with compute, since it means that we are only training for a single epoch. But it grows the dataset much more slowly than in Equation (6.6). It appears to imply that compute-efficient training will eventually run into a problem with overfitting, even if the training process never re-uses any data!

According to Figure 1, we expect that when we are bottlenecked by the dataset size (ie by overfitting), the loss should scale as L(D) ∝ D^(−0.095). This implies that the loss would scale with compute as L(D(C_min)) ∝ C_min^(−0.03) once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with our prediction for L(C_min) from Figure 13, where we found a scaling L(C_min) ∝ C_min^(−0.050).

The intersection point of L(D(C_min)) and L(C_min) occurs at

C* ∼ 10^4 PF-days, N* ∼ 10^12 parameters, D* ∼ 10^12 tokens, L* ∼ 1.7 nats/token    (6.8)

though the numerical values are highly uncertain, varying by an order of magnitude in either direction depending on the precise values of the exponents from the power-law fits. The most obvious interpretation is that our scaling laws break down at or before we reach this point, which is still many orders of magnitude away in both compute and model size.

One might also conjecture that this intersection point has a deeper meaning. If we cannot increase the model size beyond N* without qualitatively different data requirements, perhaps this means that once we reach C* and N*, we have extracted all of the reliable information available in natural language data. In this interpretation, L* would provide a rough estimate for the entropy-per-token of natural language (defining words using the wc utility, the WebText2 dataset has about 1.4 tokens per word and 4.2 characters per token). In this scenario, we would expect the loss trend to level off at or before L*.

We can guess at the functional form of L(C_min) as it levels off by considering a version of our training dataset with added noise. For example, we could append a random string of tokens to each context shown to the model to artificially boost the loss by a constant additive factor. Then, the distance from the noise floor would be a more meaningful performance metric, with even a small decrease in this distance potentially representing a significant boost in qualitative performance. Since the artificial noise would affect all of our trends equally, the critical point of Equation (6.8) would not change (aside from the absolute value of L*), and may be meaningful even if it occurs after the leveling off.
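The intersection argument reduces to finding where two power-law trends for the loss cross. A generic helper follows; the constants in the example are placeholders for illustration, since the fitted prefactor of the data-limited trend is not quoted above.

```python
# Sketch: where do two decreasing power-law trends for the loss intersect?
#   L1(C) = k1 * C**(-a1)   (compute-efficient trend)
#   L2(C) = k2 * C**(-a2)   (data-limited trend)
# Setting L1 = L2 gives C* = (k2 / k1) ** (1 / (a2 - a1)).
# The example constants below are placeholders, not fitted values.

def power_law_intersection(k1: float, a1: float, k2: float, a2: float) -> float:
    if a1 == a2:
        raise ValueError("parallel trends never intersect")
    return (k2 / k1) ** (1.0 / (a2 - a1))


if __name__ == "__main__":
    c_star = power_law_intersection(k1=2.6, a1=0.050, k2=1.9, a2=0.026)
    print(f"C* ≈ {c_star:.1e} (same compute units as the inputs)")
```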

7 Related Work

Power laws can arise from a wide variety of sources thurner2018introduction . Power-law scalings with model and dataset size in density estimation wasserman2006all and in random forest models biau2012analysis may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as the inverse of the number of relevant features in the data.

Some early banko2001scaling ; DBLP:journals/corr/cs-CL-0108005 work found power-law scalings between performance and dataset size. More recent work 1712.00409 ; Hestness:2019:BHA:3293883.3295710 also investigated scaling between model size and data size; their work is perhaps the closest to ours in the literature. (After this work was completed, rosenfeld2019constructive also appeared, which makes similar predictions for the dependence of loss on both model and dataset size.) Note, however, that 1712.00409 found super-linear scaling of dataset size with model size, whereas we find a sub-linear scaling. There are some parallels between our findings on optimal allocation of compute and 1906.06669 , including power-law learning curves. EfficientNets DBLP:journals/corr/abs-1905-11946 also appear to obey an approximate power-law relation between accuracy and model size. Very recent work 1909.12673 studies scaling with both dataset size and model size for a variety of datasets, and fits an ansatz similar to ours.

EfficientNet DBLP:journals/corr/abs-1905-11946 advocates scaling depth and width exponentially (with different coefficients) for optimal performance of image models, resulting in a power-law scaling of width as a function of depth. We find that for language models this power should be roughly one when scaling up (as width/depth should remain fixed). But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model. In ResNetsEnsemblesShallow it was argued that deep models can function as ensembles of shallower models, which could potentially explain this finding. Earlier work Zagoruyko_2016 has compared width and depth, and found that wide ResNets can outperform deep ResNets on image classification. Some studies fix computation per data example, which tends to scale in proportion to the number of model parameters, whereas we investigate scaling with both model size and the quantity of training computation.

Various works 1710.03667 ; 1812.11118 have investigated generalization in highly overparameterized models, finding a “jamming transition” 1901.01608 when the model size reaches the dataset size (this may require training many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width jacot2018neural ; 1902.06720 , may provide a useful framework for thinking about some of our scaling relations. Our results on optimization, such as the shape of learning curves, can likely be explained using a noisy quadratic model, which can provide quite accurate predictions DBLP:journals/corr/abs-1907-04164 in realistic settings. Making this connection quantitative will require a characterization of the Hessian spectrum DBLP:journals/corr/abs-1811-07062 ; DBLP:journals/corr/abs-1901-10159 ; unpublished-grd .

8 Discussion

We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count N, dataset size D, and optimized training computation C_min, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Since scalings with N, D, C_min are power-laws, there are diminishing returns with increasing scale.

We were able to precisely model the dependence of the loss on N and D, and alternatively on N and S, when these parameters are varied simultaneously. We used these relations to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements when training large language models. So our scaling relations go beyond mere observation to provide a predictive framework. One might interpret these relations as analogues of the ideal gas law, which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic constituents.

It is natural to conjecture that the scaling relations will apply to other generative modeling tasks with a maximum likelihood loss, and perhaps in other settings as well. To this purpose, it will be interesting to test these relations on other domains, such as images, audio, and video models, and perhaps also for random network distillation. At this point we do not know which of our results depend on the structure of natural language data, and which are universal. It would also be exciting to find a theoretical framework from which the scaling relations can be derived: a ‘statistical mechanics’ underlying the ‘thermodynamics’ we have observed. Such a theory might make it possible to derive other more precise predictions, and provide a systematic understanding of the limitations of the scaling laws.

In the domain of natural language, it will be important to investigate whether continued improvement on the loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: “more is different”. For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improvements in language model loss may hide seemingly qualitative changes in capability.

Our results strongly suggest that larger models will continue to perform better, and will also be much more sample efficient than has been previously appreciated. Big models may be more important than big data. In this context, further investigation into model parallelism is warranted. Deep models can be trained using pipelining DBLP:journals/corr/abs-1811-06965 , which splits parameters depth-wise between devices, but eventually requires increased batch sizes as more devices are used. Wide networks on the other hand are more amenable to parallelization shazeer2018meshtensorflow , since large layers can be split between multiple workers with less serial dependency. Sparsity DBLP:journals/corr/abs-1904-10509 ; gray2017gpu or branching (e.g. Krizhevsky:2012:ICD:2999134.2999257 ) may allow for even faster training of large networks through increased model parallelism. And using methods like Wang_2017 ; wen2019autogrow , which grow networks as they train, it might be possible to remain on the compute-efficient frontier for an entire training run.

Acknowledgements

We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feedback on drafts of this work.

Appendix A Summary of Power Laws

For easier reference, we provide a summary below of the key trends described throughout the paper.

Table 4: Summary of the scaling laws for the loss, organized by which of the parameters, data, compute, and batch size are held fixed, optimized, or early-stopped in each regime.

The empirical fitted values for these trends are:

Power Law | Scale (tokenization-dependent)
α_N = 0.076 | N_c = 8.8 × 10^13 params (non-embed)
α_D = 0.095 | D_c = 5.4 × 10^13 tokens
α_C = 0.057 | C_c (PF-days, fixed batch size)
α_C^min = 0.050 | C_c^min = 3.1 × 10^8 PF-days
α_B = 0.21 | B* ≈ 2 × 10^8 tokens
α_S = 0.76 | S_c = 2.1 × 10^3 steps
Table 5: Empirical fitted values for these trends.

The optimal parameters for compute efficient training are given by:

Table 6: The optimal model size N_eff (params), batch size B (tokens), lower bound on the number of steps S_min (steps), and dataset size D for one epoch (tokens), each as a power law in the compute budget for compute-efficient training.

Appendix B Empirical Model of Compute-Efficient Frontier

Throughout this appendix all values of C, S, and α_C are adjusted for training at the critical batch size B_crit. We have left off the 'adj' label to avoid cluttering the notation.

B.1 Defining Equations

The power-law fit to the learning curves implies a simple prescription for compute-efficient training. In this appendix, we will derive the optimal performance, model size, and number of training steps as a function of the compute budget. We start with Equation (1.6), repeated here for convenience:

L(N, S_min) = (N_c/N)^α_N + (S_c/S_min)^α_S    (B.1)

Here, S_min represents the number of parameter updates when training at the critical batch size 1812.06162 , which was defined in Equation (5.2). (There is a slight ambiguity here: we can imagine training either at a constant batch size set by the target value of the loss, or instead at a variable batch size that tracks the instantaneous critical batch size rather than the averaged version. These two prescriptions result in the same number of steps, so we can ignore this subtlety; see 1812.06162 .)

B_crit(L) = B*/L^(1/α_B)    (B.2)

We would like to determine optimal training parameters for a fixed compute budget, so we replace S_min = C/(6 N B_crit(L)), where C is the number of FLOPs used in the training run:

L(N, C) = (N_c/N)^α_N + (6 B_crit(L) S_c N/C)^α_S    (B.3)

Now, we set ∂L/∂N|_C = 0 to find the condition for optimality:

0 = ∂L/∂N|_C = −(α_N/N)(N_c/N)^α_N + (α_S/N)(6 B_crit(L) S_c N/C)^α_S    (B.4)

where B_crit(L) is treated as approximately independent of N at the fixed target loss. Equations (B.3) and (B.4) together determine the compute-efficient frontier.
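The optimality condition has a simple consequence worth making explicit. The following worked sketch assumes the reconstructed forms of (B.1) and (B.3) above and treats B_crit as fixed at the target loss:

```latex
% Worked sketch, assuming L(N, C) = (N_c/N)^{\alpha_N} + (6 B_{\rm crit} S_c N / C)^{\alpha_S}
% with B_{\rm crit} held fixed at the target loss.
\begin{align}
0 = \left.\frac{\partial L}{\partial N}\right|_{C}
  &= -\frac{\alpha_N}{N}\left(\frac{N_c}{N}\right)^{\alpha_N}
     + \frac{\alpha_S}{N}\left(\frac{6 B_{\rm crit} S_c N}{C}\right)^{\alpha_S} \\
\Longrightarrow \quad
\left(\frac{S_c}{S_{\min}}\right)^{\alpha_S}
  &= \frac{\alpha_N}{\alpha_S}\left(\frac{N_c}{N}\right)^{\alpha_N},
\end{align}
so at the compute-efficient optimum
\begin{equation}
L_{\rm opt} = \left(1 + \frac{\alpha_N}{\alpha_S}\right)\left(\frac{N_c}{N}\right)^{\alpha_N}
\approx 1.10 \, L(N, \infty),
\end{equation}
i.e. training stops roughly $\alpha_N/\alpha_S \approx 10\%$ above the converged loss,
consistent with the fixed-percentage statement following (B.5) below.
```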

B.2 Efficient Training

Now we assemble the implications of (B.3) and (B.4). First, note that inserting (B.4) into (B.3) yields

(B.5)

which implies that for compute-efficient training, we should train to a fixed percentage above the converged loss. Next, let's determine how the optimal loss depends on the compute budget. Eliminating N yields a power-law dependence of performance on compute:

(B.6)

where we defined

(B.7)
(B.8)

Similarly, we can eliminate L to find N(C):

(B.9)

and

(B.10)

B.3 Comparison to Inefficient Convergence

Typically, researchers train models until they appear to be close to convergence. In this section, we compare the efficient training procedure described above to this more typical setup. We define the convergence factor as the percent deviation from the converged loss:

(B.11)

For compute-efficient training we have a convergence factor of α_N/α_S ≈ 10% from the previous section, but researchers typically use a much smaller value. Here, we choose 2% as an estimate. For a fixed value of the loss, we predict:

(B.12)
(B.13)
(B.14)

So that compute-efficient training uses 7.7x fewer parameter updates, 2.7x more parameters, and 65% less compute to reach the same loss.

B.4 Suboptimal Model Sizes

We can solve Equation (B.1) to find an expression for the amount of compute needed to reach a given value of the loss L with a model of size N:

(B.15)

Using Equations (B.6) and (B.9), we can eliminate L in favor of N_eff(L), the model size which reaches L most efficiently. From there, we find an expression for the excess compute needed as a consequence of using a suboptimal model size:

(B.16)

The result is shown in Figure 12 (left). Models between 0.6x and 2.2x the optimal size can be used with only a 20% increase in compute budget. Using a smaller model is useful when accounting for the cost of inference. A larger model can be trained to the same level of performance in fewer steps, allowing for more parallelism and faster training if sufficient hardware is available (see Figure 12, right):

(B.17)

A 2.2x larger model requires 45% fewer steps at a cost of 20% more training compute. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve after initial transient effects.
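The closed forms above did not survive formatting, but the same comparison can be made numerically from the assumed L(N, S_min) form: for a target loss, compute the training compute needed at each model size and compare against the best size. This is a rough numerical sketch under the stated assumptions, not the paper's expression (B.16); constants are the illustrative fits quoted earlier.

```python
# Numerical sketch of the excess compute from training at a suboptimal model size,
# assuming L(N, S_min) = (N_c/N)^alpha_N + (S_c/S_min)^alpha_S and C_min = 6 N B_crit S_min,
# with B_crit evaluated at the (fixed) target loss.

ALPHA_N, ALPHA_S, ALPHA_B = 0.076, 0.76, 0.21
N_C, S_C, B_STAR = 8.8e13, 2.1e3, 2e8


def compute_to_reach(loss: float, n: float) -> float:
    """Adjusted compute (PF-days) needed to reach `loss` with model size n."""
    gap = loss - (N_C / n) ** ALPHA_N          # budget left for the S_min term
    if gap <= 0:
        return float("inf")                    # model too small to ever reach `loss`
    s_min = S_C / gap ** (1.0 / ALPHA_S)
    b_crit = B_STAR / loss ** (1.0 / ALPHA_B)  # tokens
    return 6 * n * b_crit * s_min / 8.64e19    # FLOPs -> PF-days


if __name__ == "__main__":
    target = 3.0                                         # nats/token, arbitrary example
    sizes = [10 ** (7 + 0.1 * i) for i in range(40)]     # 1e7 .. ~1e11 params
    costs = {n: compute_to_reach(target, n) for n in sizes}
    n_best = min(costs, key=costs.get)
    c_best = costs[n_best]
    print(f"optimal N ≈ {n_best:.2e}, C_min ≈ {c_best:.3f} PF-days")
    for ratio in (0.5, 2.0):
        excess = compute_to_reach(target, ratio * n_best) / c_best
        print(f"N = {ratio}x optimal: {excess:.2f}x compute")
```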

Appendix C Caveats

In this section we list some potential caveats to our analysis.

  • At present we do not have a solid theoretical understanding for any of our proposed scaling laws. The scaling relations with model size and compute are especially mysterious. It may be possible to understand scaling at very large D holding model size fixed 1710.03667 , and also the shape of learning curves late in training, by modeling the loss with a noisy quadratic. But the scaling with N at very large model size still remains mysterious. Without a theory or a systematic understanding of the corrections to our scaling laws, it's difficult to determine in what circumstances they can be trusted.

  • We are not especially confident in the prediction of