1 Introduction
Language provides a natural domain for the study of artificial intelligence, as the vast majority of reasoning tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in language modeling, with state-of-the-art models
radford2018improving ; 1810.04805 ; 1906.08237 ; DBLP:journals/corr/abs190711692 ; 1910.10683 approaching human-level performance on many specific tasks wang2019superglue , including the composition of coherent multi-paragraph prompted text samples radford2019language . One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these factors, focusing on the Transformer architecture OriginalTransformer ; liu2018generating . The high ceiling and low floor for performance on language tasks allows us to study trends over more than seven orders of magnitude in scale.
Throughout we will observe precise power-law scalings for performance as a function of training time, context length, dataset size, model size, and compute budget.
1.1 Summary
Our key findings for Transformer language models are as follows:
Performance depends strongly on scale, weakly on model shape:
Model performance depends most strongly on scale, which consists of three factors: the number of model parameters N (excluding embeddings), the size of the dataset D, and the amount of compute C used for training. Within reasonable limits, performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section 3)
Smooth power laws:
Performance has a power-law relationship with each of the three scale factors N, D, C when not bottlenecked by the other two, with trends spanning more than six orders of magnitude (see Figure 1). We observe no signs of deviation from these trends on the upper end, though performance must flatten out eventually before reaching zero loss. (Section 3)
Universality of overfitting:
Performance improves predictably as long as we scale up N and D in tandem, but enters a regime of diminishing returns if either N or D is held fixed while the other increases. The performance penalty depends predictably on the ratio N^{0.74}/D, meaning that every time we increase the model size 8x, we only need to increase the data by roughly 5x to avoid a penalty. (Section 4)
Universality of training:
Training curves follow predictable power laws whose parameters are roughly independent of the model size. By extrapolating the early part of a training curve, we can roughly predict the loss that would be achieved if we trained for much longer. (Section 5)
Transfer improves with test performance:
When we evaluate models on text with a different distribution than they were trained on, the results are strongly correlated to those on the training validation set with a roughly constant offset in the loss – in other words, transfer to a different distribution incurs a constant penalty but otherwise improves roughly in line with performance on the training set. (Section 3.2.2)
Sample efficiency:
Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps and fewer data points (see Figure 19 in the appendix).
Convergence is inefficient:
When working within a fixed compute budget C but without any other restrictions on the model size N or available data D, we attain optimal performance by training very large models and stopping significantly short of convergence (see Figure 3). Maximally compute-efficient training would therefore be far more sample efficient than one might expect based on training small models to convergence, with data requirements growing very slowly as D ∼ C^{0.27} with training compute. (Section 6)
Optimal batch size:
The ideal batch size for training these models is roughly a power of the loss only, and continues to be determinable by measuring the gradient noise scale 1812.06162 ; it is roughly 1–2 million tokens at convergence for the largest models we can train. (Section 5.1)
Taken together, these results show that language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. We expect that larger language models will perform better and be more sample efficient than current models.
1.2 Summary of Scaling Laws
The test loss of a Transformer trained to autoregressively model language can be predicted using a power-law when performance is limited by only either the number of non-embedding parameters N, the dataset size D, or the optimally allocated compute budget C_min (see Figure 1):
For models with a limited number of parameters, trained to convergence on sufficiently large datasets:
(1.1)  L(N) = (N_c/N)^{α_N};  α_N ∼ 0.076,  N_c ∼ 8.8 × 10^13 (non-embedding parameters)
For large models trained with a limited dataset with early stopping:
(1.2)  L(D) = (D_c/D)^{α_D};  α_D ∼ 0.095,  D_c ∼ 5.4 × 10^13 (tokens)
When training with a limited amount of compute, a sufficiently large dataset, an optimally-sized model, and a sufficiently small batch size (making optimal use of compute):^2
^2 We also observe an empirical power-law trend with the training compute C (Figure 1) while training at fixed batch size, but it is the trend with C_min that should be used to make predictions. They are related by Equation (5.5).
(1.3)  L(C_min) = (C_c^min/C_min)^{α_C^min};  α_C^min ∼ 0.050,  C_c^min ∼ 3.1 × 10^8 (PF-days)
These relations hold across eight orders of magnitude in C_min, six orders of magnitude in N, and over two orders of magnitude in D. They depend very weakly on model shape and other Transformer hyperparameters (depth, width, number of self-attention heads), with specific numerical values associated with the WebText2 training set radford2019language . The power laws specify the degree of performance improvement expected as we scale up N, D, or C_min; for example, doubling the number of parameters yields a loss that is smaller by a factor 2^{−α_N} ≈ 0.95. The precise numerical values of N_c, C_c^min, and D_c depend on the vocabulary size and tokenization and hence do not have a fundamental meaning.
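As a quick arithmetic check of the factor quoted above, the L(N) power law can be evaluated directly. The sketch below is ours, not code from the paper; it uses the fitted constants reported here, so its outputs are rough predictions rather than exact values.

```python
# Sketch: evaluating the fitted L(N) power law from Equation (1.1).
ALPHA_N = 0.076   # model-size exponent alpha_N
N_C = 8.8e13      # fitted constant N_c, in non-embedding parameters

def loss_from_params(n_nonembed: float) -> float:
    """Predicted converged test loss (nats/token) for a model with
    n_nonembed non-embedding parameters, data and compute unconstrained."""
    return (N_C / n_nonembed) ** ALPHA_N

# Doubling N shrinks the loss by the constant factor 2**(-ALPHA_N) ~ 0.95:
improvement = loss_from_params(2e8) / loss_from_params(1e8)
```

Because the law is a pure power, the improvement factor is independent of the starting model size.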
The critical batch size, which determines the speed/efficiency tradeoff for data parallelism (1812.06162 ), also roughly obeys a power law in L:
(1.4)  B_crit(L) = B*/L^{1/α_B};  B* ∼ 2 × 10^8 tokens,  α_B ∼ 0.21
Equations (1.1) and (1.2) together suggest that as we increase the model size, we should increase the dataset size sublinearly according to D ∝ N^{α_N/α_D} ∼ N^{0.74}. In fact, we find that there is a single equation combining (1.1) and (1.2) that governs the simultaneous dependence on N and D and governs the degree of overfitting:
(1.5)  L(N, D) = [(N_c/N)^{α_N/α_D} + D_c/D]^{α_D}
with fits pictured on the left in Figure 4. We conjecture that this functional form may also parameterize the trained log-likelihood for other generative modeling tasks.
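Equation (1.5) is easy to evaluate numerically. The sketch below is ours rather than the paper's code; the constants are the Section 4 fits (α_N = 0.076, α_D = 0.103, N_c = 6.4 × 10^13, D_c = 1.8 × 10^13), and the example at the bottom only checks the limiting behavior the construction requires.

```python
# Sketch of Equation (1.5), the combined L(N, D) scaling law.
ALPHA_N, ALPHA_D = 0.076, 0.103
N_C, D_C = 6.4e13, 1.8e13

def loss(n: float, d: float) -> float:
    """L(N, D): test loss for N non-embedding parameters, D training tokens."""
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D

# In the D -> infinity limit this reduces to (N_c/N)^{alpha_N}, and in
# the N -> infinity limit to (D_c/D)^{alpha_D}, as required.
limit_n = loss(1e8, 1e30)   # effectively infinite data
```

The nested-power form is what makes both single-variable laws emerge as limits of one equation.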
When training a given model for a finite number of parameter update steps S in the infinite data limit, after an initial transient period, the learning curves can be accurately fit by (see the right of Figure 4)
(1.6)  L(N, S) = (N_c/N)^{α_N} + (S_c/S_min(S))^{α_S}
where S_c ≈ 2.1 × 10^3 and α_S ≈ 0.76, and S_min(S) is the minimum possible number of optimization steps (parameter updates) estimated using Equation (5.4).
When training within a fixed compute budget C, but with no other constraints, Equation (1.6) leads to the prediction that the optimal model size N, optimal batch size B, optimal number of steps S, and dataset size D should grow as
(1.7)  N ∝ C^{α_C^min/α_N},  B ∝ C^{α_C^min/α_B},  S ∝ C^{α_C^min/α_S},  D = B · S
with
(1.8)  α_C^min = 1/(1/α_S + 1/α_B + 1/α_N) ≈ 0.054
which closely matches the empirically optimal results N ∝ C_min^{0.73}, B ∝ C_min^{0.24}, and S ∝ C_min^{0.03}. As the computational budget C increases, it should be spent primarily on larger models, without dramatic increases in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become increasingly sample efficient. In practice, researchers typically train smaller models for longer than would be maximally compute-efficient because of hardware constraints. Optimal performance depends on total compute as a power law (see Equation (1.3)).
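As a consistency check (a sketch of ours, using the fitted exponents α_N ≈ 0.076, α_B ≈ 0.21, α_S ≈ 0.76 quoted in this paper), Equation (1.8) and the allocation exponents of Equation (1.7) can be computed directly; the small difference from the quoted ≈ 0.054 reflects rounding of the fitted exponents.

```python
# Sketch: deriving the compute exponent of Eq. (1.8) and the allocation
# exponents of Eq. (1.7) from the individual fitted exponents.
ALPHA_N, ALPHA_B, ALPHA_S = 0.076, 0.21, 0.76

alpha_c_min = 1.0 / (1.0 / ALPHA_S + 1.0 / ALPHA_B + 1.0 / ALPHA_N)

# Allocation exponents: N ∝ C^{p_n}, B ∝ C^{p_b}, S ∝ C^{p_s}.
p_n = alpha_c_min / ALPHA_N
p_b = alpha_c_min / ALPHA_B
p_s = alpha_c_min / ALPHA_S

# The three exponents must sum to 1, since C_min ∝ N * B * S.
total = p_n + p_b + p_s
```

The ordering p_s < p_b < p_n is the quantitative statement that extra compute should go mostly into model size, modestly into batch size, and barely at all into serial steps.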
We provide some basic theoretical motivation for Equation (1.5), an analysis of learning curve fits and their implications for training time, and a breakdown of our results per token. We also make some brief comparisons to LSTMs and recurrent Transformers DBLP:journals/corr/abs180703819 .
1.3 Notation
We use the following notation:

L – the cross-entropy loss in nats. Typically it will be averaged over the tokens in a context, but in some cases we report the loss for specific tokens within the context.

N – the number of model parameters, excluding all vocabulary and positional embeddings

C ≈ 6NBS – an estimate of the total non-embedding training compute, where B is the batch size, and S is the number of training steps (i.e. parameter updates). We quote numerical values in PF-days, where one PF-day = 10^15 × 24 × 3600 = 8.64 × 10^19 floating point operations.

D – the dataset size in tokens

B_crit – the critical batch size 1812.06162 , defined and discussed in Section 5.1. Training at the critical batch size provides a roughly optimal compromise between time and compute efficiency.

C_min – an estimate of the minimum amount of non-embedding compute to reach a given value of the loss. This is the training compute that would be used if the model were trained at a batch size much less than the critical batch size.

S_min – an estimate of the minimal number of training steps needed to reach a given value of the loss. This is also the number of training steps that would be used if the model were trained at a batch size much greater than the critical batch size.

α_N, α_D, α_C, α_C^min, α_B, α_S – power-law exponents for the scaling of the loss as L(X) ∝ 1/X^{α_X}, where X can be any of N, D, C, C_min, B, S.
2 Background and Methods
We train language models on WebText2, an extended version of the WebText radford2019language dataset, tokenized using byte-pair encoding BPE with a vocabulary size n_vocab = 50257. We optimize the autoregressive log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token context, which is also our principal performance metric. We record the loss on the WebText2 test distribution and on a selection of other text distributions. We primarily train decoder-only liu2018generating ; radford2018improving Transformer OriginalTransformer models, though we also train LSTM models and Universal Transformers DBLP:journals/corr/abs180703819 for comparison.
2.1 Parameter and Compute Scaling of Transformers
Operation | Parameters | FLOPs per Token
Embed | (n_vocab + n_ctx) d_model | 4 d_model
Attention: QKV | n_layer d_model 3 d_attn | 2 n_layer d_model 3 d_attn
Attention: Mask | — | 2 n_layer n_ctx d_attn
Attention: Project | n_layer d_attn d_model | 2 n_layer d_attn d_model
Feedforward | n_layer 2 d_model d_ff | 2 n_layer 2 d_model d_ff
De-embed | — | 2 d_model n_vocab
Total (Non-Embedding) | N = 2 d_model n_layer (2 d_attn + d_ff) | C_forward = 2N + 2 n_layer n_ctx d_attn
We parameterize the Transformer architecture using hyperparameters n_layer (number of layers), d_model (dimension of the residual stream), d_ff (dimension of the intermediate feed-forward layer), d_attn (dimension of the attention output), and n_heads (number of attention heads per layer). We include n_ctx tokens in the input context, with n_ctx = 1024 except where otherwise noted.
We use N to denote the model size, which we define as the number of non-embedding parameters
(2.1)  N ≈ 2 d_model n_layer (2 d_attn + d_ff) = 12 n_layer d_model^2
with the standard d_attn = d_ff/4 = d_model,
where we have excluded biases and other sub-leading terms. Our models also have n_vocab d_model parameters in an embedding matrix, and use n_ctx d_model parameters for positional embeddings, but we do not include these when discussing the ‘model size’ N; we will see that this produces significantly cleaner scaling laws.
Evaluating a forward pass of the Transformer involves roughly
(2.2)  C_forward ≈ 2N + 2 n_layer n_ctx d_model
add-multiply operations, where the factor of two comes from the multiply-accumulate operation used in matrix multiplication. A more detailed per-operation parameter and compute count is included in Table 1.
For contexts and models with d_model > n_ctx/12, the context-dependent computational cost per token is a relatively small fraction of the total compute. Since we primarily study models where d_model ≫ n_ctx/12, we do not include context-dependent terms in our training compute estimate. Accounting for the backwards pass (approximately twice the compute as the forwards pass), we then define the estimated non-embedding compute as C ≈ 6N floating point operations per training token.
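To make the counting rules above concrete, here is a minimal sketch of ours (the shape values are illustrative GPT-2-scale assumptions, not configurations taken from this paper):

```python
# Sketch: N ≈ 12 n_layer d_model^2 non-embedding parameters, and
# C ≈ 6N FLOPs per training token. One PF-day = 8.64e19 FLOPs.
PF_DAY = 8.64e19

def nonembedding_params(n_layer: int, d_model: int) -> int:
    # Assumes the standard shape d_attn = d_model and d_ff = 4 * d_model.
    return 12 * n_layer * d_model ** 2

def train_compute_pf_days(n_params: float, n_tokens: float) -> float:
    """Estimated non-embedding training compute, in PF-days."""
    return 6.0 * n_params * n_tokens / PF_DAY

n = nonembedding_params(n_layer=48, d_model=1600)   # ~1.5e9 parameters
c = train_compute_pf_days(n, n_tokens=23e9)         # a few PF-days
```

Note that this estimate deliberately drops the context-dependent attention term, per the approximation in the text.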
2.2 Training Procedures
Unless otherwise noted, we train models with the Adam optimizer kingma2014adam for a fixed 2.5 × 10^5 steps with a batch size of 512 sequences of 1024 tokens. Due to memory constraints, our largest models (more than 1B parameters) were trained with Adafactor DBLP:journals/corr/abs180404235 . We experimented with a variety of learning rates and schedules, as discussed in Appendix D.6. We found that results at convergence were largely independent of learning rate schedule. Unless otherwise noted, all training runs included in our data used a learning rate schedule with a 3000 step linear warmup followed by a cosine decay to zero.
2.3 Datasets
We train our models on an extended version of the WebText dataset described in radford2019language
. The original WebText dataset was a web scrape of outbound links from Reddit through December 2017 which received at least 3 karma. In the second version, WebText2, we added outbound Reddit links from the period of January to October 2018, also with a minimum of 3 karma. The karma threshold served as a heuristic for whether people found the link interesting or useful. The text of the new links was extracted with the Newspaper3k python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and 1.62 × 10^10 words (as defined by wc). We then apply the reversible tokenizer described in radford2019language , which yields 2.29 × 10^10 tokens. We reserve 6.6 × 10^8 of these tokens for use as a test set, and we also test on similarly-prepared samples of Books Corpus Zhu_2015 , Common Crawl commoncrawl , English Wikipedia, and a collection of publicly-available Internet Books.
3 Empirical Results and Basic Power Laws
To characterize language model scaling we train a wide variety of models, varying a number of factors including:

Model size (ranging in size from 768 to 1.5 billion non-embedding parameters)

Dataset size (ranging from 22 million to 23 billion tokens)

Shape (including depth, width, attention heads, and feedforward dimension)

Context length (1024 for most runs, though we also experiment with shorter contexts)

Batch size (2^19 tokens for most runs, but we also vary it to measure the critical batch size)
In this section we will display data along with empiricallymotivated fits, deferring theoretical analysis to later sections.
3.1 Approximate Transformer Shape and Hyperparameter Independence
Transformer performance depends very weakly on the shape parameters n_layer, n_heads, and d_ff when we hold the total non-embedding parameter count N fixed. To establish these results we trained models with fixed size while varying a single hyperparameter. This was simplest for the case of n_heads. When varying n_layer, we simultaneously varied d_model while keeping N ≈ 12 n_layer d_model^2 fixed. Similarly, to vary d_ff at fixed model size we also simultaneously varied the d_model parameter, as required by the parameter counts in Table 1. Independence of n_layer would follow if deeper Transformers effectively behave as ensembles of shallower models, as has been suggested for ResNets ResNetsEnsemblesShallow . The results are shown in Figure 5.
3.2 Performance with NonEmbedding Parameter Count
In Figure 6 we display the performance of a wide variety of models, ranging from small models with shape (n_layer, d_model) = (2, 128) through billion-parameter models, ranging in shape from (6, 4288) through (207, 768). Here we have trained to near convergence on the full WebText2 dataset and observe no overfitting (except possibly for the very largest models).
As shown in Figure 1, we find a steady trend with non-embedding parameter count N, which can be fit to the first term of Equation (1.5), so that
(3.1)  L(N) ≈ (N_c/N)^{α_N}
To observe these trends it is crucial to study performance as a function of N; if we instead use the total parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in recent work lan2019albert .
Although these models have been trained on the WebText2 dataset, their test loss on a variety of other datasets is also a power-law in N with nearly identical power, as shown in Figure 8.
3.2.1 Comparing to LSTMs and Universal Transformers
In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter count N. The LSTMs were trained with the same dataset and context length. We see from these figures that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match the Transformer performance for later tokens. We present power-law relationships between performance and context position in Appendix D.5, where increasingly large powers for larger models suggest improved ability to quickly recognize patterns.
We also compare the performance of standard Transformers to recurrent Transformers DBLP:journals/corr/abs180703819 in Figure 17 in the appendix. These models reuse parameters, and so perform slightly better as a function of N, at the cost of additional compute per parameter.
3.2.2 Generalization Among Data Distributions
We have also tested our models on a set of additional text data distributions. The test loss on these datasets as a function of model size is shown in Figure 8; in all cases the models were trained only on the WebText2 dataset. We see that the loss on these other data distributions improves smoothly with model size, in direct parallel with the improvement on WebText2. We find that generalization depends almost exclusively on the in-distribution validation loss, and does not depend on the duration of training or proximity to convergence. We also observe no dependence on model depth (see Appendix D.8).
3.3 Performance with Dataset Size and Compute
We display empirical trends for the test loss as a function of dataset size D (in tokens) and training compute C in Figure 1.
For the trend with D we trained a model with (n_layer, d_model) = (36, 1280) on fixed subsets of the WebText2 dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be fit with a simple power-law
(3.2)  L(D) ≈ (D_c/D)^{α_D}
in the dataset size. The data and fit appear in Figure 1.
The total amount of non-embedding compute used during training can be estimated as C = 6NBS, where B is the batch size, S is the number of parameter updates, and the factor of 6 accounts for the forward and backward passes. Thus for a given value of C we can scan over all models with various N to find the model with the best performance on step S = C/(6NB). Note that in these results the batch size B remains fixed for all models, which means that these empirical results are not truly optimal. We will account for this in later sections using an adjusted C_min to produce cleaner trends.
The result appears as the heavy black line on the left-hand plot in Figure 1. It can be fit with
(3.3)  L(C) ≈ (C_c/C)^{α_C};  α_C ∼ 0.057
The figure also includes images of individual learning curves to clarify when individual models are optimal. We will study the optimal allocation of compute more closely later on. The data strongly suggests that sample efficiency improves with model size, and we also illustrate this directly in Figure 19 in the appendix.
4 Charting the Infinite Data Limit and Overfitting
In Section 3 we found a number of basic scaling laws for language modeling performance. Here we will study the performance of a model of size N trained on a dataset with D tokens while varying N and D simultaneously. We will empirically demonstrate that the optimally trained test loss accords with the scaling law of Equation (1.5). This provides guidance on how much data we would need to train models of increasing size while keeping overfitting under control.
4.1 Proposed Equation
We have chosen the parameterization (1.5) (repeated here for convenience):
(4.1)  L(N, D) = [(N_c/N)^{α_N/α_D} + D_c/D]^{α_D}
using three principles:

Changes in vocabulary size or tokenization are expected to rescale the loss by an overall factor. The parameterization of L(N, D) (and all models of the loss) must naturally allow for such a rescaling.

Fixing D and sending N → ∞, the overall loss should approach L(D). Conversely, fixing N and sending D → ∞ the loss must approach L(N).

L(N, D) should be analytic at D = ∞, so that it has a series expansion in 1/D with integer powers. Theoretical support for this principle is significantly weaker than for the first two.
Our choice of L(N, D) satisfies the first requirement because we can rescale N_c, D_c with changes in the vocabulary. This also implies that the values of N_c, D_c have no fundamental meaning.
Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we expect that larger models should always perform better than smaller models. But with fixed finite D, we also do not expect any model to be capable of approaching the best possible loss (i.e. the entropy of text). Similarly, a model with fixed size N will be capacity-limited. These considerations motivate our second principle. Note that knowledge of L(N) at infinite D and L(D) at infinite N fully determines all the parameters in L(N, D).
The third principle is more speculative. There is a simple and general reason one might expect overfitting to scale ∝ 1/D at very large D. Overfitting should be related to the variance or the signal-to-noise ratio of the dataset 1710.03667 , and this scales as 1/D. This expectation should hold for any smooth loss function, since we expect to be able to expand the loss about the D → ∞ limit. However, this argument assumes that 1/D corrections dominate over other sources of variance, such as the finite batch size and other limits on the efficacy of optimization. Without empirical confirmation, we would not be very confident of its applicability.
Our third principle explains the asymmetry between the roles of N and D in Equation (1.5). Very similar symmetric expressions^3 are possible, but they would not have a 1/D expansion with integer powers, and would require the introduction of an additional parameter.
^3 For example, a symmetric combination of powers of N_c/N and D_c/D, but such forms do not have a 1/D expansion with integer powers.
In any case, we will see that our equation for L(N, D) fits the data well, which is the most important justification for our ansatz.
4.2 Results
We regularize all our models with 10% dropout, and by tracking test loss and stopping once it is no longer decreasing. The results are displayed in Figure 9, including a fit to the four parameters in Equation (1.5):
Parameter | α_N | α_D | N_c | D_c
Value | 0.076 | 0.103 | 6.4 × 10^13 | 1.8 × 10^13
We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of 1024, to about 2 × 10^7 tokens. With such a small dataset, an epoch consists of only 40 parameter updates. Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happens very early in training (see Figure 16). Also note that the parameters differ very slightly from those obtained in Section 3, as here we are fitting the full L(N, D) rather than just L(N) or L(D).
To chart the borderlands of the infinite data limit, we can directly study the extent of overfitting. For all but the largest models, we see no sign of overfitting when training with the full 22B token WebText2 dataset, so we can take it as representative of D = ∞. Thus we can compare finite D to the infinite data limit by defining
(4.2)  δL(N, D) ≡ L(N, D)/L(N, ∞) − 1
and studying it as a function of N, D. In fact, we see empirically that δL depends only on a specific combination of N and D, as shown in Figure 16. This follows from the scaling law of Equation (1.5), which implies
(4.3)  δL ≈ (1 + (N/N_c)^{α_N/α_D} · (D_c/D))^{α_D} − 1
Note that at large D this formula also has a series expansion in powers of 1/D.
We estimate that the variation in the loss with different random seeds is roughly 0.02, which means that to avoid overfitting when training to within that threshold of convergence we require
(4.4)  D ≳ (5 × 10^3) N^{0.74}
With this relation, models smaller than 10^9 parameters can be trained with minimal overfitting on the 22B token WebText2 dataset, but our largest models will encounter some mild overfitting. More generally, this relation shows that dataset size may grow sublinearly in model size while avoiding overfitting. Note however that this does not typically represent maximally compute-efficient training. We should also emphasize that we have not optimized regularization (e.g. the dropout probability) while varying dataset and model size.
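Equation (4.4) gives a convenient rule of thumb. The sketch below is ours; the constant 5 × 10^3 and exponent 0.74 are the fitted values quoted above, so its outputs are rough estimates rather than guarantees.

```python
# Sketch of Eq. (4.4): dataset size (tokens) needed so that the
# overfitting penalty stays below run-to-run noise (~0.02 nats).

def min_tokens(n_params: float) -> float:
    """Rough minimum dataset size for near-converged training of an
    N-parameter model without significant overfitting."""
    return 5e3 * n_params ** 0.74

# For a ~1e9-parameter model this lands near the 22B-token WebText2
# dataset, consistent with the statement above that such models sit at
# the edge of overfitting.
d_needed = min_tokens(1e9)
```

The sublinear exponent is the key point: an 8x larger model needs only about 8^0.74 ≈ 5x more data.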
5 Scaling Laws with Model Size and Training Time
In this section we will demonstrate that a simple scaling law provides a good description for the loss as a function of model size and training time. First we will explain how to use the results of 1812.06162 to define a universal training step S_min, which accounts for the fact that most of our models have not been trained at an optimal batch size. Then we will demonstrate that we can fit the model size and training time dependence of the loss using Equation (1.6). Later we will use these results to predict the optimal allocation of training compute between model size and training time, and then confirm that prediction.
5.1 Adjustment for Training at B_crit(L)
A simple empirical theory for the batch size dependence of training was developed in 1812.06162 (see also 1811.03600 ; DBLP:journals/corr/abs190704164 ). It was argued that there is a critical batch size B_crit for training; for B up to B_crit the batch size can be increased with very minimal degradation in compute-efficiency, whereas for B > B_crit increases in B result in diminishing returns. It was also argued that the gradient noise scale provides a simple prediction for B_crit, and that neither depends directly on model size except through the value of the loss that has been attained. These results can be used to predict how training time and compute will vary with the batch size. To utilize both training time and compute as effectively as possible, it is best to train with a batch size B ≈ B_crit. Training at B ≫ B_crit minimizes the number of training steps, while B ≪ B_crit minimizes the use of compute.
More specifically, it was demonstrated that for a wide variety of neural network tasks, the number of training steps S and the number of data examples processed E = BS satisfy the simple relation
(5.1)  (S/S_min − 1)(E/E_min − 1) = 1
when training to any fixed value of the loss L. Here S_min is the minimum number of steps necessary to reach L, while E_min is the minimum number of data examples that must be processed.
We demonstrate the relation (5.1) for Transformers in Figure 18 in the appendix. This relation defines the critical batch size
(5.2)  B_crit(L) ≡ E_min/S_min
which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal time/compute tradeoff, requiring 2S_min training steps and processing E = 2E_min data examples.
In Figure 10 we have plotted the critical batch size and gradient noise scale^4 as a function of training loss for two different models. We see that B_crit is independent of model size, and only depends on the loss L. So the predictions of 1812.06162 continue to hold for Transformer language models. The critical batch size can be fit with a power-law in the loss
^4 Although the critical batch size roughly matches the gradient noise scale, we are using direct measurements of B_crit from Figures 18 and 10 for all our later analyses.
(5.3)  B_crit(L) ≈ B*/L^{1/α_B}
where B* ≈ 2 × 10^8 tokens and α_B ≈ 0.21.
We have chosen this parameterization for B_crit(L) because as the loss approaches its minimum value L_min, the gradient noise scale is expected to diverge, and we expect B_crit to track this noise scale. We do not know L_min, as we see no sign that our models are approaching it, but L_min > 0 since the entropy of natural language is non-zero. Since apparently L_min is much smaller than the values of L we have achieved, we used a parameterization where B_crit diverges as L → 0.
We will use B_crit(L) to estimate the relation between the number of training steps S while training at batch size B tokens and the number of training steps while training at B ≫ B_crit. This is simply
(5.4)  S_min(S) ≡ S/(1 + B_crit(L)/B)  (minimum steps, at B ≫ B_crit)
for any given target value L for the loss. This also defines a critical value of the compute needed to train to L with a model of size N if we were to train at B ≪ B_crit(L). This is
(5.5)  C_min(C) ≡ C/(1 + B/B_crit(L))  (minimum compute, at B ≪ B_crit)
where C = 6NBS estimates the (non-embedding) compute used at batch size B.
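The adjustments in Equations (5.3)–(5.5) are simple to apply. The following is a hedged sketch of ours (not the paper's code), taking the fitted B* ≈ 2 × 10^8 tokens and α_B ≈ 0.21 as given:

```python
# Sketch of Eqs. (5.3)-(5.5). Fitted constants from Section 5.1.
B_STAR, ALPHA_B = 2e8, 0.21

def b_crit(loss: float) -> float:
    """Critical batch size (tokens) at a given loss, Eq. (5.3)."""
    return B_STAR / loss ** (1.0 / ALPHA_B)

def s_min(steps: float, batch: float, loss: float) -> float:
    """Eq. (5.4): steps that would be needed at B >> B_crit."""
    return steps / (1.0 + b_crit(loss) / batch)

def c_min(compute: float, batch: float, loss: float) -> float:
    """Eq. (5.5): compute that would be needed at B << B_crit."""
    return compute / (1.0 + batch / b_crit(loss))

bc = b_crit(3.0)  # around 1-2 million tokens near L ~ 3 nats
```

Note that both corrections depend on the loss reached, not on the model, which is what makes S_min and C_min universal bookkeeping variables.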
5.2 Results for L(N, S_min) and Performance with Model Size and Compute
Now we will use S_min defined in Equation (5.4) to obtain a simple and universal fit for the dependence of the loss on model size and training time in the infinite data limit. We will fit the stable, Adam-optimized training runs using Equation (1.6), repeated here for convenience:
(5.6)  L(N, S_min) = (N_c/N)^{α_N} + (S_c/S_min)^{α_S}
for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit to the data with the parameters:
Parameter | α_N | α_S | N_c | S_c
Value | 0.077 | 0.76 | 6.5 × 10^13 | 2.1 × 10^3
With these parameters, we obtain the learning curve fits in Figure 4. Though the fits are imperfect, we believe they are quite compelling given the simplicity of Equation (5.6).
The data and fits can be visualized in a different and more interesting way, as shown in Figure 11. There we study the test loss as a function of model size while fixing either the total non-embedding compute C used in training, or the number of steps S. For the fits we use Equations (5.5) and (5.4) along with the parameters above and Equation (5.6).
The power-law dependence of the loss on S_min reflects the interplay of optimizer dynamics and the loss landscape. Since the fits are best late in training, when the loss may be approximately quadratic, the power-law should provide information about the spectrum of the Hessian of the loss. Its universality suggests that the Hessian eigenvalue density is roughly independent of model size.
5.3 Lower Bound on Early Stopping Step
The results for L(N, S_min) can be used to derive a lower-bound (and rough estimate) of the step at which early stopping should occur when training is data limited. It is motivated by the idea that finite and infinite D learning curves for a given model will be very similar until we reach S_min ≈ S_stop. Thus overfitting should be proportional to the correction from simply ending training at S_stop. This will underestimate S_stop, because in reality the test loss will decrease more slowly when we have a finite D, and therefore we will require more training steps to reach the optimal test loss at finite D. This line of reasoning leads to the inequality
(5.7)  S_stop(N, D) ≳ S_c/[L(N, D) − L(N, ∞)]^{1/α_S}
where L(N, ∞) is the converged loss, evaluated with infinite available data. This inequality and its comparison to the empirical data is displayed in Figure 16 in the appendix. In that figure, the values of S_stop and L(N, D) are empirical (though S_stop is adjusted to mimic training at B ≫ B_crit), while L(N, ∞) is computed from the fit to L(N, D) evaluated at D = ∞.
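A minimal sketch of the bound (5.7), using the Section 5.2 fit S_c ≈ 2.1 × 10^3 and α_S ≈ 0.76; the loss values plugged in at the bottom are made-up inputs chosen only to exercise the formula, not measurements from this paper.

```python
# Sketch of Eq. (5.7): lower bound on the early-stopping step when
# training is data-limited.
S_C, ALPHA_S = 2.1e3, 0.76

def s_stop_lower_bound(loss_nd: float, loss_n_inf: float) -> float:
    """S_stop(N, D) >= S_c / [L(N, D) - L(N, inf)]^{1/alpha_S}."""
    gap = loss_nd - loss_n_inf   # excess loss caused by finite data
    return S_C / gap ** (1.0 / ALPHA_S)

# The smaller the excess loss over the infinite-data curve, the longer
# training can usefully continue before early stopping.
steps_loose = s_stop_lower_bound(3.10, 3.00)
steps_tight = s_stop_lower_bound(3.01, 3.00)
```

As the text notes, this is only a lower bound: with finite data the test loss falls more slowly, so the true optimal stopping step is later.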
6 Optimal Allocation of the Compute Budget
We displayed the empirical trend of performance as a function of the computation used during training in the top-right of Figure 1. However, this result involved training at a fixed batch size B, whereas we know that in fact we could train more efficiently^5 by training at the batch size B_crit discussed in Section 5.1. Large and small values of the loss could have been achieved with fewer samples or fewer steps, respectively, and correcting for this inefficiency by standardizing to the critical batch size results in cleaner and more predictable trends.
^5 One might ask why we did not simply train at B_crit in the first place. The reason is that it depends not only on the model but also on the target value of the loss we wish to achieve, and so is a moving target.
In this section we will adjust for this oversight. More importantly, we will use the results of Section 5 to determine the optimal allocation of compute between model size N and the quantity of data processed during training, namely 2B_crit S_min. We will determine this allocation both empirically and theoretically, by using the equation for L(N, S_min), and we will demonstrate that these methods agree.
6.1 Optimal Performance and Allocations
Let us first study the loss as a function of the optimally allocated compute C_min from Equation (5.5). The result is plotted in Figure 13, along with a power-law fit. We see that as compared to the compute plot of Figure 1, the new fit with C_min is somewhat improved.
Given L(C_min), it is natural to ask for the optimal model size N(C_min) that provides the minimal loss with a given quantity of training compute. The optimal model size is shown in Figure 14. We observe that N(C_min) can be fit very well with a power-law
(6.1)  N(C_min) ∝ C_min^{0.73}
In Figure 12, we show the effect of training models of suboptimal sizes (see Appendix B.4).
By definition C_min ≡ 6 N B_crit S, and so we can use N(C_min) to extract further results. In particular, since prior fits show B ∝ L^{−4.8} and L ∝ C_min^{−0.05}, we can conclude B_crit ∝ C_min^{0.24}. This leads us to conclude that the optimal number of steps will only grow very slowly with compute, as
(6.2)  S_min ∝ C_min^{0.03}
matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results may even be consistent with an exponent of zero.
Thus we conclude that as we scale up language modeling with an optimal allocation of computation, we should predominantly increase the model size N, while simultaneously scaling up the batch size via B ∝ B_crit with negligible increase in the number of serial steps. Since compute-efficient training uses relatively few optimization steps, additional work on speeding up early training dynamics may be warranted.
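To make the allocation concrete, here is a small illustrative sketch of ours showing what the empirical exponents imply when the compute budget grows by 1000x; the absolute normalizations are omitted, so only the ratios are meaningful.

```python
# Sketch: scaling a compute budget by 1000x under the empirical rules
# N ∝ C^0.73, B ∝ C^0.24, S ∝ C^0.03 quoted above.
GROWTH = 1000.0

model_growth = GROWTH ** 0.73   # parameters: grows ~155x
batch_growth = GROWTH ** 0.24   # batch size: grows ~5x
steps_growth = GROWTH ** 0.03   # serial steps: grows ~1.2x

# The exponents sum to 1, so the three factors multiply back to 1000x.
product = model_growth * batch_growth * steps_growth
```

Nearly all of the extra budget flows into the model, a modest share into the batch size, and almost none into additional serial steps.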
6.2 Predictions from $L(N, S_{\rm min})$
The results for $L(C_{\rm min})$ and the allocations $N(C_{\rm min})$ can be predicted from the $L(N, S_{\rm min})$ equation obtained in Section 5. Given our equation for $L(N, S_{\rm min})$, we can substitute $S_{\rm min} = C_{\rm min}/(6 N B_{\rm crit})$ and then find the minimum of the loss as a function of $N$, while fixing the training compute. We carry out this procedure in detail in Appendix B, where we also provide some additional predictions.
For the loss as a function of training compute, we predict that
$$L(C_{\rm min}) = \left(\frac{C_c^{\rm min}}{C_{\rm min}}\right)^{\alpha_C^{\rm min}}, \qquad (6.3)$$
where
$$\alpha_C^{\rm min} \equiv \frac{1}{1/\alpha_S + 1/\alpha_B + 1/\alpha_N} \approx 0.054, \qquad (6.4)$$
in excellent agreement with the exponent of Figure 13. We also predict that
$$N(C_{\rm min}) \propto (C_{\rm min})^{\alpha_C^{\rm min}/\alpha_N} \approx (C_{\rm min})^{0.71}, \qquad (6.5)$$
which also matches the scaling of Figure 14 to within a few percent. Our scaling laws provide a predictive framework for the performance of language modeling.
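The exponent prediction of Equation (6.4) is easy to evaluate numerically. The sketch below assumes the fitted values $\alpha_S \approx 0.76$, $\alpha_B \approx 0.21$, $\alpha_N \approx 0.076$; with this particular rounding the result lands near 0.05 rather than exactly on 0.054, reflecting the uncertainty in the underlying fits:

```python
# Sketch: evaluate the predicted compute exponent from Equation (6.4),
# using assumed (rounded) fitted exponents.

alpha_S, alpha_B, alpha_N = 0.76, 0.21, 0.076

alpha_C_min = 1.0 / (1.0 / alpha_S + 1.0 / alpha_B + 1.0 / alpha_N)
model_size_exponent = alpha_C_min / alpha_N  # exponent in N(C_min), Eq. (6.5)

print(f"alpha_C_min ~ {alpha_C_min:.3f}")              # close to 0.05
print(f"N(C_min) exponent ~ {model_size_exponent:.2f}")  # close to 0.7
```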
6.3 Contradictions and a Conjecture
We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has nonzero entropy.
Indeed, the trends for compute-efficient training described in this section already contain an apparent contradiction. At scales several orders of magnitude above those documented here, the performance predicted by the $L(C_{\rm min})$ scaling law decreases below what should be possible given the slow growth in training data with compute. This implies that our scaling laws must break down before this point, but we conjecture that the intersection point has a deeper meaning: it provides an estimate of the point at which Transformer language models reach maximal performance.
Since the amount of data used by compute-efficient training grows slowly with the compute budget, the performance predicted by $L(C_{\rm min})$ eventually hits a lower bound set by the $L(D)$ power law (see Figure 15). Let us work this out in more detail.
To keep overfitting under control, the results of Section 4 imply that we should scale the dataset size as
$$D \propto N^{0.74} \propto (C_{\rm min})^{0.54}, \qquad (6.6)$$
where we have used the compute-efficient $N(C_{\rm min})$ from Figure 14.
Let us compare this to the data requirements of compute-efficient training. If we train at the critical batch size (i.e. $B = B_{\rm crit}$) and never reuse data during training, we find that data usage grows with compute as
$$D(C_{\rm min}) = \frac{2 C_{\rm min}}{6 N(C_{\rm min})} \propto (C_{\rm min})^{0.26}. \qquad (6.7)$$
This is the maximum rate at which the dataset size can productively grow with compute, since it means that we are only training for a single epoch. But it grows the dataset much more slowly than in Equation (6.6). It appears to imply that compute-efficient training will eventually run into a problem with overfitting, even if the training process never reuses any data!
According to Figure 1, we expect that when we are bottlenecked by the dataset size (i.e. by overfitting), the loss should scale as $L(D) \propto D^{-0.095}$. This implies that the loss would scale with compute as $L(D(C_{\rm min})) \propto (C_{\rm min})^{-0.03}$ once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with our prediction for $L(C_{\rm min})$ from Figure 13, where we found a scaling $L(C_{\rm min}) \propto (C_{\rm min})^{-0.050}$.
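The arithmetic behind this contradiction can be sketched directly from the exponents quoted above (illustrative values; the variable names are ours):

```python
# Sketch of the data-growth contradiction: compare the dataset growth
# required to control overfitting with the single-epoch data supplied by
# compute-efficient training, using the exponents quoted in this section.

required_exp = 0.54   # D needed to keep overfitting under control, Eq. (6.6)
supplied_exp = 0.26   # D available at one epoch, Eq. (6.7)
alpha_D = 0.095       # L(D) exponent when data-limited

# Once data-limited, loss falls with compute only as C^-(alpha_D * 0.26),
# which is slower than the compute-efficient trend C^-0.050.
data_limited_exp = alpha_D * supplied_exp
assert supplied_exp < required_exp   # data grows too slowly to avoid overfitting
assert data_limited_exp < 0.050      # so the two loss trends must eventually cross
print(f"data-limited loss scales as C^-{data_limited_exp:.3f}")
```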
The intersection point of $L(D(C_{\rm min}))$ and $L(C_{\rm min})$ occurs at
$$C^* \sim 10^4\ \text{PF-days}, \qquad N^* \sim 10^{12}\ \text{params}, \qquad D^* \sim 10^{12}\ \text{tokens}, \qquad L^* \sim 1.7\ \text{nats/token}, \qquad (6.8)$$
though the numerical values are highly uncertain, varying by an order of magnitude in either direction depending on the precise values of the exponents from the power-law fits. The most obvious interpretation is that our scaling laws break down at or before we reach this point, which is still many orders of magnitude away in both compute and model size.
One might also conjecture that this intersection point has a deeper meaning. If we cannot increase the model size beyond $N^*$ without qualitatively different data requirements, perhaps this means that once we reach $C^*$ and $N^*$, we have extracted all of the reliable information available in natural language data. In this interpretation, $L^*$ would provide a rough estimate for the entropy-per-token^6 of natural language. (^6 Defining words using the wc utility, the WebText2 dataset has 1.4 tokens per word and 4.3 characters per token.) In this scenario, we would expect the loss trend to level off at or before $L^*$.
We can guess at the functional form of $L(C_{\rm min})$ as it levels off by considering a version of our training dataset with added noise. For example, we could append a random string of tokens to each context shown to the model, artificially boosting the loss by a constant additive factor. Then the distance from the noise floor would be a more meaningful performance metric, with even a small decrease in this distance potentially representing a significant boost in qualitative performance. Since the artificial noise would affect all of our trends equally, the critical point of Equation (6.8) would not change (aside from the absolute value of $L^*$), and it may be meaningful even if it occurs after the leveling off.
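A toy numerical check of this observation, with entirely hypothetical prefactors and an assumed additive-noise-floor form $L = L_0 + a\, C^{-p}$:

```python
# Toy illustration (assumed functional forms and hypothetical constants):
# adding a constant noise floor L0 to both power-law trends shifts the
# curves but leaves the compute C* at which they intersect unchanged.

def intersection(a1, p1, a2, p2):
    """Solve a1*C^-p1 = a2*C^-p2 for C (pure power-law trends, p1 != p2)."""
    return (a1 / a2) ** (1.0 / (p1 - p2))

a_eff, p_eff = 2.9, 0.050    # hypothetical compute-efficient trend
a_dat, p_dat = 2.6, 0.025    # hypothetical data-limited trend (falls slower)

c_star = intersection(a_eff, p_eff, a_dat, p_dat)

# With an additive noise floor, L(C) = L0 + a*C^-p: the difference of the
# two curves is unchanged, so they still cross at the same C*.
L0 = 0.5
lhs = L0 + a_eff * c_star ** -p_eff
rhs = L0 + a_dat * c_star ** -p_dat
assert abs(lhs - rhs) < 1e-6
print(f"intersection at C* ~ {c_star:.3g} (unchanged by the noise floor)")
```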
7 Related Work
Power laws can arise from a wide variety of sources thurner2018introduction . Power-law scalings with model and dataset size in density estimation wasserman2006all and in random forest models biau2012analysis may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as the inverse of the number of relevant features in the data. Some early work banko2001scaling ; DBLP:journals/corr/csCL0108005 found power-law scalings between performance and dataset size. More recent work 1712.00409 ; Hestness:2019:BHA:3293883.3295710 also investigated scaling between model size and data size; their work is perhaps the closest to ours in the literature.^7 (^7 After this work was completed, rosenfeld2019constructive also appeared, which makes similar predictions for the dependence of loss on both model and dataset size.) Note, however, that 1712.00409 found superlinear scaling of dataset size with model size, whereas we find a sublinear scaling. There are some parallels between our findings on optimal allocation of compute and 1906.06669 , including power-law learning curves. EfficientNets DBLP:journals/corr/abs190511946 also appear to obey an approximate power-law relation between accuracy and model size. Very recent work 1909.12673 studies scaling with both dataset size and model size for a variety of datasets, and fits an ansatz similar to ours.
EfficientNet DBLP:journals/corr/abs190511946 advocates scaling depth and width exponentially (with different coefficients) for optimal performance of image models, resulting in a power-law scaling of width as a function of depth. We find that for language models this power should be roughly one when scaling up (as width/depth should remain fixed). But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model. In ResNetsEnsemblesShallow it was argued that deep models can function as ensembles of shallower models, which could potentially explain this finding. Earlier work Zagoruyko_2016 has compared width and depth, and found that wide ResNets can outperform deep ResNets on image classification. Some studies fix computation per data example, which tends to scale in proportion to the number of model parameters, whereas we investigate scaling with both model size and the quantity of training computation.
Various works 1710.03667 ; 1812.11118 have investigated generalization in highly overparameterized models, finding a “jamming transition” 1901.01608 when the model size reaches the dataset size (this may require training many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width jacot2018neural ; 1902.06720 , may provide a useful framework for thinking about some of our scaling relations. Our results on optimization, such as the shape of learning curves, can likely be explained using a noisy quadratic model, which can provide quite accurate predictions DBLP:journals/corr/abs190704164 in realistic settings. Making this connection quantitative will require a characterization of the Hessian spectrum DBLP:journals/corr/abs181107062 ; DBLP:journals/corr/abs190110159 ; unpublishedgrd .
8 Discussion
We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count $N$, dataset size $D$, and optimized training computation $C_{\rm min}$, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Since scalings with $N, D, C_{\rm min}$ are power-laws, there are diminishing returns with increasing scale.
We were able to precisely model the dependence of the loss on $N$ and $D$, and alternatively on $N$ and $S$, when these parameters are varied simultaneously. We used these relations to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements when training large language models. So our scaling relations go beyond mere observation to provide a predictive framework. One might interpret these relations as analogues of the ideal gas law, which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic constituents.
It is natural to conjecture that the scaling relations will apply to other generative modeling tasks with a maximum likelihood loss, and perhaps in other settings as well. To this purpose, it will be interesting to test these relations on other domains, such as images, audio, and video models, and perhaps also for random network distillation. At this point we do not know which of our results depend on the structure of natural language data, and which are universal. It would also be exciting to find a theoretical framework from which the scaling relations can be derived: a ‘statistical mechanics’ underlying the ‘thermodynamics’ we have observed. Such a theory might make it possible to derive other more precise predictions, and provide a systematic understanding of the limitations of the scaling laws.
In the domain of natural language, it will be important to investigate whether continued improvement on the loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: “more is different”. For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improvements in language model loss may hide seemingly qualitative changes in capability.
Our results strongly suggest that larger models will continue to perform better, and will also be much more sample efficient than has been previously appreciated. Big models may be more important than big data. In this context, further investigation into model parallelism is warranted. Deep models can be trained using pipelining DBLP:journals/corr/abs181106965 , which splits parameters depthwise between devices, but eventually requires increased batch sizes as more devices are used. Wide networks on the other hand are more amenable to parallelization shazeer2018meshtensorflow , since large layers can be split between multiple workers with less serial dependency. Sparsity DBLP:journals/corr/abs190410509 ; gray2017gpu or branching (e.g. Krizhevsky:2012:ICD:2999134.2999257 ) may allow for even faster training of large networks through increased model parallelism. And using methods like Wang_2017 ; wen2019autogrow , which grow networks as they train, it might be possible to remain on the computeefficient frontier for an entire training run.
Acknowledgements
We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feedback on drafts of this work.
Appendix A Summary of Power Laws
For easier reference, we provide a summary below of the key trends described throughout the paper.
Parameters  Data  Compute  Batch Size  Equation
$N$  $\infty$  $\infty$  Fixed  $L(N) = (N_c/N)^{\alpha_N}$
$\infty$  $D$  Early Stop  Fixed  $L(D) = (D_c/D)^{\alpha_D}$
Optimal  $\infty$  $C$  Fixed  $L(C) = (C_c/C)^{\alpha_C}$  (naive)
$N$  $D$  Early Stop  Fixed  $L(N,D) = \left[(N_c/N)^{\alpha_N/\alpha_D} + D_c/D\right]^{\alpha_D}$
$N$  $\infty$  $S$ steps  $B$  $L(N,S) = (N_c/N)^{\alpha_N} + (S_c/S_{\rm min}(S,B))^{\alpha_S}$
The empirical fitted values for these trends are:
Power Law  Scale (tokenization-dependent)
$\alpha_N = 0.076$  $N_c \approx 8.8 \times 10^{13}$ params (non-embed)
$\alpha_D = 0.095$  $D_c \approx 5.4 \times 10^{13}$ tokens
$\alpha_C = 0.057$ (naive)  $C_c$ PF-days
$\alpha_C^{\rm min} = 0.050$  $C_c^{\rm min} \approx 3.1 \times 10^{8}$ PF-days
$\alpha_B = 0.21$  $B_* \approx 2.1 \times 10^{8}$ tokens
$\alpha_S = 0.76$  $S_c \approx 2.1 \times 10^{3}$ steps
The optimal parameters for compute efficient training are given by:
Compute-Efficient Value  Power Law  Scale
$N_{\rm opt} = N_e \, C_{\rm min}^{0.73}$  $0.73$  $N_e \approx 1.3 \times 10^{9}$ params
$B \leq B_{\rm crit} = B_e \, C_{\rm min}^{0.24}$  $0.24$  $B_e \approx 2.0 \times 10^{6}$ tokens
$S_{\rm min} = S_e \, C_{\rm min}^{0.03}$ (lower bound)  $0.03$  $S_e \approx 5.4 \times 10^{3}$ steps
$D_{\rm opt} = D_e \, C_{\rm min}^{0.26}$ (1 epoch)  $0.26$  $D_e \approx 2 \times 10^{10}$ tokens
(coefficients quoted with $C_{\rm min}$ in PF-days)
Appendix B Empirical Model of Compute-Efficient Frontier
Throughout this appendix all values of $C$, $S$, and $\alpha_C$ are adjusted for training at the critical batch size $B_{\rm crit}$. We have left off the 'adj' label to avoid cluttering the notation.
B.1 Defining Equations
The power-law fit to the learning curves implies a simple prescription for compute-efficient training. In this appendix, we will derive the optimal performance, model size, and number of training steps as a function of the compute budget. We start with Equation (1.6), repeated here for convenience:
$$L(N, S) = \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{S_c}{S}\right)^{\alpha_S}. \qquad (B.1)$$
Here, $S$ represents the number of parameter updates when training at the critical batch size 1812.06162 , which was defined in Equation (5.2):^8 (^8 There is a slight ambiguity here: we can imagine training either at a constant batch size $B(L_{\rm target})$, or we could instead train at a variable batch size $\tilde{B}(L)$, where $\tilde{B}$ is the instantaneous critical batch size (as opposed to $B$, which is the averaged version). These two prescriptions result in the same number of steps, so we can ignore this subtlety; see 1812.06162 .)
$$B_{\rm crit}(L) = \frac{B_*}{L^{1/\alpha_B}}. \qquad (B.2)$$
We would like to determine optimal training parameters for a fixed compute budget, so we replace $S = C/(6 N B_{\rm crit}(L))$, where $C$ is the number of FLOPs used in the training run:
$$L(N, C) = \left(\frac{N_c}{N}\right)^{\alpha_N} + \left(\frac{6 B_* S_c \, N}{L^{1/\alpha_B} C}\right)^{\alpha_S}. \qquad (B.3)$$
Now, we set $\partial L/\partial N \big|_C = 0$ to find the condition for optimality:
$$0 = \frac{\partial L}{\partial N}\bigg|_{C} \quad \Longrightarrow \quad \frac{\alpha_N}{\alpha_S}\left(\frac{N_c}{N}\right)^{\alpha_N} = \left(\frac{6 B_* S_c \, N}{L^{1/\alpha_B} C}\right)^{\alpha_S}. \qquad (B.4)$$
Equations (B.3) and (B.4) together determine the compute-efficient frontier.
B.2 Efficient Training
Now we assemble the implications of (B.3) and (B.4). First, note that inserting (B.4) into (B.3) yields
$$L(N_{\rm eff}(C), C) = \left(1 + \frac{\alpha_N}{\alpha_S}\right) L(N_{\rm eff}, \infty), \qquad (B.5)$$
which implies that for compute-efficient training, we should train to a fixed percentage $\alpha_N/\alpha_S \approx 10\%$ above the converged loss. Next, let's determine how the optimal loss depends on the compute budget. Eliminating $N$ yields a power-law dependence of performance on compute:
$$L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad (B.6)$$
where we defined
$$\alpha_C \equiv \frac{1}{1/\alpha_S + 1/\alpha_B + 1/\alpha_N}, \qquad (B.7)$$
$$C_c = 6 N_c B_* S_c \left(1 + \frac{\alpha_N}{\alpha_S}\right)^{1/\alpha_S + 1/\alpha_N} \left(\frac{\alpha_S}{\alpha_N}\right)^{1/\alpha_S}. \qquad (B.8)$$
Similarly, we can eliminate $L$ to find $N(C)$:
$$\frac{N(C)}{N_c} = \left(\frac{C}{C_c}\right)^{\alpha_C/\alpha_N} \left(1 + \frac{\alpha_N}{\alpha_S}\right)^{1/\alpha_N}, \qquad (B.9)$$
and
$$S(C) = \frac{C_c}{6 N_c B_*} \left(1 + \frac{\alpha_N}{\alpha_S}\right)^{-1/\alpha_N} \left(\frac{C}{C_c}\right)^{\alpha_C/\alpha_S}. \qquad (B.10)$$
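The minimization in this appendix can also be carried out numerically. The sketch below is a simplified version: the batch size is held fixed, so the $L$-dependence of $B_{\rm crit}$ is ignored, and all scale constants are hypothetical. Under that simplification the frontier exponent is $\alpha_S/(\alpha_N + \alpha_S) \approx 0.91$ rather than the empirical 0.73; the difference reflects the share of compute absorbed by the growing critical batch size:

```python
# Numerical check of the compute-efficient frontier in a simplified
# setting: batch size B held fixed (B_crit's loss dependence ignored),
# hypothetical scale constants. We minimize the fixed-B analogue of
# Eq. (B.3), L(N) = (Nc/N)^aN + (6*B*Sc*N/C)^aS, over N for several
# budgets C and fit the resulting power law N_opt ~ C^p.
import math

aN, aS = 0.076, 0.76
Nc, Sc, B = 8.8e13, 2.1e3, 2.0e6   # hypothetical scale constants

def loss(N, C):
    return (Nc / N) ** aN + (6 * B * Sc * N / C) ** aS

def argmin_N(C, lo=1e4, hi=1e14, iters=200):
    """Golden-section search for the loss-minimizing model size."""
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
        if loss(a, C) < loss(b, C):
            hi = b
        else:
            lo = a
    return 0.5 * (lo + hi)

budgets = [1e18, 1e19, 1e20, 1e21]
opts = [argmin_N(C) for C in budgets]
slope = (math.log(opts[-1]) - math.log(opts[0])) / \
        (math.log(budgets[-1]) - math.log(budgets[0]))

# With B fixed, the analytic frontier exponent is aS / (aN + aS).
assert abs(slope - aS / (aN + aS)) < 0.01
print(f"fitted exponent ~ {slope:.3f}")
```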
B.3 Comparison to Inefficient
Typically, researchers train models until they appear to be close to convergence. In this section, we compare the efficient training procedure described above to this more typical setup. We define the convergence factor $\delta$ as the percent deviation from the converged loss:
$$\delta \equiv \frac{L(N, C)}{L(N, \infty)} - 1. \qquad (B.11)$$
For compute-efficient training we have $\delta = \alpha_N/\alpha_S \approx 10\%$ from the previous section, but researchers typically use a much smaller value. Here, we choose $\delta = 2\%$ as an estimate. For a fixed value of the loss, we predict:
$$\frac{S_{\rm eff}}{S(\delta)} = \left[\frac{\delta}{1+\delta} \cdot \frac{1+\delta_{\rm eff}}{\delta_{\rm eff}}\right]^{1/\alpha_S} \approx 0.13, \qquad (B.12)$$
$$\frac{N_{\rm eff}}{N(\delta)} = \left(\frac{1+\delta_{\rm eff}}{1+\delta}\right)^{1/\alpha_N} \approx 2.7, \qquad (B.13)$$
$$\frac{C_{\rm eff}}{C(\delta)} = \frac{N_{\rm eff}}{N(\delta)} \cdot \frac{S_{\rm eff}}{S(\delta)} \approx 0.35, \qquad (B.14)$$
so that compute-efficient training uses 7.7x fewer parameter updates, 2.7x more parameters, and 65% less compute to reach the same loss.
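These ratios can be approximately reproduced from the power-law fits alone. The sketch below assumes $\alpha_N = 0.076$, $\alpha_S = 0.76$, and $\delta = 2\%$ for typical training; small deviations from the quoted 7.7x and 65% reflect rounding of the fits:

```python
# Sketch reproducing the efficient-vs-typical comparison from the
# power-law fits (illustrative exponents; delta ~10% for compute-
# efficient training, 2% assumed for typical training).

aN, aS = 0.076, 0.76
d_eff = aN / aS          # ~0.10, from Eq. (B.5)
d_typ = 0.02

# At fixed loss L: N scales as (1 + delta)^(1/aN) and S scales as
# ((1 + delta)/delta)^(1/aS), so these ratios are independent of L itself.
param_ratio = ((1 + d_eff) / (1 + d_typ)) ** (1 / aN)
step_ratio = ((d_typ / (1 + d_typ)) * ((1 + d_eff) / d_eff)) ** (1 / aS)
compute_ratio = param_ratio * step_ratio   # batch size cancels at fixed L

print(f"params:  x{param_ratio:.1f} more")       # ~2.7x
print(f"steps:   x{1/step_ratio:.1f} fewer")     # ~7.5x (text quotes 7.7x)
print(f"compute: {1 - compute_ratio:.0%} less")  # ~64% (text quotes 65%)
```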
B.4 Suboptimal Model Sizes
We can solve Equation (B.1) to find an expression for the amount of compute needed to reach a given value of the loss $L$ with a model of size $N$:
$$C(N, L) = \frac{6 N B_* S_c}{L^{1/\alpha_B}} \left[L - \left(\frac{N_c}{N}\right)^{\alpha_N}\right]^{-1/\alpha_S}. \qquad (B.15)$$
Using Equations (B.6) and (B.9), we can eliminate $L$ in favor of $N_{\rm eff}(L)$, the model size which reaches $L$ most efficiently. From there, we find an expression for the excess compute needed as a consequence of using a suboptimal model size:
$$\frac{C(N, L)}{C(N_{\rm eff}, L)} = \frac{N}{N_{\rm eff}} \left[1 + \frac{\alpha_S}{\alpha_N}\left(1 - \left(\frac{N_{\rm eff}}{N}\right)^{\alpha_N}\right)\right]^{-1/\alpha_S}. \qquad (B.16)$$
The result is shown in Figure X. Models between 0.6x and 2.2x the optimal size can be used with only a 20% increase in compute budget. Using a smaller model is useful when accounting for the cost of inference. A larger model can be trained to the same level of performance in fewer steps, allowing for more parallelism and faster training if sufficient hardware is available (see Figure Y):
$$\frac{S(N, L)}{S(N_{\rm eff}, L)} = \left[1 + \frac{\alpha_S}{\alpha_N}\left(1 - \left(\frac{N_{\rm eff}}{N}\right)^{\alpha_N}\right)\right]^{-1/\alpha_S}. \qquad (B.17)$$
A 2.2x larger model requires 45% fewer steps at a cost of 20% more training compute. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve, after initial transient effects.
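A small sketch evaluating these two ratios (illustrative exponents; `excess_compute` and `step_ratio` are our own helper names):

```python
# Sketch of Eqs. (B.16)-(B.17): excess compute and step savings for a
# model a factor x away from the compute-optimal size, using
# illustrative exponents alpha_N = 0.076, alpha_S = 0.76.

aN, aS = 0.076, 0.76

def excess_compute(x):
    """C(N)/C(N_eff) for N = x * N_eff at fixed loss, Eq. (B.16)."""
    return x * (1 + (aS / aN) * (1 - x ** -aN)) ** (-1 / aS)

def step_ratio(x):
    """S(N)/S(N_eff) for N = x * N_eff at fixed loss, Eq. (B.17)."""
    return (1 + (aS / aN) * (1 - x ** -aN)) ** (-1 / aS)

for x in (0.6, 1.0, 2.2):
    print(f"x={x}: compute x{excess_compute(x):.2f}, steps x{step_ratio(x):.2f}")
# A 2.2x larger model needs ~45% fewer steps for ~20% more compute, and
# the 0.6x-2.2x range stays within ~20% excess compute, as stated above.
```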
Appendix C Caveats
In this section we list some potential caveats to our analysis.

At present we do not have a solid theoretical understanding for any of our proposed scaling laws. The scaling relations with model size and compute are especially mysterious. It may be possible to understand scaling at very large $D$, holding model size fixed 1710.03667 , and also the shape of learning curves late in training, by modeling the loss with a noisy quadratic. But the scaling with $D$ at very large model size still remains mysterious. Without a theory or a systematic understanding of the corrections to our scaling laws, it's difficult to determine in what circumstances they can be trusted.

We are not especially confident in the prediction of