Primer: Searching for Efficient Transformers for Language Modeling

Transformers Vaswani2017AttentionIA have been used extensively in many NLP advances over the past few years (e.g., devlin2018bert ; yang2019xlnet ; liu2019roberta ; 2020t5 ; Adiwardana2020TowardsAH ; brown2020language ). With scaling, Transformers have produced increasingly better performance yang2019xlnet ; brown2020language ; Fedus2021SwitchTS ; Kaplan2020ScalingLF , but the costs of training larger models have become prohibitively expensive.
In this paper, we aim to reduce the training costs of Transformer language models. To this end, we propose searching for more efficient alternatives to Transformer by modifying its TensorFlow computation graph Abadi2016TensorFlowAS . Given a search space of TensorFlow programs, we use evolution RealMSSSLK17 ; liu2017hierarchical ; so2019evolved ; liu2020evolving ; Yao1999EvolvingAN ; schmidhuber:1987:srl ; Stanley2019DesigningNN to search for models that achieve as low a validation loss as possible given a fixed amount of training compute. An advantage of using TensorFlow programs as the search space is that it makes it easier to find simple low-level improvements to Transformers. We focus on decoder-only auto-regressive language modeling (LM) because of its generality and success Radford2019LanguageMA ; brown2020language ; Schick2021ItsNJ ; Wang2021EntailmentAF ; Gao2020MakingPL . We provide details of our primitives search in TensorFlow, but the same approach can also be applied to other deep learning libraries.
The discovered model, named Primer (PRIMitives searched transformER), exhibits strong performance improvements over common Transformer variants on auto-regressive language modeling. Our experiments show that Primer has the benefits of (1) achieving a target quality using a smaller training cost, (2) achieving higher quality given a fixed training cost, and (3) achieving a target quality using a smaller inference cost. These benefits are robust and hold across model sizes (20M to 1.9B parameters), across compute scales (10 to 10^5
accelerator hours), across datasets (LM1B, C4, PG19 Rae2020CompressiveTF ), across hardware platforms (TPUv2, TPUv3, TPUv4 and V100), across multiple Transformer codebases using default configurations (Tensor2Tensor, Lingvo, and T5), and across multiple model families (dense Transformers Vaswani2017AttentionIA , sparse mixture-of-experts Switch Transformers Fedus2021SwitchTS , and Synthesizers tay2020synthesizer ). We open source these comparisons to help with the reproducibility of our results.
Our main finding is that the compute savings of Primer over Transformers increase as training cost grows, when controlling for model size and quality. These savings follow a power law with respect to quality when using optimally sized models. To demonstrate Primer's savings in an established training setup, we compare 500M parameter Primer to the original T5 architecture, using the exact configuration used by Raffel et al. 2020t5 applied to auto-regressive language modeling. In this setting, Primer achieves an improvement of 0.9 perplexity given the same training cost, and reaches quality parity with the T5 baseline model using 4.2X less compute. We further demonstrate that Primer's savings transfer to one-shot evaluations by comparing Primer to Transformer at 1.9B parameters in a setup similar to GPT-3 XL brown2020language . There, using 3X less training compute, Primer achieves similar performance to Transformer on both pretraining perplexity and downstream one-shot tasks. Our analysis shows that the improvements of Primer over Transformer can be mostly attributed to two main modifications: squaring ReLU activations and adding a depthwise convolution layer after each Q, K, and V projection in self-attention. These two modifications are simple and can be dropped into existing Transformer codebases to obtain significant gains for auto-regressive language modeling.
2 Search Space and Search Method
Searching Over TensorFlow Programs:
To construct a search space for Transformer alternatives, we use operations from TensorFlow (TF). In this search space, each program defines the stackable decoder block of an auto-regressive language model. Given input tensors of shape $n \times d$ that represent sequences of length $n$ with embedding length $d$, our programs return tensors of the same shape. When stacked, their outputs represent next-token prediction embeddings at each sequence position. Our programs only specify model architectures and nothing else. In other words, the input and output embedding matrices themselves, as well as input preprocessing and weight optimization, are not within the scope of our programs.
Figure 1 shows how programs are constructed in our search space. Each program is built from an evolutionary search DNA, which is an indexed collection of subprograms. Subprogram 0 is the main() function that serves as the execution entry point, and the other subprograms are part of the DNA's subprogram bank. Each subprogram is an indexed array of instructions with no length constraints. An instruction is an operation with a set of input arguments. The operation denotes the function that the instruction executes. Each operation maps to either a TF function from the primitives vocabulary or another subprogram in the DNA's subprogram bank. The primitives vocabulary comprises simple primitive TF functions, such as add, log, and matmul (see Appendix A.1 for details). It is worth emphasizing that high-level building blocks such as self-attention are not operations in the search space, but can be constructed from our low-level operations. The DNA's subprogram bank comprises additional programs that can be executed as functions by instructions. Each subprogram can only call subprograms with a higher index in the subprogram bank, which removes the possibility of cycles.

Each instruction's argument set contains a list of potential argument values for each instruction operation. The set of argument fields represents the union of fields that all the operation primitives use:
Input 1: The index of the hidden state that will be used as the first tensor input. The index of each hidden state is the index of the instruction that produced it, with the subprogram’s input states at indexes 0 and 1. An example of an operation that uses this is sin.
Input 2: The index of the second tensor input. This is only used by operations that are binary with respect to tensor inputs. An example of an operation that uses this is add.
Constant: A real-valued constant. An example of an operation that uses this is max; tf.math.maximum(x, C) with C = 0 is how we express the Transformer's ReLU activation.
Dimension Size: An integer representing the output dimension size for transformations that utilize weight matrices. An example of an operation that uses this is conv 1x1, the dense projection used by the Transformer’s attention projections and feed forward portions. See Appendix A.2 for how we employ relative dimensions so2019evolved to resize our models.
Our search subprograms are converted to TF programs by converting each subprogram instruction to a corresponding line of TF code, one at a time in indexing order. To create the TF line, the instruction operation is mapped to the corresponding TF primitive function or DNA subprogram, and any relevant arguments are plugged in (see Appendix A.1 for the full TF primitives vocabulary, including argument mappings); the other arguments are ignored. The TF tensor that is generated by the final instruction is taken as the subprogram output. We do not use TF Eager and so a useful property of the constructed programs is that irrelevant nodes that do not contribute to the programs’ outputs are ignored as per TF’s original deferred execution design Abadi2016TensorFlowAS . See Figure 2 for an illustration of how subprograms are converted to TF graphs and see Appendix A.2 for more details on how TF graphs are constructed, including how we handle causal masking.
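To make the encoding concrete, the subprogram execution scheme described above can be sketched as a tiny interpreter. This is a minimal illustration, not the paper's implementation: the primitive names and signatures here are our own, and real primitives such as matmul and conv 1x1 carry weights and additional arguments.

```python
import numpy as np

# Toy primitives vocabulary. Each primitive takes (input1, input2, constant);
# unary ops ignore input2, ops without a constant ignore it.
PRIMITIVES = {
    "add": lambda a, b, c: a + b,
    "mul": lambda a, b, c: a * b,
    "max": lambda a, b, c: np.maximum(a, c),  # max(x, C): expresses ReLU when C = 0
    "sin": lambda a, b, c: np.sin(a),
}

def run_subprogram(instructions, x0, x1):
    """Execute one subprogram. Hidden states 0 and 1 are the subprogram inputs;
    each instruction appends a new hidden state; the final state is the output."""
    states = [x0, x1]
    for op, in1, in2, const in instructions:
        a = states[in1]
        b = states[in2] if in2 is not None else None
        states.append(PRIMITIVES[op](a, b, const))
    return states[-1]

# A one-instruction "ReLU" program: max(input, 0.0).
relu_program = [("max", 0, None, 0.0)]
```

In the real search space, instructions that do not feed into the final output are pruned automatically by TF's deferred graph execution; a standalone interpreter like this one would need to do that pruning itself.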
The goal of our evolutionary search is to find the most training-efficient architecture in the search space. To do this, we give each model a fixed training budget (24 TPUv2 hours) and define its fitness as its perplexity on the One Billion Words Benchmark (LM1B) chelba2014billion in Tensor2Tensor tensor2tensor . This approach, which we call an implicit efficiency objective via fixed training budget, contrasts with previous architecture search works that explicitly aim to reduce training or inference step time when optimizing for efficiency tan2019efficientnet ; Tan2019MnasNetPN ; Cai2019ProxylessNASDN ; Elsken2019EfficientMN . Our objective is different in that the trade-off between step time and sample efficiency is implicit: a modification that doubles step time but triples sample efficiency is a good modification in our search, as it ultimately makes the architecture more compute efficient. Indeed, the modifications we find to be most beneficial, squaring ReLUs and adding depthwise convolutions to attention, increase training step time. However, they improve the sample efficiency of the model so much that they decrease the total compute needed to reach a target quality, by drastically reducing the number of training steps needed to get there.

The search algorithm we use is Regularized Evolution Real2019RegularizedEF with hurdles so2019evolved . We configure our hurdles using a percentile passing bar and space them such that equal compute is invested in each hurdle band; this reduces the search cost by a factor of 6.25X compared to the same experiment with full model evaluations (see Appendix A.3 for more details). Additionally, we use 7 training hours as a proxy for a full day's training, because a vanilla Transformer comes within 90% of its 24 hour training perplexity with just 7 hours of training. This reduces the search cost further by a factor of 3.43X, for a total compute reduction factor of 21.43X.
So, although our target is to improve 24 hour performance, it only takes about 1.1 hours to evaluate an individual on average (see Appendix A.4
for more search specifics, including mutation details and hyperparameters). We run our search for 25K individuals and retrain the top 100 individuals on the search task to select the best one.
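The cost-reduction arithmetic above can be checked directly; this is a quick illustrative calculation using only the factors stated in the text.

```python
# Search-cost reduction factors quoted in the text.
hurdle_reduction = 6.25        # from hurdle banding vs. full model evaluations
proxy_reduction = 24 / 7       # 7 training hours as a proxy for a 24 hour target

total_reduction = hurdle_reduction * proxy_reduction  # ≈ 21.43X
avg_eval_hours = 24 / total_reduction                 # ≈ 1.12 hours per individual
print(total_reduction, avg_eval_hours)
```

The result matches the quoted 21.43X total reduction and the "about 1.1 hours" average evaluation time.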
Our search space is different from previous search spaces (see the architecture search survey by Elsken2019NeuralAS ), which are often heavily biased such that random search performs well (see analyses by Li2019RandomSA ; Sciuto2020EvaluatingTS ; Bender2020CanWS ). Because our search space does not have this bias, 78% of random programs in our space with length equal to a Transformer program cannot train for more than five minutes, due to numerical instability. Because of this open-endedness and abundance of degenerate programs, it is necessary to initialize the search population with copies of the Transformer so2019evolved (see Figure 3 for the seed's input embedding size, feed forward upwards projection size, and number of layers). To apply this initialization to our search space, we must determine how to divide the Transformer program into subprograms. To do this, we divide along the lines of the machine learning concepts that constitute it. For instance, we create one subprogram each for self-attention, ReLU and layer norm, using commonly used implementations (see Appendix A.5 for the complete list). We call this method conceptual initialization because it introduces a bias to the search through initialization, while leaving the search space for evolution and the action space for mutations open-ended. This contrasts with the large amount of previous work that introduces bias through the search space. Although some works have explored search spaces that are open-ended like ours on miniature tasks Real2020AutoMLZeroEM , we demonstrate that our techniques can scale to full sized deep learning regimes (see Section 4).
We name the discovered model Primer, which stands for PRIMitives searched transformER (See Appendix Figure 23 for the full program). Primer shows significant improvement when retrained on the search task, requiring less than half the compute of Transformer to reach the same quality (Figure 6). In Section 4, we additionally show that Primer makes equally large gains when transferred to other codebases, training regimes, datasets, and downstream one-shot tasks.
A core motivation of this work is to develop simple techniques that can be easily adopted by language modeling practitioners. To accomplish this, we perform ablation tests across two codebases (T5 2020t5 and Tensor2Tensor tensor2tensor ) and determine which Primer modifications are generally useful (Appendix Figure 26). The two that produce the most robust improvements are squaring feed forward ReLUs and adding depthwise convolution to attention multi-head projections (Figure 4). We refer to a Transformer with just these two easy modifications as Primer-EZ; this is our recommended starting point for language modeling practitioners interested in using Primer. We now explain these modifications and then measure their empirical effectiveness.
Squared ReLU:
The most effective modification is the improvement from a ReLU activation to a squared ReLU activation in the Transformer's feed forward block. Rectified polynomials of varying degrees have been studied in the context of neural network activation functions Krotov2016DenseAM , but are not commonly used; to the best of our knowledge, this is the first time such rectified polynomial activations have been demonstrated to be useful in Transformers. Interestingly, the effectiveness of higher order polynomials Jayakumar2020MultiplicativeI can also be observed in other effective Transformer nonlinearities, such as GLU dauphin2017language variants like ReGLU shazeer2020glu ($\max(XW_1, 0) \odot XW_2$, where $\odot$ is an element-wise product) and point-wise activations like approximate GELU Hendrycks2016BridgingNA ($0.5x(1 + \tanh(\sqrt{2/\pi}(x + 0.044715x^3)))$). However, squared ReLU has drastically different asymptotics compared to the most commonly used activation functions: ReLU, GELU and Swish (Figure 5, left side). Squared ReLU does have significant overlap with ReGLU, and in fact is equivalent when ReGLU's $W_1$ and $W_2$ weight matrices are the same and squared ReLU is immediately preceded by a linear transformation with that weight matrix. This leads us to believe that squared ReLUs capture the benefits of these GLU variants, while being simpler, without additional parameters, and delivering better quality (Figure 5, right side).
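The tied-weights equivalence rests on the elementwise identity relu(u) * u == relu(u)**2, which can be verified numerically. A minimal NumPy sketch (our notation, not the paper's code):

```python
import numpy as np

def squared_relu(x):
    """Squared ReLU activation: max(x, 0)^2, applied elementwise."""
    return np.maximum(x, 0.0) ** 2

def reglu(x, w1, w2):
    """ReGLU: max(x W1, 0) elementwise-multiplied by the linear gate x W2."""
    return np.maximum(x @ w1, 0.0) * (x @ w2)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 16))

# With ReGLU's two weight matrices tied (w2 = w1), ReGLU reduces to a linear
# transform followed by squared ReLU, since relu(u) * u == relu(u)**2:
# both sides equal u^2 where u > 0 and 0 elsewhere.
assert np.allclose(reglu(x, w, w), squared_relu(x @ w))
```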
Multi-DConv-Head Attention (MDHA):
Another effective modification is adding 3x1 depthwise convolutions after each of the multi-head projections for query $Q$, key $K$ and value $V$ in self-attention. These depthwise convolutions are performed over the spatial dimension of each dense projection's output. Interestingly, this ordering of pointwise followed by depthwise convolution is the reverse of typical separable convolution, which we find to be less effective in Appendix A.6. We also find that wider depthwise convolutions and standard convolutions not only do not improve performance, but in several cases hurt it. Although depthwise convolutions have been used in Transformers before wei2018qanet ; gulati2020conformer , using them after each dense head projection has not been done to the best of our knowledge. MDHA is similar to Convolutional Attention Wu2021CvTIC , which uses separable convolution instead of depthwise convolution and does not apply convolution operations per attention head as we do.
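A minimal sketch of the depthwise convolution component of MDHA, applied to one head's projection output. We assume causal (left) padding here so that position i only mixes in positions <= i, as auto-regressive modeling requires; names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def depthwise_conv_3x1(x, kernel):
    """3x1 depthwise convolution over the sequence dimension.

    x:      (seq_len, head_dim) output of one head's dense projection.
    kernel: (3, head_dim), one independent 3-tap filter per channel.
    Causal left-padding with zeros: output position i sees inputs i-2 .. i.
    """
    padded = np.concatenate([np.zeros((2, x.shape[1])), x], axis=0)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        # Window padded[i:i+3] covers original positions i-2, i-1, i.
        out[i] = (padded[i:i + 3] * kernel).sum(axis=0)
    return out
```

In practice this would be applied to each head of Q, K, and V after the pointwise (conv 1x1) projection, which is the "pointwise then depthwise" ordering the text contrasts with standard separable convolution.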
The other Primer modifications are less effective. Graphs for each modification can be found in Appendix A.5 and an ablation study can be found in Appendix A.7. We briefly describe the modifications and their usefulnesses here:
Shared Q and K Depthwise Representation: Primer shares some weight matrices between $Q$ and $K$: one of the two is created using the previously described MDHA projection, and the other is computed from it via an additional learnable weight matrix. We find that this generally hurts performance.
Pre and Post Normalization: The standard practice for Transformers has become putting normalization before both the self-attention and feed forward transformations Baevski2019AdaptiveIR ; Xiong2020OnLN . Primer uses normalization before self-attention but applies the second normalization after the feed forward transformation. We find this is helpful in some but not all cases.
Custom Normalization: Primer uses a modified version of layer normalization Ba2016LayerN that substitutes one term of the standard formulation (the exact graph is given in Appendix A.5), but we find this is not always effective.
12X Bottleneck Projection: The discovered model uses a smaller model dimension of 384 (compared to the baseline's 512) and a larger feed forward upwards projection size of 4608 (compared to the baseline's 2048). We find this larger projection improves results dramatically at smaller model sizes (35M parameters), but is less effective for larger models, as has been previously noted Kaplan2020ScalingLF . For this reason, we do not include this modification when referencing Primer or Primer-EZ.
Post-Softmax Spatial Gating: The discovered model has a set of per-channel learnable scalars after the attention softmax, which improves perplexity for fixed length sequences. However, these scalars cannot be applied to variable sequence lengths and so we do not include this modification in Primer for our experiments.
Extraneous Modifications: There are a handful of additional modifications that produce no meaningful difference in the discovered architecture, such as hidden states being multiplied by -1.12. Having verified that these modifications neither help nor hurt quality, we exclude them from discussion in the main text and do not include them when experimenting with Primer. These extraneous modifications can still be found in Appendix A.5.
In our experiments, we compare Primer against three Transformer variants:
Vanilla Transformer: The original Transformer Vaswani2017AttentionIA with ReLU activations and layer normalization.
Transformer+GELU: A commonly used variant of the vanilla Transformer with GELU activations Hendrycks2016BridgingNA .
Transformer++: A Transformer with the following enhancements: RMS normalization Zhang2019RootMS , Swish activation Ramachandran2018SearchingFA and a GLU multiplicative branch dauphin2017language in the feed forward inverted bottleneck (SwiGLU) shazeer2020glu . These modifications were benchmarked and shown to be effective in T5 narang2021 .
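For reference, the SwiGLU feed forward variant used in Transformer++ can be sketched as follows. This is an illustrative NumPy sketch; the weight names are our own, and the downward projection is folded into w_out.

```python
import numpy as np

def swish(x):
    """Swish (SiLU) activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_up, w_gate, w_out):
    """SwiGLU feed forward block: swish(x W_up) gated elementwise by the
    linear branch (x W_gate), then projected back down with W_out."""
    return (swish(x @ w_up) * (x @ w_gate)) @ w_out
```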
We conduct our comparisons across three different codebases: Tensor2Tensor (T2T) tensor2tensor , T5 2020t5 , and Lingvo Shen2019LingvoAM . Tensor2Tensor is the codebase we use for searching, and so the majority of our side-by-sides are done in T5 and Lingvo to prove transferability. In all cases, we use the default Transformer hyperparameters for each codebase, with regularization disabled. See Appendix A.8 for more hyperparameter details.

In the following sections, we present our results in four main experiments on auto-regressive language modeling. First, we show that Primer outperforms the baseline models on the search task. Next, we show that the relationship between Primer's compute savings over Transformers and model quality follows a power law at optimal model sizes. These savings also transfer across datasets and codebases. Then, we study Primer's gains in an established training regime and show that it enables 4.2X compute savings at the 500M parameter size using full compute T5 training. Finally, we demonstrate that these gains transfer to the pretraining and one-shot downstream task setup established by GPT-3 brown2020language .
4.1 Search Task Comparison
We first analyze Primer’s performance on the search task: LM1B language modeling with sequence length 64, 35M model parameters, batches of 4096 tokens and 24 hours of training. We compare against the baseline models in both Tensor2Tensor (T2T) tensor2tensor and T5 2020t5 and on TPUv2s and V100 GPUs. We grade each model’s performance according to how much faster it reaches the vanilla Transformer’s final quality, which we will refer to as its speedup factor. Figure 6 shows that Primer provides a speedup factor of 1.7X or more over Transformer in all cases. Figure 6 also shows that both Primer and Primer-EZ generalize to other hardware platforms and codebases.
Next we study the scaling laws of Primer. Here we compare Primer to our baselines over many sizes by training each model using every permutation of layers, initial embedding size, and feed forward upwards projection ratio, creating a parameter range from 23M to 385M. The results, shown in Figure 7, corroborate previous claims that, at optimal parameter sizes, the relationship between compute and language model quality roughly follows a power law Kaplan2020ScalingLF . That is, the relationship between validation loss, $l$, and training compute, $c$, follows $l = a c^{-k}$, for empirical constants $a$ and $k$. This is represented as a line in double log space (Figure 7): $\log l = -k \log c + \log a$. However, these lines are not the same for each architecture: they are roughly parallel, but shifted up and down. In Appendix A.9 we show that, given a vertical spacing of $\epsilon$ between two such parallel lines, the compute savings, $s$, of the superior model also follow a power law with respect to quality. The intuition behind this is that $e^{\epsilon/k}$ is a constant compute reduction factor for all quality levels $l$, and thus a power law investment of training compute with relation to $l$ results in a power law savings with relation to $l$ as well (see Appendix A.9).

Primer also has the capacity to improve inference, despite our search focusing on training compute. Figure 8 shows a Pareto front comparison of quality vs. inference cost, using forward pass timing as a proxy for inference. We use forward pass timing as a proxy because there are multiple ways to decode a language model, each with varying compute costs. A more in-depth study could analyze Primer's inference performance across different decoding methods, serving platforms, datasets, and so on, but that is beyond the scope of this work.
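The parallel-lines argument can be sanity-checked numerically. A small sketch with made-up constants, assuming the power-law form $l = a c^{-k}$ described above:

```python
import math

# Two architectures with the same power-law slope k but a vertical log-space
# offset eps (the better architecture's line sits eps lower). The constants
# here are illustrative, not fitted values from the paper.
k, a, eps = 0.05, 10.0, 0.01
a_better = math.exp(math.log(a) - eps)  # intercept of the shifted (better) line

target_loss = 4.0
c_base = (a / target_loss) ** (1 / k)          # compute needed by the baseline
c_better = (a_better / target_loss) ** (1 / k)  # compute needed by the better model

# The compute reduction factor is exp(eps / k), independent of target_loss.
assert math.isclose(c_base / c_better, math.exp(eps / k))
```

Because the ratio is constant for every target loss, the absolute savings c_base - c_better inherit the power-law shape of c_base itself.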
4.2 Primer Transferability to Other Codebases, Datasets, and Model Types
We now study Primer's ability to transfer to larger datasets, PG19 and C4, in another codebase, T5. We additionally scale up to a higher compute regime that has been used as a proxy for large scale training by previous studies narang2021 ; 2020t5 : the batches are increased to 65K tokens, the sequence length is a longer 512, each decoder has 110M parameters, and each model is trained for 525K steps on 4 TPUv3 chips. We also continue training each model to 1M steps to study the effect of larger compute budgets on Primer savings. The results, shown in Figure 9, indicate that the Primer models are as strong in larger data, higher compute regimes as they are in the smaller LM1B regime. Compared to the vanilla baseline, Primer and Primer-EZ are at least 1.8X more efficient at the end of training on both PG19 and C4.
Figure 9 also shows that the Primer modifications are compatible with other efficient model families, such as large sparse mixture-of-experts like Switch Transformer Fedus2021SwitchTS and efficient Transformer approximations like Synthesizer tay2020synthesizer . For these experiments, we use the T5 implementations provided by Narang et al. narang2021 . The Primer-EZ techniques of added depthwise convolutions and squared ReLUs reduce Switch Transformer’s compute cost by a factor of 1.5X; this translates to a 0.6 perplexity improvement when controlling for compute (see Appendix A.10). Adding squared ReLUs to Synthesizer reduces training costs by a factor of 2.0X and improves perplexity by 0.7 when fully trained.
4.3 Large Scale T5 Auto-Regressive Language Model Training
In large scale compute configurations, the Primer compute savings ratios are even higher. To demonstrate Primer's savings in an established high compute training setup, we scale up to the full T5 compute regime, copying Raffel et al. 2020t5 exactly. This is the same as the C4 configuration in the previous section, but uses batches of 1M tokens, 64 TPUv3 chips, and 537M parameters. Primer is 4.2X more compute efficient than the original T5 model and 2X more efficient than our strengthened Transformer++ baseline (Table 1).

The reason the savings are even better here is that, at fixed model sizes, more invested compute yields higher Primer compute savings. Figure 10 shows how the fraction of compute Primer needs to achieve parity with the original T5 architecture shrinks as the models are trained for longer; this is due to the asymptotic nature of both the control and variable perplexity training curves. This differs from the power law savings described in Appendix A.9. There, we use the optimal number of parameters for each compute budget, and so the compute saving factor, $e^{\epsilon/k}$, remains constant. For fixed model sizes, the compute saving factor grows as more compute is invested, meaning that compute savings can exceed the power law estimation. Note that this means comparisons such as the ones given here can be "gamed" by investing more compute than is necessary for baseline models. It is for this reason that we use an exact replica of Raffel et al.'s 2020t5 training regime: to demonstrate Primer's savings in an already published training configuration.
4.4 Primer Transferability to Downstream One-Shot Tasks
In our final comparison, we demonstrate that Primer's improvements also hold in the pretraining and one-shot downstream task transfer regime. Recent trends in language modeling have moved towards training large models on large datasets, which is referred to as "pretraining." These models are then transferred to unseen datasets and tasks, and, without much or any additional training, demonstrate the capacity to perform well on those "downstream" tasks devlin2018bert ; Dai2015SemisupervisedSL . In the decoder-only auto-regressive language modeling configuration we study here, the most impressive results have been achieved by GPT-3 brown2020language , which showed that large language models can exhibit strong performance on unseen tasks given only one example – referred to as "one-shot" learning. In this section, we demonstrate that Primer's training compute savings stretch beyond reaching a target pretraining perplexity and indeed transfer to downstream one-shot task performance. To do this, we replicate the GPT-3 pretraining and one-shot evaluation setup. (The development of the training dataset and evaluation pipeline used in this section is its own standalone work; full details will be released in a separate technical report.) This replication is not exactly the same as the one used for GPT-3 because GPT-3 was not open sourced. Thus, these experiments are not meant to compare directly to GPT-3, as there are configuration differences. Instead, they serve as a controlled comparison of the Transformer and Primer architectures. We conduct these experiments in the Lingvo codebase using a proprietary pretraining dataset. The downstream tasks are configured in the same one-shot way described by Brown et al. brown2020language , with a single prefix example fed into each model alongside each task's inputs.
We compare (1) a baseline 1.9B parameter Transformer with GELU activations, meant to approximate the GPT-3 XL architecture, and (2) the full Primer without the shared Q and K representation, which only hurts performance according to Appendix A.7. Each model is trained using batches of 2M tokens on 512 TPUv4 chips for 140 hours (71.8K total accelerator hours, or 1M train steps). We once again use the T5 training hyperparameters without any additional tuning.
Figure 11 shows that Primer achieves the same pretraining perplexity and one-shot downstream performance as Transformer+GELU while using 3X less compute. Table 6 in the Appendix gives the exact performance numbers for each of the 27 evaluated downstream tasks. Primer, despite using 3X less compute, outperforms Transformer+GELU on 5 tasks, does worse on 1 task, and performs equivalently on the remaining 21 tasks. The same table shows that, when given equivalent compute, Primer outperforms Transformer+GELU on 15 tasks, does worse on 2 tasks, and performs equivalently on the remaining 10 tasks. This result shows that not only can Primer improve language modeling perplexity, but the improvements also transfer to downstream NLP tasks.
Primer’s Return on Investment:
The compute savings in this large-scale experiment demonstrate the return on investment for the Primer search. The search for Primer itself cost 2.14E+21 FLOPs. Training Transformer for this experiment cost 2.96E+22 FLOPs, which means the compute saved by Primer to reach the same performance is 1.98E+22 FLOPs. Thus, for this single training, the return on investment for the architecture search is roughly 9.24X. Note that the search cost is a one-time cost, and Primer can be reused in future trainings to save more compute. More details on energy cost and carbon emission estimates can be found in Appendix A.13.
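The return-on-investment arithmetic above can be checked directly (a simple calculation using the figures quoted in the text, not new data):

```python
# FLOP figures quoted in the text.
search_cost = 2.14e21             # one-time cost of the Primer search
transformer_cost = 2.96e22        # cost of training the Transformer baseline

# Primer reaches the same performance with roughly 3X less compute.
primer_cost = transformer_cost / 3
savings = transformer_cost - primer_cost   # ≈ 1.97e22 FLOPs (text rounds to 1.98e22)
roi = savings / search_cost                # ≈ 9.2X, matching the quoted ~9.24X
print(round(roi, 2))
```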
There are limitations to this study. First, our model parameter sweeps are approximately an order of magnitude smaller than the sweeps performed in the original study by Kaplan et al. Kaplan2020ScalingLF . Likewise, although our large-scale models use a significant amount of compute, they are still orders of magnitude smaller than state-of-the-art models such as the full-scale GPT-3 brown2020language . Another limitation is that we focus primarily on decoder-only models, while encoder-only devlin2018bert ; yang2019xlnet ; liu2019roberta and encoder-decoder sequence models sutskever2014sequence ; Vaswani2017AttentionIA ; Adiwardana2020TowardsAH are still widely used. In Appendix A.12, we perform encoder-decoder masked language modeling comparisons in T5, but do not study the results in significant depth. The main finding there is that, although Primer modifications improve upon vanilla Transformer, they perform only as well as Transformer++. This result suggests that architectural modifications that work well for decoder-only auto-regressive language models may not necessarily be as effective for encoder-based masked language models. Developing an architecture that also works well for masked language models is a topic of our future research.
The main motivation of this work is to develop simple and practical changes to Transformers that can be easily adopted. To that end, we provide answers to some questions that practitioners may ask:
Are the Primer training compute savings going to be the same in all setups? No. Across our own provided experiments, Primer yields various compute savings. This is because the compute savings depend on hardware specifics, deep learning library operation speeds, model sample efficiencies on specific tasks, and other factors that may vary across setups. We use the exact replica of T5 training as a demonstration of what savings look like in an established configuration (4.2X), but expect results to vary across configurations.
Can Primer improve BERT devlin2018bert ? This work has focused on the specific task of auto-regressive language modeling, which, with the development of GPT-3, proves to be important for both traditional NLP applications as well as generative applications. We have only briefly investigated Primer’s application to masked language modeling and encoder-decoder models (Appendix A.12). Our investigations show that, while Primer improves upon vanilla Transformer, it is not obviously better than Transformer++. Thus, modifications that work well for auto-regressive language modeling may not be as effective for masked language modeling. Future work could investigate if the Primer modifications can be integrated into encoder-decoder and encoder-only models in a more effective way that can improve models like BERT. Future work could also apply the search method described here to finding better encoder-based masked language models.
Do hyperparameter configurations need to be retuned to use Primer? Our intention is for Primer modifications to not require any additional hyperparameter tuning. To that end, in our experiments we did not tune any hyperparameters, and instead used the Transformer hyperparameters from established libraries. However, Primer may work even better with additional tuning.
Is Primer-EZ better than Primer? In our comparison experiments, we find that Primer-EZ is sometimes better than Primer in the T5 codebase. However, in application to other codebases, such as Lingvo and T2T, we find that the full Primer can give improved performance over Primer-EZ. Thus, we recommend that practitioners first try using Primer-EZ for its ease of implementation and then move on to implementing the full Primer if they are interested in achieving further gains.
Recommendations and Future Directions:
We recommend the adoption of Primer and Primer-EZ for auto-regressive language modeling because of their strong performance, simplicity, and robustness to hyperparameter and codebase changes. To prove their potential, we simply dropped them into established codebases and, without any changes, showed that they can give significant performance boosts. Furthermore, in practice, additional tuning could further improve their performance. We also hope our work encourages more research into the development of efficient Transformers. For example, an important finding of this study is that small changes to activation functions can yield more efficient training. In the effort to reduce the cost of Transformers, more investment in the development of such simple changes could be a promising area for future exploration.
We thank Zhen Xu for his help with infrastructure. We also thank Gabriel Bender, Hallie Cramer, Andrew Dai, Nan Du, Yanping Huang, Daphne Ippolito, Norm Jouppi, Lluis-Miquel Munguia, Sharan Narang, Ruoming Pang, David Patterson, Yanqi Zhou, and the Google Brain Team for their help and feedback.
-  Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
-  Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2018.
-  Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, 2019.
-  Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
-  Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67, 2020.
-  Daniel Adiwardana, Minh-Thang Luong, David R. So, J. Hall, Noah Fiedel, R. Thoppilan, Z. Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, and Quoc V. Le. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977, 2020.
-  Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems, 2020.
-  William Fedus, Barret Zoph, and Noam M. Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021.
-  Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
-  Martín Abadi, P. Barham, J. Chen, Z. Chen, Andy Davis, J. Dean, M. Devin, Sanjay Ghemawat, Geoffrey Irving, M. Isard, M. Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, D. Murray, Benoit Steiner, P. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Y. Yu, and Xiaoqiang Zhang. Tensorflow: A system for large-scale machine learning. In OSDI, 2016.
-  Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc V. Le, and Alex Kurakin. Large-scale evolution of image classifiers. In ICML, 2017.
-  Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In ICLR, 2018.
-  David R. So, Chen Liang, and Quoc V. Le. The evolved transformer. In ICML, 2019.
-  Hanxiao Liu, Andrew Brock, Karen Simonyan, and Quoc V Le. Evolving normalization-activation layers. In NeurIPS, 2020.
-  Xin Yao. Evolving artificial neural networks. Proceedings of the IEEE, 87(9):1423–1447, 1999.
-  Jurgen Schmidhuber. Evolutionary principles in self-referential learning. (on learning how to learn: The meta-meta-… hook.). Diploma thesis, Technische Universitat Munchen, Germany, 1987.
-  Kenneth Stanley, Jeff Clune, Joel Lehman, and Risto Miikkulainen. Designing neural networks through neuroevolution. Nature Machine Intelligence, 1:24–35, 2019.
-  Alec Radford, Jeffrey Wu, R. Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. In Technical report, OpenAI, 2019.
-  Timo Schick and Hinrich Schütze. It’s not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118, 2021.
-  Sinong Wang, Han Fang, Madian Khabsa, Hanzi Mao, and Hao Ma. Entailment as few-shot learner. arXiv preprint arXiv:2104.14690, 2021.
-  Tianyu Gao, Adam Fisch, and Danqi Chen. Making pre-trained language models better few-shot learners. arXiv preprint arXiv:2012.15723, 2020.
-  Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, and T. Lillicrap. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507, 2020.
-  Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, and Che Zheng. Synthesizer: Rethinking self-attention in transformer models. arXiv preprint arXiv:2005.00743, 2020.
-  Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. In Interspeech, 2014.
-  Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. Tensor2tensor for neural machine translation. arXiv preprint arXiv:1803.07416, 2018.
-  Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML, 2019.
-  Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V. Le. Mnasnet: Platform-aware neural architecture search for mobile. In CVPR, pages 2815–2823, 2019.
-  Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2019.
-  Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Efficient multi-objective neural architecture search via lamarckian evolution. In ICLR, 2019.
-  Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In AAAI, 2019.
-  Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. Journal of Machine Learning Research, 20(55):1–21, 2019.
-  Liam Li and Ameet S. Talwalkar. Random search and reproducibility for neural architecture search. In UAI, 2019.
-  Kaicheng Yu, Christian Sciuto, Martin Jaggi, Claudiu Musat, and Mathieu Salzmann. Evaluating the search phase of neural architecture search. In ICLR, 2020.
-  Gabriel Bender, Hanxiao Liu, Bo Chen, Grace Chu, Shuyang Cheng, Pieter-Jan Kindermans, and Quoc V. Le. Can weight sharing outperform random architecture search? An investigation with tunas. In CVPR, 2020.
-  Esteban Real, Chen Liang, David R. So, and Quoc V. Le. Automl-zero: Evolving machine learning algorithms from scratch. In ICML, 2020.
-  Dmitry Krotov and John J. Hopfield. Dense associative memory for pattern recognition. In Advances in Neural Information Processing Systems, 2016.
-  Siddhant M. Jayakumar, Jacob Menick, Wojciech M. Czarnecki, Jonathan Schwarz, Jack W. Rae, Simon Osindero, Y. Teh, Tim Harley, and Razvan Pascanu. Multiplicative interactions and where to find them. In ICLR, 2020.
-  Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In ICML, 2017.
-  Noam Shazeer. Glu variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.
-  Dan Hendrycks and Kevin Gimpel. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. arXiv preprint arXiv:1606.08415, 2016.
-  Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. QANet: Combining local convolution with global self-attention for reading comprehension. In ICLR, 2018.
-  Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. Conformer: Convolution-augmented transformer for speech recognition. In Interspeech, 2020.
-  Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan, and Lei Zhang. CvT: Introducing convolutions to vision transformers. arXiv preprint arXiv:2103.15808, 2021.
-  Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2019.
-  Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, S. Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, L. Wang, and T. Liu. On layer normalization in the transformer architecture. arXiv preprint arXiv:2002.04745, 2020.
-  Jimmy Ba, Jamie Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
-  Biao Zhang and Rico Sennrich. Root mean square layer normalization. In NeurIPS, 2019.
-  Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2018.
-  Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Févry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, and Colin Raffel. Do transformer modifications transfer across implementations and applications? arXiv preprint arXiv:2102.11972, 2021.
-  Jonathan Shen, P. Nguyen, Yonghui Wu, Z. Chen, M. Chen, Ye Jia, Anjuli Kannan, T. Sainath, Yuan Cao, C. Chiu, Yanzhang He, J. Chorowski, Smit Hinsu, S. Laurenzo, James Qin, Orhan Firat, Wolfgang Macherey, Suyog Gupta, Ankur Bapna, Shuyuan Zhang, Ruoming Pang, Ron J. Weiss, Rohit Prabhavalkar, Qiao Liang, Benoit Jacob, Bowen Liang, HyoukJoong Lee, Ciprian Chelba, Sébastien Jean, Bo Li, M. Johnson, Rohan Anil, Rajat Tibrewal, Xiaobing Liu, Akiko Eriguchi, Navdeep Jaitly, Naveen Ari, Colin Cherry, Parisa Haghani, Otavio Good, Youlong Cheng, R. Álvarez, Isaac Caswell, Wei-Ning Hsu, Zongheng Yang, Kuan-Chieh Wang, Ekaterina Gonina, Katrin Tomanek, Ben Vanik, Zelin Wu, Llion Jones, M. Schuster, Y. Huang, Dehao Chen, Kazuki Irie, George F. Foster, John Richardson, Uri Alon, et al. Lingvo: a modular and scalable framework for sequence-to-sequence modeling. arXiv preprint arXiv:1902.08295, 2019.
-  Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, 2015.
-  Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 2014.
-  Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In Proceedings of the 30th International Conference on Machine Learning, 2013.
-  Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research, 18(185):1–52, 2018.
-  Thomas Helmuth, N. McPhee, and L. Spector. Program synthesis using uniform mutation by addition and deletion. In Proceedings of the Genetic and Evolutionary Computation Conference, 2018.
-  Noam M. Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235, 2018.
-  Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In NAACL-HLT, 2018.
-  Taku Kudo and J. Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In EMNLP, 2018.
-  David Patterson, Joseph Gonzalez, Quoc V. Le, Chen Liang, Lluís-Miquel Munguía, D. Rothchild, David R. So, Maud Texier, and J. Dean. Carbon emissions and large neural network training. arXiv preprint arXiv:2104.10350, 2021.
-  24/7 carbon-free energy: Methodologies and metrics. https://www.gstatic.com/gumdrop/sustainability/24x7-carbon-free-energy-methodologies-metrics.pdf. Accessed: 2021-09-14.
Appendix A
A.1 TensorFlow Primitives Vocabulary
[Table: TensorFlow primitives vocabulary, with columns Name, TF Function, and Argument Mapping (Input 1, Input 2, Constant).]
A.2 Constructing TensorFlow Graphs
TensorFlow graphs are built from DNA programs as described in Section 2 of the main text. Here we provide additional implementation details.
We use relative dimensions  instead of absolute dimensions for each instruction’s “dimension size” argument. This allows us to resize the models to fit within our parameter limits (32M to 38M parameters). The vocabulary for these relative dimensions is [1, 2, 4, 8, 12, 16, 24, 32, 48, 64]. This vocabulary was not tuned.
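As a toy illustration of how relative dimensions might be resolved to absolute sizes, the sketch below scales a single base unit until the model lands in the target parameter range. Everything here is hypothetical: `count_params` is a stand-in for building the model and counting its weights, and its quadratic scaling is an assumption for illustration only.

```python
REL_DIM_VOCAB = [1, 2, 4, 8, 12, 16, 24, 32, 48, 64]  # untuned vocabulary

def count_params(base_unit, rel_dims):
    # Toy stand-in: pretend parameters scale with the square of each
    # resolved dimension, as in dense projection matrices.
    return sum((base_unit * d) ** 2 for d in rel_dims)

def resolve_dims(rel_dims, lo=32_000_000, hi=38_000_000):
    """Find a base unit so the absolute dims give a param count in [lo, hi]."""
    base = 1
    while count_params(base, rel_dims) < lo:
        base += 1
    if count_params(base, rel_dims) > hi:
        raise ValueError("no integer base unit fits the target range")
    return [base * d for d in rel_dims]
```

With relative dims [16, 64, 64], this toy search settles on a base unit of 62, giving roughly 32.5M toy parameters, inside the 32M to 38M window.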
For “constant” and “dimension size” argument fields, we create a shared bank of values that each instruction references. The constants bank holds 2 values and the dimension sizes bank holds 6 values; these numbers were not tuned. Instead of possessing its own individual values for these arguments, each instruction holds an index into these shared banks. This allows multiple instructions to share the same value and to change simultaneously when that value is changed. For example, each of the individual attention multi-head projections for Q, K, and V starts off sharing the same output dimension size, so that they all change simultaneously if that value changes. See A.4 for an example of how these bank values are mutated.
An important part of teacher-forced language model training is that positions cannot “see” the token they are trying to predict. Each position should only get information from previous positions; otherwise, the model will be degenerate when the targets are not provided. To enforce this causal constraint, we add additional overhead to operations that move information spatially, masking out any information from future positions. For example, when applying convolutions we follow the standard practice of shifting the inputs spatially by (kernel width - 1) so that each position only receives information from previous positions.
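The shift-then-convolve trick can be sketched with a toy 1-D example (NumPy, illustrative only; not the actual TensorFlow implementation):

```python
import numpy as np

def causal_conv1d(x, kernel):
    """Toy causal 1-D convolution along the sequence axis.

    Left-pads the input by (kernel width - 1) zeros before convolving,
    so the output at position t depends only on inputs at positions <= t.
    """
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])  # the causal shift
    return np.array([np.dot(padded[t:t + k], kernel) for t in range(len(x))])

x = np.array([1.0, 2.0, 3.0, 4.0])
y = causal_conv1d(x, np.array([0.5, 0.5, 0.5]))  # y[0] depends only on x[0]
```

Changing any future token leaves earlier outputs untouched, which is exactly the causal constraint that teacher forcing requires.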
To enable multi-head capabilities for the Transformer search seed, we add a meta argument to our instructions called “branching.” This argument can take any value in [1, 2, 4, 8, 16] and determines how many times that instruction is executed in parallel, with the resulting tensors being concatenated together along their embedding axes. Branching can be used with any of the TensorFlow primitives as well as with any of a DNA’s subprograms. This allows us to initialize the search with multi-head self-attention by branching subprogram 1 (self-attention) 8 times (see Appendix A.5 for subprogram implementations). Primer does not utilize this branching capability in any meaningful way, beyond using the initialized multi-head attention.
Resolving Dimension Mismatches:
We do not constrain how tensor dimensions can be mutated and so programs may be invalid because they perform binary operations on tensors with incompatible sizes. For example, a program may describe adding together two tensors with differing embedding sizes. To resolve these dimension mismatch issues we deterministically pseudorandomly set one of the tensor dimensions to match the other.
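One way such a resolution could be implemented is sketched below: the choice of which operand's dimension wins is derived from a hash of a stable instruction id, so the same program always resolves the same way. The function name and hashing scheme are our assumptions for illustration, not necessarily the implementation used in the search.

```python
import hashlib

def resolve_mismatch(dim_a, dim_b, instruction_id):
    """Deterministically pseudorandomly make two tensor dims compatible.

    Hashes a stable instruction id to pick which operand's dimension is
    kept; the other tensor is (conceptually) resized to match it.
    """
    digest = hashlib.md5(str(instruction_id).encode()).digest()
    keep_a = digest[0] % 2 == 0
    target = dim_a if keep_a else dim_b
    return target, target
```

Because the choice depends only on the instruction id, re-executing the same program always yields the same graph.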
A.3 Halving Hurdles
We configure our hurdles such that the top 50% of individuals pass each hurdle, according to fitness. We space the hurdles in such a way that the expected amount of compute devoted to training each hurdle band is roughly equal at the end of the search. That is, given that our maximum amount of training compute for an individual is 7 hours or 25,200 seconds (s), we construct hurdles at the 812.9s, 2438.7s, 5690.3s, and 12,193.5s marks. Thus, 1/5 of the compute budget is devoted to training every individual up to the first hurdle (812.9s), 1/5 of the compute budget is devoted to training the 50% of individuals that are trained from the first to the second hurdle (2438.7s - 812.9s = 1625.8s), 1/5 of the compute budget is devoted to training the 25% of individuals that are trained from the second to the third hurdle (5690.3s - 2438.7s = 3251.6s), etc. This configuration strategy, which we refer to as “halving hurdles,” requires setting only one hyperparameter, the number of hurdles, and removes the need to set hurdle threshold values and comparison steps, as has been previously done [13, 35]. We choose four hurdles because five hurdles would require the first hurdle to be anchored at less than ten minutes of training, which we find empirically to be too noisy of a signal. Using hurdles in this way decreases the average train time per model to 4064s, or about 1 hour and 8 minutes, reducing the compute cost by a factor of 6.2X. This strategy is not unlike bandit algorithms such as Successive Halving and Hyperband; however, we do not use a static population of individuals created a priori, but instead integrate our halving with the changing evolutionary population.
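The hurdle placement follows directly from the equal-compute-per-band condition: with half the population eliminated at each hurdle, band b trains only a 2^-b fraction of individuals, so the hurdle times must be h1, 3·h1, 7·h1, 15·h1, with h1 = budget / (2^(num_hurdles+1) - 1). A minimal sketch (function name ours):

```python
def halving_hurdles(total_budget_s, num_hurdles):
    """Hurdle times giving each training band an equal share of compute.

    With 50% of individuals eliminated at each hurdle, equal per-band
    compute requires hurdle times h1, 3*h1, 7*h1, 15*h1, ... where
    h1 = total_budget_s / (2**(num_hurdles + 1) - 1).
    """
    h1 = total_budget_s / (2 ** (num_hurdles + 1) - 1)
    return [(2 ** (b + 1) - 1) * h1 for b in range(num_hurdles)]

hurdles = halving_hurdles(25_200, 4)  # the 7-hour budget used in the search
```

This reproduces the 812.9s, 2438.7s, 5690.3s, and 12,193.5s marks, and implies an expected train time of 5·h1 ≈ 4064.5s per model, consistent with the 6.2X reduction quoted above (25,200 / 4064.5 ≈ 6.2).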
A.4 Evolution Search Details
We use Regularized Evolution with a population size of 100 and a tournament selection size of 10. These values were not tuned.
To create new candidates in our search, we select a parent uniformly at random from our search population and apply a single mutation to it. We employ five different mutation types (selections and decisions are made uniformly at random unless specified otherwise):
Delete: Remove an instruction from a subprogram.
Insert: Create an instruction and insert it into a subprogram.
Delete and Insert: Perform a delete mutation followed by an insert mutation.
Mutate Field: Select a field from an instruction and change its value.
Swap: Swap the position of two instructions in a randomly selected subprogram. The input tensors for each instruction are also swapped so that the net effect is switching the positions of the instructions in the compute graph.
Mutate Bank Value: Change the value of a relative tensor dimension or constant in the corresponding bank. The values for relative tensor dimensions are selected from their vocabulary (see Appendix A.2). The values for constants are perturbed stochastically: a new value is produced from the previous value using random variables.
After a mutation is applied, we run a light check to see if the resulting candidate’s compute graph is exactly equivalent to the parent’s compute graph. If it is, we perform another mutation.
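The candidate-generation loop just described might look like the following sketch (a hypothetical pseudo-implementation; `mutations` and `graph_fingerprint` stand in for the five mutation types and the light graph-equivalence check):

```python
import random

def make_child(population, mutations, graph_fingerprint, rng=random):
    """One step of candidate generation.

    Sample a parent uniformly, apply a single mutation, and re-mutate
    while the child's compute graph is exactly equivalent to the parent's.
    """
    parent = rng.choice(population)
    child = rng.choice(mutations)(parent)
    while graph_fingerprint(child) == graph_fingerprint(parent):
        child = rng.choice(mutations)(parent)
    return child
```

The re-mutation loop guarantees that every accepted child is a genuinely different compute graph, so no training compute is wasted re-evaluating an unchanged model.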
A.5 Transformer and Primer Program Comparisons
Here we present the programs for both the Transformer seed and the discovered Primer model. Table 3 is a key that maps operation names to graph symbols for subsequent graphs. Figures 13 to 22 depict the subprograms for each model with the Primer changes highlighted in orange. Figure 23 depicts the full compute graphs for each model, with all subprograms resolved to their constituent primitives. Figures 24 and 25 depict the DNA programs for Transformer and Primer with all subprograms resolved and all instruction bank values plugged in.
A.6 Exact LM1B Numbers
| Model | Params | Train Steps | Steps/Sec | PPLX | Speedup Factor |
| --- | --- | --- | --- | --- | --- |
| Vanilla Transformer | 35M | 1.9M | 22.4 | 35.44 +/- 0.30 | - |
| Transformer+GELU | 35M | 1.9M | 22.4 | 35.00 +/- 0.12 | 1.23 +/- 0.07 |
| Transformer++ | 35M | 1.9M | 22.0 | 34.87 +/- 0.46 | 1.37 +/- 0.24 |
| Primer | 34M | 1.9M | 21.7 | 33.77 +/- 0.15 | 2.12 +/- 0.09 |
| Primer-EZ | 35M | 1.8M | 21.0 | 33.53 +/- 0.09 | 2.34 +/- 0.04 |
| Transformer+MDHA | 35M | 1.8M | 21.0 | 34.26 +/- 0.12 | 1.76 +/- 0.06 |
| Transformer+Sep Conv | 35M | 1.8M | 21.0 | 34.34 +/- 0.10 | 1.54 +/- 0.05 |
| Model | Params | Train Steps | Steps/Sec | PPLX | Speedup Factor |
| --- | --- | --- | --- | --- | --- |
| Vanilla Transformer | 35M | 1.3M | 15.4 | 37.19 +/- 0.07 | - |
| Transformer+GELU | 35M | 1.2M | 14.1 | 37.11 +/- 0.02 | 1.05 +/- 0.02 |
| Transformer++ | 35M | 1.3M | 14.7 | 36.23 +/- 0.11 | 1.54 +/- 0.05 |
| Primer | 34M | 1.2M | 13.8 | 35.06 +/- 0.15 | 2.13 +/- 0.11 |
| Primer-EZ | 35M | 1.1M | 13.3 | 35.16 +/- 0.13 | 2.03 +/- 0.09 |
| Model | Params | Train Steps | Steps/Sec | PPLX | Speedup Factor |
| --- | --- | --- | --- | --- | --- |
| Vanilla Transformer | 35M | 2.1M | 23.9 | 23.30 +/- 0.02 | - |
| Transformer+GELU | 35M | 2.1M | 23.8 | 23.39 +/- 0.02 | 0.97 +/- 0.03 |
| Transformer++ | 35M | 2.1M | 24.2 | 23.04 +/- 0.02 | 1.33 +/- 0.05 |
| Evolved Transformer | 38M | 1.6M | 18.7 | 23.08 +/- 0.02 | 1.23 +/- 0.02 |
| Primer | 36M | 2.0M | 22.9 | 22.71 +/- 0.03 | 1.72 +/- 0.01 |
| Primer-EZ | 36M | 2.0M | 22.5 | 22.62 +/- 0.02 | 1.75 +/- 0.03 |
A.7 Ablation and Insertion Studies
One of the core motivations of this work is to develop simple and robust Transformer modifications. To that end, we study the individual effectiveness of each Primer modification, described in Section 3 of the main text. We measure this effectiveness using insertion and ablation studies. In the insertion studies we add each modification in isolation to a vanilla Transformer. In the ablation studies we remove each modification from Primer one at a time. We are interested in how these modifications affect performance not just in our search library, Tensor2Tensor, but also in other libraries. Thus, we perform these insertion and ablation studies in a different library, T5, as well, and use modification transferability as the key guiding metric for our modeling recommendations.
The results of these studies are shown in Figure 26. “Normalized PPLX Delta” describes the degree to which a modification helps or hurts performance. For baseline perplexity $p_b$ and modification perplexity $p_m$, “Normalized PPLX Delta” is defined as $(p_b - p_m)/p_b$ in the insertion study and $(p_m - p_b)/p_b$ in the ablation study. These definitions differ so that a positive value always indicates that the modification is good and a negative value always indicates that the modification is bad. Three techniques are beneficial in all scenarios. The first is “12X proj,” which increases the size of the Transformer feed forward upwards projection while controlling for parameters; we find this works well for smaller models but is not useful at larger sizes. The other two, MDHA and squared ReLUs, are the defining modifications of Primer-EZ, a simpler model that captures much of the gains of the full Primer.
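A minimal sketch of this metric, assuming the natural definitions under which a positive value always means the modification helps:

```python
def normalized_pplx_delta(baseline_pplx, modified_pplx, study):
    """Positive => the modification helps; negative => it hurts."""
    if study == "insertion":   # modification added to vanilla Transformer
        return (baseline_pplx - modified_pplx) / baseline_pplx
    if study == "ablation":    # modification removed from Primer
        return (modified_pplx - baseline_pplx) / baseline_pplx
    raise ValueError(study)
```

For a helpful modification, inserting it lowers perplexity (positive delta) and ablating it raises perplexity (also positive delta), so the sign convention is consistent across both studies.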
A.8 Full Training Details
In all experiments, we use previously published hyperparameter settings that were tuned for Transformer, with regularization disabled and no additional tuning for Primer. In Tensor2Tensor (T2T) these are the transformer_tpu hyperparameters, and in T5 and Lingvo these are the open-sourced parameters used in previous T5 studies [5, 49]. They both specify an Adafactor optimizer, with 10K warmup steps at a learning rate of 0.01, followed by reciprocal square root learning rate decay. T2T uses positional embeddings and subword tokenization, while T5 and Lingvo use relative attention and SentencePieces. For LM1B, we use the T2T default settings of a max sequence length of 64 and batches of 4096 tokens; this is appropriate because LM1B has an average sequence length of roughly 32. For C4 and PG19, we use the T5 default of a max sequence length of 512. For one-shot pretraining, we use a max sequence length of 1024. In Section 4.2 we use batches of 65K tokens, in Section 4.3 we use batches of 1M tokens, and in Section 4.4 we use batches of 2M tokens.
A.9 Power Law Compute Savings Derivations
In Section 4.1 of the main text, we reproduce the results of Kaplan et al. and show that, at optimal parameter sizing, the relationship between language model quality and training compute follows a power law: $l = ac^{-k}$, where $l$ is validation loss, $c$ is training compute, and $a$ and $k$ are empirical constants. This is represented as a line in double log space (Figure 7): $\log l = -k \log c + \log a$. However, these lines are not the same for each architecture we compare. The lines are roughly parallel but shifted up and down. Thus, defining the shift between two architectures’ lines as $\log b$, we can derive the relationship of their training costs as:

$$-k \log c_1 + \log a = -k \log c_2 + \log a + \log b \implies c_2 = b^{1/k} c_1,$$

where $b^{1/k}$ is a consistent reduction factor regardless of $l$. Compute savings, $s$, for using a superior architecture can now be calculated as:

$$s = c_1 - c_2 = (1 - b^{1/k})\, c_1.$$

Plugging this into the original power law relationship for $l$ we get:

$$l = a c_1^{-k} = a \left( \frac{s}{1 - b^{1/k}} \right)^{-k} = a (1 - b^{1/k})^{k} s^{-k}.$$

Thus, the relationship between quality and compute savings yielded by an improved architecture also follows a power law with coefficient $a(1 - b^{1/k})^{k}$. This relationship is intuitive when recognizing that the compute reduction factor $b^{1/k}$ is consistent for all values of $l$, and thus a power law investment of training compute with relation to $l$ results in a power law savings with relation to $l$ as well.
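A quick numeric sanity check of this argument, with made-up constants ($a$, $k$, and the shift $b$ are hypothetical, chosen only for illustration):

```python
# If two architectures' loss-vs-compute curves are parallel power laws
# l = a * c**-k, shifted in double log space by log(b), then the superior
# architecture reaches any target loss with a constant compute reduction
# factor b**(1/k), independent of the target loss.

def compute_for_loss(l, a, k):
    return (a / l) ** (1.0 / k)  # invert l = a * c**-k

a, k, b = 10.0, 0.05, 0.9  # hypothetical constants; b < 1 => shifted down
for target_loss in [3.0, 2.5, 2.0]:
    c1 = compute_for_loss(target_loss, a, k)      # baseline architecture
    c2 = compute_for_loss(target_loss, a * b, k)  # superior architecture
    assert abs(c2 / c1 - b ** (1.0 / k)) < 1e-6 * (c2 / c1)
```

The loop confirms that the ratio $c_2/c_1$ equals $b^{1/k}$ at every target loss, which is why a single compute reduction factor summarizes the whole curve.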
A.10 Exact T5 Numbers for Medium Sized Experiments
| Model | Params | Steps @525K | PPLX @525K | Speedup @525K | Steps @1M | PPLX @1M | Speedup @1M |
| --- | --- | --- | --- | --- | --- | --- | --- |
| + Squared ReLU | 145M | 523K | 19.55 | 1.74 | 996K | 18.83 | 1.96 |
A.11 Performance on Individual One-Shot Tasks
[Table: per-task one-shot results, grouped into Question Answering Tasks and Multi-Choice Schema Tasks.]
Comparison between Transformer+GELU and Primer at 1.9B parameters on downstream one-shot tasks at 1/3 and full pretraining compute budgets. One-shot sample means and standard deviations are computed using the evaluated performance of 5 weight checkpoints. Bold numbers denote improved one-shot performance and shaded numbers denote worse one-shot performance compared to Transformer with full compute that is statistically significant under an independent t-test with p-value threshold 0.05. Primer achieves the same performance as Transformer when given 1/3 the training compute and stronger performance on a majority of tasks when given the same training compute. GPT-3 XL scores are provided as a grounding reference point; they should not be closely compared to our results as the models have different pretraining configurations.
A.12 Masked Language Modeling
Encoder-decoder style masked language modeling (MLM) is not the focus of this work. However, because it was the focus of the original T5 project, we include MLM comparisons here for completeness (Table 7). Specifically, we use the exact comparison configuration used by Narang et al., who benchmarked several Transformer variants; the one difference is that we only run model training one time, since this regime is not the focus of our study. For “Primer-EZ Decoder” we use a Transformer++ encoder and a Primer-EZ decoder. Our treatments demonstrate that the Primer-EZ modifications have the capacity to improve encoder-decoder MLM models when compared to Transformer++, but perhaps to a lesser degree. We believe this indicates that decoder-only LM and encoder-decoder MLM benefit from different modeling decisions, something that could be studied in future work. We also believe that running our search on encoder-decoder MLM directly could yield modifications that are more beneficial for this task.
[Table 7: Model, Params, Pretraining Log PPLX, SGLUE, XSum, WebQ.]
A.13 Carbon Emission Estimates
Following the recommendations of Patterson et al., we release the carbon emission estimates for our largest experiments. To estimate the carbon emissions for our architecture search, we build off of the measurements taken by Patterson et al. Their emissions estimate for architecture search is 3.2 MTCO2e for 1360 days of TPUv2 usage. Here, we use 1145.8 days of TPUv2 compute for our search. Additionally, the PUE for our data center at the time of our search was 1.08 instead of 1.10, and its net carbon intensity average was 0.336 MTCO2e/MWh instead of 0.431 MTCO2e/MWh. Thus, the proportional emissions estimate for our architecture search experiments is 3.2 MTCO2e × (1145.8/1360) × (1.08/1.10) × (0.336/0.431) ≈ 2.06 MTCO2e. For comparison, a round trip plane ticket from San Francisco to New York for a single passenger is 1.2 MTCO2e, and so our search costs roughly 1.72 such plane tickets.
We follow the same process of building off of the Patterson et al. measurements to estimate emissions for our large scale T5 experiments. The Patterson et al. emissions estimate for the 11B parameter T5 is 46.7 MTCO2e for 10,249 days of TPUv3 usage. Our T5 models are smaller, and so require only 687.5 TPUv3 days to train on average. We run 3 trainings (Primer, original T5, and T5++) to show Primer’s improvements over baselines, yielding a total of 2062.5 TPUv3 days. When we ran our experiments, the data center PUE was 1.10 instead of 1.12 and its net carbon intensity average was 0.540 MTCO2e/MWh instead of 0.545 MTCO2e/MWh. Scaling the 46.7 MTCO2e figure proportionally by these ratios gives an estimate of 8.54 MTCO2e for these T5 model trainings.
To estimate the emissions of our one-shot pretrainings in Lingvo, we measure system average power in the same manner as Patterson et al. Including memory, network interface, fans, and host CPU, the average power per TPUv4 chip is 343W. We use the same equation as Patterson et al. to calculate CO2e for our 2 large scale pretrainings: 2 × 343W × 71,800h × 1.08 (PUE) × 0.055 MTCO2e/MWh ≈ 29.26 MTCO2e.
The emission cost for our large scale T5 and one-shot comparisons is higher than the cost of the architecture search itself. We invest in these large scale comparisons to demonstrate the potential savings of our efficient modifications. For instance, the savings for using Primer over Transformer described in Section 4.4 of the main text equates to 9.75 MTCO2e, which alone is 4.7X the cost of the architecture search. Note that differences in hardware setups affect these savings. For example, the one-shot models were trained in Oklahoma, which has a favorable MTCO2e/MWh when compared to Georgia, where the Primer search was conducted. Viewing compute in terms of FLOPs, to remove these hardware-specific factors, Primer’s savings in the one-shot experiments are 9.24X the cost of the search itself, as described in Section 4.4 of the main text. Thus, the architecture search yields returns on investment, even at our relatively small comparison sizes, which are roughly 100X smaller than the full scale GPT-3.
Footnotes:
4. Our CO2e accounting methodology for data center net carbon intensity does not currently fit the Greenhouse Gas (GHG) protocol for emissions reporting (Scope 2 and 3 for electricity). This deviation is due to a change in methodology where Google uses hourly life cycle emission factors, while the GHG Protocol generally relies on annual operating emission factor data. Google chooses to share these modified metrics as part of our 24/7 carbon-free energy (CFE) program, focused on our goal of achieving 100% 24/7 local CFE by 2030. Google’s target for 2030 goes beyond the traditional Scope 2 rules to restrict both the location and the accounting period. This means that, instead of anywhere in a continent, the CFE purchase should be on the same geographically local grid; and instead of the accounting period being one year, the accounting should be within the same hour.
5. While electricity consumption is relatively straightforward, strategies to reduce greenhouse gas emissions are not. For details on the distinction between conventional carbon offsets, Google’s goal for 2030 of 24/7 CFE for its global data centers and campuses, and what it is doing now to set the groundwork for 2030, please see Appendix B of Patterson et al.
6. Each data center is located within a Regional Grid, which is the geographic basis for Google’s 24/7 CFE goals. For our data center in Georgia, the Regional Grid is the Southern Company balancing authority.
7. The net carbon intensity at a particular data center is based on accounting for hourly emission reductions via real time, local carbon-free energy purchases. This is calculated using the 24/7 carbon-free energy methodology, which can be reviewed in greater depth in “24/7 Carbon-Free Energy: Methodologies and Metrics”.
8. The carbon intensity values utilized in this paper are at the annual 2020 grid level for each data center in which the models were run.
9. For our data center in Taipei, for purposes of Google’s 24/7 CFE accounting, the Regional Grid is Taiwan.
10. For our data center in Oklahoma, for purposes of Google’s 24/7 CFE accounting, the Regional Grid is the Southwest Power Pool (SPP) Independent System Operator.
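The proportional-scaling arithmetic used for the search estimate above can be sketched as follows (the function name is ours; the constants are the published and measured values quoted in the text):

```python
def scaled_emissions(reference_mtco2e, compute_ratio, pue_ratio, intensity_ratio):
    """Scale a published emissions estimate by relative compute,
    PUE, and net carbon intensity."""
    return reference_mtco2e * compute_ratio * pue_ratio * intensity_ratio

# Architecture search: scale Patterson et al.'s 3.2 MTCO2e estimate by our
# TPUv2-days (1145.8 vs 1360), PUE (1.08 vs 1.10), and net carbon
# intensity (0.336 vs 0.431 MTCO2e/MWh) ratios.
search_mtco2e = scaled_emissions(3.2, 1145.8 / 1360, 1.08 / 1.10, 0.336 / 0.431)
```

This reproduces the roughly 2.06 MTCO2e estimate for the search.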
A.14 Comparison to Evolved Transformer
[Table 8 column headers: Params | PPLX @ 1.5M Steps | Params | PPLX @ 1M Steps]
This work builds on the Evolved Transformer, which also sought to discover improved sequence models using architecture search. Compute-efficiency comparisons to the Evolved Transformer architecture are provided for T5 on LM1B in Table 4 and on C4 in Table 5; sample-efficiency comparisons on those same experiments are given in Table 8. In this section we discuss these comparisons and how they highlight the improvements of our Primer search over the Evolved Transformer search. First, our Primer search aims to improve training compute efficiency, which yields more practical results than the sample-efficiency objective of So et al., who controlled for the number of training steps when evaluating models. Evolved Transformer is effective in this controlled-train-step regime, as shown in Table 8: when controlling for the number of training steps, it is roughly on par with Transformer++ on C4 and better than Transformer++ on LM1B. However, Evolved Transformer is substantially slower than all other models (see Tables 4 and 5) because it is deeper; we follow the same scaling policy as So et al. of adding layers to control for parameter count, since an Evolved Transformer layer has significantly fewer parameters than a standard Transformer layer. This slowness counteracts its sample efficiency, and so its speedup factor is diminished on LM1B and falls below 1.0 (indicating a slowdown relative to vanilla Transformer) on C4 (see Tables 4 and 5). This limits Evolved Transformer's practicality. In contrast, Primer is designed specifically to address this shortcoming and thus delivers the practical result of substantial compute savings. The open-ended nature of the Primer search also allows for effective modifications that were not available to the Evolved Transformer search.
In fact, none of the Primer modifications (see Section 3) can be represented in the Evolved Transformer search space, aside from resizing hidden dimensions. This is because the Evolved Transformer search space followed a rigid ordering of components and used a vocabulary of unalterable high-level building blocks. For example, normalization always preceded weighted transformations, and although there were different weighted transformations to choose from, such as self-attention and GLU, those transformations could not themselves be modified by the search. In contrast, the Primer search space allows for the modification of all initialized modules (such as weighted transformations, activation functions, and normalization functions) as well as for macro-level reordering, such as moving normalization after weighted transformations. We believe this difference in openness is what allowed Primer to develop definitively superior modifications, as demonstrated not only by improved compute efficiency but also by improved sample efficiency (Table 8), which is what Evolved Transformer was designed to optimize.