Towards Structured Dynamic Sparse Pre-Training of BERT

08/13/2021, by Anastasia Dietrich et al. (Graphcore)

Identifying algorithms for computationally efficient unsupervised training of large language models is an important and active area of research. In this work, we develop and study a straightforward, dynamic always-sparse pre-training approach for the BERT language modeling task, which leverages periodic compression steps based on magnitude pruning followed by random parameter re-allocation. This approach enables us to achieve Pareto improvements in terms of the number of floating-point operations (FLOPs) over statically sparse and dense models across a broad spectrum of network sizes. Furthermore, we demonstrate that training remains FLOP-efficient when using coarse-grained block sparsity, making it particularly promising for efficient execution on modern hardware accelerators.


1 Introduction

The increasing task performance gains of large, pre-trained language models have fueled interest in computationally efficient unsupervised training (Kaplan et al., 2020). In recent years, sparsity has regained popularity as a technique for improving the computational efficiency of deep learning models (Hoefler et al., 2021). Current sparsity methods can be distinguished into approaches that impose sparsity on the weights of neural networks via weight sparsity (Frankle and Carbin, 2019; Gale et al., 2019; Bellec et al., 2017; Mostafa and Wang, 2019; Evci et al., 2019; Dettmers and Zettlemoyer, 2019; Mocanu et al., 2018; Jayakumar et al., 2020), and techniques that dynamically route activations to interact with only a subset of the network weights via conditional sparsity (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2021; Lewis et al., 2021).

In weight sparse training (Frankle and Carbin, 2019; Gale et al., 2019), the number of network parameters is reduced by imposing sparsity patterns on the network weights. As a result, weight sparse training can lead to significant savings in FLOPs, making it promising for scaling to larger network architectures for a given compute budget. One of the most promising candidates for weight sparse training is dynamic sparsity (DynSparse), which reduces FLOPs while only requiring training of sparse subsets of the over-parameterized network (Bellec et al., 2017; Mostafa and Wang, 2019; Evci et al., 2019; Dettmers and Zettlemoyer, 2019; Mocanu et al., 2018; Jayakumar et al., 2020; Liu et al., 2021). In DynSparse approaches, the sparsity pattern imposed on the weights is continuously modified during training using pruning and re-allocation strategies. This evolution leads to a joint exploration of both network topology and parameters, which has been shown to outperform static sparsity baselines (Bellec et al., 2017; Mostafa and Wang, 2019; Evci et al., 2019; Dettmers and Zettlemoyer, 2019).

However, the limited performance on language modeling tasks reported so far (Evci et al., 2019) has meant that DynSparse training has not seen wide adoption for large-scale language modeling, despite recent advances (Jayakumar et al., 2020). Given the high cost and energy consumption of unsupervised training of large-scale language models (Strubell et al., 2019; Patterson et al., 2021), dynamic sparsity has the potential to make pre-training more efficient and affordable. To this end, we adopt and investigate DynSparse training techniques (Dettmers and Zettlemoyer, 2019; Evci et al., 2019) for pre-training of the BERT bidirectional language encoder (Devlin et al., 2018), which is based on the highly scalable Transformer architecture (Vaswani et al., 2017).

Our work achieves Pareto improvements versus the dense baseline using both structured and unstructured DynSparse training of BERT.

1.1 Contributions

Investigating dynamic always-sparse training for BERT pre-training. We adapt the DynSparse training algorithm to BERT pre-training (Section 2.1). In particular, we find that gradient-based re-allocation (Evci et al., 2019) results in a collapse of the explored network parameters (Figure 11), which we mitigate through the use of random parameter re-allocation.

Achieving scalable, FLOP-efficient dynamic sparse training. We compare dense and sparse methods at a given FLOPs budget and demonstrate both algorithmic scalability and Pareto improvements in terms of FLOPs, as shown in Figure 4.

Adapting dynamic always-sparse training to block structures. We extend the unstructured DynSparse training towards block-sparse structure (Section 3.2). In particular, we find that the choice of metric during block pruning has a strong influence on the task performance, as shown in Figure 7.

Pareto improvements for structured DynSparse training. We show that the resulting structured DynSparse training of BERT, without structured regularization, gives Pareto improvements compared to the dense BERT baseline, as shown in Figure 1.

In the following section, we report the results of explorative experiments conducted to motivate our study of DynSparse training of BERT. The rest of the paper then concentrates on DynSparse training, with methodology discussed in Section 2 and results presented in Section 3.

1.2 Identifying suitable angles of attack for sparse pre-training

Sparse training of unsupervised language models is relatively under-explored, compared to sparse training of the supervised fine-tuning objective (Radiya-Dixit and Wang, 2020; Chen et al., 2020; Sanh et al., 2020). Consequently, we design two explorative experiments to assess whether DynSparse training is a suitable sparse training algorithm for pre-training of BERT.

Firstly, we analyze the importance of trainable parameters by keeping a random pattern of weights non-trainable (constant non-zero) or zero-valued throughout training. This experiment allows us to disentangle the role of ’zero’ vs. ’untrainable’ weights in the connectivity patterns, to shed light on the parameter dependence of BERT pre-training. Like zeroed weights, the constant weights are unable to encode new information about the task. Still, they might promote the propagation of signals or gradient flow through the network, which has been considered a core aspect of some sparse training algorithms in vision (Evci et al., 2020; Lubana and Dick, 2021; Tessera et al., 2021). Non-zero parameters might also lead to better utilization of the remaining trainable parameters. However, as shown in Figure 1(a), we find that none of these effects plays a relevant role in the training dynamics, since the task performance of the network with sparsified weights (dashed orange line) matches the one with the same fraction of untrained weights (solid blue line). Different from vision models that are often based on convolutions, the transformer architecture contains large dense matrices and multiplicative interactions (Jayakumar et al., 2019). While zeroing parameters therefore does not appear to affect the training dynamics, the task performance remains bounded by the number of trainable parameters.
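To make the setup of this probe concrete, the following is a minimal NumPy sketch of how a random subset of a weight tensor can be kept non-trainable, either zero-valued or frozen at a constant value. The helper names are illustrative and not taken from the authors' implementation; we assume the frozen entries are simply restored after every optimizer step.

```python
import numpy as np

def select_frozen_subset(w, fraction, mode, rng):
    """Choose a random fraction of the entries of `w` to keep non-trainable.

    mode == "zero":     the selected entries are held at zero for the whole run.
    mode == "constant": the selected entries are held at their initial value.
    Returns the boolean mask of frozen entries and their reference values.
    """
    frozen = rng.random(w.shape) < fraction
    reference = np.zeros_like(w) if mode == "zero" else w.copy()
    return frozen, reference

def restore_frozen(w, frozen, reference):
    """Apply after every optimizer step so the frozen entries never change."""
    w[frozen] = reference[frozen]
    return w

# usage sketch: freeze 90% of a layer's weights as zeros
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(256, 1024))
frozen, ref = select_frozen_subset(w, fraction=0.9, mode="zero", rng=rng)
w = restore_frozen(w, frozen, ref)   # call again after each optimizer update
```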

Figure 1: (a) MLM validation loss of BERT-Small with a random subset of parameters set to zero (solid blue curve) or kept untrained (dashed orange). (b) Training loss curves of BERT-Small during pre-training for 10 epochs (757k steps), fixing a random subset of the parameters either early (orange dashed) or late (blue dash-dotted) during training, as well as for the dense baseline (solid black). The vertical line indicates the unfreeze (freeze) event, where untrainable parameters are made trainable (or trainable parameters are frozen). We pick the best learning rate for each experiment using a grid search (see Table A.4.1).

Secondly, we would like to analyze the effect of sparsification at different stages of the training process. For this purpose, we keep a random subset of the network parameters untrainable in the first half of the pre-training, before making them trainable in the second half (and vice versa). Unlike magnitude pruning, which eliminates learned information, freezing and unfreezing parameters ensures symmetry between the different phases (ignoring the linearly decaying learning rate schedule). The agreement in the task performance towards the end of training in Figure 1(b) indicates that the representation is continuously built up during training, with no particular effect of when the sparsification is applied. This lack of preference is interesting, given that pre-training has been found to lead to a reduction of the intrinsic dimension (Li et al., 2018) with respect to downstream tasks (Aghajanyan et al., 2020). Our results suggest that sparse pre-training does not necessarily profit from an initial dense training phase. Therefore, we can distribute computation in a way that is both algorithmically and computationally beneficial. In DynSparse training, the network representation is "always-sparse", i.e. it does not rely on the representation of the underlying dense network, making the approach suited for sparse training of large language modeling architectures. Consequently, we believe that BERT pre-training is well suited for DynSparse training.

2 Methodology

Throughout this work, we study the self-supervised pre-training objective from the original BERT model (Devlin et al., 2018), which consists of the Masked Language Model (MLM) loss, corresponding to the task performance in predicting a random subset of masked tokens, and the noisier Next Sentence Prediction (NSP) loss for binarized next-sentence prediction. We focus on a single phase of pre-training with a sequence length of 128, using the Adam optimizer. All hyperparameters are given in Appendix A for a training length of 10 epochs.

2.1 Adapting unstructured DynSparse algorithm to BERT

Figure 2: Schematic illustration of the pruning and re-allocation step in a typical DynSparse training algorithm, leading to an evolution of the network representation in parameter space. The dynamic evolution of the sparsity pattern allows the DynSparse training algorithm to explore a larger fraction of the network parameters compared to static sparsity, while remaining "always sparse". For unstructured DynSparse training, the granularity of the sparsity pattern is of block size 1×1, while for structured DynSparse training, the block size is chosen between 4×4, 8×8 and 16×16.

In the present work, we first study and adapt the unstructured DynSparse training shown schematically in Figure 2 to pre-training of BERT language models. Specifically, we initialize the sparsity pattern randomly with the same fixed sparsity ratio on all fully connected encoder weights (non-embedding weights). The weights are initialized using a truncated normal distribution (see also Figure 9). During an update step of DynSparse training (see Algorithm 1), we use magnitude pruning to remove a time-dependent fraction of the network parameters. The same fraction of parameters is re-allocated elsewhere in the weight tensor. To complete the sparsity update step, all newly allocated parameters and their corresponding first and second-order moments of the Adam optimizer are initialized to zero. Given that DynSparse training has been primarily developed for vision architectures (Dettmers and Zettlemoyer, 2019; Evci et al., 2019) and has not shown competitive performance on language tasks, we find it necessary to reassess some of the algorithmic choices for BERT. In particular, during the re-allocation step of DynSparse training, we use random re-allocation of pruned weights instead of gradient-based techniques as in RigL (Evci et al., 2019). For one, this avoids potential issues from a collapse of the explored parameter space (compare Figure 11). More importantly, the absence of dense gradient computation makes our approach always-sparse, such that the full dense model is never actually instantiated. We found that the cosine decay of the pruning ratio introduced in Evci et al. (2019) outperforms constant pruning schedules and reduces the changes in network topology during training. We refer to the maximum pruning ratio simply as the "pruning ratio" throughout the paper. All DynSparse hyperparameters are optimized for a sparsity ratio of 0.9 (for more details, refer to Appendix A.1).
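As an illustration of the update step described above, the following is a minimal NumPy sketch of one unstructured DynSparse prune-and-reallocate step with a cosine-decayed pruning ratio. The function and variable names are illustrative; this is a sketch, not the authors' implementation.

```python
import numpy as np

def cosine_pruning_ratio(p_max, step, total_steps):
    """Cosine decay of the pruning ratio from p_max towards zero (Evci et al., 2019)."""
    return 0.5 * p_max * (1.0 + np.cos(np.pi * step / total_steps))

def dynsparse_update(w, mask, adam_m, adam_v, p_t, rng):
    """One prune/re-allocate step on a single weight tensor.

    w, adam_m, adam_v : dense-shaped buffers; only entries with mask == 1 are trained.
    mask              : binary sparsity pattern, kept at a fixed density.
    p_t               : fraction of the currently active weights to prune.
    """
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    n_prune = int(p_t * active.size)
    if n_prune == 0:
        return mask

    # magnitude pruning: drop the active weights with the smallest magnitude
    order = np.argsort(np.abs(w.flat[active]))
    pruned = active[order[:n_prune]]
    mask.flat[pruned] = 0
    w.flat[pruned] = 0.0

    # random re-allocation elsewhere in the tensor, keeping the density constant
    grown = rng.choice(inactive, size=n_prune, replace=False)
    mask.flat[grown] = 1

    # newly allocated weights and their Adam moments are initialized to zero
    for buf in (w, adam_m, adam_v):
        buf.flat[grown] = 0.0
    return mask
```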

2.2 Block-sparse DynSparse algorithm

Extending unstructured DynSparse training towards structured sparse computation requires modifications to both the prune and update steps in Figure 2. Magnitude pruning can be justified as a simple compression algorithm resulting in unstructured sparsity. However, there is no unique way to extend the magnitude pruning metric to blocks of parameters. Choosing a good metric for block pruning is essential, as magnitude pruning has been surprisingly successful in preserving the task performance of sparse networks (Gale et al., 2019). In the following, we evaluate a selection of $L_p$-norms as different criteria for estimating the importance of blocks of parameters. For a block of weights $W_b$ taken from a weight tensor and indexed by $b$, the $L_p$-norm is given by

$\lVert W_b \rVert_p = \big( \sum_{i \in b} |w_i|^p \big)^{1/p}$,   (1)

where the exponent $p$ controls the relative importance of individual parameters of a block according to their magnitude. In the limit of block size 1×1, block pruning according to Eq. (1) reduces to magnitude pruning, allowing us to investigate the task performance with increasing block sizes. For small values of $p$, each parameter in the block contributes roughly equally towards the importance of the block, while for large values of $p$ the importance of the block collapses towards the parameter with the largest magnitude, with $\lVert W_b \rVert_\infty = \max_{i \in b} |w_i|$. Therefore, the pruning metric for blocks controls the extent to which the magnitude of each of the parameters in a block contributes to the importance of the block itself.

Input: total number of training steps, total number of sparsity updates, pruning ratio at time t, sparsity ratio, block size;
Initialize: Impose a random block-sparse pattern on the non-embedding weights with a uniform, constant sparsity ratio across all layers. Initialize weights sampled from a truncated normal distribution;
for each sparsity update do
       Train the network with a static sparsity pattern until the next sparsity update;
       For each weight tensor:
       prune the given fraction of blocks with the smallest L_p-norm as defined in Eq. (1);
       re-allocate the same fraction of blocks; re-allocated parameters and the first and second-order moments of the Adam optimizer are all initialized to zero
end for
Algorithm 1 Structured DynSparse training
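To make the block-pruning step of Algorithm 1 concrete, here is a small NumPy sketch that scores non-overlapping B×B tiles of a weight matrix with the $L_p$-norm of Eq. (1) and selects the lowest-scoring fraction for pruning. It assumes the matrix dimensions are divisible by the block size and uses illustrative names rather than the authors' code.

```python
import numpy as np

def block_lp_norms(w, block, p):
    """L_p-norm of Eq. (1) for every non-overlapping (block x block) tile of w.

    For block == 1 this reduces to the magnitude |w| (unstructured pruning);
    p -> infinity keeps only the largest magnitude within each tile.
    """
    rows, cols = w.shape
    tiles = w.reshape(rows // block, block, cols // block, block)
    tiles = tiles.transpose(0, 2, 1, 3)          # -> (row-block, col-block, block, block)
    if np.isinf(p):
        return np.abs(tiles).max(axis=(2, 3))
    return (np.abs(tiles) ** p).sum(axis=(2, 3)) ** (1.0 / p)

def blocks_to_prune(w, block, p, prune_fraction):
    """Flat indices of the lowest-scoring blocks, as used in the pruning step."""
    scores = block_lp_norms(w, block, p)
    k = int(prune_fraction * scores.size)
    return np.argsort(scores, axis=None)[:k]

# example: rank 16x16 blocks of a 512x2048 weight tensor by their L1-norm
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(512, 2048))
pruned_blocks = blocks_to_prune(w, block=16, p=1, prune_fraction=0.5)
```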
Structured regularization

Group Lasso regularization is commonly used as a structured sparsity-inducing regularizer (Wen et al., 2017; Narang et al., 2017). We introduce the Group Lasso regularization into the update of the weight tensor following the decoupling of weight decay and the Adam optimizer from Loshchilov and Hutter (2017). More specifically, the update of the $i$-th weight element $w_i$ is adjusted to

$w_i \leftarrow w_i - \eta_t \big( \mathrm{Adam}_i + \lambda_{\mathrm{GL}}\, \sigma \sqrt{|B(i)|}\; w_i / (\lVert w_{B(i)} \rVert_2 + \epsilon) \big)$,   (2)

where $B(i)$ denotes the set of weight indices that belong to the same block as the $i$-th weight element, $\mathrm{Adam}_i$ is the usual Adam update term, and $\eta_t$ is the linearly decaying learning rate (see Appendix A). The remaining coefficients are the Group Lasso coefficient $\lambda_{\mathrm{GL}}$ and the small constant $\epsilon$ for numerical stability. The extra pre-factors $\sigma$ and $\sqrt{|B(i)|}$, corresponding to the standard deviation of the weights at initialization (see Appendix A) and the square root of the block size respectively, are chosen to ensure that the regularization coefficients of weight decay and Group Lasso are comparable in magnitude.
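The following NumPy sketch illustrates a decoupled Group Lasso term of this kind, under the assumption that the regularizer is added to the Adam step and scaled by the learning rate as in the reconstruction of Eq. (2). The scaling by the initialization standard deviation and the square root of the block size follows the description above; function and argument names are our own, not the authors' implementation.

```python
import numpy as np

def group_lasso_update(w, adam_step, lr, lam_gl, block, sigma_init, eps=1e-8):
    """One decoupled (AdamW-style) update of a 2D weight tensor with a
    block-wise Group Lasso penalty added to the Adam step (sketch).

    Assumes both dimensions of `w` are divisible by `block`.
    """
    rows, cols = w.shape
    tiles = w.reshape(rows // block, block, cols // block, block).transpose(0, 2, 1, 3)
    # L2 norm of each block; the Group Lasso gradient of ||W_B||_2 w.r.t. w_i
    # is w_i / ||W_B||_2, with eps guarding against all-zero blocks.
    norms = np.sqrt((tiles ** 2).sum(axis=(2, 3), keepdims=True))
    gl_grad = (tiles / (norms + eps)).transpose(0, 2, 1, 3).reshape(rows, cols)
    # prefactor sigma_init * sqrt(|B|) keeps lam_gl comparable to a weight-decay coefficient
    scale = lam_gl * sigma_init * np.sqrt(block * block)
    return w - lr * (adam_step + scale * gl_grad)
```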

2.3 Pareto curve assessment of FLOPs efficiency

A recent review by Hoefler et al. (2021) pointed out the need for a rigorous framework for comparing sparse training algorithms. In the present work, we introduce a methodology for comparing sparse task performance on the full BERT-family Pareto curve (Turc et al., 2019), going beyond the Same Capacity Sparse vs. Dense Comparison approach introduced by Tessera et al. (2021). Comparing different algorithms using a Pareto curve allows us to perform a multi-objective assessment under competing constraints, e.g., the desire to use little compute and achieve a high task performance. This multi-objective assessment is particularly useful for assessing the generality and scalability of different training algorithms. Furthermore, the use of Pareto curves allows us to systematically assess algorithmic differences by comparing DynSparse training with dense and static baselines on an equal FLOPs budget.
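As a small illustration of this multi-objective comparison, the snippet below extracts the Pareto-optimal subset from a list of (FLOPs, MLM loss) points, where a configuration is dominated if another one achieves both fewer FLOPs and a lower loss. The numbers in the example are illustrative, not measured values.

```python
def pareto_front(points):
    """Return the (flops, loss) points not strictly dominated by any other point."""
    front = []
    for f_i, l_i in points:
        dominated = any(
            f_j <= f_i and l_j <= l_i and (f_j < f_i or l_j < l_i)
            for f_j, l_j in points
        )
        if not dominated:
            front.append((f_i, l_i))
    return sorted(front)

# illustrative numbers only: relative FLOPs and MLM loss of four configurations
configs = [(1.00, 2.35), (0.50, 2.45), (0.48, 2.35), (0.25, 2.60)]
print(pareto_front(configs))   # (1.00, 2.35) is dominated by (0.48, 2.35)
```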

Choosing optimal learning rates for sparse and dense models of various sparsity ratios and model sizes is essential to ensure a fair comparison of different methods. Naive grid search optimization of the hyperparameters for a full Pareto investigation quickly becomes intractable. To mitigate this, we have identified and tested the applicability of scaling rules for learning rates across model sizes and sparsity ratios.

The dense and sparse BERT-family learning rates are obtained from a grid search, shown in Figure 12 (see Appendix A.4). Interestingly, the results indicate that for a given number of parameters, the optimal learning rate of a sparse model is significantly larger than that of a dense model (Figure 13). To reduce the number of hyperparameter sweeps for large model sizes, we generalize the learning rate scaling with sparsity as

(3)

where the reference learning rate is the optimal learning rate obtained for a dense model of a given size. We tested the prediction of the unstructured static sparse learning rate fit from Eq. (3) using DynSparse training with block sparsity 16×16 across both model sizes and sparsity ratios, and obtained good agreement between the predicted optimal sparse learning rate and the values obtained through a grid search, as shown in Figure 13. We also found that the learning rate rule generalizes from static to unstructured DynSparse training, as shown in Table A.14. Identifying the mechanism that allows sparse models to profit from larger learning rates than dense models with the same number of parameters (see Figure 13) is left as an exciting direction for future research.

3 Results

3.1 Adapting DynSparse training to BERT

In order to establish a general improvement of the dynamic sparse training algorithm in terms of both FLOPs and memory, we study DynSparse training of the BERT family across multiple model sizes. We analyze the scaling behavior of our DynSparse models with model size (for a fixed sparsity ratio) and with sparsity ratio (for a fixed model size).

Figure 3: Pareto curve of the BERT family (Turc et al., 2019), comparing validation MLM loss of unstructured DynSparse training (orange dotted line) with static sparsity (solid blue line) and the dense baseline (black dashed line, the standard deviation is not visible at this scale) as a function of FLOPs. All sparsity results are obtained for pre-training with sparsity ratio 0.9, pattern updates, and optimal pruning ratio (see Figure 5). The black arrow indicates a reduction of FLOPs for the same MLM loss by a factor of 0.48.
Figure 4: Comparing validation MLM loss of DynSparse training of BERT-Medium with various sparsity ratios (indicated by color and marker style and joined by an orange dotted line) with dense training of the BERT family (black dashed line) as a function of non-embedding FLOPs. For all sparsity ratios, we use the hyperparameters optimized for sparsity ratio 0.9.

We find that the DynSparse training algorithm with random re-allocation leads to Pareto improvements compared to the dense BERT family (see Figure 4). The improvements of DynSparse training over the dense baseline persist across a range of model sizes, indicating that DynSparse training can achieve more efficient utilization of FLOPs or network parameters at any scale. Furthermore, we find that these performance advantages are due to the continued updates of the sparsity pattern. We do not observe any improvements of the static baseline in FLOPs efficiency for larger models when the randomly initialized sparsity pattern is kept constant. In fact, for large model sizes, static sparsity almost perfectly matches the dense baseline. This indicates that the sparse network architecture itself brings no performance advantages. Any improvements are therefore expected to arise from the continuous compression and evolution of the network representation. For DynSparse BERT-Base, we achieve an improvement in FLOPs efficiency by a factor of 0.48 compared to an interpolation of the dense BERT family, as indicated by the horizontal black arrow in Figure 4. We observe task performance improvements across a range of sparsity ratios (see Figure 4). However, since the results used hyperparameters tuned for sparsity 0.9, performance for other sparsity ratios could potentially be further improved with additional tuning. In sum, we find that DynSparse training leads to more efficient utilization of parameters and FLOPs for all model sizes.

Figure 5: Characterization of the DynSparse pre-training of BERT-Medium with sparsity ratio 0.9. All layer-wise averages shown correspond to the maximum value obtained during training. (a) MLM loss as a function of the fraction of explored network parameters (DOF) for a changing number of sparsity pattern updates. (b) MLM loss as a function of the ratio of removed, new weights for a changing pruning ratio. (c) Joint effect of the pruning ratio (solid line) on the ratio of removed, new weights and the DOF covered during DynSparse training. The best-performing values from (a) are marked by a circle.

To improve our understanding of the sparse training dynamics, we extract measures that can help explain the efficiency of specific hyperparameter choices (see Appendix A.1). Given that the DynSparse task performance advantage arises from the continual update of the sparsity pattern, we begin by quantifying the amount of parameter exploration. While the DynSparse models have only a tiny fraction of parameters available at any given time, the pattern updates mean that they can explore all network parameters throughout training and thus increase the effective weight space. We measure the effectively covered space by tracking the fraction of weights of the corresponding dense network that have been activated at any point during training, and compare it with the parameter count of the equivalent dense network to obtain the total explored degrees of freedom (DOF)¹. All quantities shown in the following correspond to averages taken over all layers, as we did not observe a systematic layer dependence of these quantities.

¹A similar quantity has been independently studied in Liu et al. (2021) as "in-time over-parametrization".

The total explored degrees of freedom increase monotonically during training, starting at the fraction of non-zero weights at the beginning of training and saturating at an algorithm- and hyperparameter-dependent value towards the end of training (see Figure 11 for a typical shape). We observe that the maximal number of explored DOF can be controlled through the pruning ratio and the number of sparsity pattern updates (Figure 5). An increase in the update frequency leads to a simultaneous saturation in both task performance and the number of explored degrees of freedom (Figure 5(a)). On the other hand, the pruning ratio reaches an optimal value and strongly influences the performance through the different fractions of removed, new weights (Figure 5(b)). Notably, we find that the best pruning ratios are reached once the ratio of DOF approaches 1, corresponding to an almost complete exploration of all network parameters (Figure 5(c)). Further increases in the pruning ratio remove trainable weights that have just been initialized in the previous update step and lead to a deterioration in the task performance. Overall, we note that the best task performance is obtained by balancing the DOF while avoiding wasted compute in the form of parameters that are allocated and immediately removed (as demonstrated in Figure 5). Given these findings, we postulate that ideal training outcomes require an exploration of all available parameters as well as only a moderate amount of noise injection.
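A minimal sketch of how the total explored degrees of freedom can be tracked in practice: accumulate a logical OR over the sparsity masks seen during training and normalize by the dense parameter count. Names are illustrative, not the authors' implementation.

```python
import numpy as np

class ExploredDOF:
    """Track the fraction of dense weights of one layer that have been active
    ("explored") at any point during DynSparse training."""

    def __init__(self, shape):
        self.ever_active = np.zeros(shape, dtype=bool)

    def update(self, mask):
        # accumulate a logical OR with the current sparsity pattern
        self.ever_active |= mask.astype(bool)

    @property
    def fraction(self):
        # explored parameters divided by the dense parameter count;
        # starts at the density (1 - s) and can grow towards 1
        return self.ever_active.mean()

# usage sketch inside the training loop:
# tracker = ExploredDOF(w.shape)
# ... after every sparsity-pattern update: tracker.update(mask)
# ... tracker.fraction gives the explored DOF of this layer
```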

3.2 Block-sparse DynSparse training

In structured DynSparse training, block pruning is done according to the $L_p$-norms from Eq. (1) of the respective blocks of parameters. For the norms studied, we obtain the best performance using the $L_1$-norm, which corresponds to the sum of the parameter magnitudes (see Figure 7 and Table A.7). Moreover, this suggests that all block parameters should contribute toward the block's importance, given that the $L_1$-norm outperforms norms that assign larger importance to the dominating weights in a block.

Figure 6: MLM validation loss of DynSparse BERT-Medium with sparsity 0.9 for block size 16×16 as a function of the regularization coefficient for Group Lasso regularization (solid blue) or weight decay (orange dashed). The error bars correspond to the standard deviation over three runs. Figure 7: Block metric dependence of DynSparse training of BERT-Medium with sparsity 0.9. The confidence interval is estimated by calculating the standard deviation over three data points, with the numerical values given in Table A.7.

Next, we evaluate the use of structured regularization applied to sparse weights during DynSparse training with block size 16×16. To compare potential advantages of a structured regularization against an unstructured regularization method, we have also evaluated the task performance when tuning the weight decay coefficient instead of the Group Lasso coefficient. As shown in Figure 6, we obtain the best task performance using weight decay. The regularization coefficients are only tuned for the sparse, non-embedding weights. Other sources of unstructured regularization such as dropout (and, in the case of Group Lasso, also weight decay) are set to zero. While our results are in agreement with the competitiveness of block pruning versus the Group Lasso experiments in Narang et al. (2017), we have not tested more advanced regularization methods (Yang et al., 2019; Mummadi et al., 2019). We find that the structured regularization does not lead to any performance advantages over tuning weight decay.

Table 1: Task performance of DynSparse training of BERT-Base with sparsity 0.9 for various block sizes B, compared to dense BERT-Small with a similar number of FLOPs and a linear interpolation of baseline values ("Matched") with exactly the same number of FLOPs. Hyperparameters are not specifically tuned for each block size (number of updates and pruning ratio kept fixed). See Appendix Table A.8 for the block size dependence. The standard deviation is estimated over three runs.

Model             B     MLM loss   FLOPs
Small (dense)     -
Matched (dense)   -     2.350
Base (s = 0.9)    16
Base (s = 0.9)    8
Base (s = 0.9)    4
Base (s = 0.9)    1

Figure 8: Block size dependence of the reduction in FLOPs for DynSparse training compared to (an interpolation of) the dense BERT family for a given task performance. Values correspond to the block-sparse DynSparse training of BERT-Base given in Table 1.

Next, we compare structured DynSparse training against the baseline using both horizontal and vertical slices of a Pareto plot. For a vertical slice, i.e., a constant-FLOPs comparison, we demonstrate that DynSparse training can preserve some task performance advantages when block sparsity of size 4×4, 8×8, or 16×16 is used (Table 1). For a horizontal slice (see the horizontal arrow in Figure 4), measuring the reduction in FLOPs for a given task performance, we achieve a reduction between 0.5 for unstructured sparsity and 0.83 for the largest block size, as shown in Figure 8. The inverse of the FLOPs efficiency improvement gives the maximum relative execution time of a sparse FLOP compared to a dense one that still preserves Pareto improvements in terms of wall-clock time; for DynSparse training this factor needs to be below 2 for unstructured sparsity and below 1.2 for block sparsity (see Appendix A.3). This compute efficiency makes DynSparse training promising for practical applications that seek to further benefit from the higher computational efficiency of block computation.

4 Related work

Lottery ticket and pruning at initialization

Weight sparsity has traditionally been viewed as a technique for compressing the network representation, leading to reduced FLOPs and trainable parameters. The lottery ticket hypothesis (Frankle and Carbin, 2019; Frankle et al., 2020) postulates that through iterative pruning and re-initialization, it is often possible to identify smaller subnetworks at initialization, or early on during training, that can be trained to the full model performance of a large over-parametrized network. Since then, there has been a significant amount of work studying techniques for identifying sparsity distributions at initialization (Lee et al., 2019; Wang et al., 2020; Tanaka et al., 2020; Zhang and Stadie, 2020; Lee et al., 2020; Frankle et al., 2021; Su et al., 2020). Recently, the identification of lottery tickets early during training has allowed time-to-train savings after a short, dense training phase, by pruning attention heads and neurons early during the pre-training phase (Chen et al., 2021).

Dynamic sparsity

In DynSparse training (Mocanu et al., 2018; Bellec et al., 2017; Liu et al., 2019; Mostafa and Wang, 2019; Dettmers and Zettlemoyer, 2019; Evci et al., 2019; Liu et al., 2021), the sparse connectivity pattern is evolved during training. Most DynSparse algorithms currently rely on magnitude pruning to remove unimportant network parameters. However, the algorithms show large differences in the exact re-allocation criteria, which range from random re-allocation (Bellec et al., 2017; Mocanu et al., 2018; Liu et al., 2019, 2021) to a directed evolution based on momentum (Dettmers and Zettlemoyer, 2019) or gradients (Evci et al., 2019).

Compute-efficient sparse training

Complementary to viewing weight sparsity as a compression technique of dense networks, sparsity allows increasing network dimensions, potentially resulting in an augmentation of the effective model capacity for a given amount of compute and memory (Gray et al., 2017). However, most investigations into sparse training currently impose algorithmic constraints through the use of pre-defined sparsity patterns (Vooturi et al., 2020; Zhou et al., 2021), coarse-grained sparsity structures (Gray et al., 2017) or even result in increased compute and memory compared to dense training through the use of masking.

In the present work, we contribute towards compute-efficient training from an algorithmic point of view by extending DynSparse training towards structure. Additionally, we leverage the 2nd generation of Graphcore's Intelligence Processing Unit (IPU) (Graphcore, 2021) to dynamically train large, structured DynSparse models using Graphcore's DynSparse library².

²https://github.com/graphcore/examples/tree/master/applications/tensorflow/dynamic_sparsity

Structured sparsity

Simple unstructured sparse training algorithms based on the magnitude pruning heuristic have shown a remarkable ability to preserve the task performance of over-parametrized neural networks (Gale et al., 2019). Nevertheless, on the execution side, unconstrained magnitude pruning results in unstructured sparsity patterns, which remain challenging to support on traditional hardware accelerators (Narang et al., 2017). Using coarser-grained sparsity structures that result in contiguous memory access can mitigate this problem. Nevertheless, the resulting gains in execution efficiency are often achieved at the cost of a deterioration in task performance (Narang et al., 2017; Mostafa and Wang, 2019). Approaches to improve the task performance of structured sparsity during training range from structured regularization (Wen et al., 2017; Narang et al., 2017; Yang et al., 2019; Mummadi et al., 2019; Louizos et al., 2018), threshold-based pruning using a block-sparse representation (Narang et al., 2017), network slimming (Liu et al., 2017) and low-rank factorization (Wang et al., 2019), to frequently changing sparsity patterns with granularities varying from blocks (Hadifar et al., 2020) and channels (Gao et al., 2018) to full layers (Fan et al., 2020).

Conditional sparsity and models with large number of parameters

Unlike dynamic sparsity, conditional sparsity does not reduce the number of trainable parameters that define the model. The task performance of semi-supervised language models generally improves with model size under appropriate scaling of total computation time and dataset size (Kaplan et al., 2020). In conditional sparse training (Shazeer et al., 2017; Lepikhin et al., 2020; Fedus et al., 2021; Lewis et al., 2021), activations are dynamically routed to subsets of the network weights distributed over a large number of hardware accelerators. Conditional sparse training leverages increases in the number of network parameters to improve the task performance for a constant FLOPs budget (Fedus et al., 2021).

5 Conclusion & Future Work

In this work, we demonstrated that DynSparse training of BERT leads to a more FLOP-efficient utilization of the trainable parameters. Our experimental work has focused on BERT MLM pre-training with sequence length 128, and further research is needed to evaluate the performance of pre-training with larger sequence lengths and fine-tuning to downstream tasks.

An important direction stems from the practical opportunity to translate the FLOPs savings into a reduced cost of training. Remarkably, we found that even a naive block-sparse version of the DynSparse algorithm remains FLOP-Pareto efficient, which is a first step towards more compute-efficient training of large-scale language models. However, further task performance improvements are necessary to fully translate these advantages into a time-to-train win on the Pareto curve. In particular, it will be important to shed further light on the conditions that enable the performance gains in unsupervised training, particularly the relationship between the number of available parameters and the achievable task performance.

Acknowledgments

We thank Simon Knowles, Badreddine Noune, Alexandros Koliousis and Antoine Labatie, and the wider software and research team at Graphcore for insightful discussions; Luke Prince for feedback on the manuscript and Mark Pupilli, Truls Stokke and Niels Pichon for technical support.

References

  • A. Aghajanyan, L. Zettlemoyer, and S. Gupta (2020) Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. arXiv Prepr. arXiv2012.13255. External Links: 2012.13255, Link Cited by: §1.2.
  • B. R. Bartoldson, A. S. Morcos, A. Barbu, and G. Erlebacher (2019) The Generalization-Stability Tradeoff in Neural Network Pruning. ICLR 2020. External Links: 1906.03728, Link Cited by: 3rd item.
  • G. Bellec, D. Kappel, W. Maass, and R. Legenstein (2017) Deep Rewiring: Training very sparse deep networks. 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc.. External Links: 1711.05136, Link Cited by: §1, §1, §4.
  • T. Chen, J. Frankle, S. Chang, S. Liu, Y. Zhang, Z. Wang, and M. Carbin (2020) The Lottery Ticket Hypothesis for Pre-trained BERT Networks. NeurIPS 2020. External Links: 2007.12223, Link Cited by: §1.2.
  • X. Chen, Y. Cheng, S. Wang, Z. Gan, Z. Wang, and J. Liu (2021) EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets. ACL-IJCNLP 11. External Links: 2101.00063, Link Cited by: §4.
  • T. Dettmers and L. Zettlemoyer (2019) Sparse Networks from Scratch: Faster Training without Losing Performance. arXiv Prepr. arXiv1907.04840. External Links: 1907.04840, Link Cited by: Figure 11, §1, §1, §1, §2.1, §4.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL HLT 2019 - 2019 Conf. North Am. Chapter Assoc. Comput. Linguist. Hum. Lang. Technol. - Proc. Conf. 1, pp. 4171–4186. External Links: 1810.04805, Link Cited by: §1, §2.
  • U. Evci, T. Gale, J. Menick, P. S. Castro, and E. Elsen (2019) Rigging the Lottery: Making All Tickets Winners. 37th Int. Conf. Mach. Learn. ICML 2020 PartF16814, pp. 2923–2933. External Links: 1911.11134, Link Cited by: Figure 11, §1.1, §1, §1, §1, §2.1, §4.
  • U. Evci, Y. A. Ioannou, C. Keskin, and Y. Dauphin (2020) Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. arXiv Prepr. arXiv2010.03533. External Links: 2010.03533, Link Cited by: 5th item, §1.2.
  • A. Fan, E. Grave, and A. Joulin (2020) Reducing Transformer Depth on Demand with Structured Dropout. ICLR 2020. External Links: 1909.11556, Link Cited by: §4.
  • W. Fedus, B. Zoph, and N. Shazeer (2021) Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. arXiv Prepr. arXiv2101.03961. External Links: 2101.03961, Link Cited by: §1, §4.
  • J. Frankle and M. Carbin (2019) The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. ICLR 2019. External Links: 1803.03635, Link Cited by: §1, §1, §4.
  • J. Frankle, G. K. Dziugaite, D. M. Roy, and M. Carbin (2020) Linear Mode Connectivity and the Lottery Ticket Hypothesis. ICML 2020. External Links: 1912.05671, Link Cited by: §4.
  • J. Frankle, G. K. Dziugaite, D. M. Roy, and M. Carbin (2021) Pruning Neural Networks at Initialization: Why are We Missing the Mark?. ICLR 2021. External Links: 2009.08576, Link Cited by: §4.
  • T. Gale, E. Elsen, and S. Hooker (2019) The State of Sparsity in Deep Neural Networks. arXiv Prepr. arXiv1902.09574. External Links: 1902.09574, Link Cited by: §1, §1, §2.2, §4.
  • X. Gao, Y. Zhao, Ł. Dudziak, R. Mullins, and C. Xu (2018) Dynamic Channel Pruning: Feature Boosting and Suppression. 7th Int. Conf. Learn. Represent. ICLR 2019. External Links: 1810.05331, Link Cited by: §4.
  • Graphcore (2021) Graphcore Homepage. External Links: Link Cited by: §4.
  • S. Gray, A. Radford, and D. P. Kingma (2017) GPU Kernels for Block-Sparse Weights. https://openai.com/blog/block-sparse-gpu-kernels/. Cited by: §4.
  • A. Hadifar, J. Deleu, C. Develder, and T. Demeester (2020) Block-wise Dynamic Sparseness. arXiv Prepr. arXiv2001.04686. External Links: 2001.04686, Link Cited by: §4.
  • T. Hoefler, D. Alistarh, T. Ben-Nun, N. Dryden, and A. Peste (2021) Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks. arXiv Prepr. arXiv2102.00554. External Links: 2102.00554, Link Cited by: §1, §2.3.
  • S. M. Jayakumar, W. M. Czarnecki, J. Menick, J. Schwarz, J. Rae, S. Osidnero, Y. W. Teh, T. Harley, and R. P. Deepmind (2019) Multiplicative Interactions and Where to Find Them. ICLR 2020. Cited by: §1.2.
  • S. M. Jayakumar, Razvan Pascanu, Jack W. Rae, Simon Osindero, and Erich Elsen (2020) Top-KAST: Top-K Always Sparse Training. NeurIPS 2020 34. Cited by: §1, §1, §1.
  • J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei (2020) Scaling Laws for Neural Language Models. arXiv Prepr. arXiv2001.08361. External Links: 2001.08361, Link Cited by: §A.4.1, §1, §4.
  • N. Lee, T. Ajanthan, S. Gould, and P. H. S. Torr (2020) A Signal Propagation Perspective for Pruning Neural Networks at Initialization. ICLR 2020, pp. 1–11. External Links: 1906.06307, Link Cited by: §4.
  • N. Lee, T. Ajanthan, and P. H. S. Torr (2019) SNIP: Single-shot Network Pruning based on Connection Sensitivity. ICLR. External Links: 1810.02340, Link Cited by: §4.
  • D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen (2020) GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. arXiv Prepr. arXiv2006.16668. External Links: 2006.16668, Link Cited by: §1, §4.
  • M. Lewis, S. Bhosale, T. Dettmers, N. Goyal, and L. Zettlemoyer (2021) BASE Layers: Simplifying Training of Large, Sparse Models. arXiv Prepr. arXiv2103.16716. External Links: 2103.16716, Link Cited by: §1, §4.
  • C. Li, H. Farkhoor, R. Liu, and J. Yosinski (2018) Measuring the Intrinsic Dimension of Objective Landscapes. 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc.. External Links: 1804.08838, Link Cited by: §1.2.
  • S. Liu, D. C. Mocanu, A. R. R. Matavalam, Y. Pei, and M. Pechenizkiy (2019) Sparse evolutionary Deep Learning with over one million artificial neurons on commodity hardware. arXiv Prepr. arXiv1901.09181. External Links: 1901.09181, Link Cited by: §4.
  • S. Liu, D. C. Mocanu, Y. Pei, and M. Pechenizkiy (2021) Selfish Sparse RNN Training. Proc. 38th Int. Conf. Mach. Learn.. External Links: Link Cited by: §1, §4.
  • S. Liu, L. Yin, D. C. Mocanu, and M. Pechenizkiy (2021) Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. arXiv Prepr. arXiv2102.02887. External Links: 2102.02887, Link Cited by: footnote 1.
  • Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang (2017) Learning Efficient Convolutional Networks through Network Slimming. In Proc. IEEE Int. Conf. Comput. Vis., Vol. 2017-Octob, pp. 2755–2763. External Links: Document, 1708.06519, ISBN 9781538610329, ISSN 15505499, Link Cited by: §4.
  • I. Loshchilov and F. Hutter (2017) Decoupled Weight Decay Regularization. 7th Int. Conf. Learn. Represent. ICLR 2019. External Links: 1711.05101, Link Cited by: §2.2.
  • C. Louizos, M. Welling, and D. P. Kingma (2018) Learning Sparse Neural Networks through L_0 Regularization. ICLR 2018. External Links: 1712.01312, Link Cited by: §4.
  • E. S. Lubana and R. P. Dick (2021) A Gradient Flow Framework For Analyzing Network Pruning. ICLR 2021. External Links: 2009.11839, Link Cited by: §1.2.
  • D. C. Mocanu, E. Mocanu, P. Stone, P. H. Nguyen, M. Gibescu, and A. Liotta (2018) Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nat. Commun. 9 (1), pp. 2383. External Links: Document, ISSN 2041-1723, Link Cited by: §1, §1, §4.
  • H. Mostafa and X. Wang (2019) Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization. ICML 36 (PMLR 97). External Links: 1902.05967, Link Cited by: §1, §1, §4, §4.
  • C. K. Mummadi, T. Genewein, D. Zhang, T. Brox, and V. Fischer (2019) Group Pruning using a Bounded-Lp norm for Group Gating and Regularization. Ger. Conf. Pattern Recognit.. External Links: 1908.03463, Link Cited by: §3.2, §4.
  • S. Narang, E. Undersander, and G. Diamos (2017) Block-Sparse Recurrent Neural Networks. arXiv Prepr. arXiv1711.02782. External Links: 1711.02782, Link Cited by: §2.2, §3.2, §4.
  • D. Patterson, J. Gonzalez, Q. Le, C. Liang, L. Munguia, D. Rothchild, D. So, M. Texier, and J. Dean (2021) Carbon Emissions and Large Neural Network Training. arXiv Prepr. arXiv2104.10350. External Links: 2104.10350, Link Cited by: §1.
  • A. Peste, E. Iofinova, A. Vladu, and D. Alistarh (2021) AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks. arXiv Prepr. arXiv2106.12379. External Links: 2106.12379, Link Cited by: §A.5.
  • E. Radiya-Dixit and X. Wang (2020) How fine can fine-tuning be? Learning efficient language models. External Links: 2004.14129, Link Cited by: §1.2.
  • V. Sanh, T. Wolf, and A. M. Rush (2020) Movement Pruning: Adaptive Sparsity by Fine-Tuning. NeuIPS. External Links: 2005.07683, Link Cited by: §1.2.
  • N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean (2017) Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. 5th Int. Conf. Learn. Represent. ICLR 2017 - Conf. Track Proc.. External Links: 1701.06538, Link Cited by: §1, §4.
  • E. Strubell, A. Ganesh, and A. McCallum (2019) Energy and Policy Considerations for Deep Learning in NLP. ACL 2019 - 57th Annu. Meet. Assoc. Comput. Linguist. Proc. Conf., pp. 3645–3650. External Links: 1906.02243, Link Cited by: §1.
  • J. Su, Y. Chen, T. Cai, T. Wu, R. Gao, L. Wang, and J. D. Lee (2020) Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot. NeuIPS 2020. External Links: 2009.11094, Link Cited by: §4.
  • H. Tanaka, D. Kunin, D. L. K. Yamins, and S. Ganguli (2020) Pruning neural networks without any data by iteratively conserving synaptic flow. NeuIPS 2020. External Links: 2006.05467, Link Cited by: §4.
  • K. Tessera, S. Hooker, and B. Rosman (2021) Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network Optimization. arXiv Prepr. arXiv2102.01670. External Links: 2102.01670, Link Cited by: §1.2, §2.3.
  • I. Turc, M. Chang, K. Lee, and K. Toutanova (2019) Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. arXiv Prepr. arXiv1908.08962. External Links: 1908.08962, Link Cited by: §2.3, Figure 4.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Transformer: Attention is all you need. Adv. Neural Inf. Process. Syst. 30, pp. 5998–6008. External Links: 1706.03762v5, ISSN 10495258 Cited by: §1.
  • D. T. Vooturi, G. Varma, and K. Kothapalli (2020) Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks. arXiv Prepr. arXiv2006.13486. External Links: 2006.13486, Link Cited by: §4.
  • C. Wang, G. Zhang, and R. Grosse (2020) Picking winning tickets before training by preserving gradient flow. External Links: 2002.07376v1, Link Cited by: §4.
  • Z. Wang, J. Wohlwend, and T. Lei (2019) Structured Pruning of Large Language Models. Assoc. Comput. Linguist., pp. 6151–6162. External Links: Document, 1910.04732, Link Cited by: §4.
  • W. Wen, Y. He, S. Rajbhandari, M. Zhang, W. Wang, F. Liu, B. Hu, Y. Chen, and H. Li (2017) Learning Intrinsic Sparse Structures within Long Short-Term Memory. 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc.. External Links: 1709.05027, Link Cited by: §2.2, §4.
  • H. Yang, W. Wen, and H. Li (2019) DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures. ICLR 2020. External Links: 1908.09979, Link Cited by: §3.2, §4.
  • M. S. Zhang and B. C. Stadie (2020) One Shot Pruning of Recurrent Neural Networks by Jacobian spectrum evaluation. ICLR 2020. External Links: Link Cited by: §4.
  • A. Zhou, Y. Ma, J. Zhu, J. Liu, Z. Zhang, K. Yuan, W. Sun, and H. Li (2021) Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch. ICLR 2021. External Links: 2102.04010, Link Cited by: §4.

Appendix A Technical details

  • Optimizer: Throughout this work we use element-wise optimization based on Adam with weight decay 0.01 and gradient clipping, which is known to work well with the sparse gradients found in NLP models.

  • Default learning rate schedule: linear warmup steps up to the maximum learning rate, followed by a linear decay over the full training run.

  • Default dropout is 0.1 for all models larger than BERT-Small. To avoid artificial performance gains through an adjustment of the regularizer in the presence of sparsity-induced regularization (Bartoldson et al., 2019), we keep dropout in the sparse models identical to the one used in the corresponding baseline.

  • Default floating-point precision: We use datatype FP16.16 (16 bit compute with 16 bit partials) throughout the model. The second-order moment in the Adam optimizer is computed and stored in FP32. Embedding is kept in FP16. The default loss-scaling factor for both BERT-Medium and BERT-Base is 512.

  • Initialization scheme: The sparsity pattern is initialized randomly. The weights are initialized using a truncated normal initializer. This choice was motivated by a comparison of different initializations for the sparse model, in which the dense default truncated normal gave the best task performance, as shown in Figure 9. We found that preserving the variance of the activation statistics of the sparse model compared to the dense model (Evci et al., 2020) does not lead to any performance gains.

    Figure 9: BERT-Medium with static unstructured sparsity imposed on all weights using a Glorot (blue) or truncated normal (orange) initialization scheme. The marker shape indicates whether the standard deviation of the weight initialization was increased.
  • Pre-training dataset: Phase I pre-training is performed on Wikipedia and BookCorpus using Whole Word Masking with a sequence length of 128.

  • Code: The DynSparse library used in this work is available on Graphcore's GitHub at https://github.com/graphcore/examples/tree/master/applications/tensorflow/dynamic_sparsity

A.1 Hyperparameters specific to dynamic sparsity (DynSparse)

Two very important hyper-parameters for DynSparse training are the sparsity pattern update frequency, i.e. how often the network topology is modified, and the pruning ratio, which determines the fraction of the network topology modified at each update step. The sparsity ratio per layer is kept fixed throughout training.

  • Hyperparameter sweeps: MLM and NSP validation loss of DynSparse BERT-Medium (sparsity 0.9, 10 epochs, phase I, cosine decay and random re-allocation):

    Table A.2: Pruning ratio dependence of unstructured (1×1) DynSparse BERT-Medium (fixed number of sparsity pattern updates; otherwise same hyperparameters as in Table A.3).
    Pruning ratio   MLM loss   NSP loss
    0.25            2.439      0.655
    0.50            2.413      0.684
    0.75            2.411      0.668
    1.00            2.459      0.698

    Table A.3: Number of sparsity pattern updates dependence of unstructured (1×1) DynSparse BERT-Medium (fixed pruning ratio).
    Updates   MLM loss   NSP loss
    40        2.468      0.645
    80        2.430      0.656
    160       2.409      0.626
    320       2.419      0.649

    Table A.4: Pruning ratio dependence of structured (16×16) DynSparse BERT-Medium (fixed number of sparsity pattern updates; otherwise same hyperparameters as in Table A.5).
    Pruning ratio   MLM loss   NSP loss
    0.25            2.648      0.694
    0.50            2.633      0.692
    0.75            2.634      0.745
    1.00            2.675      0.701

    Table A.5: Number of sparsity pattern updates dependence of structured (16×16) DynSparse BERT-Medium (fixed pruning ratio).
    Updates   MLM loss   NSP loss
    40        2.616      0.731
    80        2.606      0.650
    160       2.633      0.692
    320       2.645      0.693
  • Update frequency dependence: Comparing the task performance of 20, 40, 80, 160 and 320 updates at sparsity ratio 0.9, we have found that the task performance improves with the number of sparsity pattern updates (Tables A.3 and A.5). We chose the number of updates with the best task performance for sparsity ratio 0.9 and block sizes 1×1 and 16×16, respectively.

  • Extreme sparsity regime: All hyperparameters have been optimized for sparsity 0.9. However, we tested how well the pruning ratio and update frequency optimized for sparsity 0.9 translate to sparsity 0.99. We have found that increasing the pruning ratio can lead to small performance gains, as shown in Table A.6.

    Table A.6: Pruning ratio and number of updates dependence of unstructured (1×1) DynSparse BERT-Medium at sparsity 0.99, 10 epochs, phase I (with cosine decay and random reallocation).
    Sparsity   Updates   Pruning ratio   MLM loss   NSP loss
    0.99       160       0.10            2.999      0.833
    0.99       160       0.25            2.939      0.789
    0.99       160       0.50            2.889      0.750
    0.99       160       0.75            2.872      0.775
    0.99       80        0.50            2.922      0.842
    0.99       160       0.50            2.889      0.750
    0.99       320       0.50            2.868      0.772
    0.99       640       0.50            2.886      0.791
  • Total number of pruned parameters: The pruning ratio and the number of updates jointly control the total number of pruned and re-allocated parameters, which is proportional to their product. We obtain an optimal value of this product in terms of task performance, as shown in Figure 10.

    Figure 10: MLM loss vs. pruning ratio times number of sparsity pattern updates for unstructured DynSparse training of BERT-Medium with sparsity ratio 0.9, for different values of (top panel) the pruning ratio and (bottom panel) the number of sparsity pattern updates. Same data as in Figure 5.
  • Re-allocation criteria: We found that random re-allocation outperforms gradient-based re-allocation. While the pruning criterion leads to a compression of the network topology, the growing criterion directs the evolution of the network topology and distinguishes DynSparse training, as a form of neural architecture search during training, from mere gradual pruning approaches. Understanding the requirements for efficient joint exploration of parameter and network topology space using DynSparse training will be essential to scale towards larger language models. In Figure 11, we show that for gradient-based re-allocation, the dense gradient is dominated by outliers in the activations, e.g., along the input dimension of each layer, which imposes a strong bias on the available degrees of freedom during the update step. In agreement with this observation, we find that for random re-allocation, a significantly larger fraction of the network parameters is explored during training, while for gradient-based re-allocation the training remains constrained to a small subset of all network parameters (left panel of Figure 11).

    Figure 11: (Left panel) Fraction of explored degrees of freedom for static sparsity and unstructured DynSparse training using gradient-based (RigL) (Evci et al., 2019) vs. random re-allocation (Dettmers and Zettlemoyer, 2019). (Right panel) Corresponding sparsity patterns for the first up-projection in the feedforward component ("Boom-up") of the second transformer block, accumulated throughout training, for sparsity ratio 0.9 using gradient-based (RigL) and random re-allocation. A black (white) dot corresponds to a parameter being non-zero (zero) at any point during training. The dark horizontal blocks in the RigL updates indicate a collapse due to outliers along the input dimension, which indicates that the effect arises from the activation part of the dense gradient update. This suggests that the collapse could be mitigated by reducing the influence of the activations during the DynSparse training update.
  • Block size metric: The importance of blocks of parameters is assessed by evaluating the $L_p$-norm of the corresponding blocks (see Table A.7 and Figure 7).

    Table A.7: Task performance of DynSparse training of BERT-Medium with sparsity 0.9 for various block pruning metrics.
    Block metric   MLM loss   NSP loss
    -norm          2.611      0.684
    -norm          2.623      0.684
    -norm          2.627      0.664
    -norm          2.632      0.686
    -norm          2.635      0.720
    -norm          2.637      0.670
    -norm          2.603      0.665
    -norm          2.606      0.650
    -norm          2.615      0.731

    Table A.8: Task performance of DynSparse BERT-Medium with sparsity 0.9 for various block sizes B, compared to dense BERT-Mini with a similar number of FLOPs and a linear interpolation of the baseline values ("Matched") with exactly the same number of FLOPs. Hyperparameters are not specifically tuned for different block sizes. See also the BERT-Base results in Table 1.
    Model              B    MLM loss
    Mini (dense)       -    2.614
    Matched (dense)    -    2.603
    Medium (s = 0.9)   16   2.621
    Medium (s = 0.9)   8    2.591
    Medium (s = 0.9)   4    2.546
    Medium (s = 0.9)   1    2.408
  • Block size dependence: The block size dependence of BERT-Medium with sparsity 0.9 is given in Table A.8.

  • Untrainable vs. zero-valued parameters: Numerical values for the results shown in the left panel of Figure 1 are given in Table A.9.

    s type MLM loss NSP loss
    0.25 zero 0.000343 2.390 0.653
    0.50 zero 0.000589 2.485 0.687
    0.75 zero 0.001011 2.637 0.737
    0.90 zero 0.001397 2.829 0.802
    0.99 zero 0.001697 3.244 0.907
    0.25 untrained 0.000686 2.375 0.681
    0.50 untrained 0.001178 2.491 0.675
    0.75 untrained 0.002021 2.645 0.731
    0.90 untrained 0.002795 2.850 0.829
    0.99 untrained 0.003394 3.243 0.827
    Table A.9: MLM validation loss of BERT-Small for results given in Figure 1.

A.2 Sparse FLOPs: FLOPs estimates for sparse multiplication with dense input

Throughout this report, we assume the FLOPs for training a dense layer with sparse weights to scale approximately as $6\,B\,I\,O\,d$, where $B$ is the batch dimension, $I$ is the input dimension, $O$ is the output dimension and $d$ is the density of the sparsity pattern imposed on the corresponding dense layer, which is related to the sparsity ratio $s$ as $d = 1 - s$. The FLOPs estimate can be divided into the following components:

  1. FLOPs estimate for the sparse forward pass: Assuming a sparse weight matrix $W$ with sparsity ratio $s$, i.e. density $d = 1 - s$, the required matrix multiplication for a given dense input $X$ and output $Y$ is

     $Y = X W + b$,   (4)

     where $W$ has dimension $I \times O$, $X$ has dimension $B \times I$ and $Y$ has dimension $B \times O$.

     1. Sparse multiplication: performing the products only for the non-zero elements of $W$ reduces the total number of multiply FLOPs by the fraction of non-zero elements in $W$, leading to approximately $B\,I\,O\,d$ FLOPs.

     2. Sparse addition: accumulating the products requires the exact number of non-zeros along the input dimension for each output. Assuming the non-zeros are uniformly distributed, we estimate the FLOPs count for the additions to scale approximately linearly with the density to first order.

     The total FLOPs of the sparse multiplication used in the forward pass therefore scales approximately linearly with the number of non-zeros, i.e. as $\approx 2\,B\,I\,O\,d$.

  2. FLOPs estimate for the recursive propagation of the error through the network: This involves a multiplication of the dense error with the transposed sparse matrix $W^{\top}$, leading to an additional $2\,B\,I\,O\,(1-s)$ FLOPs.

  3. FLOPs estimate for the outer product of the weight update: The weight update itself is formed by a sparse outer product, where only the non-zero components of $W$ need to be updated. This leads to a further reduction in the number of FLOPs that scales linearly with the density of the matrix, i.e. approximately $2\,B\,I\,O\,(1-s)$ FLOPs.
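
To make the bookkeeping above concrete, the following minimal sketch (our own helper, using the notation $B$, $I$, $O$, $s$ introduced above and the usual 2·B·I·O multiply-accumulate convention per matrix multiplication) reproduces the per-layer training FLOPs estimate:

```python
def sparse_linear_training_flops(batch: int, d_in: int, d_out: int, sparsity: float) -> dict:
    """Approximate training FLOPs of a linear layer whose weights have the given sparsity.

    Follows the three contributions of Appendix A.2: forward matmul, backward
    error propagation, and the sparse weight-gradient (outer-product) update.
    Each dense matmul is counted as 2*B*I*O FLOPs, scaled by the density (1 - s).
    """
    density = 1.0 - sparsity
    forward = 2 * batch * d_in * d_out * density         # Y = W X + b
    backward_error = 2 * batch * d_in * d_out * density  # dX = W^T dY
    weight_update = 2 * batch * d_in * d_out * density   # dW = dY X^T, sparse positions only
    total = forward + backward_error + weight_update     # ~ 6 * B * I * O * (1 - s)
    return {"forward": forward, "backward_error": backward_error,
            "weight_update": weight_update, "total": total}


# Example: a feed-forward projection with I=512, O=2048 at sparsity 0.9
# retains roughly 10% of the dense training FLOPs.
print(sparse_linear_training_flops(batch=256, d_in=512, d_out=2048, sparsity=0.9))
```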

A.3 Relating improvements in FLOPs efficiency to implementation requirements

To relate the algorithmic improvement in FLOPs efficiency to implementation requirements in a general, hardware-agnostic way, we consider the sparse extra cost $c$, which we define as

$c = \tau_{\mathrm{sparse}} / \tau_{\mathrm{dense}}$,    (5)

where $\tau_{i}$ ($i \in \{\mathrm{sparse}, \mathrm{dense}\}$) is the average time it takes to execute a FLOP of type $i$ for a specific model size and sparsity ratio. For a given fixed number of training steps and the same task performance, DynSparse training with theoretical FLOPs $F_{\mathrm{sparse}}$ (defined in Appendix A.2) is only faster than dense training with FLOPs $F_{\mathrm{dense}}$ if the time to execute a sparse training step, $\tau_{\mathrm{sparse}} F_{\mathrm{sparse}}$, is smaller than the time to execute a dense training step, $\tau_{\mathrm{dense}} F_{\mathrm{dense}}$. Or formally:

$\tau_{\mathrm{sparse}}\, F_{\mathrm{sparse}} < \tau_{\mathrm{dense}}\, F_{\mathrm{dense}}$.    (6)

In other words, as long as Eq. (6) holds, the use of fewer but "slower" FLOPs in the sparse model still translates to a faster overall execution of the sparse model. Note that this comparison is performed at equal task performance and for the same number of training steps.

Using this formalism, we can view improvements in task performance in the context of throughput requirements for a given algorithm in a hardware-agnostic way, independent of the exact implementation. From Eq. (6), we can derive the maximum critical extra cost that a DynSparse implementation can tolerate before DynSparse training loses its advantage over dense computation in terms of a time-to-train win. Specifically, for a given fixed number of training steps and the same task performance, the critical cost factor is given by

$c_{\mathrm{crit}} = F_{\mathrm{dense}} / F_{\mathrm{sparse}}$,    (7)

where the subscript "sparse" refers to DynSparse training and "dense" to the (interpolated) dense BERT family. We emphasize that, besides this requirement for a time-to-train win, sparse training also allows running models with larger model dimensions for a given number of parameters.
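
As a small illustration of Eqs. (5)-(7), the sketch below (our own helper names and made-up relative numbers, not measurements from this work) checks whether a hypothetical sparse implementation retains a time-to-train advantage:

```python
def critical_extra_cost(flops_dense: float, flops_sparse: float) -> float:
    """Maximum tolerable ratio tau_sparse / tau_dense, Eq. (7)."""
    return flops_dense / flops_sparse


def has_time_to_train_win(flops_dense: float, flops_sparse: float,
                          tau_dense: float, tau_sparse: float) -> bool:
    """Eq. (6): sparse training wins if its per-step execution time is smaller."""
    return tau_sparse * flops_sparse < tau_dense * flops_dense


# Hypothetical numbers: sparsity 0.9 reduces per-step FLOPs roughly 10x,
# while each sparse FLOP is assumed to execute 4x slower than a dense FLOP.
f_dense, f_sparse = 1.0, 0.1      # relative FLOPs per training step
tau_dense, tau_sparse = 1.0, 4.0  # relative time per FLOP
print(critical_extra_cost(f_dense, f_sparse))                           # 10.0
print(has_time_to_train_win(f_dense, f_sparse, tau_dense, tau_sparse))  # True, since 4 < 10
```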

A.4 Learning rate for sparse and dense models

The results of the learning rate sweep of BERT-Medium with various sparsity ratios are given in Table A.10, and the corresponding learning rate sweep for the dense BERT-family is given in Table A.11. We confirmed that the optimal learning rates for static sparsity agree with those for DynSparse training (Table A.14). We also confirmed that the predicted learning rate dependence of the DynSparse model generalizes to block sparsity and across multiple model sizes, as shown in Table A.12 for 16x16 block sparsity.

lr sparsity MLM loss NSP loss
0.0001 0.00 2.179 0.610
0.0002 0.00 2.115 0.598
0.0002 0.00 2.115 0.605
0.0004 0.00 2.116 0.606
0.0008 0.00 2.164 0.633
0.0001 0.25 2.278 0.627
0.0002 0.25 2.204 0.642
0.0004 0.25 2.186 0.596
0.0008 0.25 2.223 0.638
0.0001 0.50 2.412 0.679
0.0002 0.50 2.338 0.671
0.0004 0.50 2.283 0.631
0.0008 0.50 2.298 0.648
0.0002 0.75 2.551 0.741
0.0004 0.75 2.483 0.685
0.0008 0.75 2.446 0.671
0.0016 0.75 2.449 0.647
0.0032 0.75 2.547 0.707
0.0004 0.90 2.723 0.758
0.0008 0.90 2.677 0.711
0.0016 0.90 2.648 0.706
0.0032 0.90 2.669 0.697
Table A.11: Learning rate (lr) sweep for the dense BERT-family consisting of BERT-Tiny, Mini, Small, Medium and Base.
model lr MLM loss NSP loss
Mini 0.000050 3.062 0.839
Mini 0.000100 2.833 0.811
Mini 0.000400 2.625 0.742
Mini 0.000800 2.606 0.775
Mini 0.001600 2.628 0.779
Mini 0.003200 2.665 0.783
Small 0.000800 2.326 0.644
Small 0.000400 2.310 0.621
Small 0.000200 2.329 0.635
Small 0.001600 2.418 0.768
Medium 0.000200 2.115 0.605
Medium 0.000400 2.116 0.606
Medium 0.000800 2.164 0.633
Medium 0.000100 2.179 0.610
Base 0.000025 2.115 0.599
Base 0.000100 1.878 0.542
Base 0.000050 1.972 0.569
Base 0.000200 1.843 0.488
Table A.10: Learning rate (lr) sweep of BERT-Medium with static unstructured sparsity for various sparsity ratios.
model lr MLM NSP
Base 0.000160 2.520 0.692
Base 0.000320 2.429 0.693
Base 0.000640 2.340 0.647
Base 0.001280 2.328 0.603
Base 0.002560 2.369 0.656
Medium 0.000125 2.878 0.892
Medium 0.000250 2.760 0.720
Medium 0.000500 2.670 0.730
Medium 0.002000 2.640 0.715
Mini 0.001250 3.184 0.882
Mini 0.002500 3.145 0.871
Mini 0.005000 3.147 0.869
Mini 0.005120 3.195 0.907
Small 0.000313 2.927 0.865
Small 0.000625 2.841 0.773
Small 0.001250 2.788 0.861
Small 0.005000 2.826 0.797
Table A.13: Learning rate sweep of BERT-Small alternating between dense and sparse training, with the non-active parameters either made non-trainable or set to zero (corresponding to sparsity 0.9), 10 epochs phase I, for various pruning methods. Optimal values are given in Table A.16. We switch between the sparse/non-trainable phase and the dense training phase multiple times during training.
non-active pruning lr MLM NSP
non-train fixed 0.0002 2.366 0.671
non-train fixed 0.0004 2.358 0.668
non-train fixed 0.0008 7.242 0.693
non-train magnitude 0.0002 2.379 0.658
non-train magnitude 0.0004 2.354 0.675
non-train magnitude 0.0008 11.160 0.766
non-train random 0.0001 2.431 0.733
non-train random 0.0002 2.365 0.669
non-train random 0.0004 2.349 0.693
non-train random 0.0008 7.272 0.693
zero fixed 2.5e-05 3.317 0.967
zero fixed 5e-05 3.199 0.817
zero fixed 0.0001 3.277 0.819
zero fixed 0.0002 3.329 0.884
zero fixed 0.0004 3.358 0.964
zero fixed 0.0008 3.424 0.799
zero magnitude 0.0002 2.746 0.756
zero magnitude 0.0004 2.685 0.711
zero magnitude 0.0008 3.056 0.834
zero magnitude 0.0016 6.538 1.217
zero random 0.0001 6.232 1.142
zero random 0.0002 6.132 1.273
zero random 0.0004 6.094 1.185
zero random 0.0008 6.284 0.987
Table A.12: Learning rate (lr) sweep for the DynSparse BERT-family (BERT-Mini, Small, Medium and Base) with sparsity 0.9 and block size 16x16.
lr MLM loss NSP loss
0.00064 2.467 0.647
0.00128 2.410 0.670
0.0026 2.429 0.674
0.0051 2.521 0.654
Table A.14: Learning rate sweep of DynSparse BERT-Medium at a fixed sparsity ratio (10 epochs phase I), used to confirm that the optimal learning rates for static sparsity from Table A.10 carry over to DynSparse training.

A.4.1 Learning rate for sparse models

Figure 12: MLM validation loss as a function of learning rate for the dense BERT family (top panel) and for static-sparsity BERT with sparsity ratios between 0 and 0.9 (bottom panel). The solid lines correspond to a cubic fit to all data points with the same sparsity ratio. The minimum of the resulting fit corresponds to the optimal learning rate for a given sparsity and is indicated by the black triangles connected by blue lines.

In Figure 12, we show the learning rate sweep of the BERT-Medium model with static sparsity for various sparsity ratios. We estimate the optimal learning rate for sparse models through the minimum of a cubic interpolation of the task performance versus learning rate for a given sparsity ratio, as indicated by the triangle markers in Figure 12. We find that the optimal learning rate calculated from the interpolation is best approximated by

(8)

as a function of the sparsity ratio $s$, or equivalently (see Figure 12) as

(9)

as a function of the number of trainable parameters. Interestingly, a linear learning-rate versus logarithmic-memory fit, as used in Kaplan et al. (2020) (Eq. (D1) therein), leads to qualitatively worse agreement, which might be explained by our optimization for a fixed number of training steps.
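
The optimal-learning-rate estimation described above can be reproduced in a few lines of NumPy. The sketch below is our own illustration: it fits a cubic polynomial to loss versus log10(learning rate) for one sparsity ratio (fitting in log-lr rather than lr space is our assumption) and returns the learning rate at the fitted minimum, using the sparsity-0.9 sweep of Table A.10 as example input.

```python
import numpy as np

def optimal_lr_from_sweep(lrs, losses, num_grid=1000):
    """Cubic fit of loss vs. log10(lr); returns the lr at the fitted minimum."""
    log_lrs = np.log10(np.asarray(lrs, dtype=float))
    coeffs = np.polyfit(log_lrs, np.asarray(losses, dtype=float), deg=3)
    grid = np.linspace(log_lrs.min(), log_lrs.max(), num_grid)
    fitted = np.polyval(coeffs, grid)
    return 10.0 ** grid[np.argmin(fitted)]

# Example: the static-sparsity 0.9 sweep from Table A.10.
lrs = [0.0004, 0.0008, 0.0016, 0.0032]
losses = [2.723, 2.677, 2.648, 2.669]
print(optimal_lr_from_sweep(lrs, losses))  # estimated optimal lr for this sweep
```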

Figure 13: (Left panel) Fit to the optimal learning rate, estimated as the position of the black triangles in Figure 12, for BERT-Medium with various sparsities and for the dense BERT-family, as a function of the number of trainable parameters; model sizes are indicated by symbol style and color, sparsity ratios by colored crosses. The black lines indicate linear fits for the sparse and for the dense models, respectively. (Right panel) Testing the prediction of the optimal sparse learning rate from Eq. 3 (marker style "+") on the BERT-family with sparsity 0.9 and block size 16x16 (values given in Table A.12).
type lr MLM loss NSP loss
freeze 0.000087 2.467 0.715
freeze 0.000175 2.407 0.703
freeze 0.000349 2.420 0.685
freeze 0.000699 2.540 0.695
unfreeze 0.000175 2.933 0.666
unfreeze 0.000349 2.598 0.676
unfreeze 0.000699 2.440 0.703
unfreeze 0.001397 7.251 0.693
unfreeze 0.002795 7.520 0.784
Table A.16: MLM validation loss of BERT-Small trained by alternating between dense training and training only a fraction of 0.1 of the non-embedding weights, with the non-active parameters either set to zero or simply made untrainable without further modification. We pick the best learning rate for each experiment using a grid search over the learning rates listed in Table A.13.
lr non-train selection MLM
5e-05 zero fixed 3.199
0.0004 zero magnitude 2.685
0.0004 zero random 6.094
0.0004 untrained fixed 2.358
0.0004 untrained magnitude 2.354
0.0004 untrained random 2.349
Table A.15: Learning rate sweep of DynSparse BERT-Small unfreeze (freeze) experiment with initial (final) fraction of non-trainable parameters 0.9, 10 epochs phase I.

A.5 Role of selection criteria

To understand the role of the magnitude pruning criterion in the DynSparse training dynamics, we have disentangled the pruning step from the parameter re-allocation step by temporarily replacing the always-sparse training algorithm with an alternation between dense and sparse training phases (Peste et al., 2021). The dense training interval removes the influence of the re-growing selection, as all network parameters are periodically activated without preference. We have found that magnitude pruning ("magnitude") outperforms both pruning into a fixed subspace chosen randomly at initialization ("fixed") and pruning into a changing random subspace re-drawn at each pruning step ("random"), as shown in the top half of Table A.16. The strong performance degradation for the randomly re-drawn sparsity patterns illustrates the importance of the large-magnitude network parameters in preserving task performance.

This picture changes if, instead of setting the pruned parameters to zero, we make them non-trainable, which avoids the information loss associated with the pruning step. In this case, we find that randomly selecting the subset of trainable parameters outperforms both selecting the parameters with the largest magnitude ("magnitude") and training a fixed subset of parameters randomly chosen at initialization ("fixed"), as shown in the bottom part of Table A.16. Our results show that magnitude pruning gives performance advantages because it preserves information. At the same time, the increased exploration provided by random parameter selection would benefit task performance if it were not accompanied by the information loss caused by pruning.
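
For concreteness, the three selection criteria compared in Table A.16 can be written down compactly. The sketch below is our own illustration (helper names ours, not the implementation used in this work); it returns a boolean mask of the parameters kept active under each criterion:

```python
from typing import Optional
import numpy as np

def selection_mask(weights: np.ndarray, keep_fraction: float, criterion: str,
                   fixed_mask: Optional[np.ndarray] = None,
                   rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Boolean mask of parameters kept active (True = kept) under one criterion.

    "magnitude": keep the largest-magnitude parameters (magnitude pruning).
    "random":    keep a freshly drawn random subset at every call.
    "fixed":     keep the same random subset chosen once at initialization
                 (passed in via `fixed_mask`).
    """
    rng = rng or np.random.default_rng(0)
    n_keep = int(round(keep_fraction * weights.size))
    if criterion == "magnitude":
        threshold = np.sort(np.abs(weights), axis=None)[-n_keep]
        return np.abs(weights) >= threshold
    if criterion == "random":
        mask = np.zeros(weights.size, dtype=bool)
        mask[rng.choice(weights.size, size=n_keep, replace=False)] = True
        return mask.reshape(weights.shape)
    if criterion == "fixed":
        if fixed_mask is None:
            raise ValueError("the 'fixed' criterion requires the initial mask")
        return fixed_mask
    raise ValueError(f"unknown criterion: {criterion}")

# Example: keep 10% of a weight matrix (sparsity 0.9) by magnitude.
w = np.random.default_rng(1).normal(size=(512, 2048))
print(selection_mask(w, keep_fraction=0.1, criterion="magnitude").mean())  # roughly 0.1
```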