1 Introduction
Pruning has served as an important technique for removing redundant structure in neural networks [han2015learning, han2015deep, li2016pruning, he2017channel]. Proper pruning can reduce computation and storage costs without harming performance. However, pruning was until recently only used as a post-processing procedure, while pruning at initialization was believed to be ineffective [han2015deep, li2016pruning]. Recently, [frankle2018the] proposed the lottery ticket hypothesis, showing that a deep neural network contains subnetworks that, when trained from certain initializations obtained by pruning, perform equally well or better than the original model with commensurate convergence rates. Such pairs of subnetworks and initializations are called winning tickets.
This phenomenon indicates that it is possible to prune at initialization. However, finding winning tickets still requires iterative pruning and excessive training, and this high cost limits the application of winning tickets. Although [frankle2018the] shows that winning tickets converge faster than the corresponding full models, this is only observed on small networks, such as a convolutional neural network (CNN) with only a few convolution layers. In this paper, we show that for a variety of model architectures, there consistently exist subnetworks that converge significantly faster when trained from certain initializations after pruning. We call these boosting tickets.
We observe that the standard technique introduced in [frankle2018the] for identifying winning tickets does not always find boosting tickets; in fact, the requirements are more restrictive. We extensively investigate the underlying factors that affect the boosting effect, considering three state-of-the-art large model architectures: VGG16 [simonyan2014very], ResNet18 [he2016deep], and WideResNet [zagoruyko2016wide]. We conclude that the boosting effect depends principally on three factors: (1) learning rate, (2) pruning ratio, and (3) network capacity; we also demonstrate how these factors affect the boosting effect. By controlling these factors, after only one training epoch on CIFAR-10, we are able to obtain 90.88%/90.28% validation/test accuracy (which regularly requires 30 training epochs) on WideResNet-34-10 when 80% of the parameters are pruned.
We further show that boosting tickets have a practical application in accelerating adversarial training, an effective but expensive defensive training method for obtaining models robust against adversarial examples. Adversarial examples are carefully perturbed inputs that are indistinguishable from natural inputs but can easily fool a classifier [szegedy2013intriguing, goodfellow2014explaining]. We first show that our observations on winning and boosting tickets extend to the adversarial training scheme. Furthermore, we observe that boosting tickets pruned from a weakly robust model can be used to accelerate the adversarial training process for obtaining a strongly robust model. On CIFAR-10 trained with WideResNet-34-10, we manage to save up to 49% of the total training time (including both pruning and training) compared to the regular adversarial training process. Our code is available at https://github.com/boostingticket/ticket_robust.
Our contributions are summarized as follows:

We demonstrate that there exist boosting tickets, a special type of winning tickets that significantly accelerate the training process while still maintaining high accuracy.

We conduct extensive experiments to investigate the major factors affecting the performance of boosting tickets.

We demonstrate that winning tickets and boosting tickets exist for the adversarial training scheme as well.

We show that pruning a non-robust model allows us to find winning/boosting tickets for a strongly robust model, enabling an accelerated adversarial training process.
2 Background and Related Work
In this section, we give a brief overview of several topics that are closely related to our work.
2.1 Network Pruning
Network pruning has been extensively studied as a method for compressing neural networks and reducing resource consumption. [han2015learning] propose to prune the weights of neural networks based on their magnitudes. Their pruning method significantly reduces the size of neural networks and has become the standard approach for network pruning. This type of approach is also referred to as unstructured pruning, where pruning happens at the level of individual weights [han2015deep]. In contrast, structured pruning aims to remove whole convolutional filters or channels [li2016pruning, he2017channel]. While structured pruning often yields better model compression and acceleration without special hardware or libraries, it can hardly retain the performance of the full model when the proportion of pruned weights is large. Besides magnitude-based pruning strategies, there also exist other types of pruning algorithms, such as dynamic surgery [guo2016dynamic], incorporating sparse constraints [zhou2016less], and optimal brain damage [lecun1990optimal]. However, magnitude-based pruning is more stable across different pruning tasks. Therefore, in this work, we focus on unstructured pruning based on the magnitudes of the weights.
2.2 Lottery Ticket Hypothesis
Surprisingly, recent research has shown that it is possible to prune a neural network at initialization and still reach performance similar to the full model [liu2018rethinking, lee2018snip]. Within this category, the lottery ticket hypothesis [frankle2018the] states that a randomly-initialized dense neural network contains a subnetwork that is initialized such that, when trained in isolation, it learns as fast as the original network and matches its test accuracy.
In [frankle2018the], an iterative pruning method is proposed to find such subnetworks. Specifically, this approach first randomly initializes the model. The initialization is stored separately and the model is trained in the standard manner until convergence. Then a certain proportion of the weights with the smallest magnitudes is pruned, while the remaining weights are reset to the previously stored initialization, ready to be trained again. This train-prune-reset procedure is repeated several times until the target pruning ratio is reached. Using this pruning method, they show the resulting pruned networks can be trained to accuracy similar to the original full networks, which is better than a model with the same pruned structure but randomly initialized.
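The train-prune-reset loop above can be sketched as follows. This is a minimal NumPy illustration of magnitude-based iterative pruning, not the paper's implementation; `train_fn` is a hypothetical callable standing in for the actual training procedure.

```python
import numpy as np

def magnitude_mask(weights, prune_fraction):
    """0/1 mask removing the prune_fraction smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(prune_fraction * flat.size)
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > threshold).astype(weights.dtype)

def iterative_lottery_ticket(init_weights, train_fn, rounds, per_round_fraction):
    """Train-prune-reset loop: each round prunes a fraction of the surviving
    weights by magnitude, then resets the survivors to the stored init."""
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask, mask)
        alive = mask > 0
        mags = np.abs(trained[alive])
        k = int(per_round_fraction * mags.size)
        if k > 0:
            threshold = np.partition(mags, k - 1)[k - 1]
            mask = np.where(alive & (np.abs(trained) <= threshold), 0.0, mask)
    # Winning ticket: the pruned structure paired with the original init.
    return init_weights * mask, mask
```

With two rounds at 50% each, roughly 25% of the weights survive, matching the compounding schedule described above.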
Although the lottery ticket hypothesis has been extensively investigated in the standard training setting [frankle2018the, frankle2019lottery], little work has been done in the adversarial training scheme. A recent work [ye2019second] even argues that the lottery ticket hypothesis fails to hold when adversarial training is used. In this paper, we show the lottery ticket hypothesis still holds for adversarial training and explain why [ye2019second] failed to observe it.
One limitation of the lottery ticket hypothesis, as pointed out in [frankle2019lottery], is that winning tickets are found by unstructured pruning, which does not necessarily yield faster training or execution time. In addition, finding winning tickets requires training the full model beforehand, which is time-consuming as well, especially with iterative pruning. In this paper, we manage to find winning tickets with much lower time consumption while maintaining superior performance.
2.3 Adversarial Examples
Given a classifier $f$ and an input $x$, an adversarial example is a perturbed version $x'$ of $x$ such that $d(x, x') \leq \epsilon$ for some small $\epsilon$, yet $x'$ is misclassified, i.e., $f(x') \neq f(x)$. Here $d(\cdot, \cdot)$ is some distance metric, often an $\ell_p$ metric; most of the literature considers the $\ell_\infty$ metric, as does this paper.
The procedure of constructing such adversarial examples is often referred to as an adversarial attack. One of the simplest attacks is a single-step method, the Fast Gradient Sign Method (FGSM) [goodfellow2014explaining], which perturbs inputs along the direction of the gradient of the loss with respect to the inputs:
$$x' = \Pi_{\mathcal{B}_\epsilon(x)}\left(x + \epsilon \cdot \mathrm{sign}(\nabla_x L(\theta, x, y))\right),$$
where $\Pi_{\mathcal{B}_\epsilon(x)}$ is the projection operation that ensures adversarial examples stay in the $\epsilon$-ball around $x$. Although this method is fast, the attack is weak and can be defended against easily. On the other hand, its multi-step variant, Projected Gradient Descent (PGD), is one of the strongest attacks [kurakin2016adversarial, madry2017towards]:
$$x^{t+1} = \Pi_{\mathcal{B}_\epsilon(x)}\left(x^{t} + \alpha \cdot \mathrm{sign}(\nabla_x L(\theta, x^{t}, y))\right),$$
where $x^{0}$ is initialized with a random perturbation. Since PGD requires access to the gradients for multiple steps, it incurs high computational cost.
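As a concrete sketch (not the paper's implementation), FGSM and PGD can be written as follows, with `grad_fn` standing in for the gradient of the loss with respect to the input:

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    """Single-step FGSM: one signed-gradient step of size eps."""
    return x + eps * np.sign(grad_fn(x))

def pgd(x, grad_fn, eps, alpha, steps, rng=None):
    """Multi-step PGD: random start, signed-gradient steps of size alpha,
    each followed by projection back onto the l_inf ball of radius eps."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the ball
    return x_adv
```

The clipping step is the $\ell_\infty$ projection; for other $\ell_p$ norms the projection would differ.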
On the defense side, currently the most successful approach is constructing adversarial examples via PGD during training and adding them to the training set as data augmentation, which is referred to as adversarial training [madry2017towards].
The motivation behind it is that finding a model robust against adversarial examples is equivalent to solving the saddle-point problem
$$\min_\theta \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \left[\max_{d(x, x') \leq \epsilon} L(\theta, x', y)\right].$$
The inner maximization is equivalent to constructing adversarial examples, while the outer minimization performs as a standard training procedure for loss minimization.
One caveat of adversarial training is its computational cost, due to performing PGD attacks at each training step. Alternatively, using FGSM during training is much faster, but the resulting model is robust against FGSM attacks while remaining vulnerable to PGD attacks [kurakin2016adversarial]. In this paper, we show it is possible to combine the advantages of both and quickly train a strongly robust model by benefiting from boosting tickets.
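To make the min-max structure concrete, here is a toy adversarial training loop on logistic regression, where the inner maximization has a closed form for a linear model (coinciding with FGSM). This is an illustrative sketch under assumed toy settings, not the training setup used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train_logreg(X, y, eps, lr=0.1, epochs=200):
    """Adversarial training for logistic regression (toy sketch).

    Inner max: for a linear model, the worst l_inf perturbation of budget eps
    moves each input by -eps * sign(w) for label 1 and +eps * sign(w) for
    label 0. Outer min: one full-batch gradient step on the perturbed data.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        delta = -eps * np.sign(w) * (2 * y - 1)[:, None]
        X_adv = X + delta
        p = sigmoid(X_adv @ w)
        w -= lr * X_adv.T @ (p - y) / len(y)
    return w
```

On separable data with margin larger than the budget, the model still learns the correct decision rule despite every example being perturbed against its label.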
2.4 Connecting robustness and compactness
Prior studies have shown success in achieving both compactness and robustness of trained networks [guo2018sparse, ye2019second, zhao2018compress, dhillon2018stochastic, sehwag2019towards, wijayanto2019towards]. However, most of them either incur much higher training cost or sacrifice robustness relative to the full model. On the contrary, our framework requires comparable or even reduced training time compared with standard adversarial training while obtaining similar or higher robust accuracy than the original full network.
3 Empirical Study of Boosting Tickets
Setup. As we introduce our methods and findings through experimental results, we first summarize the setup for our experiments. We use BatchNorm [batchnorm], weight decay, a decreasing learning rate schedule (dropped at fixed milestones during training), and augmented training data. We keep the setting the same as in [frankle2018the], except that we use one-shot pruning instead of iterative pruning, which makes the whole pruning and training process more practical in real applications. On the CIFAR-10 dataset, we randomly select 5,000 of the 50,000 training images as a validation set and train the models with the rest. The reported test accuracy is measured on the whole test set.
All of our experiments are run on four Tesla V100s, ten Tesla P100s, and ten 2080 Tis. For all time-sensitive experiments, such as adversarial training on WideResNet-34-10 in Section 4.4, we train each model on two Tesla V100s with data parallelism. For the remaining experiments measuring final test accuracy, we use one GPU per model without parallelism.
We first investigate boosting tickets in the standard setting without considering adversarial robustness. In this section, we show that with properly chosen hyperparameters, we manage to find boosting tickets on VGG16 and ResNet that can be trained much faster than the original dense networks. Detailed model architectures and the setup can be found in Supplementary Section A.
3.1 Existence of Boosting Tickets
To find boosting tickets, we use an algorithm similar to the one for finding winning tickets, which is briefly described in the previous section and detailed here. First, a neural network is randomly initialized and the initialization is saved in advance. Then the network is trained until convergence, and a given proportion of the weights with the smallest magnitudes is pruned, resulting in a mask where pruned weights are marked 0 and remaining weights 1. Unless otherwise specified, we always prune the smallest-magnitude weights up to the given pruning ratio. We call this train-and-prune step pruning. The mask is then applied to the saved initialization to obtain a subnetwork, which is the boosting ticket. All pruned weights (zeros in the mask) remain 0 during the whole training process. Finally, we retrain the subnetwork.
The key differences between our algorithm and the one proposed in [frankle2018the] to find winning tickets are as follows: (1) we use a small learning rate for pruning and retrain the subnetwork (ticket) with learning rate warmup starting from this small learning rate. In particular, for VGG16 we choose 0.01 for pruning and warm up from 0.01 to 0.1 for retraining; for ResNet18 we choose 0.05 for pruning and warm up from 0.05 to 0.1 for retraining. (2) We find it is sufficient to prune and retrain the model only once instead of iteratively pruning multiple times. In Supplementary Section B, we show the difference in boosting effects between tickets found by iterative pruning and one-shot pruning is negligible. Note that warmup is also used in [frankle2018the]; however, they propose warmup from a small learning rate to a large one during pruning as well, which hinders the boosting effect, as shown in the following experiments.
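The warmup in step (1) can be sketched as a simple linear ramp. The 0.01 to 0.1 endpoints mirror the VGG16 setting above, while the warmup length is an assumed hyperparameter for illustration:

```python
def warmup_lr(epoch, base_lr=0.01, target_lr=0.1, warmup_epochs=10):
    """Linear learning-rate warmup from the small pruning rate to the
    full retraining rate. warmup_epochs is an assumed value."""
    if epoch >= warmup_epochs:
        return target_lr
    return base_lr + (target_lr - base_lr) * epoch / warmup_epochs
```

The retraining loop would query this function once per epoch and pass the result to the optimizer.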
First, we show the existence of boosting tickets for VGG16 and ResNet18 on CIFAR-10 in Figure 1 and compare them to winning tickets. In particular, we show that boosting tickets are winning tickets, in the sense that they reach accuracy comparable to the original full models. Compared to winning tickets, boosting tickets demonstrate equally good performance with a higher convergence rate. Similar results on MNIST can be found in Supplementary Section C.
To measure the overall convergence rate, early stopping is a natural fit and is commonly used in the literature: it prevents overfitting, and the number of steps at the stopping point measures the convergence rate. However, early stopping is not compatible with the learning rate scheduling used in our case, where the total number of steps is determined before training.
This causes two issues in our evaluation in Figure 1: (1) although the boosting tickets reach a relatively high validation accuracy much earlier than the winning tickets, the training procedure is then hindered by the large learning rate. After the learning rate drops, the performance gap between boosting tickets and winning tickets becomes negligible. As a result, the learning rate scheduling obscures the improvement in convergence rates of boosting tickets. (2) Due to fast convergence, boosting tickets tend to overfit, as observed in ResNet18 after 50 epochs.
To mitigate these two issues without excluding learning rate scheduling, we conduct another experiment where we mimic the early stopping procedure by gradually increasing the total number of epochs from 20 to 100. The learning rate is still dropped at the same relative stages. In this way, we can better understand the speed of convergence without worrying about overfitting, even with learning rate scheduling involved. In Figure 2, we compare the boosting tickets and winning tickets in this manner on VGG16.
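Scaling the drop points with the total budget can be implemented as below. The milestone fractions and drop factor are assumed values for illustration, since the text only states that drops occur at the same relative stages as the budget grows:

```python
def scheduled_lr(epoch, total_epochs, base_lr=0.1, drop=0.1,
                 milestones=(0.5, 0.75)):
    """Step schedule whose drop epochs scale with the total training budget.
    Milestone fractions and drop factor are assumed values."""
    lr = base_lr
    for frac in milestones:
        if epoch >= frac * total_epochs:
            lr *= drop
    return lr
```

For a 20-epoch run the drops land at epochs 10 and 15; for a 100-epoch run, at epochs 50 and 75.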
While the first two plots in Figure 2 show the general trend of convergence, the improvement in convergence rates is much clearer in the last four plots. In particular, the validation accuracy of boosting tickets after 40 epochs is already on par with that of the model trained for 100 epochs. Meanwhile, the winning tickets fall far behind the boosting tickets until 100 epochs, where the two finally match.
We further investigate the test accuracy at the end of training for boosting and winning tickets in Table 1. We find the test accuracy of winning tickets gradually increases as we allow more training steps, while the boosting tickets achieve their highest test accuracy after 60 epochs and start to overfit at 100 epochs.
Table 1: Test accuracy of winning and boosting tickets for different total numbers of epochs.

# of Epochs  | 20    | 40    | 60    | 80    | 100
Winning (%)  | 88.10 | 90.03 | 90.96 | 91.79 | 92.00
Boosting (%) | 91.25 | 91.84 | 92.13 | 92.14 | 92.05
Summarizing the observations above, we confirm the existence of boosting tickets and state the boosting ticket hypothesis:
A randomly initialized dense neural network contains a subnetwork that is initialized such that, when trained in isolation, it converges faster than the original network and other winning tickets while matching their performance.
In the following sections, we explain the intuition of boosting tickets and investigate three major components that affect the boosting effects.
3.2 Intuition
If we think of the training procedure as a search from the initial weights to the optimal point in the parameter space (the full path), the training procedure of the pruned model is essentially a path in a projected subspace with the pruned parameters fixed at 0 (the subpath). For the subpath to reach the optimal point, it is essential that it follow the projection of the full path onto the subspace. Following this intuition, we realize that a smaller learning rate at the initial stage helps the subpath follow the projected full path by avoiding large deviations. For the same reason, using the same learning rate at the initial stages of both paths also helps align them. In this way, the subpath can quickly find the correct direction after the initial stage and start to discover a shortcut to the optimal point, resulting in boosting effects.
In Figure 3a, we calculate the relative distance between the weights of the full model and of the pruned models generated using winning and boosting tickets. The corresponding accuracy is reported in Figure 3b. It is apparent that the distance for boosting tickets is much smaller. A recent paper [evci2019difficulty] also confirms that a linear path, although hard to find, exists between the initialization and the optimal point for pruned neural networks, which supports our intuition about boosting effects.
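The distance measure in Figure 3a can be computed as below; this small helper assumes flattened weight vectors, and the names are ours:

```python
import numpy as np

def relative_distance(w_full, w_pruned):
    """Relative l2 distance ||w_pruned - w_full|| / ||w_full||."""
    w_full = np.asarray(w_full, dtype=float)
    w_pruned = np.asarray(w_pruned, dtype=float)
    return np.linalg.norm(w_pruned - w_full) / np.linalg.norm(w_full)
```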
3.3 Learning Rate
As finding boosting tickets requires particular choices of learning rate, it is natural to assume the performance of boosting tickets relies on this choice. Thus, we extensively investigate the influence of various learning rates.
We use experimental settings similar to the previous section, where we increase the total number of epochs gradually and use the test accuracy as a measure of convergence rate. We choose four different learning rates, 0.005, 0.01, 0.05, and 0.1, for pruning to obtain the tickets. All tickets found with these learning rates obtain accuracy improvements over the randomly reinitialized submodels and thus satisfy the definition of winning tickets (i.e., they are all winning tickets).
As shown in the first two plots of Figure 4, tickets found with smaller learning rates tend to have stronger boosting effects. For both VGG16 and ResNet18, the models trained with learning rate 0.1 show the least boosting effect, measured by the test accuracy after 20 epochs of training. On the other hand, training with too small a learning rate compromises the eventual test accuracy to a certain extent. Therefore, we treat the tickets found with learning rate 0.01 as our boosting tickets for VGG16, and those found with learning rate 0.05 for ResNet18, which converge much faster than all the rest while achieving the highest final test accuracy.
3.4 Pruning Ratio
The pruning ratio is an important component for winning tickets [frankle2018the], so we investigate its effect on boosting tickets. Since we are only interested in the boosting effect, we use the validation accuracy at early stages as a measure of the strength of boosting to avoid drawing too many lines in the plots. In Figure 5, we show the validation accuracy after the first and fifth epochs for different pruning ratios on VGG16 and ResNet18.
For both VGG16 and ResNet18, boosting tickets always reach much higher accuracy than randomly reinitialized submodels, demonstrating their boosting effects. When the pruning ratio falls into the range from 60% to 90%, boosting tickets provide the strongest boosting effects, obtaining around 80% and 83% validation accuracy after the first and fifth training epochs for VGG16, and 76% and 85% validation accuracy for ResNet18. On the other hand, the increase in validation accuracy between the first and fifth training epochs becomes smaller when boosting effects appear. This indicates that convergence starts to saturate due to the large learning rate at the initial stage and the model is ready for the learning rate drop.
3.5 Model Capacity
We finally investigate how model capacity, including the depth and width of models, affects the performance of winning tickets in the standard training setting. We use WideResNet [zagoruyko2016wide] with either its depth or width fixed and vary the other factor. In particular, we keep the depth at 34 and increase the width from 1 to 10, comparing the boosting effects. Then we keep the width at 10 and increase the depth from 10 to 34. The change in validation accuracy of the models is shown in Figure 6.
Overall, Figure 6 shows that models with larger capacity perform much better, though performance plateaus once the depth exceeds 22. Notably, the largest model, WideResNet-34-10, achieves 90.88% validation accuracy after only one training epoch.
4 Lottery Ticket Hypothesis for Adversarial Training
Although the lottery ticket hypothesis is extensively studied in [frankle2018the] and [frankle2019lottery], the same phenomenon in the adversarial training setting lacks thorough understanding.
In this section, we show two important facts that make boosting tickets suitable for the adversarial scheme: (1) the lottery ticket hypothesis and boosting ticket hypothesis are applicable to the adversarial training scheme; (2) pruning a weakly robust model allows us to find the boosting ticket for a strongly robust model and save training cost.
In particular, Ye et al. [ye2019second] made the first attempt to apply the lottery ticket hypothesis to adversarial settings. However, they concluded from experiments on MNIST that the lottery ticket hypothesis fails to hold in adversarial training. In Section 4.2, our experiments demonstrate that the results they observed are insufficient to draw this conclusion; rather, we observe that the lottery ticket hypothesis still holds for adversarial training under more restrictive conditions.
4.1 Applicability for Adversarial Training
In the following experiment, we use a naturally trained model, that is, one trained in the standard manner, and two adversarially trained models using FGSM and PGD respectively, and obtain the tickets by pruning these models. Then we retrain these pruned models with the same PGD-based adversarial training from the same initialization. In Figure 7, we report the corresponding accuracy on the original validation sets and on the adversarially perturbed validation examples, denoted as clean accuracy and robust accuracy. We further train the pruned model from random reinitialization to validate the lottery ticket hypothesis.
Unless otherwise stated, in all PGD-based adversarial training we keep the same setting as [madry2017towards]: the PGD attacks are performed in 10 steps (PGD-10) and bounded in the $\ell_\infty$ norm. For the FGSM-based adversarial training, the FGSM attacks are bounded in the $\ell_\infty$ norm as well.
Both models trained from the boosting tickets obtained with FGSM- and PGD-based adversarial training demonstrate superior performance and faster convergence than the model trained from random reinitialization. This confirms that the lottery ticket hypothesis and boosting ticket hypothesis are applicable to the adversarial training scheme in terms of both clean accuracy and robust accuracy. More interestingly, the performance of the models pruned with FGSM- and PGD-based adversarial training is almost the same. This observation suggests it is sufficient to train a weakly robust model with FGSM-based adversarial training to obtain the boosting tickets and then retrain with stronger attacks such as PGD.
This finding is interesting because FGSM-based adversarially trained models suffer from label leaking and learn only weak robustness [kurakin2016adversarial]. In fact, the FGSM-based adversarially trained model from which we obtain our boosting tickets has 89% robust accuracy against FGSM but only 0.4% robust accuracy against PGD performed in 20 steps (PGD-20). However, Figure 7 shows that the subsequent PGD-based adversarial retraining on the boosting tickets obtained from that FGSM-trained model is indeed robust. Further discussion can be found in Section 5.
4.2 Explaining the Failure Observed in [ye2019second]
In [ye2019second], the authors argued that the lottery ticket hypothesis fails to hold in adversarial training via experiments on MNIST. We show they fail to observe winning tickets because the models they used have limited capacity.
We first reproduce their results to show that, in the adversarial setting, small models such as the CNN with two convolutional layers used in [ye2019second] cannot yield winning tickets when the pruning ratio is large. In Figure 9, plots (a) and (b) show the clean and robust accuracy of the pruned models at a large pruning ratio. The pruned model eventually degrades into a trivial classifier where all examples are classified into the same class, with 11.42%/11.42% validation/test accuracy. On the other hand, when we use VGG16, as shown in plots (c) and (d), winning tickets are found again. This can be explained by the fact that adversarial training requires much larger model capacity than standard training, as extensively discussed in [madry2017towards]. As a result, pruned small models become unstable during training and yield degrading performance.
As a comparison, the experimental results in Figure 7 indicate that when larger models are used, the lottery ticket hypothesis still applies to the adversarial training scheme.
4.3 Convergence Speedup
We then investigate how boosting tickets can accelerate the adversarial training procedure by conducting the same experiments as in Figure 2 but in the adversarial training setting. The results for validation accuracy and test accuracy are presented in Figure 8 and Table 2 respectively.
In Figure 8, all training curves are PGD-based adversarial training on the same boosting ticket with the same learning rate scheduling but different numbers of training epochs. We follow the procedure described in Section 4.1 to obtain the boosting ticket. Specifically, we train a VGG16 model with 100-epoch FGSM-based adversarial training and then prune 80% of the weight connections. From Figure 8, we see that training our boosting ticket for 60 epochs is sufficient to achieve robust accuracy similar to the full model trained for 100 epochs.
In Table 2, we also compare the original full models trained with 100-epoch PGD-based adversarial training to those trained on our boosting ticket for different numbers of epochs. In general, our models trained on the boosting ticket obtain higher robust accuracy and clean accuracy than the original one. This indicates that (1) the lottery ticket hypothesis holds for adversarial training as well, and (2) our boosting tickets still enjoy the benefits of both the lottery ticket hypothesis and convergence speedup.
Table 2: Robust and clean test accuracy of models trained on the boosting ticket for different numbers of epochs, compared to the full-model baseline.

# of Epochs     | 20    | 40    | 60    | 80    | 100   | Baseline
Robust Acc. (%) | 44.49 | 45.27 | 45.73 | 45.20 | 44.53 | 44.78
Clean Acc. (%)  | 75.15 | 76.28 | 76.48 | 77.60 | 78.07 | 77.21
4.4 Boosting Ticket Applications on Adversarially Trained WideResNet-34-10
Until now, we have confirmed that boosting tickets exist consistently across different models and training schemes and convey important insights into the behavior of pruned models. However, in the natural training setting, although boosting tickets provide faster convergence, they are not suitable for accelerating the standard training procedure, as pruning to find the boosting tickets requires training the full model beforehand. On the other hand, the two observations mentioned in Section 4 enable boosting tickets to accelerate adversarial training.
In Table 3, we apply adversarial training with the proposed approach to WideResNet-34-10, which has the same structure as in [madry2017towards], for 40, 70, and 100 epochs, and report the best clean/robust accuracy under various attacks over the whole training process. In particular, we perform 20-step PGD and 100-step PGD as white-box attacks, where the attacker has access to the model parameters.
One might suspect whether the models resulting from pruning and adversarial training are indeed robust against strong attacks, as the pruning mask is obtained from a weakly robust model. We conduct extensive experiments on CIFAR-10 with WideResNet-34-10 to evaluate the robustness of this model and compare it to the robust model trained with Madry et al.'s method [madry2017towards]. We therefore include results for C&W attacks [carlini2017towards] and transfer attacks [papernot2016transferability, liu2016delving], where we attack one model with adversarial examples found by 20-step PGD on the other models.
We find that adversarial examples generated on one model transfer to the others with only a slight loss in attack effectiveness. This indicates that our models and Madry et al.'s models share adversarial examples, and thus share similar decision boundaries.
Table 3: Test accuracy (%) under various attacks, and time consumption, for Madry et al.'s model and ours trained for 40, 70, and 100 epochs. Transfer rows report accuracy against 20-step PGD examples generated on the named source model.

Models                 | Madry's | Ours-40 | Ours-70 | Ours-100
Natural                | 86.21   | 87.72   | 87.85   | 87.35
PGD-20                 | 50.07   | 50.37   | 50.48   | 49.92
PGD-100                | 49.32   | 49.28   | 49.58   | 49.11
C&W                    | 50.46   | 50.92   | 50.82   | 50.37
Transfer from Madry's  | --      | 58.16   | 57.39   | 57.63
Transfer from Ours-40  | 58.69   | --      | 54.04   | 56.11
Transfer from Ours-70  | 58.77   | 54.60   | --      | 55.23
Transfer from Ours-100 | 58.61   | 56.62   | 55.20   | --
Pruning Time (s)       | 0       | 15,462  | 15,462  | 15,462
Training Time (s)      | 134,764 | 54,090  | 94,796  | 137,105
Total Time (s)         | 134,764 | 69,552  | 110,258 | 152,567
Ours/Madry's           | --      | 0.51    | 0.82    | 1.13
We report the time consumption for training each model to measure how much time is saved by boosting tickets. We run all experiments on a workstation with two V100 GPUs in parallel. From Table 3, we observe that while our approach requires pruning before training, it is overall faster, as the pruning stage uses FGSM-based adversarial training. In particular, to achieve its best robust accuracy, Madry et al.'s original training method [madry2017towards] requires 134,764 seconds on WideResNet-34-10. To match that, our boosting ticket requires only 69,552 seconds, including 15,462 seconds to find the boosting ticket and 54,090 seconds to retrain it, saving 49% of the total training time.
5 Discussion and Future Work
Not knowledge distillation. It may seem that winning tickets and boosting tickets behave like knowledge distillation [ba2014deep, hinton2015distilling], where the knowledge learned by a large model is transferred to a small model. This conjecture would explain the boosting effect as the pruned model quickly recovering the knowledge of the full model. However, the lottery ticket framework appears distinct from knowledge distillation. If boosting tickets simply transferred knowledge from the full model to the pruned model, then an FGSM-based adversarially trained model should not produce tickets that improve the robustness of the submodel against PGD attacks, as the full model itself is vulnerable to PGD attacks. Yet in Section 4.1 we observe that an FGSM-based adversarially trained model still yields boosting tickets that accelerate PGD-based adversarial training. We believe the cause of boosting tickets requires further investigation in the future.
Accelerating adversarial training. Recently, [shafahi2019adversarial] proposed to reduce the training time of PGD-based adversarial training by recycling the gradients computed for parameter updates to construct adversarial examples. While their approach focuses on reducing the computational time per epoch, our method focuses on the convergence rate (i.e., reducing the number of epochs required for convergence). Therefore, our approach is compatible with theirs, making it a promising future direction to combine both to further reduce the training time.
6 Conclusion
In this paper, we investigate boosting tickets: subnetworks coupled with certain initializations that can be trained with a significantly faster convergence rate. As a practical application, in the adversarial training scheme, we show that pruning a weakly robust model allows us to find boosting tickets that save up to 49% of the total training time to obtain a strongly robust model matching state-of-the-art robustness. Finally, it is an interesting direction to investigate whether boosting tickets can be found without training the full model beforehand, as this is technically not necessary.
References
Appendix A Model Architectures and Setup
In Table 4, we summarize the number of parameters and parameter sizes of all the model architectures that we evaluate, including VGG16 [simonyan2014very], ResNet18 [he2016deep], and the variants of WideResNet [zagoruyko2016wide].

Models           | # of Parameters | Size (MB)
VGG16            | 29,975,444      | 114.35
ResNet18         | 11,173,962      | 42.63
WideResNet-34-10 | 46,160,474      | 176.09
WideResNet-28-10 | 36,479,194      | 139.16
WideResNet-22-10 | 26,797,914      | 102.23
WideResNet-16-10 | 17,116,634      | 65.29
WideResNet-10-10 | 7,435,354       | 28.36
WideResNet-34-5  | 11,554,074      | 44.08
WideResNet-34-2  | 1,855,578       | 7.08
WideResNet-34-1  | 466,714         | 1.78

Appendix B One-Shot Pruning
In Figure 10, we track the training of models obtained from both iterative pruning and one-shot pruning. We find the performance of both, in terms of boosting effects and final accuracy, is indistinguishable.
Appendix C Experiments on MNIST
In this section, we report experiment results on MNIST for the standard setting, where we use LeNet with two convolutions and two fully connected layers for the classification task.
Since we do not use learning rate scheduling on MNIST, early stopping is used to determine the speed of convergence. In Table 5, we report the epoch at which early stopping happens and the test accuracy, illustrating the existence of boosting tickets on MNIST. While winning tickets converge at the 18th epoch, boosting tickets converge at the 11th epoch, indicating faster convergence.
Table 5: Early stopping epoch and test accuracy on MNIST.

Models                 | Full Model | Winning | Boosting | Rand Init
Early Stopping (epoch) | 20         | 18      | 11       | 16
Test Accuracy (%)      | 99.18      | 99.24   | 99.23    | 98.97