What is the State of Neural Network Pruning?

by Davis Blalock et al.

Neural network pruning—the task of reducing the size of a network by removing parameters—has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.



1 Introduction

Much of the progress in machine learning in the past decade has been a result of deep neural networks. Many of these networks, particularly those that perform the best Huang et al. (2018), require enormous amounts of computation and memory. These requirements not only increase infrastructure costs, but also make deployment of networks to resource-constrained environments such as mobile phones or smart devices challenging Han et al. (2015); Sze et al. (2017); Yang et al. (2017).

One popular approach for reducing these resource requirements at test time is neural network pruning, which entails systematically removing parameters from an existing network. Typically, the initial network is large and accurate, and the goal is to produce a smaller network with similar accuracy. Pruning has been used since the late 1980s Janowsky (1989); Mozer and Smolensky (1989, 1989); Karnin (1990), but has seen an explosion of interest in the past decade thanks to the rise of deep neural networks.

For this study, we surveyed 81 recent papers on pruning in the hopes of extracting practical lessons for the broader community. For example: which technique achieves the best accuracy/efficiency tradeoff? Are there strategies that work best on specific architectures or datasets? Which high-level design choices are most effective?

There are indeed several consistent results: pruning parameters based on their magnitudes substantially compresses networks without reducing accuracy, and many pruning methods outperform random pruning. However, our central finding is that the state of the literature is such that our motivating questions are impossible to answer. Few papers compare to one another, and methodologies are so inconsistent between papers that we could not make these comparisons ourselves. For example, a quarter of papers compare to no other pruning method, half of papers compare to at most one other method, and dozens of methods have never been compared to by any subsequent work. In addition, no dataset/network pair appears in even a third of papers, evaluation metrics differ widely, and hyperparameters and other confounders vary or are left unspecified.

Most of these issues stem from the absence of standard datasets, networks, metrics, and experimental practices. To help enable more comparable pruning research, we identify specific impediments and pitfalls, recommend best practices, and introduce ShrinkBench, a library for standardized evaluation of pruning. ShrinkBench makes it easy to adhere to the best practices we identify, largely by providing a standardized collection of pruning primitives, models, datasets, and training routines.

Our contributions are as follows:

  1. A meta-analysis of the neural network pruning literature based on comprehensively aggregating reported results from 81 papers.

  2. A catalog of problems in the literature and best practices for avoiding them. These insights derive from analyzing existing work and pruning hundreds of models.

  3. ShrinkBench, an open-source library for evaluating neural network pruning methods.

2 Overview of Pruning

Before proceeding, we first offer some background on neural network pruning and a high-level overview of how existing pruning methods typically work.

2.1 Definitions

We define a neural network architecture as a function family f(x; ·). The architecture consists of the configuration of the network's parameters and the sets of operations it uses to produce outputs from inputs, including the arrangement of parameters into convolutions, activation functions, pooling, batch normalization, etc. Example architectures include AlexNet and ResNet-56. We define a neural network model as a particular parameterization of an architecture, i.e., f(x; W) for specific parameters W. Neural network pruning entails taking as input a model f(x; W) and producing a new model f(x; M ⊙ W′). Here W′ is a set of parameters that may be different from W, M ∈ {0, 1}^|W′| is a binary mask that fixes certain parameters to 0, and ⊙ is the elementwise product operator. In practice, rather than using an explicit mask, pruned parameters of W′ are fixed to zero or removed entirely.
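This masked parameterization is easy to make concrete. Below is a minimal sketch of my own (not the paper's code) showing pruning as the elementwise product M ⊙ W′:

```python
import numpy as np

def apply_mask(weights, mask):
    """Return M ⊙ W': entries where the mask is False become exactly 0."""
    return np.where(mask, weights, 0.0)

W = np.array([0.5, -1.2, 0.03, 0.9])
M = np.array([True, True, False, True])  # mask fixes the third parameter to 0
W_pruned = apply_mask(W, M)              # array([ 0.5, -1.2,  0. ,  0.9])
```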

2.2 High-Level Algorithm

There are many methods of producing a pruned model f(x; M ⊙ W′) from an initially untrained model f(x; W₀), where W₀ is sampled from an initialization distribution D. Nearly all neural network pruning strategies in our survey derive from Algorithm 1 Han et al. (2015). In this algorithm, the network is first trained to convergence. Afterwards, each parameter or structural element in the network is issued a score, and the network is pruned based on these scores. Pruning reduces the accuracy of the network, so it is trained further (known as fine-tuning) to recover. The process of pruning and fine-tuning is often iterated several times, gradually reducing the network’s size.

Many papers propose slight variations of this algorithm. For example, some papers prune periodically during training Gale et al. (2019) or even at initialization Lee et al. (2019). Others modify the network to explicitly include additional parameters that encourage sparsity and serve as a basis for scoring the network after training Molchanov et al. (2017).

Algorithm 1: Pruning and Fine-Tuning
Input: N, the number of iterations of pruning, and X, the dataset on which to train and fine-tune
1: W ← initialize()
2: W ← trainToConvergence(f(X; W))
3: M ← 1^|W|
4: for i in 1 to N do
5:   M ← prune(M, score(W))
6:   W ← fineTune(f(X; M ⊙ W))
7: end for
8: return M, W
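The prune-and-fine-tune loop can be sketched in runnable form. The sketch below is illustrative only: it uses magnitude scoring (one common choice from the survey), and `fine_tune` is a stand-in for real training, not the paper's implementation.

```python
import numpy as np

def prune_and_fine_tune(weights, n_iters, frac_per_iter, fine_tune):
    """Iteratively prune the lowest-|w| fraction of surviving weights,
    fine-tuning after each round (Algorithm 1's control flow)."""
    weights = np.asarray(weights, dtype=float).copy()
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(n_iters):
        survivors = np.flatnonzero(mask)
        scores = np.abs(weights[survivors])        # score = |w|
        k = int(len(survivors) * frac_per_iter)    # prune lowest-scoring k
        mask[survivors[np.argsort(scores)[:k]]] = False
        weights = fine_tune(np.where(mask, weights, 0.0))  # recover accuracy
    return mask, np.where(mask, weights, 0.0)

# With a no-op "fine-tuning" step, two rounds of 50% pruning keep only the
# two largest-magnitude weights.
mask, w = prune_and_fine_tune([1, -2, 3, -4, 5, -6, 7, -8], 2, 0.5, lambda w: w)
```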

2.3 Differences Between Pruning Methods

Within the framework of Algorithm 1, pruning methods vary primarily in their choices regarding sparsity structure, scoring, scheduling, and fine-tuning.

Structure. Some methods prune individual parameters (unstructured pruning). Doing so produces a sparse neural network, which, although smaller in terms of parameter count, may not be arranged in a fashion conducive to speedups using modern libraries and hardware. Other methods consider parameters in groups (structured pruning), removing entire neurons, filters, or channels to exploit hardware and software optimized for dense computation Li et al. (2016); He et al. (2017).

Scoring. It is common to score parameters based on their absolute values, trained importance coefficients, or contributions to network activations or gradients. Some pruning methods compare scores locally, pruning a fraction of the parameters with the lowest scores within each structural subcomponent of the network (e.g., layers) Han et al. (2015). Others consider scores globally, comparing scores to one another irrespective of the part of the network in which the parameter resides Lee et al. (2019); Frankle and Carbin (2019).
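The local vs global distinction is easy to see with a small sketch of my own: local pruning removes the lowest-magnitude fraction within each layer, while global pruning pools all scores and applies one network-wide cutoff.

```python
import numpy as np

def local_masks(layers, frac):
    """Prune the lowest-|w| fraction within each layer separately."""
    masks = []
    for w in layers:
        cutoff = np.sort(np.abs(w).ravel())[int(w.size * frac)]  # per-layer
        masks.append(np.abs(w) >= cutoff)
    return masks

def global_masks(layers, frac):
    """Prune the lowest-|w| fraction across the whole network at once."""
    scores = np.concatenate([np.abs(w).ravel() for w in layers])
    cutoff = np.sort(scores)[int(scores.size * frac)]  # one cutoff for all
    return [np.abs(w) >= cutoff for w in layers]

# A layer with uniformly small weights keeps half of itself under local
# scoring but loses everything under global scoring.
layers = [np.array([0.1, 0.2, 0.3, 0.4]), np.array([1.0, 2.0, 3.0, 4.0])]
```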

Scheduling. Pruning methods differ in the amount of the network to prune at each step. Some methods prune all desired weights at once in a single step Liu et al. (2019). Others prune a fixed fraction of the network iteratively over several steps Han et al. (2015) or vary the rate of pruning according to a more complex function Gale et al. (2019).
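For the iterative case, there is a simple back-of-the-envelope relationship (my own sketch, not from the paper): if each of n steps removes a fixed fraction p of the surviving weights, the remaining density after n steps is (1 − p)^n, which can be inverted to find the per-step rate needed to hit a target overall sparsity s.

```python
def per_step_fraction(target_sparsity, n_steps):
    """Per-step pruning fraction so that n steps reach the target sparsity."""
    return 1.0 - (1.0 - target_sparsity) ** (1.0 / n_steps)
```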

Fine-tuning. For methods that involve fine-tuning, it is most common to continue to train the network using the trained weights from before pruning. Alternative proposals include rewinding the network to an earlier state Frankle et al. (2019) and reinitializing the network entirely Liu et al. (2019).

2.4 Evaluating Pruning

Pruning can accomplish many different goals, including reducing the storage footprint of the neural network, the computational cost of inference, the energy requirements of inference, etc. Each of these goals favors different design choices and requires different evaluation metrics. For example, when reducing the storage footprint of the network, all parameters can be treated equally, meaning one should evaluate the overall compression ratio achieved by pruning. However, when reducing the computational cost of inference, different parameters may have different impacts. For instance, in convolutional layers, filters applied to spatially larger inputs are associated with more computation than those applied to smaller inputs.
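The point about unequal parameter costs can be made concrete. Counting multiply-adds (the "FLOPs" most papers report), a convolution costs roughly k·k·c_in·c_out·h_out·w_out, so a filter applied to a large spatial map accounts for far more compute than one applied to a small map. The layer shapes below are illustrative, not taken from any specific network in the corpus.

```python
def conv_multiply_adds(k, c_in, c_out, h_out, w_out):
    """Multiply-adds for a k x k convolution producing an h_out x w_out map."""
    return k * k * c_in * c_out * h_out * w_out

early = conv_multiply_adds(3, 64, 64, 56, 56)    # large 56x56 feature map
late = conv_multiply_adds(3, 512, 512, 7, 7)     # small 7x7 feature map
per_filter_early = early // 64                   # cost of one output filter
per_filter_late = late // 512
```

Pruning one early filter here saves several times more compute than pruning one late filter, even though both remove "one filter."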

Regardless of the goal, pruning imposes a tradeoff between model efficiency and quality, with pruning increasing the former while (typically) decreasing the latter. This means that a pruning method is best characterized not by a single model it has pruned, but by a family of models corresponding to different points on the efficiency-quality curve. To quantify efficiency, most papers report at least one of two metrics. The first is the number of multiply-adds (often referred to as FLOPs) required to perform inference with the pruned network. The second is the fraction of parameters pruned. To measure quality, nearly all papers report changes in Top-1 or Top-5 image classification accuracy.

As others have noted Lebedev et al. (2014); Figurnov et al. (2016); Louizos et al. (2017); Yang et al. (2017); Han et al. (2015); Kim et al. (2015); Wen et al. (2016); Luo et al. (2017); He et al. (2018b), these metrics are far from perfect. Parameter and FLOP counts are a loose proxy for real-world latency, throughput, memory usage, and power consumption. Similarly, image classification is only one of the countless tasks to which neural networks have been applied. However, because the overwhelming majority of papers in our corpus focus on these metrics, our meta-analysis necessarily does as well.

3 Lessons from the Literature

After aggregating results from a corpus of 81 papers, we identified a number of consistent findings. In this section, we provide an overview of our corpus and then discuss these findings.

3.1 Papers Used in Our Analysis

Our corpus consists of 79 pruning papers published since 2010 and two classic papers LeCun et al. (1990); Hassibi et al. (1993) that have been compared to by a number of recent methods. We selected these papers by identifying popular papers in the literature and what cites them, systematically searching through conference proceedings, and tracing the directed graph of comparisons between pruning papers. This last procedure results in the property that, barring oversights on our part, there is no pruning paper in our corpus that compares to any pruning paper outside of our corpus. Additional details about our corpus and its construction can be found in Appendix A.

3.2 How Effective is Pruning?

One of the clearest findings about pruning is that it works. More precisely, there are various methods that can significantly compress models with little or no loss of accuracy. In fact, for small amounts of compression, pruning can sometimes increase accuracy Han et al. (2015); Suzuki et al. (2018). This basic finding has been replicated in a large fraction of the papers in our corpus.

Along the same lines, it has been repeatedly shown that, at least for large amounts of pruning, many pruning methods outperform random pruning Yu et al. (2018); Gale et al. (2019); Frankle et al. (2019); Mariet and Sra (2015); Suau et al. (2018); He et al. (2017). Interestingly, this does not always hold for small amounts of pruning Morcos et al. (2019). Similarly, pruning all layers uniformly tends to perform worse than intelligently allocating parameters to different layers Gale et al. (2019); Han et al. (2015); Li et al. (2016); Molchanov et al. (2016); Luo et al. (2017) or pruning globally Lee et al. (2019); Frankle and Carbin (2019). Lastly, when holding the number of fine-tuning iterations constant, many methods produce pruned models that outperform retraining from scratch with the same sparsity pattern Zhang et al. (2015); Yu et al. (2018); Louizos et al. (2017); He et al. (2017); Luo et al. (2017); Frankle and Carbin (2019) (at least with a large enough amount of pruning Suau et al. (2018)). Retraining from scratch in this context means training a fresh, randomly-initialized model with all weights clamped to zero throughout training, except those that are nonzero in the pruned model.

Another consistent finding is that sparse models tend to outperform dense ones for a fixed number of parameters. Lee et al. (2019) show that increasing the nominal size of ResNet-20 on CIFAR-10 while sparsifying to hold the number of parameters constant decreases the error rate. Kalchbrenner et al. (2018) obtain a similar result for audio synthesis, as do Gray et al. (2017) for a variety of additional tasks across various domains. Perhaps most compelling of all are the many results, including in Figure 1, showing that pruned models can obtain higher accuracies than the original models from which they are derived. This demonstrates that sparse models can not only outperform dense counterparts with the same number of parameters, but sometimes dense models with even more parameters.

3.3 Pruning vs Architecture Changes

One current unknown about pruning is how effective it tends to be relative to simply using a more efficient architecture. These options are not mutually exclusive, but it may be useful in guiding one’s research or development efforts to know which choice is likely to have the larger impact. Along similar lines, it is unclear how pruned models from different architectures compare to one another, i.e., to what extent does pruning offer similar benefits across architectures? To address these questions, we plotted the reported accuracies and compression/speedup levels of pruned models on ImageNet alongside the same metrics for different architectures with no pruning (Figure 1).

Since many pruning papers report only change in accuracy or amount of pruning, without giving baseline numbers, we normalize all pruning results to have accuracies and model sizes/FLOPs as if they had begun with the same model. Concretely, this means multiplying the reported fraction of pruned size/FLOPs by a standardized initial value. This value is set to the median initial size or number of FLOPs reported for that architecture across all papers. This normalization scheme is not perfect, but does help control for different methods beginning with different baseline accuracies. We plot results within a family of models as a single curve. The EfficientNet family is given explicitly in the original paper Tan and Le (2019), the ResNet family consists of ResNet-18, ResNet-34, ResNet-50, etc., and the VGG family consists of VGG-{11, 13, 16, 19}. There are no pruned EfficientNets since EfficientNet was published too recently. Results for non-pruned models are taken from Tan and Le (2019) and Bianco et al. (2018).
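The normalization step described above amounts to a one-line computation. The sketch below is mine; the sizes are invented placeholders, not values from the corpus.

```python
import statistics

def normalized_size(reported_fraction, initial_sizes_for_arch):
    """Scale a paper's reported fraction of size/FLOPs remaining by the
    median initial value reported for that architecture across papers."""
    return reported_fraction * statistics.median(initial_sizes_for_arch)

# e.g., a paper reports 25% of parameters remaining for an architecture
# whose reported initial sizes across papers are 130M, 138M, and 140M.
size = normalized_size(0.25, [130e6, 138e6, 140e6])
```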

Figure 1 suggests several conclusions. First, it reinforces the conclusion that pruning can improve the time or space vs accuracy tradeoff of a given architecture, sometimes even increasing the accuracy. Second, it suggests that pruning generally does not help as much as switching to a better architecture. Finally, it suggests that pruning is more effective for architectures that are less efficient to begin with.

4 Missing Controlled Comparisons

While there do appear to be a few general and consistent findings in the pruning literature (see the previous section), by far the clearest takeaway is that pruning papers rarely make direct and controlled comparisons to existing methods. This lack of comparisons stems largely from a lack of experimental standardization and the resulting fragmentation in reported results. This fragmentation makes it difficult for even the most committed authors to compare to more than a few existing methods.

4.1 Omission of Comparison

Many papers claim to advance the state of the art, but don’t compare to other methods—including many published ones—that make the same claim.

Ignoring Pre-2010s Methods

There was already a rich body of work on neural network pruning by the mid-1990s (see, e.g., Reed’s survey Reed (1993)), which has been almost completely ignored except for LeCun’s Optimal Brain Damage LeCun et al. (1990) and Hassibi’s Optimal Brain Surgeon Hassibi et al. (1993). Indeed, multiple authors have rediscovered existing methods or aspects thereof, with Han et al. (2015) reintroducing the magnitude-based pruning of Janowsky (1989), Lee et al. (2019) reintroducing the saliency heuristic of Mozer and Smolensky (1989), and He et al. (2018a) reintroducing the practice of “reviving” previously pruned weights described in Tresp et al. (1997).

Figure 1: Size and speed vs accuracy tradeoffs for different pruning methods and families of architectures. Pruned models sometimes outperform the original architecture, but rarely outperform a better architecture.

Ignoring Recent Methods

Even when considering only post-2010 approaches, there are still virtually no methods that have been shown to outperform all existing “state-of-the-art” methods. This follows from the fact, depicted in the top plot of Figure 2, that there are dozens of modern papers—including many affirmed through peer review—that have never been compared to by any later study.

A related problem is that papers tend to compare to few existing methods. In the lower plot of Figure 2, we see that more than a fourth of our corpus does not compare to any previously proposed pruning method, and another fourth compares to only one. Nearly all papers compare to three or fewer. This might be adequate if there were a clear progression of methods with one or two “best” methods at any given time, but this is not the case.

Figure 2: Reported comparisons between papers.

4.2 Dataset and Architecture Fragmentation

Among 81 papers, we found results using 49 datasets, 132 architectures, and 195 (dataset, architecture) combinations. As shown in Table 1, even the most common combination of dataset and architecture, VGG-16 on ImageNet Deng et al. (2009), is used in only 22 out of 81 papers. (We adopt the common practice of referring to the ILSVRC2012 training and validation sets as “ImageNet.”) Moreover, three of the top six most common combinations involve MNIST LeCun et al. (1998a). As Gale et al. (2019) and others have argued, using larger datasets and models is essential when assessing how well a method works for real-world networks. MNIST results may be particularly unlikely to generalize, since this dataset differs significantly from other popular datasets for image classification. In particular, its images are grayscale, composed mostly of zeros, and possible to classify with over 99% accuracy using simple models LeCun et al. (1998b).

4.3 Metrics Fragmentation

(Dataset, Architecture) Pair | Papers Using Pair
ImageNet, VGG-16 | 22
ImageNet, ResNet-50 | 15
CIFAR-10, ResNet-56 | 14
MNIST, LeNet-300-100 | 12
MNIST, LeNet-5 | 11
ImageNet, CaffeNet | 10
ImageNet, AlexNet | 8
ImageNet, ResNet-18 | 6
ImageNet, ResNet-34 | 6
CIFAR-10, ResNet-110 | 5
CIFAR-10, PreResNet-164 | 4
CIFAR-10, ResNet-32 | 4
Table 1: All combinations of dataset and architecture used in at least 4 out of 81 papers.

As depicted in Figure 3, papers report a wide variety of metrics and operating points, making it difficult to compare results. Each column in this figure is one (dataset, architecture) combination taken from the four most common combinations, excluding results on MNIST. (We combined the results for AlexNet and CaffeNet, a slightly modified version of AlexNet, since many authors refer to the latter as “AlexNet,” and it is often unclear which model was used.) Each row is one pair of metrics. Each curve is the efficiency vs accuracy tradeoff obtained by one method. (Since what counts as one method can be unclear, we consider all results from one paper to be one method except when two or more named methods within the paper report using at least one identical x-coordinate, i.e., when the paper’s results can’t be plotted as one curve.) Methods are color-coded by year.

It is hard to identify any consistent trends in these plots, aside from the existence of a tradeoff between efficiency and accuracy. A given method is only present in a small subset of plots. Methods from later years do not consistently outperform methods from earlier years. Methods within a plot are often incomparable because they report results at different points on the x-axis. Even when methods are nearby on the x-axis, it is not clear whether one meaningfully outperforms another since neither reports a standard deviation or other measure of central tendency. Finally, most papers in our corpus do not report any results with any of these common configurations.

Figure 3: Fragmentation of results. Shown are all self-reported results on the most common (dataset, architecture) combinations. Each column is one combination, each row shares an accuracy metric (y-axis), and pairs of rows share a compression metric (x-axis). Up and to the right is always better. Standard deviations are shown for He 2018 on CIFAR-10, which is the only result that provides any measure of central tendency. As suggested by the legend, only 37 out of the 81 papers in our corpus report any results using any of these configurations.

4.4 Incomplete Characterization of Results

If all papers reported a wide range of points in their tradeoff curves across a large set of models and datasets, there might be some number of direct comparisons possible between any given pair of methods. As we see in the upper half of Figure 4, however, most papers use at most three (dataset, architecture) pairs; and as we see in the lower half, they use at most three—and often just one—point to characterize each curve. Combined with the fragmentation in experimental choices, this means that different methods’ results are rarely directly comparable. Note that the lower half restricts results to the four most common (dataset, architecture) pairs.

Figure 4: Number of results reported by each paper, excluding MNIST. Top) Most papers report on three or fewer (dataset, architecture) pairs. Bottom) For each pair used, most papers characterize their tradeoff between amount of pruning and accuracy using a single point in the efficiency vs accuracy curve. In both plots, the pattern holds even for peer-reviewed papers.

4.5 Confounding Variables

Even when comparisons include the same datasets, models, metrics, and operating points, other confounding variables still make meaningful comparisons difficult. Some variables of particular interest include:


  • Accuracy and efficiency of the initial model

  • Data augmentation and preprocessing

  • Random variations in initialization, training, and fine-tuning. This includes choice of optimizer, hyperparameters, and learning rate schedule.

  • Pruning and fine-tuning schedule

  • Deep learning library. Different libraries are known to yield different accuracies for the same architecture and dataset Northcutt (2019); Nola (2016) and may have subtly different behaviors Vryniotis (2018).

  • Subtle differences in code and environment that may not be easily attributable to any of the above variations J. Crall (2018); A. Jogeshwar (2017).

In general, it is not clear that any paper can succeed in accounting for all of these confounders unless that paper has both used the same code as the methods to which it compares and reports enough measurements to average out random variations. This is exceptionally rare, with Gale et al. (2019) and Liu et al. (2019) being arguably the only examples. Moreover, neither of these papers introduce novel pruning methods per se but are instead inquiries into the efficacy of existing methods.

Many papers attempt to account for subsets of these confounding variables. A near universal practice in this regard is reporting change in accuracy relative to the original model, in addition to or instead of raw accuracy. This helps to control for the accuracy of the initial model. However, as we demonstrate in Section 7, this is not sufficient to remove initial model as a confounder. Certain initial models can be pruned more or less efficiently, in terms of the accuracy vs compression tradeoff. This holds true even with identical pruning methods and all other variables held constant.

There are at least two more empirical reasons to believe that confounding variables can have a significant impact. First, as one can observe in Figure 3, methods often introduce changes in accuracy of much less than 1% at reported operating points. This means that, even if confounders have only a tiny impact on accuracy, they can still have a large impact on which method appears better.

Second, as shown in Figure 5, existing results demonstrate that different training and fine-tuning settings can yield nearly as much variability as different methods. Specifically, consider 1) the variability introduced by different fine-tuning methods for unstructured magnitude-based pruning (Figure 5, top) and 2) the variability introduced by entirely different pruning methods (Figure 5, bottom). The variability between fine-tuning methods is nearly as large as the variability between pruning methods.

Figure 5: Pruning ResNet-50 on ImageNet. Methods in the upper plot all prune weights with the smallest magnitudes, but differ in implementation, pruning schedule, and fine-tuning. The variation caused by these variables is similar to the variation across different pruning methods, whose results are shown in the lower plot. All results are taken from the original papers.

5 Further Barriers to Comparison

In the previous section, we discussed the fragmentation of datasets, models, metrics, operating points, and experimental details, and how this fragmentation makes evaluating the efficacy of individual pruning methods difficult. In this section, we argue that there are additional barriers to comparing methods that stem from common practices in how methods and results are presented.

5.1 Architecture Ambiguity

It is often difficult, or even impossible, to identify the exact architecture that authors used. Perhaps the most prevalent example of this is when authors report using some sort of ResNet He et al. (2016a, b). Because there are two different variations of ResNets, introduced in these two papers, saying that one used a “ResNet-50” is insufficient to identify a particular architecture. Some authors do appear to deliberately point out the type of ResNet they use (e.g., Liu et al. (2017); Dong et al. (2017)). However, given that few papers even hint at the possibility of confusion, it seems unlikely that all authors are even aware of the ambiguity, let alone that they have cited the corresponding paper in all cases.

Perhaps the greatest confusion is over VGG networks Simonyan and Zisserman (2014). Many papers describe experimenting on “VGG-16,” “VGG,” or “VGGNet,” suggesting a standard and well-known architecture. In many cases, what is actually used is a custom variation of some VGG model, with removed fully-connected layers Changpinyo et al. (2017); Luo et al. (2017), smaller fully-connected layers Lee et al. (2019), or added dropout or batchnorm Liu et al. (2017); Lee et al. (2019); Peng et al. (2018); Molchanov et al. (2017); Ding et al. (2018); Suau et al. (2018).

In some cases, papers simply fail to make clear what model they used (even for non-VGG architectures). For example, one paper just states that their segmentation model “is composed from an inception-like network branch and a DenseNet network branch.” Another paper attributes their VGGNet to Parkhi et al. (2015), which mentions three VGG networks. Liu et al. (2019) and Frankle and Carbin (2019) have circular references to one another that can no longer be resolved because of simultaneous revisions. One paper mentions using a “VGG-S” from the Caffe Model Zoo, but as of this writing, no model with this name exists there. Perhaps the most confusing case is the Lenet-5-Caffe reported in one 2017 paper. The authors are to be commended for explicitly stating not only that they use Lenet-5-Caffe, but their exact architecture. However, they describe an architecture with an 800-unit fully-connected layer, while examination of both the Caffe .prototxt files Jia et al. (2015b, a) and associated blog post Jia et al. (2016) indicates that no such layer exists in Lenet-5-Caffe.

5.2 Metrics Ambiguity

It can also be difficult to know what the reported metrics mean. For example, many papers include a metric along the lines of “Pruned%”. In some cases, this means the fraction of the parameters or FLOPs remaining Suau et al. (2018). In other cases, it means the fraction of parameters or FLOPs removed Han et al. (2015); Lebedev and Lempitsky (2016); Yao et al. (2018). There is also widespread misuse of the term “compression ratio,” which the compression literature has long used to mean (original size) / (compressed size) Siedelmann et al. (2015); Zukowski et al. (2006); Zhao et al. (2015); Lindstrom (2014); Ratanaworabhan et al. (2006); Blalock et al. (2018), but which many pruning authors define (usually without making the formula explicit) as 1 − (compressed size) / (original size).
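The two incompatible conventions for "compression ratio" are easiest to see side by side (the function names and labels below are mine):

```python
def compression_ratio_standard(orig_size, pruned_size):
    """Compression-literature usage: how many times smaller the model is."""
    return orig_size / pruned_size            # 4x smaller -> 4.0

def fraction_removed(orig_size, pruned_size):
    """Usage common in pruning papers: fraction of the model removed."""
    return 1.0 - pruned_size / orig_size      # 4x smaller -> 0.75
```

The same pruned model thus yields "4" under one convention and "0.75" under the other, which is why reporting the formula matters.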

Reported “speedup” values present similar challenges. These values are sometimes wall time, sometimes original number of FLOPs divided by pruned number of FLOPs, sometimes a more complex formula relating these two quantities Dong et al. (2017); He et al. (2018a), and sometimes never made clear. Even when reporting FLOPs, which is nominally a consistent metric, different authors measure it differently (e.g., Molchanov et al. (2016) vs Wang and Cheng (2016)), though most often papers entirely omit their formula for computing FLOPs. We found up to a factor of four variation in the reported FLOPs of different papers for the same architecture and dataset, with Yang et al. (2017) reporting 371 MFLOPs for AlexNet on ImageNet, Choi et al. (2019) reporting 724 MFLOPs, and Han et al. (2015) reporting 1500 MFLOPs.

6 Summary and Recommendations

In the previous sections, we have argued that existing work tends to


  • make it difficult to identify the exact experimental setup and metrics,

  • use too few (dataset, architecture) combinations,

  • report too few points in the tradeoff curve for any given combination, and no measures of central tendency,

  • omit comparison to many methods that might be state-of-the-art, and

  • fail to control for confounding variables.

These problems often make it difficult or impossible to assess the relative efficacy of different pruning methods. To enable direct comparison between methods in the future, we suggest the following practices:


  • Identify the exact sets of architectures, datasets, and metrics used, ideally in a structured way that is not scattered throughout the results section.

  • Use at least three (dataset, architecture) pairs, including modern, large-scale ones. MNIST and toy models do not count. AlexNet, CaffeNet, and Lenet-5 are no longer modern architectures.

  • For any given pruned model, report both compression ratio and theoretical speedup. Compression ratio is defined as the original size divided by the new size. Theoretical speedup is defined as the original number of multiply-adds divided by the new number. Note that there is no reason to report only one of these metrics.

  • For ImageNet and other many-class datasets, report both Top-1 and Top-5 accuracy. There is again no reason to report only one of these.

  • Whatever metrics one reports for a given pruned model, also report these metrics for an appropriate control (usually the original model before pruning).

  • Plot the tradeoff curve for a given dataset and architecture, alongside the curves for competing methods.

  • When plotting tradeoff curves, use at least 5 operating points spanning a range of compression ratios. The set of ratios {2, 4, 8, 16, 32} is a good choice.

  • Report and plot means and sample standard deviations, instead of one-off measurements, whenever feasible.

  • Ensure that all methods being compared use identical libraries, data loading, and other code to the greatest extent possible.
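The size and speed metrics recommended above are easy to compute and report together; the following is a minimal sketch (function names are ours, for illustration), using the definitions given in the list:

```python
import statistics

def summarize_runs(values):
    """Mean and sample standard deviation across runs, rather than a
    single one-off measurement."""
    return statistics.mean(values), statistics.stdev(values)

def pruning_report(orig_params, new_params, orig_madds, new_madds,
                   top1_runs, top5_runs):
    """Compute the metrics recommended above for one pruned model:
    compression ratio (original size / new size), theoretical speedup
    (original multiply-adds / new multiply-adds), and Top-1/Top-5
    accuracy as (mean, sample std) across runs."""
    return {
        "compression_ratio": orig_params / new_params,
        "theoretical_speedup": orig_madds / new_madds,
        "top1": summarize_runs(top1_runs),
        "top5": summarize_runs(top5_runs),
    }
```

Reporting all four quantities (plus the unpruned control) costs a few extra table columns but removes most of the ambiguity cataloged in Section 5.2.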

We also recommend that reviewers demand a much greater level of rigor when evaluating papers that claim to offer a better method of pruning neural networks.

7 ShrinkBench

7.1 Overview of ShrinkBench

To make it as easy as possible for researchers to put our suggestions into practice, we have created an open-source library for pruning called ShrinkBench. ShrinkBench provides standardized and extensible functionality for training, pruning, fine-tuning, computing metrics, and plotting, all using a standardized set of pretrained models and datasets.

ShrinkBench is based on PyTorch Paszke et al. (2017) and is designed to allow easy evaluation of methods with arbitrary scoring functions, allocation of pruning across layers, and sparsity structures. In particular, given a callback defining how to compute masks for a model’s parameter tensors at a given iteration, ShrinkBench will automatically apply the pruning, update the network according to a standard training or fine-tuning setup, and compute metrics across many models, datasets, random seeds, and levels of pruning. We defer discussion of ShrinkBench’s implementation and API to the project’s documentation.
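To illustrate the kind of mask-computing callback described above, here is a hypothetical sketch: the function name and signature are ours, not ShrinkBench’s actual API (see the project’s documentation for that), and it operates on NumPy arrays rather than PyTorch parameter tensors so it stays self-contained.

```python
import numpy as np

def global_magnitude_masks(param_tensors, fraction):
    """Hypothetical mask callback: given a model's parameter tensors,
    return binary masks that prune the given fraction of weights with
    the lowest absolute value anywhere in the network."""
    all_mags = np.concatenate([np.abs(t).ravel() for t in param_tensors])
    k = int(len(all_mags) * fraction)            # number of weights to prune
    # k-th smallest magnitude is the global keep threshold
    threshold = np.sort(all_mags)[k] if k > 0 else -np.inf
    return [(np.abs(t) >= threshold).astype(t.dtype) for t in param_tensors]
```

A framework would then multiply each parameter tensor by its mask before every forward pass and proceed with standard fine-tuning.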

7.2 Baselines

We used ShrinkBench to implement several existing pruning heuristics, both as examples of how to use our library and as baselines that new methods can compare to:


  • Global Magnitude Pruning - prunes the weights with the lowest absolute value anywhere in the network.

  • Layerwise Magnitude Pruning - for each layer, prunes the weights with the lowest absolute value.

  • Global Gradient Magnitude Pruning - prunes the weights with the lowest absolute value of (weight × gradient), evaluated on a batch of inputs.

  • Layerwise Gradient Magnitude Pruning - for each layer, prunes the weights with the lowest absolute value of (weight × gradient), evaluated on a batch of inputs.

  • Random Pruning - prunes each weight independently with probability equal to the fraction of the network to be pruned.

Magnitude-based approaches are common baselines in the literature and have been shown to be competitive with more complex methods Han et al. (2015, 2016); Gale et al. (2019); Frankle et al. (2019). Gradient-based methods are less common, but are simple to implement and have recently gained popularity Lee et al. (2019, 2019); Yu et al. (2018). Random pruning is a common straw man that can serve as a useful debugging tool. Note that these baselines are not reproductions of any of these methods, but merely inspired by their pruning heuristics.
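To make the contrast between the global, layerwise, and random baselines concrete, here is a minimal NumPy sketch in the same spirit; function names are ours and this is not the ShrinkBench implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layerwise_magnitude_masks(param_tensors, fraction):
    """Layerwise magnitude baseline: each layer prunes its own
    lowest-magnitude weights, so every layer keeps the same fraction
    (unlike the global variant, which uses one network-wide threshold)."""
    masks = []
    for t in param_tensors:
        k = int(t.size * fraction)
        thresh = np.sort(np.abs(t).ravel())[k] if k > 0 else -np.inf
        masks.append((np.abs(t) >= thresh).astype(t.dtype))
    return masks

def random_masks(param_tensors, fraction):
    """Random baseline: prune each weight independently with
    probability equal to `fraction`."""
    return [(rng.random(t.shape) >= fraction).astype(t.dtype)
            for t in param_tensors]
```

The only difference between the layerwise and global variants is where the threshold is computed; that small change is enough to alter the accuracy-vs-size tradeoff, as the experiments below show.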

7.3 Avoiding Pruning Pitfalls with ShrinkBench

Using the baselines described above, we pruned over 800 networks with varying datasets, architectures, compression ratios, initial weights, and random seeds. In doing so, we identified various pitfalls associated with experimental practices that are currently common in the literature but are avoided by using ShrinkBench.

We highlight several noteworthy results below. For additional experimental results and details, see Appendix D. For all CIFAR-10 results, error bars show one standard deviation across three runs.

Metrics are not Interchangeable.

As discussed previously, it is common practice to report either the reduction in the number of parameters or the reduction in the number of FLOPs. If these two metrics were highly correlated, reporting only one would be sufficient to characterize the efficacy of a pruning method. After computing both metrics for the same models under many different settings, however, we found that reporting one metric is not sufficient. While the metrics are correlated, the correlation is different for each pruning method. Thus, the relative performance of different methods can vary significantly under different metrics (Figure 6).

Figure 6: Top 1 Accuracy for ResNet-18 on ImageNet for several compression ratios and their corresponding theoretical speedups. Global methods give higher accuracy than Layerwise ones for a fixed model size, but the reverse is true for a fixed theoretical speedup.

Results Vary Across Models, Datasets, and Pruning Amounts.

Many methods report results on only a small number of datasets, models, amounts of pruning, and random seeds. If the relative performance of different methods tends to be constant across all of these variables, this may not be problematic. However, our results suggest that this performance is not constant.

Figure 7 shows the accuracy for various compression ratios for CIFAR-VGG Zagoruyko (2015) and ResNet-56 on CIFAR-10. In general, Global methods are more accurate than Layerwise methods and Magnitude-based methods are more accurate than Gradient-based methods, with random performing worst of all. However, if one were to look only at CIFAR-VGG for compression ratios smaller than 10, one could conclude that Global Gradient outperforms all other methods. Similarly, while Global Gradient consistently outperforms Layerwise Magnitude on CIFAR-VGG, the opposite holds on ResNet-56 (i.e., the orange and green lines switch places).

Moreover, we found that for some settings close to the drop-off point (such as Global Gradient, compression 16), different random seeds yielded significantly different results (0.88 vs 0.61 accuracy) due to the randomness in minibatch selection. This is illustrated by the large vertical error bar in the left subplot.

Figure 7: Top 1 Accuracy on CIFAR-10 for several compression ratios. Global Gradient performs better than Global Magnitude for CIFAR-VGG on low compression ratios, but worse otherwise. Global Gradient is consistently better than Layerwise Magnitude on CIFAR-VGG, but consistently worse on ResNet-56.

Using the Same Initial Model is Essential.

As mentioned in Section 4.5, many methods are evaluated using different initial models with the same architecture. To assess whether beginning with a different model can skew the results, we created two different models and evaluated Global vs Layerwise Magnitude pruning on each with all other variables held constant.

To obtain the models, we trained two ResNet-56 networks using Adam until convergence with two different learning rates. We’ll refer to the resulting pretrained weights as Weights A and Weights B, respectively. As shown on the left side of Figure 8, different methods appear better on different models. With Weights A, the two methods yield similar absolute accuracies. With Weights B, however, the Global method is more accurate at higher compression ratios.

Figure 8: Global and Layerwise Magnitude Pruning on two different ResNet-56 models. Even with all other variables held constant, different initial models yield different tradeoff curves. This may cause one method to erroneously appear better than another. Controlling for initial accuracy does not fix this.

We also found that the common practice of examining changes in accuracy is insufficient to correct for initial model as a confounder. Even when reporting changes, one pruning method can artificially appear better than another by virtue of beginning with a different model. We see this on the right side of Figure 8, where Layerwise Magnitude with Weights B appears to outperform Global Magnitude with Weights A, even though the former never outperforms the latter when initial model is held constant.

8 Conclusion

Considering the enormous interest in neural network pruning over the past decade, it seems natural to ask simple questions about the relative efficacy of different pruning techniques. Although a few basic findings are shared across the literature, missing baselines and inconsistent experimental settings make it impossible to assess the state of the art or confidently compare the dozens of techniques proposed in recent years. After carefully studying the literature and enumerating numerous areas of incomparability and confusion, we suggest concrete remedies in the form of a list of best practices and an open-source library, ShrinkBench, to help future research produce the kinds of results that will harmonize the literature and make our motivating questions easier to answer. Furthermore, our ShrinkBench results for various pruning techniques demonstrate the need for standardized experiments when evaluating neural network pruning methods.


Acknowledgements

We thank Luigi Celona for providing the data used in Bianco et al. (2018) and Vivienne Sze for helpful discussion. This research was supported by the Qualcomm Innovation Fellowship, the “la Caixa” Foundation Fellowship, Quanta Computer, and Wistron Corporation.


References

  • S. Bianco, R. Cadene, L. Celona, and P. Napoletano (2018) Benchmark analysis of representative deep neural network architectures. IEEE Access 6, pp. 64270–64277. Cited by: Acknowledgements, footnote 2.
  • D. Blalock, S. Madden, and J. Guttag (2018) Sprintz: time series compression for the internet of things. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2 (3), pp. 93. Cited by: §5.2.
  • S. Changpinyo, M. Sandler, and A. Zhmoginov (2017) The power of sparsity in convolutional neural networks. arXiv preprint arXiv:1702.06257. Cited by: §5.1.
  • Y. Choi, M. El-Khamy, and J. Lee (2019) Jointly sparse convolutional neural networks in dual spatial-winograd domains. arXiv preprint arXiv:1902.08192. Cited by: §5.2.
  • J. Crall (2018) Accuracy of resnet50 is much higher than reported! Note: https://github.com/kuangliu/pytorch-cifar/issues/45 Accessed: 2019-07-22. Cited by: 6th item.
  • J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. Cited by: §4.2.
  • X. Ding, G. Ding, J. Han, and S. Tang (2018) Auto-balanced filter pruning for efficient convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence. Cited by: §5.1.
  • X. Dong, J. Huang, Y. Yang, and S. Yan (2017) More is less: a more complicated network with less inference complexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5840–5848. Cited by: §5.1, §5.2.
  • A. Dubey, M. Chatterjee, and N. Ahuja (2018) Coreset-based neural network compression. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 454–470. Cited by: Appendix A.
  • M. Figurnov, A. Ibraimova, D. P. Vetrov, and P. Kohli (2016) Perforatedcnns: acceleration through elimination of redundant convolutions. In Advances in Neural Information Processing Systems, pp. 947–955. Cited by: §2.4.
  • J. Frankle and M. Carbin (2019) The lottery ticket hypothesis: finding sparse, trainable neural networks. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, External Links: Link Cited by: §2.3, §3.2, §5.1.
  • J. Frankle, G. K. Dziugaite, D. M. Roy, and M. Carbin (2019) The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611. Cited by: §2.3, §3.2, §7.2.
  • T. Gale, E. Elsen, and S. Hooker (2019) The state of sparsity in deep neural networks. External Links: 1902.09574 Cited by: §2.2, §2.3, §3.2, §4.2, §4.5, §7.2.
  • S. Gray, A. Radford, and D. P. Kingma (2017) Gpu kernels for block-sparse weights. arXiv preprint arXiv:1711.09224. Cited by: §3.2.
  • S. Han, H. Mao, and W. J. Dally (2016) Deep compression: compressing deep neural network with pruning, trained quantization and huffman coding. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Y. Bengio and Y. LeCun (Eds.), External Links: Link Cited by: §7.2.
  • S. Han, J. Pool, J. Tran, and W. Dally (2015) Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135–1143. Cited by: Appendix A, §1, §2.2, §2.3, §2.3, §2.4, §3.2, §3.2, §4.1, §5.2, §5.2, §7.2.
  • B. Hassibi, D. G. Stork, and G. J. Wolff (1993) Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pp. 293–299. Cited by: §3.1, §4.1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016a) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: §5.1.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016b) Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Cited by: §5.1.
  • Y. He, G. Kang, X. Dong, Y. Fu, and Y. Yang (2018a) Soft filter pruning for accelerating deep convolutional neural networks. In IJCAI International Joint Conference on Artificial Intelligence, Cited by: Appendix A, §4.1, §5.2.
  • Y. He, J. Lin, Z. Liu, H. Wang, L. Li, and S. Han (2018b) Amc: automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 784–800. Cited by: §2.4.
  • Y. He, X. Zhang, and J. Sun (2017) Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397. Cited by: §2.3, §3.2.
  • Y. Huang, Y. Cheng, D. Chen, H. Lee, J. Ngiam, Q. V. Le, and Z. Chen (2018) Gpipe: efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965. Cited by: §1.
  • Z. Huang and N. Wang (2018) Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304–320. Cited by: Appendix A.
  • S. A. Janowsky (1989) Pruning versus clipping in neural networks. Physical Review A 39 (12), pp. 6600–6603 (en). External Links: ISSN 0556-2791, Link, Document Cited by: §1, §4.1.
  • Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell (2015a) Lenet-train-test. Note: https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet_train_test.prototxt Cited by: §5.1.
  • Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell (2015b) Lenet. Note: https://github.com/BVLC/caffe/blob/master/examples/mnist/lenet.prototxt Cited by: §5.1.
  • Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell (2016) Training lenet on mnist with caffe. Note: https://caffe.berkeleyvision.org/gathered/examples/mnist.html Accessed: 2019-07-22. Cited by: §5.1.
  • A. Jogeshwar (2017) Validating resnet50. Note: https://github.com/keras-team/keras/issues/8672 Accessed: 2019-07-22. Cited by: 6th item.
  • N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart, F. Stimberg, A. v. d. Oord, S. Dieleman, and K. Kavukcuoglu (2018) Efficient neural audio synthesis. arXiv preprint arXiv:1802.08435. Cited by: §3.2.
  • E. D. Karnin (1990) A simple procedure for pruning back-propagation trained neural networks. IEEE transactions on neural networks 1 (2), pp. 239–242. Cited by: §1.
  • [32] (2017-09) Keras exported model shows very low accuracy in tensorflow serving. Note: https://github.com/keras-team/keras/issues/7848 Accessed: 2019-07-22. Cited by: 6th item.
  • Y. Kim, E. Park, S. Yoo, T. Choi, L. Yang, and D. Shin (2015) Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530. Cited by: Appendix A, §2.4.
  • V. Lebedev, Y. Ganin, M. Rakhuba, I. Oseledets, and V. Lempitsky (2014) Speeding-up convolutional neural networks using fine-tuned cp-decomposition. arXiv preprint arXiv:1412.6553. Cited by: §2.4.
  • V. Lebedev and V. Lempitsky (2016) Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2564. Cited by: §5.2.
  • Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, et al. (1998a) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §4.2.
  • Y. LeCun, C. Cortes, and C. Burges (1998b) The mnist database of handwritten digits. Note: Accessed: 2019-09-6. Cited by: §4.2.
  • Y. LeCun, J. S. Denker, and S. A. Solla (1990) Optimal brain damage. In Advances in neural information processing systems, pp. 598–605. Cited by: §3.1, §4.1.
  • N. Lee, T. Ajanthan, S. Gould, and P. H. S. Torr (2019) A Signal Propagation Perspective for Pruning Neural Networks at Initialization. arXiv:1906.06307 [cs, stat] (en). Note: arXiv: 1906.06307 External Links: Link Cited by: §3.2, §7.2.
  • N. Lee, T. Ajanthan, and P. H. S. Torr (2019) Snip: single-shot network pruning based on connection sensitivity. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, External Links: Link Cited by: §2.2, §2.3, §3.2, §4.1, §5.1, §7.2.
  • H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf (2016) Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Cited by: §2.3, §3.2.
  • P. Lindstrom (2014) Fixed-rate compressed floating-point arrays. IEEE transactions on visualization and computer graphics 20 (12), pp. 2674–2683. Cited by: §5.2.
  • Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang (2017) Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736–2744. Cited by: §5.1, §5.1.
  • Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell (2019) Rethinking the value of network pruning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, External Links: Link Cited by: §2.3, §2.3, §4.5, §5.1.
  • C. Louizos, K. Ullrich, and M. Welling (2017) Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pp. 3288–3298. Cited by: Appendix A, §2.4, §3.2.
  • J. Luo, J. Wu, and W. Lin (2017) Thinet: a filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision, pp. 5058–5066. Cited by: §2.4, §3.2, §5.1.
  • Z. Mariet and S. Sra (2015) Diversity networks: neural network compression using determinantal point processes. arXiv preprint arXiv:1511.05077. Cited by: §3.2.
  • D. Molchanov, A. Ashukha, and D. Vetrov (2017) Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2498–2507. Cited by: §2.2, §5.1.
  • P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz (2016) Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440. Cited by: §3.2, §5.2.
  • A. S. Morcos, H. Yu, M. Paganini, and Y. Tian (2019) One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers. arXiv:1906.02773 [cs, stat] (en). Note: arXiv: 1906.02773 External Links: Link Cited by: §3.2.
  • M. C. Mozer and P. Smolensky (1989) Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in neural information processing systems, pp. 107–115. Cited by: §1, §4.1.
  • M. C. Mozer and P. Smolensky (1989) Using Relevance to Reduce Network Size Automatically. Connection Science 1 (1), pp. 3–16 (en). External Links: ISSN 0954-0091, 1360-0494, Link, Document Cited by: §1.
  • D. Nola (2016) Keras doesn’t reproduce caffe example code accuracy. Note: https://github.com/keras-team/keras/issues/4444 Accessed: 2019-07-22. Cited by: 5th item.
  • C. Northcutt (2019) Towards reproducibility: benchmarking keras and pytorch. Note: https://l7.curtisnorthcutt.com/towards-reproducibility-benchmarking-keras-pytorch Accessed: 2019-07-22. Cited by: 5th item.
  • O. M. Parkhi, A. Vedaldi, A. Zisserman, et al. (2015) Deep face recognition. In BMVC, Vol. 1, pp. 6. Cited by: §5.1.
  • A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer (2017) Automatic differentiation in pytorch. Cited by: §7.1.
  • B. Peng, W. Tan, Z. Li, S. Zhang, D. Xie, and S. Pu (2018) Extreme network compression via filter group approximation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 300–316. Cited by: Appendix A, §5.1.
  • P. Ratanaworabhan, J. Ke, and M. Burtscher (2006) Fast lossless compression of scientific floating-point data. In Data Compression Conference (DCC’06), pp. 133–142. Cited by: §5.2.
  • R. Reed (1993) Pruning algorithms-a survey. IEEE Transactions on Neural Networks 4 (5), pp. 740–747 (en). External Links: ISSN 10459227, Link, Document Cited by: §4.1.
  • H. Siedelmann, A. Wender, and M. Fuchs (2015) High speed lossless image compression. In German Conference on Pattern Recognition, pp. 343–355. Cited by: §5.2.
  • K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §5.1.
  • X. Suau, L. Zappella, and N. Apostoloff (2018) NETWORK compression using correlation analysis of layer responses. Cited by: §3.2, §5.1, §5.2.
  • T. Suzuki, H. Abe, T. Murata, S. Horiuchi, K. Ito, T. Wachi, S. Hirai, M. Yukishima, and T. Nishimura (2018) Spectral-pruning: compressing deep neural network via spectral analysis. arXiv preprint arXiv:1808.08558. Cited by: §3.2.
  • V. Sze, Y. Chen, T. Yang, and J. Emer (2017) Efficient processing of deep neural networks: a tutorial and survey. arXiv preprint arXiv:1703.09039. Cited by: §1.
  • M. Tan and Q. V. Le (2019) EfficientNet: rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946. Cited by: footnote 2.
  • V. Tresp, R. Neuneier, and H. Zimmermann (1997) Early brain damage. In Advances in neural information processing systems, pp. 669–675. Cited by: §4.1.
  • V. Vryniotis (2018) Change bn layer to use moving mean/var if frozen. Note: https://github.com/keras-team/keras/pull/9965 Accessed: 2019-07-22. Cited by: 5th item.
  • P. Wang and J. Cheng (2016) Accelerating convolutional neural networks for mobile applications. In Proceedings of the 24th ACM international conference on Multimedia, pp. 541–545. Cited by: §5.2.
  • W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li (2016) Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074–2082. Cited by: §2.4.
  • [70] (2016-05) What’s the advantage of the reference caffenet in comparison with the alexnet? Note: https://github.com/BVLC/caffe/issues/4202 Accessed: 2019-07-22. Cited by: footnote 4.
  • K. Yamamoto and K. Maeno (2018) Pcas: pruning channels with attention statistics. arXiv preprint arXiv:1806.05382. Cited by: Appendix A.
  • T. Yang, Y. Chen, and V. Sze (2017) Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5687–5695. Cited by: Appendix A, §1, §2.4, §5.2.
  • Z. Yao, S. Cao, and W. Xiao (2018) Balanced sparsity for efficient dnn inference on gpu. arXiv preprint arXiv:1811.00206. Cited by: §5.2.
  • R. Yu, A. Li, C. Chen, J. Lai, V. I. Morariu, X. Han, M. Gao, C. Lin, and L. S. Davis (2018) Nisp: pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9194–9203. Cited by: §3.2, §7.2.
  • S. Zagoruyko (2015) 92.45% on cifar-10 in torch. Note: https://torch.ch/blog/2015/07/30/cifar.html Accessed: 2019-07-22. Cited by: §7.3.
  • X. Zhang, J. Zou, K. He, and J. Sun (2015) Accelerating very deep convolutional networks for classification and detection. IEEE transactions on pattern analysis and machine intelligence 38 (10), pp. 1943–1955. Cited by: §3.2.
  • W. X. Zhao, X. Zhang, D. Lemire, D. Shan, J. Nie, H. Yan, and J. Wen (2015) A general simd-based approach to accelerating compression algorithms. ACM Transactions on Information Systems (TOIS) 33 (3), pp. 15. Cited by: §5.2.
  • M. Zukowski, S. Heman, N. Nes, and P. Boncz (2006) Super-scalar ram-cpu cache compression. In Data Engineering, 2006. ICDE’06. Proceedings of the 22nd International Conference on, pp. 59–59. Cited by: §5.2.

Appendix A Corpus and Data Cleaning

We selected the 81 papers used in our analysis in the following way. First, we conducted an ad hoc literature search, finding widely cited papers introducing pruning methods and identifying other pruning papers that cited them using Google Scholar. We then went through the conference proceedings from the past year’s NeurIPS, ICML, CVPR, ECCV, and ICLR and added all relevant papers (though it is possible we had false dismissals if the title and abstract did not seem relevant to pruning). Finally, during the course of cataloging which papers compared to which others, we added to our corpus any pruning paper that at least one existing paper in our corpus purported to compare to. We included both published papers and unpublished ones of reasonable quality (typically on arXiv). Since we make strong claims about the lack of comparisons, we included in our corpus five papers whose methods technically do not meet our definition of pruning but are similar in spirit and compared to by various pruning papers. In short, we included essentially every paper introducing a method of pruning neural networks that we could find, taking care to capture the full directed graph of papers and comparisons between them.

Because different papers report slightly different metrics, particularly with respect to model size, we converted reported results to a standard set of metrics whenever possible. For example, we converted reported Top-1 error rates to Top-1 accuracies, and fractions of parameters pruned to compression ratios. Note that it is not possible to convert between size metrics and speedup metrics, since the amount of computation associated with a given parameter can depend on the layer in which it resides (since convolutional filters are reused at many spatial positions). For simplicity and uniformity, we only consider self-reported results except where stated otherwise.
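The conversions described above are straightforward; a short sketch (function names are ours, for illustration):

```python
def top1_accuracy_from_error(top1_error_pct):
    """Convert a reported Top-1 error rate (in %) to Top-1 accuracy (in %)."""
    return 100.0 - top1_error_pct

def compression_ratio_from_fraction_pruned(fraction_pruned):
    """Convert 'fraction of parameters pruned' to a compression ratio,
    defined as original size / new size."""
    return 1.0 / (1.0 - fraction_pruned)
```

For example, pruning 90% of a model’s parameters corresponds to a 10x compression ratio. As noted above, no analogous conversion exists between size metrics and speedup metrics.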

We also did not attempt to capture all reported metrics, but instead focused only on model size reduction and theoretical speedup, since 1) these are by far the most commonly reported and, 2) there is already a dearth of directly comparable numbers even for these common metrics. This is not entirely fair to methods designed to optimize other metrics, such as power consumption Louizos et al. (2017); Yang et al. (2017); Han et al. (2015); Kim et al. (2015), memory bandwidth usage Peng et al. (2018); Kim et al. (2015), or fine-tuning time Dubey et al. (2018); Yamamoto and Maeno (2018); Huang and Wang (2018); He et al. (2018a), and we consider this a limitation of our analysis.

Lastly, since our analysis relies on our reading of hundreds of pages of dense technical content, we are confident that we have made some number of isolated errors. We therefore welcome correction by email and refer the reader to the arXiv version of this paper for the most up-to-date revision.

Appendix B Checklist for Evaluating a Pruning Method

For any pruning technique proposed, check if:

  • It is contextualized with respect to magnitude pruning, recently-published pruning techniques, and pruning techniques proposed prior to the 2010s.

  • The pruning algorithm, constituent subroutines (e.g., score, pruning, and fine-tuning functions), and hyperparameters are presented in enough detail for a reader to reimplement and match the results in the paper.

  • All claims about the technique are appropriately restricted to only the experiments presented (e.g., CIFAR-10, ResNets, image classification tasks, etc.).

  • There is a link to downloadable source code.

For all experiments, check if you include:

  • A detailed description of the architecture, with hyperparameters, in enough detail for a reader to reimplement it and train it to the same performance reported in the paper.

  • If the architecture is not novel: a citation for the architecture/hyperparameters and a description of any differences in architecture, hyperparameters, or performance in this paper.

  • A detailed description of the dataset hyperparameters (e.g., batch size and augmentation regime) in enough detail for a reader to reimplement it.

  • A description of the library and hardware used.

For all results, check if:

  • Data is presented across a range of compression ratios, including extreme compression ratios at which the accuracy of the pruned network declines substantially.

  • Data specifies the raw accuracy of the network at each point.

  • Data includes multiple runs with separate initializations and random seeds.

  • Data includes clearly defined error bars and a measure of central tendency (e.g., mean) and variation (e.g., standard deviation).

  • Data includes FLOP-counts if the paper makes arguments about efficiency and performance due to pruning.

For all pruning results presented, check if there is a comparison to:

  • A random pruning baseline.

    • A global random pruning baseline.

    • A random pruning baseline with the same layerwise pruning proportions as the proposed technique.

  • A magnitude pruning baseline.

    • A global or uniform layerwise proportion magnitude pruning baseline.

    • A magnitude pruning baseline with the same layerwise pruning proportions as the proposed technique.

  • Other relevant state-of-the-art techniques, including:

    • A description of how the comparisons were produced (data taken from paper, reimplementation, or reuse of code from the paper) and any differences or uncertainties between this setting and the setting used in the main experiments.

Appendix C Experimental Setup

For reproducibility purposes, ShrinkBench fixes random seeds for all the dependencies (PyTorch, NumPy, Python).
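A minimal sketch of this seed fixing follows; in ShrinkBench’s actual setup PyTorch is seeded as well (via `torch.manual_seed`), which is omitted here only to keep the sketch dependency-light.

```python
import random

import numpy as np

def fix_seeds(seed):
    """Fix random seeds for Python and NumPy so that runs are
    reproducible. PyTorch would additionally be seeded with
    torch.manual_seed(seed) when it is available."""
    random.seed(seed)
    np.random.seed(seed)
```

With seeds fixed, two invocations of the same experiment draw identical random numbers, which is essential for attributing differences to the pruning method rather than to chance.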

C.1 Pruning Methods

For the reported experiments, we did not prune the classifier layer preceding the softmax; ShrinkBench supports pruning this layer as an option for all of the pruning strategies. For both Global and Layerwise Gradient Magnitude Pruning, a single minibatch is used to compute the gradients for pruning. Three independent runs with different random seeds were performed for every CIFAR-10 experiment. We found some variance across runs for methods that rely on randomness, such as Random Pruning or the gradient-based methods, which use a sampled minibatch to compute the gradients with respect to the weights.
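The single-minibatch gradient scoring described above can be sketched as follows; gradients are passed in precomputed (as NumPy arrays) so the sketch does not depend on any particular framework, and the function name is ours.

```python
import numpy as np

def gradient_magnitude_scores(weights, grads):
    """Score each weight by |weight * gradient|, with gradients computed
    on a single sampled minibatch as described above. Because only one
    minibatch is drawn, the scores, and hence the resulting pruning
    masks, vary across random seeds."""
    return [np.abs(w * g) for w, g in zip(weights, grads)]
```

The seed-to-seed variance noted in the text comes directly from this sampling: a different minibatch yields different gradients and therefore different scores.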

C.2 Finetuning Setup

Pruning was performed starting from the pretrained weights, and the resulting pruning mask was fixed from then on. Early stopping was used during finetuning: if the validation accuracy repeatedly decreased past some point, we stopped finetuning to prevent overfitting.
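The early-stopping loop just described can be sketched as below; the function and callable names are hypothetical, not ShrinkBench’s API, and the patience value is an assumed illustration.

```python
def finetune_with_early_stopping(max_epochs, train_epoch, val_accuracy,
                                 patience=3):
    """Finetuning loop with the early stopping described above: stop
    once validation accuracy has failed to improve for `patience`
    consecutive epochs. `train_epoch` and `val_accuracy` are
    caller-supplied callables (hypothetical names)."""
    best_acc, epochs_without_improvement = -1.0, 0
    for _ in range(max_epochs):
        train_epoch()
        acc = val_accuracy()
        if acc > best_acc:
            best_acc, epochs_without_improvement = acc, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return best_acc
```

Stopping on a plateau rather than a fixed epoch count keeps heavily pruned models, which often degrade quickly, from overfitting during finetuning.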

All reported CIFAR-10 experiments used the following finetuning setup:


  • Batch size: 64

  • Epochs: 30

  • Optimizer: Adam

  • Initial Learning Rate:

  • Learning rate schedule: Fixed

All reported ImageNet experiments used the following finetuning setup:


  • Batch size: 256

  • Epochs: 20

  • Optimizer: SGD with Nesterov Momentum (0.9)

  • Initial Learning Rate:

  • Learning rate schedule: Fixed

Appendix D Additional Results

Here we include the entire set of results obtained with ShrinkBench. For CIFAR10, results are included for CIFAR-VGG, ResNet-20, ResNet-56 and ResNet-110. Standard deviations across three different random runs are plotted as error bars. For ImageNet, results are reported for ResNet-18.

Figure 9: Accuracy for several levels of compression for CIFAR-VGG on CIFAR-10
Figure 10: Accuracy vs theoretical speedup for CIFAR-VGG on CIFAR-10
Figure 11: Accuracy for several levels of compression for ResNet-20 on CIFAR-10
Figure 12: Accuracy vs theoretical speedup for ResNet-20 on CIFAR-10
Figure 13: Accuracy for several levels of compression for ResNet-56 on CIFAR-10
Figure 14: Accuracy vs theoretical speedup for ResNet-56 on CIFAR-10
Figure 15: Accuracy for several levels of compression for ResNet-110 on CIFAR-10
Figure 16: Accuracy vs theoretical speedup for ResNet-110 on CIFAR-10
Figure 17: Accuracy for several levels of compression for ResNet-18 on ImageNet
Figure 18: Accuracy vs theoretical speedup for ResNet-18 on ImageNet