) problems, such as computer vision, natural language processing, and robotics. CNN architectures and models have surpassed human performance in many such challenges. These advancements are a result of innovation in various research directions, including network architectures, optimization methods, and software frameworks. However, these breakthroughs have come at the cost of ever-increasing model sizes and computation loads. Therefore, model compression becomes an important topic when CNN models are applied in practice, especially for edge applications.
In ML deployment scenarios, a lightweight compressed model has numerous advantages. On the server side, a smaller model reduces bandwidth usage and power consumption within a data center, leading to savings in operational cost. Further, deploying these CNN models on the client side (embedded or edge device) comes with the concomitant advantages of privacy, low latency, and better customization. However, in such scenarios these models face more restrictive resource constraints and need to be carefully tuned for optimal performance. Hence, model compression has garnered more research interest in recent years with advances in techniques such as Pruning, Quantization, and Low Rank Decomposition.
Since CNNs are commonly over-parameterized, pruning non-critical or redundant neurons is a reasonable option to reduce the model size and floating-point operations at runtime. Directly searching for the best combination of neurons to be pruned is an NP-hard problem and typically not feasible for a CNN with millions of parameters. Also, a pruned network with high sparsity may not lead to practical benefits. Therefore, a successful pruning algorithm needs to be efficient while reducing model size, improving inference speed, and maintaining accuracy.
In this paper, we provide a comprehensive survey of the algorithmic aspects of model pruning for CNNs with a focus on edge deployment. We identify the development trends and point out the current areas of focus. More importantly, we identify the drawbacks and challenges of these approaches and provide users with a better understanding of the trade-offs and avenues of further study.
2 Pruning Methodology
The problem of pruning is formulated as follows: given a labelled dataset D = {(x_i, y_i)}_{i=1}^N with N samples, find the best light-weight CNN model f(x; W) that takes an input x and predicts its label y, where W represents the model parameters. For a convolution layer, a weight W^l is the 4D kernel of shape C_out x C_in x k_h x k_w that converts C_in input channels into C_out output channels with spatial convolution over the k_h x k_w directions. The prediction performance is defined as the accuracy A(f(.; W)), and the un-pruned model that minimizes the loss function L (a surrogate for accuracy) serves as the baseline model.
The hardware limits for edge deployment lie in processor architecture and speed, memory, power/energy consumption, and inference latency. In practice, analytical proxies for theoretical and computational efficiency, such as FLOPs or the number of parameters, are commonly used. In this paper, we differentiate between the actual and analytical proxy results when necessary.
Current research in pruning is divided into two components: identifying the most promising neurons to be pruned; and training and finetuning the pruned model to recover the base model's prediction performance. A successful pruning algorithm is an iterative progression of these components, as illustrated in Algorithm 1, and improvements in the state-of-the-art come from advances in one or both of these aspects. Therefore, we categorize existing algorithms into these two categories for clarity in the following sections. We report their compression performance in Table 2.
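The iterative loop of Algorithm 1 can be sketched as follows. This is an illustrative stand-in, not any specific paper's method: `train` is a placeholder for real fine-tuning (here it only zeroes the pruned weights), and the magnitude criterion is just one of the saliency choices surveyed below.

```python
import numpy as np

def train(weights, mask, steps=1):
    # Stand-in for fine-tuning: a real implementation runs SGD on the
    # masked network; here we only zero out the pruned weights.
    return weights * mask

def prune_step(weights, mask, fraction):
    # Magnitude criterion: drop the smallest `fraction` of the weights
    # that are still alive under the current mask.
    alive = np.abs(weights[mask == 1])
    k = max(1, int(len(alive) * fraction))
    threshold = np.sort(alive)[k - 1]
    return mask * (np.abs(weights) > threshold)

def iterative_prune(weights, target_sparsity, fraction=0.2):
    mask = np.ones_like(weights)
    weights = train(weights, mask)                  # baseline training
    while 1 - mask.mean() < target_sparsity:
        mask = prune_step(weights, mask, fraction)  # pruning criterion
        weights = train(weights, mask)              # recovery retraining
    return weights, mask

rng = np.random.default_rng(0)
pruned_w, final_mask = iterative_prune(rng.normal(size=1000), 0.5)
print(1 - final_mask.mean())  # final sparsity, just past the 0.5 target
```

Pruning a fixed fraction of the surviving weights per iteration, rather than all at once, mirrors the train-prune-retrain cycle that lets accuracy recover between steps.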
2.1 Pruning Criteria
Different heuristic criteria were developed to identify the promising structures to be pruned without harming the prediction performance. We classify these criteria into two categories: data-agnostic and data-aware, where data-agnostic techniques compute saliency criteria without using the training data directly. Early works [26, 10] relied on the second-order Hessian matrix of the loss to identify the weights to be removed without harming model predictability. However, these approaches also require intensive computation of the second-order derivatives of the weights (and matrix inversion).
To alleviate the training burden,  proposed to merge weights by their value similarity. They demonstrated their success on fully-connected layers, which may dominate the model size.
In contrast to pruning unimportant weights all at once,  used simple weight values as saliency to prune them iteratively. At each pruning iteration, the weights with L2 norms below a given threshold are located and masked out in the subsequent training and inference stages. Despite its simplicity, the iterative procedure helps the pruned model to recover and maintain its accuracy and has been commonly used in pruning approaches.
Table 1: Summary of pruning criteria.

| Paper | Basic idea | Saliency expression |
|---|---|---|
| [26, 10] | Minimize pruning-induced deterioration | – |
| | Remove weights with small values | – |
| [40, 13] | Weight similarity and redundancy | – |
| [25, 44, 27, 31] | Structured L1/2-norm penalty | – |
| | Magnitude of Batch Norm scaling factor | – |
| | Remove by filter similarity | Geometric median |
| | Remove inactive neurons (APoZ) | – |
| | Remove activations with flat gradient | – |
| | Reconstruction error on channel pruning | – |
| | Entropy of activations, where p_i is the probability of activations falling in bin i | – |
| | Reconstruction error and L1 norm | – |
| | Neural Importance Score | – |
| | Remove insensitive neurons | Reset weights with small updates to initial value |
The above pruning approaches can significantly reduce both model size and compute, but reducing FLOPs is not directly linked with inference speedup, especially for deep CNNs, as the weights are replaced by sparse matrices. To alleviate such problems, pruning structural components is needed.  proposed to group weights on output channels before applying the L1-norm penalty. This structured pruning leads to a smaller and more accurate model.  extended the above channel-based approach to additional structure levels, i.e., filter count, filter shape, and layer width, to regularize the model size. Instead of grouping weights' L2 norms,  directly treated the L1-norm of filters as the criterion to prune convolution layers.
used the scaling factor of the Batch Normalization layer as the saliency measure.  proposed to prune filters close to the geometric median of all filters, as such filters can be represented by the others.
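Filter-level L1-norm pruning can be sketched in a few lines; the function names are ours, and the shape convention (C_out, C_in, kH, kW) is one common layout.

```python
import numpy as np

def filter_l1_ranking(kernel):
    """Rank the output filters of a conv layer by L1 norm.

    `kernel` has shape (C_out, C_in, kH, kW); each output filter's
    saliency is the sum of absolute values of its weights.
    """
    saliency = np.abs(kernel).sum(axis=(1, 2, 3))
    return np.argsort(saliency)  # ascending: prune from the front

def prune_filters(kernel, n_prune):
    drop = filter_l1_ranking(kernel)[:n_prune]
    keep = np.setdiff1d(np.arange(kernel.shape[0]), drop)
    # The result is a structurally smaller dense layer, not a sparse
    # matrix, so the FLOP reduction translates into real speedup.
    return kernel[keep]

rng = np.random.default_rng(1)
k = rng.normal(size=(64, 32, 3, 3))
pruned = prune_filters(k, n_prune=16)
print(pruned.shape)  # (48, 32, 3, 3)
```

In a full network, dropping an output filter also removes the corresponding input channel of the next layer, which is where most of the practical savings come from.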
Besides using model weights only, another category of approaches utilizes the training data directly for saliency measures, which we call data-aware pruning.  analyzed the average percentage of zeros (APoZ) of neurons as the criterion for weight importance.  proposed to prune the model at the channel level by minimizing the reconstruction error for the input data.  proposed to use the entropy of activations to identify the channels to be removed.  proposed neuron importance score pruning (NISP) to minimize the reconstruction error in the 'final response layer'.
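The APoZ criterion is simple to compute from post-ReLU feature maps collected over training data; a minimal sketch (function name and layout are ours):

```python
import numpy as np

def apoz(activations):
    """Average Percentage of Zeros per channel.

    `activations`: post-ReLU feature maps of shape (N, C, H, W)
    collected over a batch of training data. Channels with a high APoZ
    are rarely active and are candidates for pruning.
    """
    return (activations == 0).mean(axis=(0, 2, 3))  # one score per channel

# Toy check: channel 0 is dead (always zero), channel 1 is always active.
acts = np.stack([np.zeros((4, 8, 8)), np.ones((4, 8, 8))], axis=1)
print(apoz(acts))  # [1. 0.]
```

Because it averages over real inputs, APoZ is data-aware: the same weights can score differently on different datasets.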
Other more sophisticated measures have also been investigated.  revisited the derivatives of the loss function, approximating the cost of dropping a feature using first-order derivatives, and pruned network structures based on grouped activations ( in Table 1) from the feature maps.  proposed to use the magnitude of gradients to prune the model during training.
We summarize these criteria in Table 1. Limited by computational power, early works often focused on using the data-agnostic approach on a predefined saliency to prune weights. However, those approaches required a predefined threshold to prune neurons, which makes it hard to control the final compression ratio. More recent works have shifted to using patterns in the inference process (e.g., APoZ) to compress models while maintaining model accuracy. Also, the unstructured pruning methods that result in sparse weight matrices have been gradually rendered outdated by structured pruning approaches that provide a more realistic performance gain (discussed later in Sec. 3.4).
2.2 Pruning Procedure
With the above-mentioned saliency measures, early studies adopted a direct threshold on model weights and masked them out in the following iterations. However, such pruning approaches typically deteriorate model predictive performance and require a retraining step to recover the model prediction accuracy before further pruning (as illustrated in Fig. 1 (b)). The iterative procedure (training, pruning, and then retraining) is important for a successful pruning strategy. Below, we discuss the research on improving this pruning procedure.
Selecting a pre-defined threshold has three major drawbacks: 1) the threshold is not linked to sparsity directly; 2) different layers have different sensitivity; 3) a one-time threshold may cut off too much information for the model to restore its original accuracy.  analyzed a gradually increasing threshold schedule to prune network weights automatically to the final target sparsity. To solve the challenge of selecting the best threshold for each layer,  adopted Bayesian optimization to automatically tune the values as a Gaussian process.  proposed auto-balanced filter pruning by introducing a negative factor to the L2 norm. By using both positive and negative factors to regularize the filter weights, it can automatically select the important filters while suppressing less useful ones.
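A gradual sparsity schedule replaces the fixed threshold with a target sparsity that ramps up during training. The cubic form below follows Zhu and Gupta's automated gradual pruning [132]; the function name and default values are illustrative.

```python
def sparsity_schedule(step, begin, end, s_init=0.0, s_final=0.9):
    """Cubic gradual-sparsity schedule (after Zhu & Gupta, 2017).

    Sparsity ramps smoothly from s_init at `begin` to s_final at `end`,
    pruning aggressively early (when redundancy is high) and gently
    late (when the remaining weights are sensitive).
    """
    if step < begin:
        return s_init
    if step >= end:
        return s_final
    frac = (step - begin) / (end - begin)
    return s_final + (s_init - s_final) * (1 - frac) ** 3

print(sparsity_schedule(0, 100, 1100))     # 0.0 before pruning starts
print(sparsity_schedule(600, 100, 1100))   # 0.7875 halfway through
print(sparsity_schedule(1100, 100, 1100))  # 0.9 final target
```

Because the schedule is expressed in sparsity rather than a weight threshold, the final compression ratio is controlled directly, addressing drawback 1) above.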
Instead of pruning with a step function,  proposed a splicing function to mask the weights. Without an extreme change of the weight values, the pruning process can be fused with the retraining process, leading to a significant speed-up when training the compressed model, as shown in Fig. 1 (a). Furthermore, the soft masking allows pruned structures to be recovered if they are critical for the model architecture. The back-propagation is rewritten so that every weight, masked or not, is updated through the masked loss:

W ← W − η ∂L(W ⊙ h(W)) / ∂(W ⊙ h(W)),

where the masking function h gradually reduces unnecessary weights to zero:

h(w) = 0 if s(w) < a; h(w) keeps its current value if a ≤ s(w) < b; h(w) = 1 if s(w) ≥ b.

Hyperparameters a and b control the strength of the threshold, and if a = b, this reduces to the typical binary mask. One simple choice of the saliency s(w) is to use the weight magnitude |w| directly. Different from pruning by saliency, this approach can be viewed as using an auxiliary masking variable.  extended this idea to prune filters and demonstrated more practical success in extensive experiments.
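A minimal sketch of the splicing mask update, our own simplification of the dynamic-network-surgery idea; the thresholds `a`, `b` and the magnitude saliency follow the description above.

```python
import numpy as np

def splice_mask(weights, mask, a, b):
    """Splicing update (sketch): re-evaluate the mask every iteration.

    Unlike a one-shot binary threshold, weights whose saliency falls
    below `a` are masked out, weights that grow above `b` are spliced
    back in, and weights in between keep their current mask, so
    mistakenly pruned connections can recover during training.
    """
    s = np.abs(weights)        # simple magnitude saliency
    new_mask = mask.copy()
    new_mask[s < a] = 0.0      # prune
    new_mask[s >= b] = 1.0     # splice back in
    return new_mask            # a <= s < b: mask unchanged

w = np.array([0.01, 0.5, 0.2, -0.8])
m = np.array([1.0, 1.0, 0.0, 0.0])
print(splice_mask(w, m, a=0.1, b=0.4))  # [0. 1. 0. 1.]
```

Note that the fourth weight, previously masked, is recovered because its magnitude exceeds `b`; a hard binary mask could never undo that pruning decision.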
applied a similar method in the frequency domain to compress CNN models.
Deciding which weights to drop can be considered as optimizing an objective function with an L0-norm penalty on the weights. Starting from this perspective,  derived an equivalent expression by representing each mask as a probabilistic gate following a Bernoulli distribution.  proposed to iteratively train the model and learn binary gates to prune filters, which resulted in a compressed model with a marginal decrease in accuracy.  extended the auxiliary masking variables to be learned together with the model variables for filters, where the auxiliary variables scale the filters and are regularized by the L1-norm.  proposed a similar approach but with a novel tick-tock update schedule.  treated the masking variables as scaling factors for channel importance and pruned unimportant filters according to their effects on the loss. These methods relied on auxiliary layers to learn a masking that reduces the model size efficiently.
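A toy sketch of the auxiliary-gate idea: per-channel gates trained with an L1 penalty on a proxy objective, after which low-gate channels are pruned. This is an illustrative stand-in for learning gates jointly with the network, not any specific paper's exact formulation; all names and numbers are hypothetical.

```python
import numpy as np

def train_gates(channel_importance, lam=0.1, lr=0.1, steps=200):
    """Learn per-channel gates with an L1 sparsity penalty (toy proxy).

    Each channel c contributes gate[c] * channel_importance[c] to the
    objective; the L1 penalty `lam` drives the gates of low-importance
    channels toward zero, marking them for pruning.
    """
    g = np.ones_like(channel_importance)
    for _ in range(steps):
        # Gradient of  -importance * g + lam * |g|  with respect to g
        grad = -channel_importance + lam * np.sign(g)
        g = np.clip(g - lr * grad, 0.0, 1.0)
    return g

imp = np.array([0.5, 0.01, 0.3, 0.02])  # hypothetical channel utilities
gates = train_gates(imp)
keep = gates > 0.5
print(keep)  # channels 0 and 2 survive; 1 and 3 are gated off
```

The key property is that the pruning decision emerges from optimization rather than a hand-picked threshold: channels whose utility falls below the penalty strength `lam` are driven to zero.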
Many of the L1/2 norm-based penalties mentioned earlier are approximations to the L0-norm problem; however, they can lead to unstable and sub-optimal solutions. Several studies worked on novel optimization methods for this problem.  translated it into an alternative formulation and solved it with a forward-backward splitting algorithm optimizing an L2,1-norm instead. Compared to other penalty forms, this approach provides a better approximation to the original optimization problem.  adopted the Accelerated Proximal Gradient (APG) method to solve the problem with scaling factors similar to , but at various structure levels, to improve the performance on state-of-the-art compact CNN models.  learned gating masks via Alternating Direction Method of Multipliers (ADMM)-based optimization.
With advances in Reinforcement Learning (RL) research, there are many studies applying RL to model pruning.  built an individual gradient policy learner for each layer of a CNN model to prune filters for an overall reward combining the accuracy and efficiency terms.  adopted a deep deterministic policy gradient to continuously control the compression ratio for a balanced target of accuracy and resource consumption.  adopted Neural Architecture Search (NAS) to balance model performance by exploring the width and depth of a network.
In summary, the innovations for pruning procedures are classified into the following three directions of improvement: 1) efficient iterative procedure; 2) representative masking method; 3) robust learning to prune network along with training.
Table 2 provides a compilation of model compression algorithms. We collected the results for two datasets, CIFAR-10 and ImageNet [23, 38]. The former is a small dataset where we expected to see significant improvement over the baseline; for it, we report results with VGG or ResNet models only [39, 11]. The ImageNet dataset is challenging but more realistic, and for that reason we included other popular CNN models, namely AlexNet and MobileNet [24, 16].
We reported the best pruned model with the following three measures:
Accuracy reduction measures the model degradation as the difference in accuracy between the pruned model and the original model. A smaller value is better and a negative value indicates that the pruned model is better than the original model.
Size reduction measures the ratio of the reduction in size (i.e., number of parameters) over the original model size.
Time reduction measures the ratio of the reduction in time or FLOPs over the original model.
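The three measures above are simple ratios; the sketch below makes the sign conventions explicit. The numbers are hypothetical, chosen only to exercise the formulas.

```python
def accuracy_reduction(acc_base, acc_pruned):
    # Positive: the pruned model lost accuracy; negative: it improved.
    return acc_base - acc_pruned

def size_reduction(params_base, params_pruned):
    # Fraction of parameters removed relative to the original model.
    return 1 - params_pruned / params_base

def time_reduction(flops_base, flops_pruned):
    # FLOPs are an analytical proxy; on-device latency may differ.
    return 1 - flops_pruned / flops_base

print(round(accuracy_reduction(0.931, 0.925), 3))  # 0.006
print(round(size_reduction(138e6, 10.3e6), 3))     # 0.925
print(round(time_reduction(15.5e9, 7.0e9), 3))     # 0.548
```

Larger is better for the size and time measures, while smaller (or negative) is better for the accuracy measure.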
Overall, we observed that pruning leads to between a 10% and 90% reduction in both size and inference time. As commonly expected, compression for a more challenging problem, such as the ImageNet dataset, is harder than for the CIFAR-10 dataset. Unexpectedly, we found that a reduction in size is not linearly correlated with an improvement in inference speed. More importantly, we found that only a few studies have tested their performance on physical devices, whose behavior is often not well captured by FLOPs. Finally, we found that there is limited research using the data-aware approach for filter pruning, making that an interesting direction for further study.
The mark (c) means that the pruning is for the convolutional structure only; occasionally, fully connected layers have been converted to convolution layers and then pruned.
The mark (p) means that the time reduction is measured on physical devices; otherwise, it is based on FLOPs.
Top-5 accuracy difference is reported.
3.2 Other Approaches
Model compression techniques have varying advantages and disadvantages when compared with each other. Low Rank Decomposition  uses linear algebra to reduce the model's weight matrices with rank decomposition. This approach allows for a mathematically sound reduction in model size and computation speed-up. However, training each model requires a custom and complicated implementation procedure, which is a challenge. Quantization  is another popular compression technique which involves replacing high-precision floating points in CNN weight matrices with lower-precision representations. It has the distinct advantage of being both universal across different models and offering consistent performance improvements (depending on the implementation). However, it requires innate hardware support for these gains to be realized. Moreover, precision sensitivity can lead to performance deterioration. Finally, handcrafted smaller architectures such as MobileNet  have also enjoyed success in deployment under edge scenarios; however, designing a custom architecture for different edge settings results in excessive cost and effort, and does not scale.
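For concreteness, a minimal sketch of post-training quantization (symmetric, per-tensor, int8); the function names are ours, and real deployments need hardware int8 support to realize speedups.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform quantization of a weight tensor to int8.

    Stores int8 values plus one float scale per tensor, reconstructing
    w ≈ scale * q. The largest-magnitude weight maps to ±127.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(size=1000).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.dtype, err < s)  # int8, reconstruction error under one step
```

The rounding error is bounded by half a quantization step, which illustrates the precision-sensitivity trade-off mentioned above: layers whose accuracy depends on fine weight differences suffer most.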
Another interesting direction is to dynamically prune the network at runtime.  and  used additional structures to filter out unimportant features and reduce inference runtime. With small auxiliary connections,  boosted the important features and suppressed the irrelevant ones to reduce computation and improve inference speed.
3.3 Combined Approaches
We also observed a few studies on combining different compression approaches.  combined low rank decomposition with pruning by threshold and successfully compressed the model size by more than 10 times for AlexNet and VGG-16 models, without losing accuracy on the ImageNet dataset.  used the minimum description length principle to achieve quantization and pruning coherently via Bayesian variational inference. However, an ablation study on the effectiveness of joint approaches is still missing, and a systematic study would be extremely helpful to guide practitioners in this field.
3.4 Advantages and Limitations
Model size reduction: leveraging the redundant information carried in a CNN model, a natural consequence of pruning is the reduced size of the input model, making it an important step for deploying models on the edge.
Inference time reduction: most pruning methods targeting CNN models provide structured pruning rather than pruning at the individual weight level. These algorithms lead to realistic inference time reductions for deployment.
Universal compression approach: pruning methods are generally model independent and can be applied to any given model architecture with minor modifications. Meanwhile, a pruned model can be deployed to current hardware environments without the extensive engineering for implementation that quantization methods require.
There are still a few limitations that prevent pruning from becoming practical in industry settings for edge deployment.
Long training time: the typical workflow of network pruning, i.e. Algorithm 1, requires iterative model training for each pruned network, which can significantly increase the time required to build these models. Approaches such as  speed up the training and pruning with a better update schema, but there is still no one-shot solution to prune a network with minimal cost;
Extensive hyperparameter finetuning: all pruning strategies require a set of hyperparameters to finely balance the compression ratio and the model accuracy. Earlier approaches required detailed analysis of the weights in each layer, while recent techniques replace such requirements with more general global parameters and rely on hyperparameter optimization techniques (e.g. ) to speed up the tuning process;
Small but not fast: as realized in many later studies, a plain pruning strategy might result in fewer weights and FLOPs, but it does not guarantee a faster model with less energy consumption. Building a realistic model to capture the correct physical resource consumption is a critical step to achieving practical model compression at runtime [1, 36].
Benchmark: CNN architectures have experienced rapid development in recent years. Therefore, early pruning results, which are based on the over-parameterized CNN models, such as AlexNet and VGG-16 [24, 39], may not be effective for current efficient models. More recent studies have gradually focused on light-weight networks, such as ResNet and MobileNet [11, 16]. However, a commonly accepted baseline is still missing for fair comparison.
Last but not least, the improvement in energy efficiency is not included in the above analysis. Most research in pruning reduces the amount of computation theoretically. However, other factors, such as transferring data on and off the chip, consume energy comparable to that used for computation, which limits the applicability of these studies for deployment on low-power devices. To address this discrepancy,  included energy consumption analysis in their iterative pruning procedure.  showed that magnitude-based pruning on SqueezeNet  can lead to higher energy consumption. Therefore, they proposed to model the energy consumption of each layer and prune layers based on their energy consumption.  further modeled the energy consumption by bilinear regression and optimized the energy using the ADMM framework together with the original loss.
The consensus procedure is that taking a pre-trained model and iteratively retraining it to adjust to the reduced representation leads to a better model than training the pruned model from scratch [9, 27, 15]. However,  found the contradicting result that a model trained and pruned iteratively does not provide significantly better performance than a pruned model trained from scratch for a given budget. This discrepancy is another area that needs further investigation.
As neural network models get larger and the push towards edge/IoT devices becomes more pronounced, there is a need for techniques and best practices that allow for the creation of smaller, efficient models. Pruning is one such technique that allows us to create a smaller model from an existing larger and over-parameterized model. In this paper, we examined the constraints and metrics that motivate model compression, and formulated the requirements of pruning algorithms.
We organized research in the field based on pruning criteria and pruning procedure. We further classified the pruning criteria into data-agnostic and data-aware approaches. Our hope is that this breakdown will provide guidance for future researchers to develop new algorithms and allow them to compare previous works effectively (in addition to the comprehensive comparison in Table 2).
The field of deep learning is accelerated by the development of tools and frameworks for model building and training. Except for a few examples such as PocketFlow and Distiller [45, 56], a general platform for pruning is still missing. Our work also serves as guidance for developing a universal compression pipeline by depicting the independent components and procedures of the pruning process.
Finally, we provided an in-depth discussion on the limitations, advantages and novel research directions along with a comparison of the performance of major pruning techniques on standard datasets and models.
-  Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In International Conference on Learning Representations, 2019.
-  Misha Denil, Babak Shakibi, Laurent Dinh, Marc’Aurelio Ranzato, and Nando De Freitas. Predicting parameters in deep learning. In Advances in neural information processing systems, pages 2148–2156, 2013.
-  Xiaohan Ding, Guiguang Ding, Jungong Han, and Sheng Tang. Auto-balanced filter pruning for efficient convolutional neural networks. In AAAI Conference on Artificial Intelligence, 2018.
-  Xuanyi Dong, Junshi Huang, Yi Yang, and Shuicheng Yan. More is less: A more complicated network with less inference complexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5840–5848, 2017.
-  Xuanyi Dong and Yi Yang. Network pruning via transformable architecture search. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 759–770. 2019.
-  Xitong Gao, Yiren Zhao, Łukasz Dudziak, Robert Mullins, and Cheng zhong Xu. Dynamic channel pruning: Feature boosting and suppression. In International Conference on Learning Representations, 2019.
-  Maximilian Golub, Guy Lemieux, and Mieszko Lis. Full deep neural network training on a pruned weight budget. In SysML, 2019.
-  Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pages 1379–1387, 2016.
-  Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pages 1135–1143, 2015.
-  Babak Hassibi and David G Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in neural information processing systems, pages 164–171, 1993.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2234–2240. AAAI Press, 2018.
-  Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
-  Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784–800, 2018.
-  Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1389–1397, 2017.
-  Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
-  Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.
-  Weizhe Hua, Yuan Zhou, Christopher M De Sa, Zhiru Zhang, and G. Edward Suh. Channel gating neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 1884–1894, 2019.
-  Qiangui Huang, Kevin Zhou, Suya You, and Ulrich Neumann. Learning to prune filters in convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 709–718. IEEE, 2018.
-  Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 304–320, 2018.
-  Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 mb model size. arXiv preprint arXiv:1602.07360, 2016.
-  MIT Technology Review Insights. On-device processing and ai go hand-in-hand. https://www.technologyreview.com/s/610421/on-device-processing-and-ai-go-hand-in-hand/, March 2018. Accessed: 2019-09-11.
-  Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Vadim Lebedev and Victor Lempitsky. Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2554–2564, 2016.
-  Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pages 598–605, 1990.
-  Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
-  Tuanhui Li, Baoyuan Wu, Yujiu Yang, Yanbo Fan, Yong Zhang, and Wei Liu. Compressing convolutional neural networks via factorized convolutional filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3977–3986, 2019.
-  Jiayi Liu, Samarth Tripathi, Unmesh Kurup, and Mohak Shah. Auptimizer – an extensible, open-source framework for hyperparameter tuning. In The IEEE BigData 2019, 2019.
-  Zhenhua Liu, Jizheng Xu, Xiulian Peng, and Ruiqin Xiong. Frequency-domain dynamic pruning for convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1043–1053, 2018.
-  Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pages 2736–2744, 2017.
-  Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
-  Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through L0 regularization. In International Conference on Learning Representations, 2018.
-  Jian-Hao Luo and Jianxin Wu. An entropy-based pruning method for cnn compression. arXiv preprint arXiv:1706.05791, 2017.
-  Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision, pages 5058–5066, 2017.
-  Diana Marculescu, Dimitrios Stamoulis, and Ermao Cai. Hardware-aware machine learning: modeling and optimization. In Proceedings of the International Conference on Computer-Aided Design, page 137. ACM, 2018.
-  Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.
-  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
-  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  Suraj Srinivas and R. Venkatesh Babu. Data-free parameter pruning for deep neural networks. In Procedings of the British Machine Vision Conference 2015, pages 31.1–31.12. British Machine Vision Association, 2015.
-  Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, and Joel S Emer. Efficient processing of deep neural networks: A tutorial and survey. Proceedings of the IEEE, 105(12):2295–2329, 2017.
-  Frederick Tung, Srikanth Muralidharan, and Greg Mori. Fine-pruning: Joint fine-tuning and compression of a convolutional network with bayesian optimization. arXiv preprint arXiv:1707.09102, 2017.
-  Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. In International Conference on Learning Representations, 2017.
-  Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pages 2074–2082, 2016.
-  Jiaxiang Wu, Yao Zhang, Haoli Bai, Huasong Zhong, Jinlong Hou, Wei Liu, and Junzhou Huang. Pocketflow: An automated framework for compressing and accelerating deep neural networks. In Advances in Neural Information Processing Systems (NIPS), Workshop on Compact Deep Neural Networks with Industrial Applications, 2018.
-  XIA XIAO, Zigeng Wang, and Sanguthevar Rajasekaran. Autoprune: Automatic network pruning by regularizing auxiliary parameters. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13681–13691. 2019.
-  Haichuan Yang, Yuhao Zhu, and Ji Liu. Ecc: Platform-independent energy-constrained deep neural network compression via a bilinear regression model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11206–11215, 2019.
-  Haichuan Yang, Yuhao Zhu, and Ji Liu. Energy-constrained compression for deep neural networks via weighted sparse projection and layer input masking. In International Conference on Learning Representations, 2019.
-  Tien-Ju Yang, Yu-Hsin Chen, and Vivienne Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5687–5695, 2017.
-  Zhonghui You, Kun Yan, Jinmian Ye, Meng Ma, and Ping Wang. Gate decorator: Global filter pruning method for accelerating deep convolutional neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 2130–2141. 2019.
-  Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, and Larry S Davis. Nisp: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9194–9203, 2018.
-  Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. On compressing deep models by low rank and sparse decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7370–7379, 2017.
-  Chenglong Zhao, Bingbing Ni, Jian Zhang, Qiwei Zhao, Wenjun Zhang, and Qi Tian. Variational convolutional neural network pruning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
-  Hao Zhou, Jose M Alvarez, and Fatih Porikli. Less is more: Towards compact cnns. In European Conference on Computer Vision, pages 662–677. Springer, 2016.
-  Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.
-  Neta Zmora, Guy Jacob, and Gal Novik. Neural network distiller, June 2018.