Large and deep neural networks, despite their great successes in a wide variety of applications, call for compact and efficient model representations to reduce the vast number of network parameters and computational operations, which are resource-hungry in terms of memory, energy and communication bandwidth. This need is especially imperative for resource-constrained devices such as mobile phones, wearables and Internet of Things (IoT) devices. Neural network compression is a set of techniques that address these challenges raised by real-life industrial applications.
Minimizing network sizes without compromising original network performance has been pursued by a wealth of methods, which often adopt a three-phase learning process: training, pruning and fine-tuning. In essence, network features are first learned, followed by a pruning stage to reduce the network size. The subsequent fine-tuning phase aims to restore the performance deteriorated by undue pruning. This ad hoc three-phase approach, although empirically justified e.g. in [14, 17, 12, 20, 22], was recently questioned with regard to its efficiency and effectiveness. Specifically, [15, 3] argued that the network architecture should be optimized first, and then the features should be learned from scratch in subsequent steps.
In contrast to the two aforementioned opposing approaches, the present paper illustrates a novel method which simultaneously learns both the number of filters and the network features over multiple optimization epochs. This integrated optimization process brings immediate benefits and challenges: on the one hand, separate processing steps such as training, pruning and fine-tuning are no longer needed, and the integrated optimization step guarantees consistent performance for the given neural network compression scenarios; on the other hand, the dynamic change of network architectures has significant influence on the optimization of features, which in turn might affect the optimal network architectures. It turns out that the interplay between architecture and feature optimization plays a crucial role in improving the final compressed models.
2 Related Work
Network pruning was pioneered [11, 6, 4] in the early development of neural networks, and since then a broad range of methods has been developed. We focus on neural network compression methods that prune filters or channels; for a thorough review of other approaches we refer to a recent survey paper [2].
Li et al. [12] proposed to prune filters with small effects on the output accuracy, and managed to reduce about one third of the inference cost without compromising the original accuracy on the CIFAR-10 dataset. Wen et al. [19] proposed a structured sparsity regularization framework, in which a group lasso constraint term was incorporated to penalize and remove unimportant filters and channels. Zhou et al. [21] also adopted a similar regularization framework, with tensor trace norm and group sparsity incorporated to penalize the number of neurons; up to 70% of the model parameters were reduced without sacrificing classification accuracy on the CIFAR-10 dataset. Recently, Liu et al. [14] proposed an interesting network slimming method, which imposes L1 regularization on channel-wise scaling factors in the batch-normalization layers, and demonstrated remarkable compression ratios and speedups using a surprisingly simple implementation. Nevertheless, network slimming based on scaling factors is not guaranteed to achieve the desired accuracies, and separate fine-tuning is needed to restore the reduced accuracies. Qin et al. [17] proposed a functionality-oriented filter pruning method to remove less important filters in terms of their contributions to classification accuracy. It was shown that the effort required for model retraining is moderate but still necessary, as in most state-of-the-art compression methods.
DIVNET [16] adopted the Determinantal Point Process (DPP) to enforce diversity between individual neural activations. The diversity of filter weights defined in (4) is related to the orthogonality of the weight matrix, which has been extensively studied. One example is [5], which proposed to learn Stiefel layers that have orthogonal weights, and demonstrated their applicability in compressing network parameters. Interestingly, the notion of a diversity regularized machine (DRM) has been proposed to generate an ensemble of SVMs in the PAC learning framework [20], yet its definition of diversity is critically different from ours in (4), and its applicability to deep neural networks is unclear.
3 Simultaneous Learning of Architecture and Feature
The proposed compression method belongs to the general category of filter-pruning approaches. In contrast to existing methods [14, 17, 12, 20, 22, 15, 3], we adopt the following techniques to ensure that the simultaneous optimization of network architectures and features is a technically sound approach. First, we introduce an explicit pruning loss estimation as an additional regularization term in the optimization objective function. As demonstrated by the experimental results in Section 4, the introduced pruning loss enforces the optimizer to focus on promising candidate filters while suppressing the contributions of less relevant ones. Second, based on the importance of filters, we explicitly turn off unimportant filters below a given percentile threshold. We found this explicit shutting down of less relevant filters to be indispensable for preventing a biased estimation of the pruning loss. Third, we propose to enforce diversity between filters; this diversity-based regularization term improves the trade-off between model sizes and accuracies, as demonstrated in various applications.
Our proposed method is inspired by network slimming [14], and the main differences from this prior art are twofold: a) we introduce the pruning loss and incorporate explicit pruning into the learning process, without resorting to multi-pass pruning-retraining cycles; b) we introduce a filter-diversity based regularization term which improves the trade-off between model sizes and accuracies.
3.1 Loss Function
Liu et al. [14] proposed to push the scaling factor in the batch normalization (BN) step towards zero during learning, and subsequently to prune the insignificant channels with small scaling factors. This sparsity-inducing penalty is introduced by regularizing the L1-norm of the learnable parameter $\gamma$ in the BN step, i.e.,

$\hat{z} = \frac{z_{in} - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad z_{out} = \gamma \hat{z} + \beta, \qquad (1)$

in which $z_{in}$ denotes the filter inputs, $\mu_B$ and $\sigma_B^2$ the filter-wise mean and variance of the inputs, $\gamma$ and $\beta$ the scaling and offset parameters of batch normalization (BN), and $\epsilon$ a small constant that prevents numerical instability for small variances. It is assumed that there is always a BN filter appended after each convolutional and fully connected filter, so that the scaling factor $\gamma$ can be directly leveraged to prune unimportant filters with small values. Alternatively, we propose to introduce a scaling factor directly for each filter, since this is more universal than reusing the BN parameters, especially for networks which have no BN layers.
By incorporating a filter-wise sparsity term, the objective function to be minimized is given by:

$L = \sum_{(x,y)} l\big(f(x, \mathcal{W}), y\big) + \lambda \sum_{\gamma \in \Gamma} |\gamma|, \qquad (2)$

where the first term is the task-based loss and $\Gamma$ denotes the set of scaling factors for all filters. This pruning scheme, however, suffers from two main drawbacks: 1) since the scaling factors are minimized equally for all filters, it is likely that some pruned filters have non-negligible contributions and should not be unduly removed; 2) the pruning process, i.e., architecture selection, is performed independently of the feature learning, so the performance of the pruned network is inevitably compromised and has to be recovered by single-pass or multi-pass fine-tuning, which imposes additional computational burdens.
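As a concrete illustration, the sparsity-regularized objective of Eq. 2 can be sketched in a few lines of NumPy; the function name `sparsity_objective`, the weight `lam` and the example values are hypothetical and serve only to illustrate the computation, not our actual implementation.

```python
import numpy as np

def sparsity_objective(task_loss, scaling_factors, lam=1e-4):
    """Eq. 2 sketch: task-based loss plus an L1 sparsity penalty on the
    per-filter scaling factors of every layer (lam is a hypothetical weight)."""
    l1_penalty = sum(np.abs(g).sum() for g in scaling_factors)
    return task_loss + lam * l1_penalty

# Example: two layers with 3 and 2 filters respectively.
gammas = [np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.0])]
total = sparsity_objective(2.0, gammas, lam=0.1)
```

Minimizing the penalty drives many scaling factors toward zero, exposing candidate filters for pruning.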
3.1.1 An integrated optimization
Let $\mathcal{W}$, $\mathcal{W}^P$ and $\mathcal{W}^R$ denote the sets of neural network weights of, respectively, all filters, the pruned ones and the remaining ones, i.e. $\mathcal{W} = \mathcal{W}^P \cup \mathcal{W}^R$. In the same vein, let $\Gamma$, $\Gamma^P$ and $\Gamma^R$ denote the sets of scaling factors of all filters, the removed ones and the remaining ones respectively.

To mitigate the aforementioned drawbacks, we propose to introduce two additional regularization terms to Eq. 2,

$L = \sum_{(x,y)} l\big(f(x, \mathcal{W}), y\big) + \lambda \sum_{\gamma \in \Gamma} |\gamma| + \lambda_P \sum_{\gamma \in \Gamma^P} |\gamma| - \lambda_D \, D(\mathcal{W}^R), \qquad (3)$

where the first and second terms are defined as in Eq. 2, the third term is the pruning loss and the fourth is the diversity loss, both elaborated below; $\lambda_P$ and $\lambda_D$ are the weights of the corresponding regularization terms.
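Under the assumption that the pruning loss sums the scaling factors of the filters selected for removal and that the diversity term follows the definition in Section 3.1.5, the full objective can be sketched as follows; all names and weight values (`total_objective`, `lam`, `lam_p`, `lam_d`) are hypothetical.

```python
import numpy as np

def total_objective(task_loss, gammas, pruned_mask, weight_vectors,
                    lam=1e-4, lam_p=1e-4, lam_d=1e-6):
    """Sketch of the full objective: task loss, L1 sparsity over all scaling
    factors, pruning loss over the pruned subset, minus a diversity reward."""
    sparsity = np.abs(gammas).sum()                    # second term of Eq. 2
    pruning = np.abs(gammas[pruned_mask]).sum()        # estimated pruning loss
    w = weight_vectors / np.linalg.norm(weight_vectors, axis=1, keepdims=True)
    diversity = (1.0 - np.abs(w @ w.T)).sum()          # see Section 3.1.5
    return task_loss + lam * sparsity + lam_p * pruning - lam_d * diversity
```

Note the minus sign on the diversity term: high diversity between the remaining filters lowers the objective, so the optimizer is rewarded for keeping decorrelated filters.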
3.1.2 Estimation of pruning loss
The second regularization term in the objective of Section 3.1.1, i.e. the sum of scaling factors of the pruned filters (and its complement over the remaining ones), is closely related to the performance deterioration incurred by undue pruning (in the rest of the paper we refer to it as the estimated pruning loss). The scaling factors of the pruned filters are determined by first ranking all scaling factors and then taking those below the given percentile threshold. Incorporating this pruning loss enforces the optimizer to increase the scaling factors of promising filters while suppressing the contributions of less relevant ones.
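The ranking-and-thresholding step described above can be sketched as follows, assuming a single global percentile threshold over the absolute scaling factors of all layers; `pruning_loss` and the example values are illustrative, not the exact implementation.

```python
import numpy as np

def pruning_loss(gammas, prune_ratio):
    """Rank all scaling factors globally; filters below the percentile
    threshold become pruning candidates, and the estimated pruning loss
    is the scaling-factor mass they carry."""
    g = np.abs(np.concatenate(gammas))
    threshold = np.percentile(g, prune_ratio * 100)
    pruned_mask = g < threshold            # filters selected for removal
    return g[pruned_mask].sum(), pruned_mask

# Example: five filters across two layers, pruning the smallest 40%.
loss, mask = pruning_loss([np.array([0.9, 0.01, 0.7]),
                           np.array([0.02, 0.8])], prune_ratio=0.4)
```

A small value of this loss indicates that the filters about to be removed contribute little, which is exactly what the regularizer enforces.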
The rationale of this pruning strategy can also be empirically justified in Figure 1, in which the scaling factors of three different methods are illustrated. When the proposed regularization terms are added, we clearly observe a tendency for the scaling factors to be dominated by a small number of filters: when 70% of the filters are pruned from a VGG network trained on the CIFAR-10 dataset, the estimated pruning loss equals 0.2994, 0.0288, , respectively, for the three compared methods. The corresponding accuracy deteriorations are 60.76% and 0% for network slimming [14] and the proposed method. Therefore, retraining of the pruned network is no longer needed for the proposed method, while [14] has to regain the original accuracy through single-pass or multi-pass pruning-retraining cycles.
3.1.3 Turning off candidate filters
It must be noted that the original loss is independent of the pruning operation. If we adopt this loss in the objective of Section 3.1.1, the estimated pruning loss might be seriously biased, because undue assignments of scaling factors to the pruning candidates are not penalized: some candidate filters may be assigned rather small scaling factors and yet retain decisive contributions to the final classification. Pruning these filters blindly leads to serious performance deterioration; in our empirical study we observed over 50% accuracy loss at high pruning ratios.
In order to prevent such a biased pruning-loss estimation, we therefore explicitly shut down the outputs of the selected filters by setting the corresponding scaling factors to absolute zero; the adopted loss function becomes the task loss evaluated with the candidate filters' scaling factors zeroed out. This way, the undue loss due to the biased estimation is reflected in the task loss, which is minimized during the learning process. We found this turning-off of candidate filters to be indispensable.
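A minimal sketch of the turn-off operation, assuming per-channel scaling of the feature maps; the names and tensor shapes are illustrative.

```python
import numpy as np

def forward_with_pruning(features, gamma, pruned_mask):
    """Scale each channel by its factor, with candidate filters explicitly
    turned off: their scaling factors are forced to exactly zero so the task
    loss reflects the network that will actually remain after pruning."""
    effective_gamma = np.where(pruned_mask, 0.0, gamma)
    return features * effective_gamma     # features: (batch, channels)

x = np.ones((2, 3))                       # toy activations
out = forward_with_pruning(x, np.array([0.5, 0.01, 0.8]),
                           np.array([False, True, False]))
```

If a zeroed-out filter was in fact decisive, the task loss increases immediately and the optimizer is driven to redistribute importance to the surviving filters rather than hide it behind a small scaling factor.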
3.1.4 Online pruning
We take a global threshold for pruning, determined by a percentile among all channel scaling factors. Pruning is performed over the whole training process, i.e., simultaneous pruning and learning. To this end, we compute a linearly increasing pruning ratio from the first epoch (e.g., 0%) to the last epoch (e.g., 100%), where the ultimate pruning target ratio is applied. Such an approach gives neurons sufficient time to evolve, driven by the diversity term and the pruning loss, and avoids prematurely mis-pruning neurons that produce crucial features. Consequently, our architecture learning is seamlessly integrated with feature learning. After each pruning operation, a narrower and more compact network is obtained, and its weights are copied from the previous network.
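One simple realization of the linearly increasing schedule is sketched below; the function name and the epoch counts are illustrative.

```python
def pruning_ratio_at(epoch, total_epochs, target_ratio):
    """Linearly grow the pruning ratio from 0 at the first epoch to the
    ultimate target ratio at the last epoch."""
    if total_epochs <= 1:
        return target_ratio
    return target_ratio * epoch / (total_epochs - 1)

# e.g. reach a 70% target over 160 training epochs
schedule = [pruning_ratio_at(e, 160, 0.7) for e in range(160)]
```

Early epochs prune almost nothing, so features can form before architecture decisions become aggressive.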
3.1.5 Filter-wise diversity
The third regularization term in the objective of Section 3.1.1 encourages high diversity between filter weights, as shown below. Empirically, we found that this term improves the trade-off between model sizes and accuracies (see the experimental results in Section 4).
We treat each filter weight at layer $l$ as a weight (feature) vector of length $w \times h \times c$, where $w$ and $h$ are the filter width and height and $c$ is the number of channels in the filter. The diversity between two weight vectors of the same length is based on the normalized cross-correlation of the two vectors:

$d(w_i, w_j) = 1 - \big| \langle \bar{w}_i, \bar{w}_j \rangle \big|, \qquad (4)$

in which $\bar{w}_i$ and $\bar{w}_j$ are the normalized weight vectors and $\langle \cdot, \cdot \rangle$ is the dot product of two vectors. Clearly, the diversity is bounded, $0 \le d \le 1$, with values close to 0 indicating low diversity between highly correlated vectors and values near 1 indicating high diversity between uncorrelated vectors. In particular, a diversity equal to 1 means that the two vectors are orthogonal to each other.

The diversities between the $N$ filters at the same layer are thus characterized by an $N \times N$ matrix in which the elements $d_{ij}$ are the pairwise diversities between weight vectors $w_i$ and $w_j$. Note that the diagonal elements $d_{ii}$ are constant 0. The total diversity between all filters is then defined as the sum of all matrix elements.
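The total diversity of one layer reduces to a small matrix computation; this sketch assumes the definition in (4) with L2-normalized weight vectors, and the names are illustrative.

```python
import numpy as np

def total_diversity(weights):
    """Sum of pairwise diversities d(i, j) = 1 - |<w_i_bar, w_j_bar>| between
    the L2-normalized filter weight vectors of one layer; the diagonal
    contributes exactly 0."""
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    corr = np.abs(w @ w.T)     # |normalized cross-correlation|, N x N
    return (1.0 - corr).sum()

# Two orthogonal filters plus a duplicate of the first.
w = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
```

Maximizing this sum pushes filters toward mutual orthogonality, i.e., less redundant feature detectors.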
[Table 1: accuracies of the evaluated models on CIFAR-10 at pruning ratios 0.0 / 0.5 / 0.6 / 0.7 / 0.8.]
[Table 2: accuracies of the evaluated models on CIFAR-100 at pruning ratios 0.0 / 0.3 / 0.4 / 0.5 / 0.6.]
4 Experiment Results
In this section, we evaluate the effectiveness of our method on various applications with both visual and audio data.
For visual tasks, we adopt the ImageNet and CIFAR [10] datasets. The ImageNet dataset contains 1.2 million training images and 50,000 validation images of 1000 classes. CIFAR-10 consists of 50K training and 10K testing RGB images with 10 classes; CIFAR-100 is similar, except that it has 100 classes. The input image is 32×32, randomly cropped from a zero-padded 40×40 image or its horizontal flip. For the audio task, we adopt the ISMIR Genre dataset [1], which was assembled for training and development in the ISMIR 2004 Genre Classification contest. It contains 1458 full-length audio recordings from Magnatune.com distributed across 6 genre classes: Classical, Electronic, JazzBlues, MetalPunk, RockPop and World.
4.2 Image Classification
We evaluate the performance of our proposed method for image classification on CIFAR-10/100 and ImageNet, investigating both a classical plain network, VGG-Net [18], and a deep residual network, ResNet [8]. We take a variation of the original VGG-Net, i.e., the VGG-19 used in [14], for comparison purposes, and adopt ResNet-164, a 164-layer pre-activation ResNet with bottleneck structure. As baselines, we compare with the original networks without regularization terms and their counterparts in network slimming [14]. For ImageNet, we adopt VGG-16 and ResNet-50 in order to compare with the original networks.
To make a fair comparison with [14], we adopt BN-based scaling factors for optimization and pruning. On CIFAR, we train all networks from scratch using SGD with mini-batch size 64 for 160 epochs. The learning rate is initially set to 0.1 and is reduced by a factor of 10 at 50% and 75% of the epochs respectively. Nesterov momentum of 0.9 without dampening and weight decay are used. The robust weight initialization method proposed by [7] is adopted. We use the same channel sparse regularization term and its hyperparameter as defined in [14].
4.2.1 Overall performance
The results on CIFAR-10 and CIFAR-100 are shown in Table 1 and Table 2 respectively. On both datasets, we observe that when typically 50-70% of the filters of the evaluated networks are pruned, the new networks can still achieve accuracy higher than the original networks. For instance, with 70% of its filters pruned, VGG-19 achieves an accuracy of 0.9393, compared to 0.9366 for the original model on CIFAR-10. We attribute this improvement to the introduced diversity between filter weights, which naturally provides discriminative feature representations in the intermediate layers of the network.
As a comparison, our method consistently outperforms network slimming without resorting to fine-tuning or multi-pass pruning-retraining cycles. It is also worth noting that our method is capable of pruning networks at prohibitively high ratios that are not possible with network slimming. Take the VGG-19 network on CIFAR-10 as an example: network slimming prunes at most 70%, beyond which the network cannot be reconstructed as some layers are totally destructed. In contrast, our method is able to reconstruct a much narrower network by pruning 80% of the filters while producing a marginally degraded accuracy of 0.9302. We conjecture that this improvement is enabled by our simultaneous feature and architecture learning, which avoids pruning filters prematurely, as happens in network slimming where the pruning operation (architecture selection) is isolated from the feature learning process and the performance of the pruned network can only be restored via fine-tuning.
The results on ImageNet are shown in Table 4, where we also present a comparison with prior work reporting top-1 and top-5 errors on ImageNet. On VGG-16, our method incurs 1.2% less accuracy loss while additionally saving 20.5M parameters and 0.8B FLOPs compared with that work. On ResNet-50, our method saves 5M more parameters and 1.4B more FLOPs while providing 0.21% higher accuracy.
4.2.2 Ablation study
In this section we investigate the contribution of each proposed component through ablation study.
Fig. 2 (a) shows the sorted scaling factors of the VGG-19 network trained with the proposed filter diversity loss at various training epochs. As training progresses, the scaling factors become increasingly sparse and the number of large scaling factors, i.e., the area under the curve, decreases. Fig. 1 shows the sorted scaling factors of the VGG-19 network for the baseline model with no regularization, network slimming [14], and the proposed method with diversified filters, trained on CIFAR-10 and CIFAR-100. We observe significantly improved sparsity when filter diversity is introduced to the network, compared with network slimming (indicated by nsf). Recall that the scaling factors essentially determine the importance of filters; the improved sparsity thus ensures that the deterioration due to filter pruning is minimized. Furthermore, the number of filters associated with large scaling factors is largely reduced, allowing more irrelevant filters to be pruned harmlessly. This observation is quantitatively confirmed in Table 3, which lists the accuracies of the three schemes before and after pruning on both the CIFAR-10 and CIFAR-100 datasets. Retraining of the pruned network is no longer needed for the proposed method, while network slimming has to restore the original accuracy through single-pass or multi-pass pruning-retraining cycles. The accuracy deteriorations are 60.76% and 0% for network slimming and the proposed method respectively, whilst the baseline network completely fails after pruning due to insufficient preserved filters at certain layers.
We first empirically investigate the effectiveness of the proposed pruning loss. Setting the diversity weight to zero, we train the VGG-19 network on the CIFAR-10 dataset with the pruning loss switched off and on respectively. By adding the proposed pruning loss, we observe an improved accuracy of 0.9325, compared to 0.3254, at a pruning ratio of 70%. When pruning at 80%, the network without the pruning loss cannot be constructed due to insufficient preserved filters at certain layers, whereas the network trained with the pruning loss attains an accuracy of 0.9298. This experiment demonstrates that the proposed pruning loss enables online pruning, which dynamically selects architectures while evolving filters to achieve extremely compact structures.
Fig. 2 (b) shows the sorted scaling factors of the VGG-19 network trained with the pruning loss subject to various target pruning ratios on CIFAR-10. We observe that, given a target pruning ratio, our algorithm adaptively adjusts the distribution of the scaling factors to accommodate the pruning operation. Such a dynamic evolution warrants little accuracy loss at considerably high pruning ratios, as opposed to static offline pruning approaches, e.g., network slimming, where the pruning operation is isolated from the training process, causing considerable accuracy loss or even network destruction.
[Table 5: image compression models with PSNR, parameter counts and pruned percentage, FLOPs and pruned percentage.]
4.3 Image Compression
The proposed approach is applied to an end-to-end image compression task which follows a general autoencoder architecture, as illustrated in Fig. 3. We utilize a general scaling layer which is added after each convolutional layer, with each scaling factor initialized to 1. The evaluation is performed on the CIFAR-100 dataset. We train all networks from scratch using Adam with mini-batch size 128 for 600 epochs. The learning rate is set to 0.001 and the MSE loss is used. The results are listed in Table 5, where both parameters and floating-point operations (FLOPs) are reported. Our method can save about 40%-60% of the parameters and 50%-60% of the computational cost with a minor loss of performance (PSNR).
4.4 Audio Classification
We further apply our method to an audio classification task, in particular music genre classification. The preprocessing of the audio data is similar to [13] and produces a Mel spectrogram matrix of size 80×80. The network architecture is illustrated in Fig. 4, where the scaling layer is added after both the convolutional layers and the fully connected layers. The evaluation is performed on the ISMIR Genre dataset. We train all networks from scratch using Adam with mini-batch size 64 for 50 epochs. The learning rate is set to 0.003. The results are listed in Table 6, where both parameters and FLOPs are reported. Our approach saves about 92% of the parameters and 80% of the computational cost while achieving 1% higher accuracy. With a minor loss of about 1%, 99.5% of the parameters are pruned, resulting in an extremely narrow network with a 50-times speedup.
[Table 6: audio classification models with accuracy, parameter counts and pruned percentage, FLOPs and pruned percentage.]
In this paper, we have proposed a novel approach for simultaneously learning architectures and features in deep neural networks. It is mainly underpinned by a novel pruning loss and an online pruning strategy which explicitly guide the optimization toward an optimal architecture driven by a target pruning ratio or model size. The proposed pruning loss enables online pruning, which dynamically selects architectures while evolving filters to achieve extremely compact structures. In order to improve the representational power of the remaining filters, we further proposed to enforce diversity between filters for more effective feature representation, which in turn improves the trade-off between architecture and accuracy. Comprehensive experiments show that the interplay between architecture and feature optimization improves the final compressed models in terms of both model size and accuracy for various tasks on both visual and audio data.
-  Cano, P., Gómez Gutiérrez, E., Gouyon, F., Herrera Boyer, P., Koppenberger, M., Ong, B.S., Serra, X., Streich, S., Wack, N.: Ismir 2004 audio description contest (2006)
-  Cheng, Y., Wang, D., Zhou, P., Zhang, T.: A Survey of Model Compression and Acceleration for Deep Neural Networks. ArXiv e-prints (Oct 2017)
-  Crowley, E.J., Turner, J., Storkey, A., O’Boyle, M.: Pruning neural networks: is it time to nip it in the bud? ArXiv e-prints arXiv:1810.04622 (Oct 2018)
-  Han, S., Pool, J., Tran, J., Dally, W.: Learning both weights and connections for efficient neural network. In: Advances in neural information processing systems. pp. 1135–1143 (2015)
-  Harandi, M., Fernando, B.: Generalized BackPropagation, Étude De Cas: Orthogonality. ArXiv e-prints (Nov 2016)
-  Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain surgeon. In: Advances in neural information processing systems. pp. 164–171 (1993)
-  He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE international conference on computer vision. pp. 1026–1034 (2015)
-  He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016)
-  Huang, Z., Wang, N.: Data-driven sparse structure selection for deep neural networks. In: ECCV (2018)
-  Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images. Tech. rep., Citeseer (2009)
-  LeCun, Y., Denker, J.S., Solla, S.A.: Optimal brain damage. In: Advances in neural information processing systems. pp. 598–605 (1990)
-  Li, H., Kadav, A., Durdanovic, I., Samet, H., Graf, H.P.: Pruning Filters for Efficient ConvNets. ArXiv e-prints (Aug 2016)
-  Lidy, T., Schindler, A.: Parallel convolutional neural networks for music genre and mood classification. MIREX2016 (2016)
-  Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C.: Learning Efficient Convolutional Networks through Network Slimming. ICCV pp. 2736–2744 (2017), http://arxiv.org/abs/1708.06519
-  Liu, Z., Sun, M., Zhou, T., Huang, G., Darrell, T.: Rethinking the Value of Network Pruning. ArXiv e-prints arXiv:1810.05270 (Oct 2018)
-  Mariet, Z., Sra, S.: Diversity Networks: Neural Network Compression Using Determinantal Point Processes. ArXiv e-prints arXiv:1511.05077 (Nov 2015)
-  Qin, Z., Yu, F., Liu, C., Zhao, L., Chen, X.: Interpretable Convolutional Filter Pruning. ArXiv e-prints (Oct 2018)
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
-  Wen, W., Wu, C., Wang, Y., Chen, Y., Li, H.: Learning Structured Sparsity in Deep Neural Networks. In: Advances in Neural Information Processing Systems (2016)
-  Yu, Y., Li, Y.f., Zhou, Z.h.: Diversity Regularized Machine. IJCAI pp. 1603–1608 (2011)
-  Zhou, H.: Less is More: Towards Compact CNNs. ECCV pp. 1–5 (2016)