I Introduction
Balancing the tradeoff between the size of a deep network and achieving high performance is the most important constraint when designing deep neural networks (DNNs) that can easily be translated to hardware. Although deep learning yields remarkable performance in real-world problems like medical diagnosis [27, 1, 3], autonomous vehicles [46, 35, 9], and others, deep networks consume a large amount of memory and computational resources, which limits their large-scale deployment. With current state-of-the-art deep networks spanning hundreds of millions if not billions of parameters [41, 17], compressing them while maintaining high performance is challenging.

In this work, we approach DNN compression using network pruning [13]. There are two broad approaches to network pruning: (a) unstructured pruning, where a filter's importance is evaluated using its weights [13] or constraints like the $\ell_1$-norm [29] on them, without considering the overall structure of the induced sparsity, and (b) structured pruning, where the objective function is modified to include structured sparsity constraints [48]. Most structured pruning approaches ignore the dependency between layers and the impact of pruning on downstream layers, while unstructured pruning methods force the network to optimize a harder and more sensitive objective. The common theme between both approaches is the use of filter weights as a proxy for their importance.
Evaluating a filter's importance purely from its weights is insufficient since it does not take into account the dependencies between filters or account for any form of uncertainty. These factors are critical since higher weight values do not always represent a filter's true importance, and a filter's contribution can be compensated for elsewhere in the network. Consider the example shown in Fig. 1, where a simple weight-based criterion suggests the removal of small-valued weights. However, the mutual information (MI) score, which we use to measure the dependency between pairs of filters and emphasize their importance, values the smaller weights over the larger ones. Pruning based on the MI scores would ensure a network where the retained filters pass on as much information as possible to the next layer.
To overcome these issues, we propose Mutual Information-based Neuron Trimming (MINT), a novel approach to compression-via-pruning in deep networks which stochastically accounts for the dependency between layers. Fig. 2 outlines our approach. In MINT, we use an estimator for conditional geometric mutual information (GMI), inspired by [49], to measure the dependency between filters of successive layers. Specifically, we use a graph-based criterion (the Friedman-Rafsky statistic [6]) to measure the conditional GMI between the activations of filters at layers $\ell$ and $\ell+1$, denoted by $\rho$, given the remaining filters in layer $\ell$. On evaluating all such dependencies, we sort the importance scores between filters of every pair of layers in the network. Finally, we threshold a desired percentage of these values to retain filters that contribute the majority of the information to successive layers. Hence, MINT maintains high performance with compressed and retrained networks.

Through MINT, we contribute a network pruning method that uses the dependency between layers as the central factor in pruning. By maintaining filters that contribute the majority of the information passed between layers, we ensure that the impact of pruning on downstream layers is minimized. In doing so, we achieve state-of-the-art performance across multiple Dataset-CNN architecture combinations, highlighting the general applicability of our method.
Further, we empirically analyze MINT using adversarial attacks, expected calibration error, and visualizations that illustrate the focus of learned representations. We highlight the common denominator between the security vulnerability and the decrease in calibration error of pruned networks, while illustrating the intended effect of retaining filters that contribute the majority of information between layers.
II Related Works
Deep network compression offers a number of strategies to help reduce network size while maintaining high performance, such as low-rank approximations [20, 54, 53], quantization [4, 19, 39, 55], knowledge distillation [33, 50], and network pruning [13, 34, 31]. In this work, we focus on network pruning since it offers a controlled setup to study and compare changes in the dynamics of a network when filters are removed. We broadly classify network pruning methods into two categories, unstructured and structured, which we describe below. We also highlight common strategies to calculate multivariate dependencies and how they differ from our method.
II-A Network Pruning
II-A1 Unstructured pruning
Some of the earliest work in this line used the second-order relationship between the objective function and the weights of a network to determine which weights to remove or retain [26, 14]. Although these methods provide deep insights into the relationships within networks, their dense computational requirements and large runtimes made them less practical. They were surpassed by an alternative approach that thresholded the weight values themselves to obtain a desired level of sparsity before retraining [13]. Apart from its simplicity, this approach also highlighted the importance of retraining from known minima as opposed to from scratch. Instead of pruning weights in one shot, [11] offered a continuous and recursive strategy of using mini-iterations to evaluate the importance of connections using their weights, remove unimportant ones, and train the network. Similarly, [29] proposed pruning filters using the $\ell_1$-norm of their weight values as a measure of importance. By melding standard weight-based network pruning with network quantization and Huffman coding, [12] showed superior compression performance compared to any individual pipeline. However, the direct use of weight values across all these methods does not capture the relationships between different layers or the impact of removing weights on downstream layers. In MINT, we address this issue by explicitly computing the dependency between filters of successive layers and only retaining filters that contribute a majority of the information. This ensures that there isn't a severe impact on downstream layers.
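To make the magnitude-based criteria above concrete, the following is a minimal sketch of one-shot weight thresholding in the spirit of [13] and $\ell_1$-norm filter scoring in the spirit of [29]; the function names and array layouts are ours, not from the cited works.

```python
import numpy as np

def l1_filter_scores(conv_weights):
    """Score each conv filter by the l1-norm of its weights (cf. [29]).
    conv_weights has shape (out_filters, in_filters, k, k)."""
    return np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)

def magnitude_threshold(weights, sparsity):
    """One-shot magnitude pruning (cf. [13]): zero out the smallest
    `sparsity` fraction of weights; retraining follows in practice."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the cutoff.
    thresh = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= thresh, 0.0, weights)
```

Both criteria look only at a layer's own weights, which is exactly the limitation MINT targets: no term accounts for what downstream layers do with the pruned filters.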
A subset of methods uses data to derive the importance of filter weights. Among them, ThiNet [34] posed the reconstruction of outcomes after the removal of weights as an optimization objective to decide which weights to keep. More recently, NISP [52] used the contribution of neurons towards the reconstruction of outcomes in the second-to-last layer as a metric to retain or remove filters. These works represent a shift towards data-driven logic that considers the downstream impact of pruning, with the use of external computations (e.g., feature ranking in [52]) combined with the quality of features approximated using weights from a single trial. Compared to the deterministic relationship between the weights and feature-ranking methods, our method uses a probabilistic approach to measure the dependency between filters, thereby accounting for some form of uncertainty. Further, our method uses a single prune-retrain step compared to the multiple iterations of fine-tuning performed in these methods.
II-A2 Structured Pruning
The shift to structured pruning was based on the idea of seamlessly interfacing with hardware systems as opposed to relying on software accelerators. One of the first methods to do this extended the original brain damage formulation [26] to include fixed-pattern masks for specified groups while using a group-sparsity regularizer [24]. This idea was further extended in works that used the group-lasso formulation [48], individual or multiple norm constraints on the channel parameters [32], and balanced individual vs. group relationships [16, 51] to induce sparsity across desired structures. These methods explicitly affect the training phase of a chosen network by optimizing a harder and more sensitive objective function. Further, side-effects like stability to adversarial attacks, calibration, and differences in learned representations have not been fully quantified. In our work, we optimize the standard cross-entropy objective function and characterize the behaviour of the original networks and their compressed counterparts.
Hybrid methods combine the notion of a modified objective function with the measurement of the downstream impact of pruning, by enforcing sparsity constraints on the outcomes of groups [18], using a custom group-lasso formulation with a squared dependency on weight values as the importance measure [30], or pitting an adversarially pruned network generator against the features derived from the original network [31]. The disadvantages of these methods include their multi-pass schemes and large training times, as well as those inherited from modifying the objective function.
II-B Multivariate Dependency Measures
The accurate estimation of multivariate dependency in high-dimensional settings is a hard task. The first in this line of work involved Shannon mutual information, which was succeeded by a number of plug-in estimators, including kernel density estimators [21] and k-NN estimators [36]. However, their dependence on density estimates and large runtime complexity means they are not suitable for large-scale applications, including neural networks. A faster plug-in method based on graph theory and nearest neighbour ratios [38] was proposed as an alternative. Further solutions based on statistics such as Rényi-type divergences [28, 7] were proposed to bypass density estimation entirely. Instead, in this work, we focus on a conditional GMI estimator, similar to [49], which bypasses the difficult task of density estimation and is nonparametric and scalable.

III MINT
MINT is a data-driven approach to pruning networks by learning the dependency between filters/nodes of successive layers. For every pair of consecutive layers in a deep network, MINT uses the conditional GMI (Section III-A) between the activations of every filter from a chosen layer $\ell$ and a filter from layer $\ell+1$, given the existence of every other possible filter in layer $\ell$, to compute an importance score. Here, data flows from layer $\ell$ to layer $\ell+1$. Once all such importance scores are evaluated, we remove a fixed portion of the filters with the lowest scores. This induces the desired level of sparsity in the network before retraining the network to maintain high accuracy. The core algorithm is outlined and explained in the sections below.
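The step of removing a fixed portion of the lowest-scoring filters amounts to picking a cutoff from the sorted scores. A minimal sketch, where `prune_frac` and the score-array layout are our own assumptions:

```python
import numpy as np

def threshold_for_sparsity(scores, prune_frac):
    """Choose a cutoff so that roughly `prune_frac` of all filter-pair
    importance scores fall below it; contributions scoring below the
    cutoff are pruned."""
    flat = np.sort(np.ravel(scores))
    k = int(prune_frac * flat.size)
    return flat[min(k, flat.size - 1)]
```

This global view over all pairs of layers is what lets the desired sparsity be set once for the whole network rather than per layer.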
III-A Conditional Geometric Mutual Information
In this section, we review conditional GMI estimation [49] and use a close approximation to their method to calculate multivariate dependencies in our proposed algorithm.
Definition We first define a general form of GMI, denoted by $I_p$: For parameters $p \in (0,1)$ and $q = 1 - p$, consider two random variables $X$ and $Y$ with joint and marginal distributions $f_{XY}$, $f_X$, and $f_Y$, respectively. The GMI between $X$ and $Y$ is given by

$$I_p(X;Y) = \frac{1}{4pq}\left[\int \frac{\big(p\,f_{XY}(x,y) - q\,f_X(x)f_Y(y)\big)^2}{p\,f_{XY}(x,y) + q\,f_X(x)f_Y(y)}\,dx\,dy - (p-q)^2\right]. \quad (1)$$

Considering the special case of $p = q = \frac{1}{2}$ in Eqn. (1) we obtain,

$$I_{1/2}(X;Y) = \frac{1}{2}\int \frac{\big(f_{XY}(x,y) - f_X(x)f_Y(y)\big)^2}{f_{XY}(x,y) + f_X(x)f_Y(y)}\,dx\,dy. \quad (2)$$
Estimator In general, for a set of samples drawn from the joint distribution, we estimate the conditional GMI as follows: (1) split the data into two subsets $Z_1$ and $Z_2$; (2) use the Nearest Neighbour Bootstrap algorithm [44] to generate conditionally independent samples from the points in $Z_2$ and name the new set $Z'_2$; (3) merge $Z_1$ and $Z'_2$, i.e., $Z = Z_1 \cup Z'_2$; (4) construct a Minimum Spanning Tree (MST) on $Z$; (5) compute the Friedman-Rafsky statistic [5], $\mathfrak{R}$, which is the number of edges of the MST linking dichotomous points, i.e., edges connecting points in $Z_1$ to points in $Z'_2$; (6) the estimate, denoted by $\hat{I}$, is obtained from the normalized statistic, $\hat{I} = 1 - \mathfrak{R}\,\frac{m+n}{2mn}$, where $m = |Z_1|$ and $n = |Z'_2|$. Note that within the MINT algorithm, we apply the conditional GMI estimator on a subsampled set of activations from each filter considered.
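As a rough illustration of steps (3) to (6), the sketch below estimates the unconditional GMI between two samples. It replaces the nearest-neighbour bootstrap of [44] with a simple permutation of one variable (so it approximates the product of marginals, not the conditional distribution used in MINT), and all names are our own:

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import minimum_spanning_tree

def fr_mi_estimate(x, y, seed=0):
    """Graph-based GMI estimate between paired samples x, y of shape (n, d).

    Z1 holds joint samples [x, y]; Z2 holds [x, permuted y], approximating
    independent draws.  The estimate is 1 - R*(m+n)/(2mn), where R is the
    Friedman-Rafsky statistic: the number of MST edges joining Z1 to Z2.
    """
    rng = np.random.default_rng(seed)
    z1 = np.hstack([x, y])                       # joint samples
    z2 = np.hstack([x, rng.permutation(y)])      # ~independent samples
    # Tiny jitter avoids exact duplicates: scipy's MST drops zero-weight edges.
    z2 = z2 + 1e-9 * rng.normal(size=z2.shape)
    z = np.vstack([z1, z2])
    m, n = len(z1), len(z2)
    mst = minimum_spanning_tree(cdist(z, z)).tocoo()
    labels = np.r_[np.zeros(m), np.ones(n)]
    r = np.sum(labels[mst.row] != labels[mst.col])  # dichotomous edges
    return max(0.0, 1.0 - r * (m + n) / (2.0 * m * n))
```

Under strong dependence the joint samples cluster away from the shuffled set, so few MST edges cross between the two sets and the estimate is large; under independence the sets interleave and the estimate falls toward zero.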
III-B Approach
Setup In a deep network containing a total of $L$ layers, we compute the dependency ($\rho$) between filters in every consecutive pair of layers $(\ell, \ell+1)$. Here, layer $\ell$ is closer to the input while layer $\ell+1$ is closer to the output among the chosen pair of consecutive layers. The activations for a given node $i$ in layer $\ell$ are computed as,

$$a^{(\ell)}_i = \sigma\big(W^{(\ell)}_i x + b^{(\ell)}_i\big), \quad (5)$$

where $x \in \mathbb{R}^{N \times d}$ is the input to the given layer used to compute the activations, $N$ is the total number of samples, $d$ is the feature dimension, $\sigma$ is an activation function, and $W^{(\ell)}_i$ and $b^{(\ell)}_i$ are the weight vector and bias.
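A minimal numeric reading of Eq. (5), with ReLU standing in for the activation function $\sigma$; the batched array layout is our assumption:

```python
import numpy as np

def layer_activations(x, W, b):
    """Eq. (5) for a whole batch: x is (N, d), W is (num_filters, d),
    b is (num_filters,).  ReLU plays the role of sigma."""
    return np.maximum(0.0, x @ W.T + b)
```

The resulting (N, num_filters) activation matrix, subsampled over a few samples per class, is the input MINT feeds to the conditional GMI estimator.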
Notations

$a^{(\ell)}_i$: the activations of the selected filter $i$ from layer $\ell$.

$m_\ell$: the total number of filters in layer $\ell$.

$\gamma$: the set of indices that indicate the values that are retained in the weight vector of the selected filter.

$\rho$: the dependency between two filters (importance score).

$\mathcal{F}_{\setminus i}$: the set of all filters excluding the selected filter $i$.

$\delta$: the threshold on the importance score to ensure only strong contributions are retained.
Description In every iteration of MINT (Alg. 1), we find the set of weight values to retain, while the remaining ones are zeroed out.

For a given pair of consecutive layers $(\ell, \ell+1)$, we compute the dependency of every filter in layer $\ell+1$ on the filters in layer $\ell$. The main intent of framing the algorithm in this perspective is that activations from layers closer to the input have a direct effect on downstream layers, while the reverse is not true for a forward pass of the network.

Using the activations of the selected filters, we compute the conditional GMI between them given all the remaining filters in layer $\ell$, as shown in Fig. 3. This dependency captures the relationship between filters in the context of all the contributions from the preceding layer. Since the activations of layer $\ell+1$ are the result of contributions from all filters in the preceding layer, we need to account for this when considering the dependence of activations between two selected filters.

Based on the strength of each importance score $\rho$, the contribution of filters from the previous layer is either retained or removed. We define a threshold $\delta$ for this purpose, a key hyperparameter.

The set $\gamma$ stores the indices of all filters from layer $\ell$ that are retained for a selected filter in layer $\ell+1$. The weights for retained filters are left the same, while the weights for the entire kernels of the other filters are zeroed out. In the context of fully connected layers, we retain or zero out specific weight values in the weight matrix.
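The retain/zero step above might look like the following sketch for a convolutional layer; the (out, in, k, k) weight layout and the score-matrix layout are our assumptions:

```python
import numpy as np

def prune_pair(weights, scores, delta):
    """Zero the entire kernels of layer-l filters whose importance score
    for a given layer-(l+1) filter falls below delta.

    weights: (out_filters, in_filters, k, k) kernel of layer l+1.
    scores[i, j]: dependency of out-filter i on in-filter j.
    """
    pruned = weights.copy()
    for i in range(weights.shape[0]):
        gamma = scores[i] >= delta      # mask of retained in-filters
        pruned[i, ~gamma] = 0.0         # zero whole kernels for the rest
    return pruned
```

Note that zeroing the whole kernel (rather than individual weights) is what removes a layer-$\ell$ filter's entire contribution to that output filter.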
Group Extension
While evaluating dependencies between every pair of filters allows us to take a close look at their relationships, it does not scale well to deeper or wider architectures.
To address this issue, we evaluate filters in groups rather than one-by-one.
We define $G$ as the total number of groups in a layer, where each group contains an equal number of filters.
We explore in detail the impact of varying the number of groups in Section IV-D.
Although there are multiple approaches to grouping filters, in this work we restrict ourselves to sequential grouping, where groups are constructed from consecutive filters.
There is no explicit requirement for a pre-grouping step before our algorithm so long as a balanced grouping of filters is used.
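Sequential grouping can be written as the following sketch; we assume, as in the text, that the group count divides the filter count evenly:

```python
def sequential_groups(num_filters, num_groups):
    """Split filter indices 0..num_filters-1 into num_groups consecutive
    groups of equal size (num_groups must divide num_filters)."""
    size = num_filters // num_groups
    return [list(range(g * size, (g + 1) * size)) for g in range(num_groups)]
```

Each group's activations are then pooled before estimating dependency, so the number of GMI evaluations per layer pair drops from (filters x filters) to (groups x groups).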
Finer Details MINT is constructed on the assumption that the majority of information from the preceding layer is available and that the filter under consideration can selectively retain contributions from a subset of previous filters. This allows us to work on isolated pairs of layers with minimal interference on downstream layers, since retaining filters with high MI ensures the retention of filters that contribute the most information to the next layer. By maintaining as much information as possible between successive layers, the amount of critical information passed to later layers is maintained.
IV Experimental Results
We break down the experimental results into three major sections. Section IV-C focuses on the comparison of our method to state-of-the-art deep network compression algorithms, Section IV-D highlights the significance of various hyperparameters used in MINT, and finally, Section IV-E characterizes MINT-compressed networks w.r.t. their response to adversarial attacks, calibration statistics, and learned representations in comparison to their original counterparts. As a prelude to these three parts, we describe the datasets, models, and metrics used across our experiments. We restrict the implementation details to the appendices.
IV-A Datasets and Models
IV-B Metrics
The key metrics we use to evaluate the performance of various methods are,

Parameters Pruned (%): The ratio of parameters removed to the total number of parameters in the trained baseline network. A higher value in conjunction with good performance indicates a superior method.

Performance (%): The best testing accuracy upon training, for baseline networks, or upon retraining, for pruning methods. A value closer to the baseline performance is preferred.
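The parameters-pruned metric reduces to counting zeroed weights against the dense baseline; a sketch:

```python
import numpy as np

def params_pruned_pct(layer_weights):
    """Percentage of parameters zeroed out across all layers, relative
    to the total parameter count of the dense baseline."""
    total = sum(w.size for w in layer_weights)
    zeros = sum(int((w == 0).sum()) for w in layer_weights)
    return 100.0 * zeros / total
```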
TABLE I: Comparison against state-of-the-art pruning methods. (MINT rows correspond to different hyperparameter settings, lost in extraction.)

| Model / Dataset | Method | Params. Pruned (%) | Performance (%) | Memory (Mb) |
|---|---|---|---|---|
| MLP / MNIST | Baseline | N.A. | 98.59 | 0.539 |
| | SSL [48] | 90.95 | 98.47 | N.A. |
| | Network Slimming [32] | 96.00 | 98.51 | N.A. |
| | MINT (ours) () | 96.01 | 98.47 | 0.025 |
| VGG16 / CIFAR-10 | Baseline | N.A. | 93.98 | 53.904 |
| | Pruning Filters [29] | 64.00 | 93.40 | N.A. |
| | SSS [18] | 73.80 | 93.02 | N.A. |
| | GAL [31] | 82.20 | 93.42 | N.A. |
| | MINT (ours) () | 83.43 | 93.43 | 9.057 |
| ResNet56 / CIFAR-10 | Baseline | N.A. | 92.55 | 3.110 |
| | GAL [31] | 11.80 | 93.38 | N.A. |
| | Pruning Filters [29] | 13.70 | 93.06 | N.A. |
| | NISP [52] | 42.40 | 93.01 | N.A. |
| | OED [48] | 43.50 | 93.29 | N.A. |
| | MINT (ours) () | 52.41 | 93.47 | 1.553 |
| | MINT (ours) () | 55.39 | 93.02 | 1.462 |
| ResNet50 / ILSVRC2012 | Baseline | N.A. | 76.13 | 91.163 |
| | GAL [31] | 16.86 | 71.95 | N.A. |
| | OED [48] | 25.68 | 73.55 | N.A. |
| | SSS [18] | 27.05 | 74.18 | N.A. |
| | ThiNet [34] | 51.45 | 71.01 | N.A. |
| | MINT (ours) () | 43.00 | 71.50 | 52.371 |
| | MINT (ours) () | 49.00 | 71.12 | 47.519 |
| | MINT (ours) () | 49.62 | 71.05 | 46.931 |
(Fig. 8 caption) Calibration statistics measure the agreement between the confidence output of the network and the true probability. The red line indicates the ideal trend if confidence and probability matched. We observe that MINT compression acts as a regularizer, decreasing the Expected Calibration Error (ECE) compared to the original network and better matching the ideal curve.
IV-C Comparison against existing methods
As a first step to showcasing MINT's abilities, we compare it against state-of-the-art baselines in network pruning. The baselines in Table I are arranged in ascending order of the percentage of parameters pruned. Our algorithm clearly outperforms most of the SOTA pruning baselines in the percentage of pruned parameters while maintaining high accuracy and reducing the memory footprint of the network. We note that while some of the listed pruning baselines use multiple prune-retrain steps to achieve their results, we use only a single step to match and outperform them.
In Fig. 5 we take a deeper look at how the overall compression is spread throughout the network. Comparing Figs. 5(a), 5(b), and 5(c), we can establish the strong influence of the dataset and network architecture on where redundancies are stored. In the cases of VGG16 and ResNet56, training on CIFAR-10 leads to the storage of possibly redundant information in the latter portion of the networks, while the early portions of the networks are extremely sensitive to pruning. ResNet50, when trained on ILSVRC2012, forces compression to be more spread out across the network, possibly indicating the spread of redundant features at different levels of the network.
IV-D Hyperparameter Empirical Analysis
We take a closer look at two important hyperparameters that help MINT scale well to deep networks: (a) the number of groups in a layer, $G$, and (b) the number of samples per class used to compute the conditional GMI. Below, we look into how each of them impacts the maximum number of parameters pruned while achieving the desired accuracy for the MLP on MNIST.
Group size directly corresponds to the number of filters that are grouped together when computing the conditional GMI and thresholding. More groups lead to fewer filters per group, which allows for a more fine-grained computation of the multivariate dependency and, thereby, more precise pruning. Results in Fig. (a) match our expectation by illustrating the increase in the upper limit of parameters pruned to achieve the desired performance.
Samples per class The number of samples per class directly impacts the final number of activations used to compute the conditional GMI. The GMI estimator should improve its estimates as the number of samples per class, and thereby the total number of samples, is increased. Fig. (b) confirms our expectation by showing a steady improvement in the parameters pruned as the number of samples per class is increased.
IV-E Characterization
The standard metric used to compare deep network compression methods is the percentage of parameters pruned while maintaining recognition performance close to the baseline. However, the original intent of compressing networks is to deploy them in real-world scenarios, which necessitates other characterizations like robustness to adversarial attacks and an ability to reflect true confidence in predictions.
To understand the impact of pruning networks in the context of adversarial attacks, we use two common attacks: Iterative FGSM [8], which does not exclusively target a desired class, and Iterative-LL [23], which targets the selection of the least likely class. Fig. 6 shows the response of both the original and MINT-compressed networks to the two attacks. We clearly observe that MINT-compressed networks are more vulnerable to both targeted and non-targeted attacks.
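For intuition, the following is an iterative FGSM step on a toy linear softmax classifier with an analytic input gradient; the model and all names are ours, not the networks attacked in the paper:

```python
import numpy as np

def iterative_fgsm(x, y, W, b, eps=0.1, alpha=0.02, steps=5):
    """Iterative FGSM [8]: repeatedly perturb the input in the sign of
    the gradient of the cross-entropy loss w.r.t. the input, staying
    within an eps-ball of the original sample x."""
    x_adv = x.copy()
    one_hot = np.eye(len(b))[y]
    for _ in range(steps):
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = W.T @ (p - one_hot)      # d(cross-entropy)/dx for a linear model
        x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)
    return x_adv
```

The untargeted variant above maximizes the loss of the true class; the Iterative-LL variant of [23] instead descends the loss of the least likely class.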
While the core idea behind MINT is to retain filters that contribute the majority of the information passed to the next layer, in using a subset of the available filters we remove a certain portion of the information passed down. Fig. 7 compares the portions of the image that contribute towards a desired target class between the original (top row) and MINT-compressed networks (bottom row). We observe that the use of a subset of filters in the compressed network reduces the effective portions of the image that contribute towards a decision, along with minor modifications to the features used themselves. We posit that the reduction in the number of filters used and in the available redundant features is the reason MINT-compressed networks are more vulnerable to adversarial attacks.
Calibration statistics [37, 10] measure the agreement between the confidence provided by the network and the actual probability. These measures provide an orthogonal perspective to adversarial attacks since they are computed only on in-domain images, whereas adversarial attacks alter the input. Fig. 8 highlights the decrease in Expected Calibration Error (ECE) for the MINT-compressed networks when compared to their original counterparts: the histogram trend is closer to the ideal trend indicated by the linear red curve. After pruning, the sparse networks behave similarly to a regularizer, focusing on a smaller subset of features and decreasing the ECE. The original networks, on the other hand, contain many levels of redundancy, which translates to overfitting and a higher ECE.
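The ECE reported above can be computed with the standard binning scheme of [37, 10]; a sketch, where the bin count is our choice:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence and average the gap between
    mean confidence and empirical accuracy, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight gap by fraction in bin
    return ece
```

A perfectly calibrated model has zero gap in every bin; an overconfident model accumulates gaps in the high-confidence bins, which is the pattern the original networks exhibit.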
V Conclusion
In this work, we propose MINT as a novel approach to network pruning in which the dependency between filters of successive layers is used as a measure of importance. We use conditional GMI to evaluate importance and incorporate stochasticity in our algorithm in order to retain filters that pass the majority of the information through the layers. In doing so, MINT achieves better pruning performance than SOTA baselines while using a single prune-retrain step. When characterizing the behaviour of MINT-pruned networks, we observe that pruning behaves like a regularizer and improves the calibration of the network. However, the reduction in the number of filters used and in redundancies makes pruned networks more susceptible to adversarial attacks. Our future directions of work include improving the robustness of compressed networks to adversarial attacks as well as detailing the sensitivity of layers in various architectures to network pruning.
Acknowledgment
This work has been partially supported (Madan Ravi Ganesh and Jason J. Corso) by NSF IIS 1522904 and NIST 60NANB17D191 and (Salimeh Yasaei Sekeh) by NSF 1920908; the findings are those of the authors only and do not represent any position of these funding bodies. The authors would also like to thank Stephan Lemmer for his invaluable input on the calibration of deep networks.
References

[1] (2016) Breast cancer classification using deep belief networks. Expert Systems with Applications 46, pp. 139–144.
[2] (1997) C1.2 Multilayer perceptrons. Handbook of Neural Computation.
[3] (2016) Lung nodule detection using 3D convolutional neural networks trained on weakly labeled data. In Medical Imaging 2016: Computer-Aided Diagnosis, Vol. 9785, pp. 978532.
[4] (2015) BinaryConnect: training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems, pp. 3123–3131.
[5] (1979) Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. Annals of Statistics, pp. 697–717.
[6] (1983) Graph-theoretic measures of multivariate association and prediction. The Annals of Statistics 11 (2), pp. 377–391.
[7] (2015) Efficient estimation of mutual information for strongly dependent variables. In Artificial Intelligence and Statistics, pp. 277–286.
[8] (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
[9] (2019) NeuroTrajectory: a neuroevolutionary approach to local state trajectory learning for autonomous vehicles. IEEE Robotics and Automation Letters 4 (4), pp. 3441–3448.
[10] (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pp. 1321–1330.
[11] (2016) Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, pp. 1379–1387.
[12] (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
[13] (2015) Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143.
[14] (1993) Second order derivatives for network pruning: optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164–171.
[15] (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] (2017) Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397.
[17] (2019) GPipe: efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, pp. 103–112.
[18] (2018) Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304–320.
[19] (2017) Quantized neural networks: training neural networks with low precision weights and activations. The Journal of Machine Learning Research 18 (1), pp. 6869–6898.
[20] (2014) Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference 2014.
[21] (2004) Estimating mutual information. Physical Review E 69 (6), pp. 066138.
[22] (2009) Learning multiple layers of features from tiny images.
[23] (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
[24] (2016) Fast ConvNets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2564.
[25] (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324.
[26] (1990) Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598–605.
[27] (2017) Deep learning in medical imaging: general overview. Korean Journal of Radiology 18 (4), pp. 570–584.
[28] (2008) A class of Rényi information estimators for multidimensional densities. The Annals of Statistics 36 (5), pp. 2153–2182.
[29] (2016) Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710.
[30] (2019) OICSR: out-in-channel sparsity regularization for compact deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7046–7055.
[31] (2019) Towards optimal structured CNN pruning via generative adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790–2799.
[32] (2017) Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736–2744.
[33] (2017) Knowledge distillation for small-footprint highway networks. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[34] (2017) ThiNet: a filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5058–5066.
[35] (2019) Deep Grid Net (DGN): a deep learning system for real-time driving context understanding. In 2019 Third IEEE International Conference on Robotic Computing (IRC), pp. 399–402.
[36] (2017) Ensemble estimation of mutual information. In 2017 IEEE International Symposium on Information Theory (ISIT), pp. 3030–3034.
[37] (2015) Obtaining well calibrated probabilities using Bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
[38] (2017) Direct estimation of information divergence using nearest neighbor ratios. In 2017 IEEE International Symposium on Information Theory (ISIT), pp. 903–907.
[39] (2017) Weighted-entropy-based quantization for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[40] (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pp. 8024–8035.
[41] (2019) Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, pp. 4780–4789.
[42] (2015) ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252.
[43] (2017) Grad-CAM: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618–626.
[44] (2017) Model-powered conditional independence test. In Advances in Neural Information Processing Systems, pp. 2951–2961.
[45] (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
[46] (2019) Learning to see the wood for the trees: deep laser localization in urban and natural environments on a CPU. IEEE Robotics and Automation Letters 4 (2), pp. 1327–1334.
[47] (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nature Methods 17, pp. 261–272.
[48] (2016) Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074–2082.
[49] (2019) Geometric estimation of multivariate dependency. Entropy 21 (8), pp. 787.
[50] (2017) A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[51] (2017) Combined group and exclusive sparsity for deep neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 3958–3966.
[52] (2018) NISP: pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9194–9203.
[53] (2016) Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (10), pp. 1943–1955.
[54] (2015) Efficient and accurate approximations of nonlinear convolutional networks. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[55] (2018) Adaptive quantization for deep neural network. In Thirty-Second AAAI Conference on Artificial Intelligence.