MINT: Deep Network Compression via Mutual Information-based Neuron Trimming

by Madan Ravi Ganesh, et al.

Most approaches to deep neural network compression via pruning either evaluate a filter's importance using its weights or optimize an alternative objective function with sparsity constraints. While these methods offer a useful way to approximate contributions from similar filters, they often either ignore the dependency between layers or solve a more difficult optimization objective than standard cross-entropy. Our method, Mutual Information-based Neuron Trimming (MINT), approaches deep compression via pruning by enforcing sparsity based on the strength of the relationship between filters of adjacent layers, across every pair of layers. The relationship is calculated using conditional geometric mutual information which evaluates the amount of similar information exchanged between the filters using a graph-based criterion. When pruning a network, we ensure that retained filters contribute the majority of the information towards succeeding layers which ensures high performance. Our novel approach outperforms existing state-of-the-art compression-via-pruning methods on the standard benchmarks for this task: MNIST, CIFAR-10, and ILSVRC2012, across a variety of network architectures. In addition, we discuss our observations of a common denominator between our pruning methodology's response to adversarial attacks and calibration statistics when compared to the original network.




I Introduction

Balancing the trade-off between the size of a deep network and achieving high performance is the most important constraint when designing deep neural networks (DNNs) that can easily be translated to hardware. Although deep networks yield remarkable performance in real-world problems like medical diagnosis [27, 1, 3], autonomous vehicles [46, 35, 9], and others, they consume a large amount of memory and computational resources, which limits their large-scale deployment. With current state-of-the-art deep networks spanning hundreds of millions, if not billions, of parameters [41, 17], compressing them while maintaining high performance is challenging.

Fig. 1: Weight-based pruning does not consider the dependency between layers; instead, it suggests the removal of low weight values. Mutual information (MI)-based pruning computes the value of information passed between layers, quantified by the MI value, and can instead suggest the removal of higher weight values.

In this work, we approach DNN compression using network pruning [13]. There are two broad approaches to network pruning: (a) unstructured pruning, where a filter’s importance is evaluated using weights [13] or constraints like the ℓ1-norm [29] on them, without considering the overall structure of the sparsity induced, and (b) structured pruning, where the objective function is modified to include structured sparsity constraints [48]. Most unstructured pruning approaches ignore the dependency between layers and the impact of pruning on downstream layers, while structured pruning methods force the network to optimize a harder and more sensitive optimization objective. The underlying common theme between both approaches is the use of filter weights as a proxy for their importance.

Evaluating a filter’s importance purely from its weights is insufficient since it does not take into account the dependencies between filters or account for any form of uncertainty. These factors are critical since higher weight values do not always represent a filter’s true importance, and a filter’s contribution can be compensated for elsewhere in the network. Consider the example shown in Fig. 1, where a simple weight-based criterion suggests the removal of small-valued weights. However, the mutual information (MI) score, which we use to measure the dependency between pairs of filters and quantify their importance, values the smaller weights over the large ones. Pruning based on the MI scores would ensure a network where the retained filters pass on as much information as possible to the next layer.

Fig. 2: Illustration of the experimental setup highlighting the components of MINT. Between every pair of filters in consecutive layers (l, l+1) we compute the conditional geometric mutual information (GMI), using the activations from each filter, as the importance score. The total number of filters in each layer is defined by N_l and N_{l+1}. The conditional GMI score indicates the importance of a filter in layer l’s contribution towards a filter in layer l+1. We then threshold filters based on the importance scores to ensure that we retain only filters that pass the majority of the information to successive layers. Finally, we retrain the network once to maintain a desired level of performance.

To overcome these issues, we propose Mutual Information-based Neuron Trimming (MINT) as a novel approach to compression-via-pruning in deep networks which stochastically accounts for the dependency between layers. Fig. 2 outlines our approach. In MINT, we use an estimator for conditional geometric mutual information (GMI), inspired by [49], to measure the dependency between filters of successive layers. Specifically, we use a graph-based criterion (the Friedman-Rafsky statistic [6]) to measure the conditional GMI between the activations of filters at layers l and l+1, denoted by ρ, given the remaining filters in layer l. On evaluating all such dependencies, we sort the importance scores between filters of every pair of layers in the network. Finally, we threshold a desired percentage of these values to retain filters that contribute the majority of the information to successive layers. Hence, MINT maintains high performance with compressed and retrained networks.

Through MINT, we contribute a network pruning method that addresses the need to use dependency between layers as an important factor to prune networks. By maintaining filters that contribute the majority of the information passed between layers, we ensure that the impact of pruning on downstream layers is minimized. In doing so, we achieve state-of-the-art performance across multiple Dataset-CNN architecture combinations, highlighting the general applicability of our method.

Further, we empirically analyze our approach using adversarial attacks, expected calibration error, and visualizations that illustrate the focus of learned representations, to provide a better understanding of our approach. We highlight the common denominator between its security vulnerability and decrease in calibration error while illustrating the intended effects of retaining filters that contribute the majority of information between layers.

II Related Works

Deep network compression offers a number of strategies to help reduce network size while maintaining high performance, such as low-rank approximations [20, 54, 53], quantization [4, 19, 39, 55], knowledge distillation [33, 50], and network pruning [13, 34, 31]. In this work, we focus on network pruning since it offers a controlled setup to study and compare changes in the dynamics of a network when filters are removed. We broadly classify network pruning methods into two categories, unstructured and structured, which we describe below. Also, we highlight common strategies to calculate multivariate dependencies and how they vary from our method.

II-A Network Pruning

II-A1 Unstructured Pruning

Some of the earliest works in this line used the second-order relationship between the objective function and weights of a network to determine which weights to remove or retain [26, 14]. Although these methods provide deep insights into the relationships within networks, their dense computational requirements and large runtimes made them less practical. They were surpassed by an alternative approach that thresholded the weight values themselves to obtain a desired level of sparsity before retraining [13]. Apart from the simplicity of this approach, it also highlighted the importance of re-training from known minima as opposed to from scratch. Instead of pruning weights in one shot, [11] offered a continuous and recursive strategy of using mini-iterations to evaluate the importance of connections using their weights, remove unimportant ones, and train the network. Similarly, [29] proposed pruning filters using the ℓ1-norm of their weight values as a measure of importance. By melding standard network pruning using weights with network quantization and Huffman coding, [12] showed superior compression performance compared to any individual pipeline. However, the direct use of weight values across all these methods does not capture the relationships between different layers or the impact of removing weights on downstream layers. In MINT, we address this issue by explicitly computing the dependency between filters of successive layers and only retaining filters that contribute a majority of the information. This ensures that there isn’t a severe impact on downstream layers.
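For reference, the magnitude-based criteria described above reduce to a few lines of code. Below is a minimal sketch (our illustration, not the authors' or [29]'s code) of ℓ1-norm filter ranking, assuming a convolutional weight tensor laid out as (out_channels, in_channels, k, k):

```python
import numpy as np

def l1_filter_ranking(conv_weights, prune_frac):
    """Rank convolutional filters by the L1 norm of their weights and
    return the indices of the lowest-norm filters, i.e. prune candidates."""
    # One L1 norm per output filter (sum over in_channels and kernel dims).
    norms = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    n_prune = int(prune_frac * conv_weights.shape[0])
    return np.argsort(norms)[:n_prune]
```

For a weight tensor of shape (64, 32, 3, 3) and prune_frac=0.25, this returns the 16 filter indices with the smallest ℓ1 norm; note that the ranking looks only at weight magnitudes, which is exactly the layer-dependency blindness MINT targets.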

A subset of methods uses data to derive the importance of filter weights. Among them, ThiNet [34] posed the reconstruction of outcomes with the removal of weights as an optimization objective to decide on weights. More recently, NISP [52] used the contribution of neurons towards the reconstruction of outcomes in the second to last layer as a metric to retain/remove filters. These works represent a shift to data-driven logic to consider the downstream impact of pruning, with the use of external computations (e.g., feature ranking in [52]) combined with the quality of features approximated using weights from a single trial. Compared to the deterministic relationship between the weights and feature ranking methods, our method uses a probabilistic approach to measure the dependency between filters, thereby accounting for some form of uncertainty. Further, our method uses one single prune-retrain step compared to multiple iterations of fine-tuning performed in these methods.

II-A2 Structured Pruning

The shift to structured pruning was based on the idea of seamlessly interfacing with hardware systems as opposed to relying on software accelerators. One of the first methods to do this extended the original Optimal Brain Damage [26] formulation to include fixed pattern masks for specified groups while using a group-sparsity regularizer [24]. This idea was further extended in works that used the group-lasso formulation [48], individual or multiple norm constraints [32] on the channel parameters, and in works that balanced individual vs. group relationships [16, 51] to induce sparsity across desired structures. These methods explicitly affect the training phase of a chosen network by optimizing a harder and more sensitive objective function. Further, side-effects like stability to adversarial attacks, calibration, and differences in learned representations have not been fully quantified. In our work, we optimize the standard cross-entropy objective function and characterize the behaviour of original networks and their compressed counterparts.

Hybrid methods combine the notion of a modified objective function with the measurement of downstream impact of pruning by enforcing sparsity constraints on the outcomes of groups [18], using a custom group-lasso formulation with a squared dependency on weight values as the importance measure [30] or an adversarial pruned network generator to compete with the features derived from the original network [31]. The disadvantages of these methods include their multi-pass scheme and large training times as well as those inherited from modifying the objective function.

II-B Multivariate Dependency Measures

The accurate estimation of multivariate dependency in high-dimensional settings is a hard task. The first in this line of work involved Shannon mutual information, which was succeeded by a number of plug-in estimators, including kernel density estimators and k-NN estimators [36]. However, their dependence on density estimates and large runtime complexity means they are not suitable for large-scale applications, including neural networks. A faster plug-in method based on graph theory and nearest-neighbour ratios [38] was proposed as an alternative. More solutions that use statistics like the KL divergence [28] or Rényi-α divergence [7] were proposed to help bypass density estimation fully. Instead, in this work, we focus on a conditional GMI estimator, similar to [49], which bypasses the difficult task of density estimation and is non-parametric and scalable.
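As a concrete illustration of the plug-in family, scikit-learn ships a k-NN based MI estimator; the snippet below (our illustration, unrelated to MINT's estimator) shows how such an estimator separates dependent from independent variables:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
x = rng.randn(1000, 1)
y_dep = x[:, 0] + 0.1 * rng.randn(1000)   # strongly dependent on x
y_ind = rng.randn(1000)                    # independent of x

# k-NN based plug-in estimates of mutual information (in nats)
mi_dep = mutual_info_regression(x, y_dep, random_state=0)[0]
mi_ind = mutual_info_regression(x, y_ind, random_state=0)[0]
# mi_dep is large; mi_ind is close to zero
```

The cost of such estimators grows quickly with sample count and dimension, which is the scalability concern raised above.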

III MINT

MINT is a data-driven approach to pruning networks by learning the dependency between filters/nodes of successive layers. For every pair of layers (l, l+1) in a deep network, MINT uses the conditional GMI (Section III-A) between the activations of every filter from a chosen layer l and a filter from layer l+1, given the existence of every other possible filter in layer l, to compute an importance score. Here, data flows from layer l to layer l+1. Once all such importance scores are evaluated, we remove a fixed portion of filters with the lowest scores. This induces the desired level of sparsity in the network before retraining the network to maintain high accuracy. The core algorithm is outlined and explained in the sections below.

III-A Conditional Geometric Mutual Information

In this section, we review conditional GMI estimation [49] and use a close approximation to their method to calculate multivariate dependencies in our proposed algorithm.

Definition We first define a general form of GMI, denoted by I_p: For parameters p ∈ (0, 1) and q = 1 − p, consider two random variables X and Y with joint and marginal distributions f_{XY}, f_X, and f_Y respectively. The GMI between X and Y is given by

    I_p(X;Y) = \frac{1}{4pq}\left[\int \frac{\big(p\, f_{XY}(x,y) - q\, f_X(x) f_Y(y)\big)^2}{p\, f_{XY}(x,y) + q\, f_X(x) f_Y(y)}\, dx\, dy - (p-q)^2\right].    (1)

Considering the special case of p = q = 1/2 in Eqn. 1 we obtain,

    I(X;Y) = \frac{1}{2}\int \frac{\big(f_{XY}(x,y) - f_X(x) f_Y(y)\big)^2}{f_{XY}(x,y) + f_X(x) f_Y(y)}\, dx\, dy.    (2)

The conditional form of this measure is proposed in [49] as,

    I(X;Y \mid Z) = \frac{1}{2}\, \mathbb{E}_Z\left[\int \frac{\big(f_{XY|Z} - f_{X|Z} f_{Y|Z}\big)^2}{f_{XY|Z} + f_{X|Z} f_{Y|Z}}\, dx\, dy\right].    (3)

Fig. 3: Example of computing multivariate dependencies, ρ, between the activations of a filter in layer l+1 and each filter in the previous layer l. In each computation, the highlighted filters in layer l represent the conditioned variables while the arrows indicate the filters whose actual dependence we compute. These steps are repeated for every possible combination of filters across every pair of consecutive layers in the network.

Estimator In general, for a set of samples drawn from f_{XYZ}, we estimate I(X;Y|Z) as follows: (1) split the data into two subsets D_1 and D_2; (2) use the Nearest Neighbour Bootstrap algorithm [44] to generate conditionally independent samples from the points in D_2 and name the new set D'_2; (3) merge D_1 and D'_2, i.e. D = D_1 ∪ D'_2; (4) construct a Minimum Spanning Tree (MST) on D; (5) compute the Friedman-Rafsky statistic [5], R, which is the number of edges on the MST linking dichotomous points, i.e. edges connecting points in D_1 to points in D'_2; (6) the estimate of I(X;Y|Z), denoted by Î, is obtained by Î = max(0, 1 − R(|D_1| + |D'_2|)/(2|D_1||D'_2|)). Note that within the MINT algorithm, we apply the conditional GMI estimator on a sub-sampled set of activations from each filter considered.
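Steps (4)-(6) can be sketched directly with SciPy. In the sketch below, the two input sets are taken as given (the nearest-neighbour bootstrap step is omitted), and the final normalization follows the standard Friedman-Rafsky based Henze-Penrose estimate, which is our assumption rather than the paper's exact formula:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def friedman_rafsky_estimate(d1, d2):
    """Dependency estimate from the Friedman-Rafsky statistic.

    d1, d2: (n1, d) and (n2, d) arrays of points from the two sets.
    Returns max(0, 1 - R * (n1 + n2) / (2 * n1 * n2)), where R counts
    MST edges linking points of d1 to points of d2 (dichotomous edges).
    """
    merged = np.vstack([d1, d2])
    labels = np.concatenate([np.zeros(len(d1)), np.ones(len(d2))])
    dist = cdist(merged, merged)           # pairwise Euclidean distances
    mst = minimum_spanning_tree(dist)      # sparse matrix of MST edges
    rows, cols = mst.nonzero()
    r = int(np.sum(labels[rows] != labels[cols]))  # cross-set edges
    n1, n2 = len(d1), len(d2)
    return max(0.0, 1.0 - r * (n1 + n2) / (2.0 * n1 * n2))
```

Intuitively, samples from very different distributions produce few cross-set MST edges (estimate near 1), while samples from the same distribution mix freely (estimate near 0).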

III-B Approach

Setup In a deep network containing a total of L layers, we compute the dependency (ρ) between filters in every consecutive pair of layers (l, l+1). Here, layer l is closer to the input while layer l+1 is closer to the output among the chosen pair of consecutive layers. The activations for a given node j in layer l+1 are computed as,

    a^{(j)}_{l+1} = \phi\big(x\, w^{(j)}_{l+1} + b^{(j)}_{l+1}\big),

where x ∈ R^{n×d} is the input to a given layer used to compute the activations, n is the total number of samples, and d is the feature dimension. \phi is an activation function, and w^{(j)}_{l+1} and b^{(j)}_{l+1} are the weight vector and bias.


  • a^{(i)}_l: The activations from the selected filter i in layer l.

  • N_l: Total number of filters in layer l.

  • γ: The set of indices that indicate the values that are retained in the weight vector of the selected filter.

  • ρ: The dependency between two filters (importance score).

  • a^{(−i)}_l: The set of all filters in layer l excluding filter i.

  • δ: Threshold on the importance score to ensure only strong contributions are retained.

Description In every iteration of MINT (Alg. 1), we find the set of weight values γ to retain while the remaining are zeroed out.

  • For a given pair of consecutive layers (l, l+1), we compute the dependency between every filter in layer l+1 in relation to filters in layer l. The main intent of framing the algorithm in this perspective is that the activations from layers closer to the input have a direct effect on downstream layers, while the reverse is not true for a forward pass of the network.

  • Using the activations for the selected filters, we compute the conditional GMI between them given all the remaining filters in layer l, as shown in Fig. 3. This dependency captures the relationship between filters in the context of all the contributions from the preceding layer. Since the activations of layer l+1 are the result of contributions from all filters in the preceding layer, we need to account for this when considering the dependence of activations between two selected filters.

  • Based on the strength of each ρ, the contribution of filters from the previous layer is either retained or removed. We define a threshold δ for this purpose, a key hyper-parameter.

  • γ stores the indices of all filters from layer l that are retained for a selected filter j in layer l+1. The weights for retained filters are left the same while the weights for the entire kernel in the other filters are zeroed out. In the context of fully connected layers, we retain or zero out specific weight values in the weight matrix.

for every pair of layers (l, l+1) do
       for j = 1, ..., N_{l+1} do
             Initialize γ = ∅;
             for i = 1, ..., N_l do
                   ρ(i, j) = Î(a^{(i)}_l ; a^{(j)}_{l+1} | a^{(−i)}_l);
                   if ρ(i, j) ≥ δ then
                         γ = γ ∪ {i};
                   end if
             end for
             Zero out the weights of filter j not indexed by γ;
       end for
end for
Algorithm 1 MINT pruning between filters of layers (l, l+1)
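The loop structure of Alg. 1 can be sketched in Python for the fully connected case. This is our simplification, not the authors' implementation: the conditional GMI scorer is left abstract as a callable, and names such as `score_fn` are placeholders:

```python
import numpy as np

def mint_prune(acts_l, acts_l1, weights_l1, delta, score_fn):
    """Sketch of one MINT prune step between layers l and l+1.

    acts_l:     (n_samples, N_l) activations of layer l
    acts_l1:    (n_samples, N_l1) activations of layer l+1
    weights_l1: (N_l1, N_l) weight matrix of layer l+1
    delta:      importance-score threshold
    score_fn:   callable estimating the dependency between two activation
                columns given the remaining filters of layer l
    Returns a pruned copy of weights_l1.
    """
    pruned = weights_l1.copy()
    n_l, n_l1 = acts_l.shape[1], acts_l1.shape[1]
    for j in range(n_l1):
        gamma = []                                # retained filter indices
        for i in range(n_l):
            rest = np.delete(acts_l, i, axis=1)   # conditioning set
            rho = score_fn(acts_l[:, [i]], acts_l1[:, [j]], rest)
            if rho >= delta:
                gamma.append(i)
        mask = np.zeros(n_l, dtype=bool)
        mask[gamma] = True
        pruned[j, ~mask] = 0.0                    # zero out weak contributions
    return pruned
```

Only the incoming weights of filter j whose scores fall below δ are zeroed; the retained weights are untouched, and the network is then retrained once.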

Group Extension While evaluating dependencies between every pair of filters allows us to take a close look at their relationships, it does not scale well to deeper or wider architectures. To address this issue, we evaluate filters in groups rather than one-by-one. We define G as the total number of groups in a layer, where each group contains an equal number of filters. We explore in detail the impact of varying the number of groups in Section IV-D. Although there are multiple approaches to grouping filters, in this work we restrict ourselves to sequential grouping, where groups are constructed from consecutive filters. There is no explicit requirement for a pre-grouping step before our algorithm so long as a balanced grouping of filters is used.
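Sequential grouping itself is a one-liner; a sketch (with `np.array_split` producing balanced, consecutive groups whose sizes differ by at most one when G does not divide the filter count):

```python
import numpy as np

def sequential_groups(n_filters, n_groups):
    """Split filter indices 0..n_filters-1 into n_groups consecutive,
    balanced groups (sizes differ by at most one)."""
    return np.array_split(np.arange(n_filters), n_groups)
```

For example, sequential_groups(8, 4) yields the index groups [0, 1], [2, 3], [4, 5], [6, 7]; dependencies are then scored and thresholded at the granularity of these groups rather than individual filters.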

Finer Details MINT is constructed on the assumption that the majority of information from the preceding layer is available and the filter under consideration can selectively retain contributions from a subset of previous filters. This allows us to work on isolated pairs of layers with minimal interference on downstream layers, since retaining filters with high MI will ensure the retention of filters that contribute the most information to the next layer. By maintaining as much information as possible between layers, the amount of critical information passed to layers further on is maintained.

IV Experimental Results

We break down the experimental results into three major sections. Section IV-C focuses on the comparison of our method to state-of-the-art deep network compression algorithms, Section IV-D highlights the significance of various hyper-parameters used in MINT, and finally, Section IV-E characterizes MINT-compressed networks w.r.t. their response to adversarial attacks, calibration statistics, and learned representations in comparison to their original counterparts. As a prelude to all three parts, we describe the datasets, models, and metrics used across our experiments. We restrict the implementation details to the appendices.

IV-A Datasets and Models

Our experiments are divided into the following Dataset-Architecture combinations, in order of increasing complexity in datasets and architectures: MNIST [25] + Multi-Layer Perceptron [2], CIFAR-10 [22] + VGG16 [45], CIFAR-10 + ResNet56 [15], and finally ILSVRC2012 [42] + ResNet50.

IV-B Metrics

The key metrics we use to evaluate the performance of various methods are,

  • Parameters Pruned (%): The ratio of parameters removed to the total number of parameters in the trained baseline network. A higher value in conjunction with good performance indicates a superior method.

  • Performance (%): The best testing accuracy upon training, for baseline networks, and re-training, for pruning methods. A value closer to the baseline performance is preferred.

  • Memory Footprint (Mb): The amount of memory consumed when storing the weights of a network in sparse matrix format (CSR). We compute this value by using SciPy [47] to convert the PyTorch [40] weight matrices to CSR format and storing them in “npz” files.
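The footprint computation can be reproduced with a few lines of SciPy; the helper below is a sketch of the bookkeeping, not the authors' exact evaluation script:

```python
import io
import numpy as np
from scipy import sparse

def csr_npz_bytes(weight_matrix):
    """Bytes needed to store a 2-D weight matrix as a CSR .npz archive."""
    buf = io.BytesIO()
    sparse.save_npz(buf, sparse.csr_matrix(weight_matrix))
    return buf.getbuffer().nbytes
```

After pruning, the zeroed entries drop out of the CSR data array, so the .npz shrinks roughly in proportion to the sparsity.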

Fig. 4: (a) An increase in the number of groups per layer, G, allows for finer grouping of filters, which in turn leads to more accurate GMI estimates and thresholding. Thus, there is a steady increase in the number of parameters that can be removed while achieving the desired performance. (b) Keeping G fixed, we observe that increasing the number of samples per class improves the GMI estimate accuracy, which in turn allows for better thresholding and an increase in the parameters pruned. The values on top of the bar plots are the recognition accuracies.
Method                   Params. Pruned (%)   Performance (%)   Memory (Mb)

MLP on MNIST
  Baseline                 N.A.                 98.59             0.539
  SSL [48]                 90.95                98.47             N.A.
  Network Slimming [32]    96.00                98.51             N.A.
  MINT (ours)              96.01                98.47             0.025

VGG16 on CIFAR-10
  Baseline                 N.A.                 93.98             53.904
  Pruning Filters [29]     64.00                93.40             N.A.
  SSS [18]                 73.80                93.02             N.A.
  GAL [31]                 82.20                93.42             N.A.
  MINT (ours)              83.43                93.43             9.057

ResNet56 on CIFAR-10
  Baseline                 N.A.                 92.55             3.110
  GAL [31]                 11.80                93.38             N.A.
  Pruning Filters [29]     13.70                93.06             N.A.
  NISP [52]                42.40                93.01             N.A.
  OED [48]                 43.50                93.29             N.A.
  MINT (ours)              52.41                93.47             1.553
  MINT (ours)              55.39                93.02             1.462

ResNet50 on ILSVRC2012
  Baseline                 N.A.                 76.13             91.163
  GAL [31]                 16.86                71.95             N.A.
  OED [48]                 25.68                73.55             N.A.
  SSS [18]                 27.05                74.18             N.A.
  ThiNet [34]              51.45                71.01             N.A.
  MINT (ours)              43.00                71.50             52.371
  MINT (ours)              49.00                71.12             47.519
  MINT (ours)              49.62                71.05             46.931

TABLE I: MINT is easily able to match (single-step on ILSVRC2012) or outperform (remaining datasets) SOTA pruning methods across the evaluated benchmarks, using only a single prune-retrain step. Baselines are arranged in increasing order of Parameters Pruned (%). For the MLP, the memory comparison uses layer 2’s weights.
Fig. 5: Illustration of compression per layer for the smallest MINT-compressed networks. We observe a characteristic peak in compression towards the later layers of both VGG16 and ResNet56 when trained on CIFAR-10. However, compression is spread over the course of the entire network for ResNet50. The green stars indicate layers we avoid pruning due to high sensitivity.
Fig. 6: By enforcing the use of an important subset of filters from all the available ones, MINT-compressed networks begin to overvalue their importance. By emphasizing a small set of features, this makes MINT-compressed networks more susceptible to both targeted and non-targeted adversarial attacks when compared to the original network. Here, ε refers to the radius of the perturbation ball in the ℓ∞ norm.
Fig. 7: Visualizations using GradCAM [43] illustrate the decrease in effective portions of the image that contribute towards the target decisions in MINT-compressed ResNet56 (row 2) when compared to the original network (row 1)
Fig. 8: Calibration statistics measure the agreement between the confidence output of the network and the true probability. The red line indicates the ideal trend if the confidence and probability matched. We observe that MINT-compressed networks act as a regularizer to decrease the Expected Calibration Error (ECE) when compared to the original network, as well as better match the ideal curve. Panel ECEs: (a) 0.0077, (b) 0.0517, (c) 0.0762, (d) 0.0305, (e) 0.0054, (f) 0.0500, (g) 0.0383, (h) 0.0069.

IV-C Comparison against existing methods

As a first step to showcasing MINT’s abilities, we compare it against state-of-the-art baselines in network pruning. The baselines in Table I are arranged in ascending order of the percentage of parameters pruned. Our algorithm clearly outperforms most of the SOTA pruning baselines in the percentage of parameters pruned while maintaining high accuracy and reducing the memory footprint of the network. We note that while some of the pruning baselines listed use multiple prune-retrain steps to achieve their results, we use only a single step to match and outperform them.

In Fig. 5 we take a deeper look at how the overall compression is spread throughout the network. Comparing Figs. 5(a), 5(b) and 5(c), we can establish the strong influence of the dataset and network architecture on where redundancies are stored. In the cases of VGG16 and ResNet56, training with CIFAR-10 leads to the storage of possibly redundant information in the latter portion of the networks. The early portions of the network are extremely sensitive to pruning. ResNet50, when trained on ILSVRC2012, forces compression to be more spread out across the network, possibly indicating the spread of redundant features at different levels of the network.

IV-D Hyper-parameter Empirical Analysis

We take a closer look at two important hyper-parameters that help MINT scale well to deep networks: (a) the number of groups in a layer, G, and (b) the number of samples per class used to compute the conditional GMI. Below, we look into how each of them impacts the maximum number of parameters pruned while achieving the desired accuracy for the MLP on MNIST.

Group size directly corresponds to the number of filters that are grouped together when computing the conditional GMI and thresholding. More groups lead to fewer filters per group, which allows for a more fine-grained computation of multivariate dependency and, thereby, more precise pruning. In this experiment, the number of samples per class is held fixed. Results in Fig. 4(a) match our expectation by illustrating the increase in the upper limit of parameters pruned to achieve the desired performance.

Samples per class The number of samples per class directly impacts the final number of activations used to compute the conditional GMI. The GMI estimator should improve its estimates as the number of samples per class, and thereby the total number of samples, is increased. In this experiment, the number of groups per layer is held fixed. Fig. 4(b) confirms our expectation by showing a steady improvement in the parameters pruned as the number of samples per class is increased.

IV-E Characterization

The standard metric used to compare deep network compression methods is the percentage of parameters pruned while maintaining recognition performance close to the baseline. However, the original intent of compressing networks is to deploy them in real-world scenarios which necessitate other characterizations like robustness to adversarial attacks and an ability to reflect true confidence in predictions.

To understand the impact of pruning networks in the context of adversarial attacks, we use two common adversarial attacks: Iterative FGSM [8], which doesn’t exclusively target a desired class, and Iterative-LL [23], which targets the selection of the least likely class. Fig. 6 shows the response of both the original and MINT-compressed networks to the two attacks. We clearly observe that MINT-compressed networks are more vulnerable to targeted and non-targeted attacks.
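For reference, the non-targeted iterative variant follows the familiar sign-of-gradient update with a projection back into the ε-ball. The sketch below is generic numpy, with the `grad_fn` callable and step sizes as our placeholders rather than the paper's attack settings:

```python
import numpy as np

def iterative_fgsm(x, y, grad_fn, eps, alpha, steps):
    """Iterative FGSM sketch: repeatedly step along the sign of the loss
    gradient, clipping the total perturbation to an eps-ball (L-inf norm).

    grad_fn(x, y) must return dLoss/dx for input x and true label y.
    """
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the ball
    return x_adv
```

The targeted Iterative-LL variant differs only in descending the loss of the least likely class instead of ascending the true-class loss.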

While the core idea behind MINT is to retain filters that contribute the majority of the information passed to the next layer, in using a subset of the available filters we remove a certain portion of the information passed down. Fig. 7 compares the portions of the image that contribute towards a desired target class, between the original (top row) and MINT-compressed networks (bottom row). We observe that the use of a subset of filters in the compressed network has reduced the effective portions of the image that contribute towards a decision, not to mention minor modifications to the features used themselves. We posit that the reduction in the number of filters used and the available redundant features is the reason for MINT-compressed networks being vulnerable to adversarial attacks.

Calibration statistics [37, 10] measure the agreement between the confidence provided by the network and the actual probability. These measures provide an orthogonal perspective to adversarial attacks since they measure statistics only for in-domain images while adversarial attacks alter the input. Fig. 8 highlights the decrease in Expected Calibration Error (ECE) for the MINT-compressed networks when compared to their original counterparts. The plot illustrates that the histogram trend is closer to matching the ideal trend indicated by the linear red curve. After pruning, the sparse networks seem to behave similarly to a regularizer by focusing on a smaller subset of features and decreasing the ECE. On the other hand, the original networks contain many levels of redundancies which translates to overfitting and having higher ECE.
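Concretely, ECE bins predictions by confidence and takes the weighted average of the per-bin gap between confidence and accuracy. A 10-bin sketch of the standard estimator from [10]:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error: weighted average, over confidence bins,
    of |mean confidence - accuracy| within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap   # weight by the fraction of samples
    return ece
```

A perfectly calibrated model has ECE 0: in every bin, the average confidence matches the empirical accuracy.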

V Conclusion

In this work, we propose MINT as a novel approach to network pruning in which the dependency between filters of successive layers is used as a measure of importance. We use conditional GMI to evaluate importance and incorporate stochasticity in our algorithm in order to retain filters that pass the majority of the information through layers. In doing so, MINT achieves better pruning performance than SOTA baselines, using a single prune-retrain step. When characterizing the behaviour of MINT-pruned networks, we observe that pruning behaves like a regularizer and improves the calibration of the network. However, a reduction in the number of filters used, and in redundancies, makes pruned networks susceptible to adversarial attacks. Our future directions of work include improving the robustness of compressed networks to adversarial attacks as well as detailing the sensitivity of layers in various architectures to network pruning.


This work has been partially supported (Madan Ravi Ganesh and Jason J. Corso) by NSF IIS 1522904 and NIST 60NANB17D191 and (Salimeh Yasaei Sekeh) by NSF 1920908; the findings are those of the authors only and do not represent any position of these funding bodies. The authors would also like to thank Stephan Lemmer for his invaluable input on the calibration of deep networks.


  • [1] A. M. Abdel-Zaher and A. M. Eldeib (2016)

    Breast cancer classification using deep belief networks

    Expert Systems with Applications 46, pp. 139–144. Cited by: §I.
  • [2] L. B. Almeida (1997)

    C1. 2 multilayer perceptrons

    Handbook of Neural Computation C 1. Cited by: §IV-A.
  • [3] R. Anirudh, J. J. Thiagarajan, T. Bremer, and H. Kim (2016)

    Lung nodule detection using 3d convolutional neural networks trained on weakly labeled data

    In Medical Imaging 2016: Computer-Aided Diagnosis, Vol. 9785, pp. 978532. Cited by: §I.
  • [4] M. Courbariaux, Y. Bengio, and J. David (2015) Binaryconnect: training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123–3131. Cited by: §II.
  • [5] J. H. Friedman and L. C. Rafsky (1979) Multivariate generalizations of the wald-wolfowitz and smirnov two-sample tests. Ann. Statist., pp. 697–717. Cited by: §III-A.
  • [6] J. H. Friedman, L. C. Rafsky, et al. (1983) Graph-theoretic measures of multivariate association and prediction. The Annals of Statistics 11 (2), pp. 377–391. Cited by: §I.
  • [7] S. Gao, G. Ver Steeg, and A. Galstyan (2015) Efficient estimation of mutual information for strongly dependent variables. In Artificial intelligence and statistics, pp. 277–286. Cited by: §II-B.
  • [8] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §IV-E.
  • [9] S. M. Grigorescu, B. Trasnea, L. Marina, A. Vasilcoi, and T. Cocias (2019) NeuroTrajectory: a neuroevolutionary approach to local state trajectory learning for autonomous vehicles. IEEE Robotics and Automation Letters 4 (4), pp. 3441–3448. Cited by: §I.
  • [10] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 1321–1330. Cited by: §IV-E.
  • [11] Y. Guo, A. Yao, and Y. Chen (2016) Dynamic network surgery for efficient dnns. In Advances in neural information processing systems, pp. 1379–1387. Cited by: §II-A1.
  • [12] S. Han, H. Mao, and W. J. Dally (2015) Deep compression: compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149. Cited by: §II-A1.
  • [13] S. Han, J. Pool, J. Tran, and W. Dally (2015) Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135–1143. Cited by: §I, §II-A1, §II.
  • [14] B. Hassibi and D. G. Stork (1993) Second order derivatives for network pruning: optimal brain surgeon. In Advances in neural information processing systems, pp. 164–171. Cited by: §II-A1.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), External Links: Document, Link Cited by: §IV-A.
  • [16] Y. He, X. Zhang, and J. Sun (2017) Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397. Cited by: §II-A2.
  • [17] Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu, et al. (2019) Gpipe: efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, pp. 103–112. Cited by: §I.
  • [18] Z. Huang and N. Wang (2018) Data-driven sparse structure selection for deep neural networks. In Proceedings of the European conference on computer vision (ECCV), pp. 304–320. Cited by: §II-A2, TABLE I.
  • [19] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio (2017) Quantized neural networks: training neural networks with low precision weights and activations. The Journal of Machine Learning Research 18 (1), pp. 6869–6898. Cited by: §II.
  • [20] M. Jaderberg, A. Vedaldi, and A. Zisserman (2014) Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference 2014, External Links: Document, Link Cited by: §II.
  • [21] A. Kraskov, H. Stögbauer, and P. Grassberger (2004) Estimating mutual information. Physical review E 69 (6), pp. 066138. Cited by: §II-B.
  • [22] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §IV-A.
  • [23] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §IV-E.
  • [24] V. Lebedev and V. Lempitsky (2016) Fast convnets using group-wise brain damage. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2564. Cited by: §II-A2.
  • [25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §IV-A.
  • [26] Y. LeCun, J. S. Denker, and S. A. Solla (1990) Optimal brain damage. In Advances in neural information processing systems, pp. 598–605. Cited by: §II-A1, §II-A2.
  • [27] J. Lee, S. Jun, Y. Cho, H. Lee, G. B. Kim, J. B. Seo, and N. Kim (2017) Deep learning in medical imaging: general overview. Korean journal of radiology 18 (4), pp. 570–584. Cited by: §I.
  • [28] N. Leonenko, L. Pronzato, V. Savani, et al. (2008) A class of rényi information estimators for multidimensional densities. The Annals of Statistics 36 (5), pp. 2153–2182. Cited by: §II-B.
  • [29] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf (2016) Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Cited by: §I, §II-A1, TABLE I.
  • [30] J. Li, Q. Qi, J. Wang, C. Ge, Y. Li, Z. Yue, and H. Sun (2019) OICSR: out-in-channel sparsity regularization for compact deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7046–7055. Cited by: §II-A2.
  • [31] S. Lin, R. Ji, C. Yan, B. Zhang, L. Cao, Q. Ye, F. Huang, and D. Doermann (2019) Towards optimal structured cnn pruning via generative adversarial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2790–2799. Cited by: §II-A2, §II, TABLE I.
  • [32] Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang (2017) Learning efficient convolutional networks through network slimming. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2736–2744. Cited by: §II-A2, TABLE I.
  • [33] L. Lu, M. Guo, and S. Renals (2017) Knowledge distillation for small-footprint highway networks. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), External Links: Document, Link Cited by: §II.
  • [34] J. Luo, J. Wu, and W. Lin (2017) Thinet: a filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision, pp. 5058–5066. Cited by: §II-A1, §II, TABLE I.
  • [35] L. A. Marina, B. Trasnea, T. Cocias, A. Vasilcoi, F. Moldoveanu, and S. M. Grigorescu (2019) Deep grid net (dgn): a deep learning system for real-time driving context understanding. In 2019 Third IEEE International Conference on Robotic Computing (IRC), pp. 399–402. Cited by: §I.
  • [36] K. R. Moon, K. Sricharan, and A. O. Hero (2017) Ensemble estimation of mutual information. In 2017 IEEE International Symposium on Information Theory (ISIT), pp. 3030–3034. Cited by: §II-B.
  • [37] M. P. Naeini, G. Cooper, and M. Hauskrecht (2015) Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, Cited by: §IV-E.
  • [38] M. Noshad, K. R. Moon, S. Y. Sekeh, and A. O. Hero (2017) Direct estimation of information divergence using nearest neighbor ratios. In 2017 IEEE International Symposium on Information Theory (ISIT), pp. 903–907. Cited by: §II-B.
  • [39] E. Park, J. Ahn, and S. Yoo (2017) Weighted-entropy-based quantization for deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), External Links: Document, Link Cited by: §II.
  • [40] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala (2019) PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 8024–8035. External Links: Link Cited by: 3rd item.
  • [41] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le (2019) Regularized evolution for image classifier architecture search. In Proceedings of the aaai conference on artificial intelligence, Vol. 33, pp. 4780–4789. Cited by: §I.
  • [42] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. External Links: Document Cited by: §IV-A.
  • [43] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618–626. Cited by: Fig. 7.
  • [44] R. Sen, A. T. Suresh, K. Shanmugam, A. G. Dimakis, and S. Shakkottai (2017) Model-powered conditional independence test. In Advances in neural information processing systems, pp. 2951–2961. Cited by: §III-A.
  • [45] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §IV-A.
  • [46] G. Tinchev, A. Penate-Sanchez, and M. Fallon (2019) Learning to see the wood for the trees: deep laser localization in urban and natural environments on a cpu. IEEE Robotics and Automation Letters 4 (2), pp. 1327–1334. Cited by: §I.
  • [47] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. Jarrod Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors (2020) SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17, pp. 261–272. External Links: Document Cited by: 3rd item.
  • [48] W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li (2016) Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074–2082. Cited by: §I, §II-A2, TABLE I.
  • [49] S. Yasaei Sekeh and A. O. Hero (2019) Geometric estimation of multivariate dependency. Entropy 21 (8), pp. 787. Cited by: §I, §II-B, §III-A, §III-A.
  • [50] J. Yim, D. Joo, J. Bae, and J. Kim (2017) A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), External Links: Document, Link Cited by: §II.
  • [51] J. Yoon and S. J. Hwang (2017) Combined group and exclusive sparsity for deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3958–3966. Cited by: §II-A2.
  • [52] R. Yu, A. Li, C. Chen, J. Lai, V. I. Morariu, X. Han, M. Gao, C. Lin, and L. S. Davis (2018) Nisp: pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9194–9203. Cited by: §II-A1, TABLE I.
  • [53] X. Zhang, J. Zou, K. He, and J. Sun (2016) Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (10), pp. 1943–1955. External Links: Document, Link Cited by: §II.
  • [54] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun (2015) Efficient and accurate approximations of nonlinear convolutional networks. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), External Links: Document, Link Cited by: §II.
  • [55] Y. Zhou, S. Moosavi-Dezfooli, N. Cheung, and P. Frossard (2018) Adaptive quantization for deep neural network. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §II.