Paoding: Supervised Robustness-preserving Data-free Neural Network Pruning

by   Mark Huasong Meng, et al.

When deploying pre-trained neural network models in real-world applications, model consumers often encounter resource-constrained platforms such as mobile and smart devices. They typically use pruning techniques to reduce the size and complexity of the model, generating a lighter one with less resource consumption. Nonetheless, most existing pruning methods are proposed on the premise that the pruned model will be fine-tuned or even retrained on the original training data. This may be unrealistic in practice, as data controllers are often reluctant to provide their model consumers with the original data. In this work, we study neural network pruning in the data-free context, aiming to yield lightweight models that are not only accurate in prediction but also robust against undesired inputs in open-world deployments. Considering the absence of the fine-tuning and retraining that could fix mis-pruned units, we replace the traditional aggressive one-shot strategy with a conservative one that treats pruning as a progressive process. We propose a pruning method based on stochastic optimization that uses robustness-related metrics to guide the pruning process. Our method is implemented as a Python package named Paoding and evaluated with a series of experiments on diverse neural network models. The experimental results show that it significantly outperforms existing one-shot data-free pruning approaches in terms of robustness preservation and accuracy.





1 Introduction

Deep learning is usually realized by a neural network model that is trained with a large amount of data. Compared with other machine learning models such as linear models or support vector machine (SVM) models, neural networks, or more specifically deep neural networks, are empirically proven to gain an advantage in handling more complicated tasks due to their superior capability to precisely approximate an arbitrary non-linear computation Hornik et al. (1989); Goodfellow et al. (2016).

In order to achieve favorable accuracy and generalization, the common practice when training a neural network is to initialize a model that is large and deep in size. This leaves contemporary models over-parameterized. For example, many models in image classification or natural language processing contain millions or even billions of trainable parameters Szegedy et al. (2016); Krizhevsky et al. (2012); He et al. (2016); Tolstikhin et al. (2021); You et al. (2020). Deploying them on resource-constrained platforms, such as Internet of Things (IoT) or mobile devices, thus becomes challenging. To resolve this issue, the neural network pruning technique LeCun et al. (1990); Gale et al. (2019); Blalock et al. (2020) is extensively used. It aims to remove parameters that are redundant or useless, so as to reduce the model size as well as the demand for computational resources.

Most existing research on model pruning assumes the pruning is performed by the model owner who has the original training dataset. The majority of existing pruning techniques are discussed with the premise that pruned models are going to be fine-tuned or even retrained using the original dataset Wang et al. (2020); Molchanov et al. (2017b); Luo et al. (2017); Suau et al. (2020); TensorFlow (2021). As a result, they tend to use an aggressive and coarse-grained one-shot pruning strategy with the belief that the mis-pruned neurons, if any, can be fixed by fine-tuning and retraining.

This strategy, however, seriously compromises the applicability of pruning. In practice, model pruning is mostly performed by the model consumers to adapt the model to the actual deployment environment. We refer to this stage as the deployment stage, to differentiate it from the training and tuning stages occurring on the data controller's side Yoshioka et al. (2021); Guo et al. (2019). In the deployment stage, the model consumers typically have no access to the original training data, which are mostly private and proprietary Papernot et al. (2017); Taigman et al. (2014). In addition, data controllers may even have to refrain from providing their data due to strict data protection regulations like the EU General Data Protection Regulation (GDPR) The European Parliament (2016). Therefore, pruning without the original training data, which we refer to as data-free pruning, is desirable.

In this work, we approach this problem through the lens of software engineering methodologies. To address the challenge of the lack of post-pruning fine-tuning, we design our pruning as a supervised, iterative, and progressive process, rather than a one-shot one. In each iteration, it cuts off a small set of units and evaluates the effect, so that the mis-pruning of units that are crucial to the network's decision making can be minimized. We propose a two-stage approach to identify the units to be cut off. At the first stage, it performs candidates prioritizing based on the relative significance of the units. At the second stage, it carries out stochastic sampling with the simulated annealing algorithm Van Laarhoven and Aarts (1987), guided by metrics quantifying the desired property. This allows our method to prune the units that have a relatively low impact on the property, and eventually approach the optimum.

Our pruning method is designed to pursue robustness preservation, given that the model may be exposed to unexpected or even adversarial inputs Bastani et al. (2016); Goodfellow et al. (2015); Madry et al. (2018); Zhang et al. (2020); Li et al. (2021) after being deployed in a real-world application scenario.

Our solution is to encode the robustness as metrics and embed them into the stochastic sampling to guide the pruning process. It stems from the insight that a small and uniformly distributed pruning impact on each output unit is favored to preserve the robustness of the pruned model. We use two metrics to quantify the pruning impact on the model robustness, namely the $L_1$-norm and entropy. The $L_1$-norm measures the overall scale of the pruning impact on the model's output, in a way that a smaller value tends to incur less uncertainty in the network's decision making. The entropy measures the similarity of the pruning impact on each output unit. A smaller entropy is obtained when the pruning impact is more uniformly distributed across the output units, and therefore implies that the pruned model is less sensitive when dealing with undesired perturbations in inputs.

We implement our supervised data-free pruning method in a toolkit named Paoding (the name originates from an anecdote collected in Zhuang Zi, an ancient Chinese Taoist text: Pao Ding is a dexterous cook well known for his excellent skill in cutting up an ox).

We evaluate it with a series of experiments on diverse neural network models. The experimental results show that our supervised pruning method offers promising robustness preservation even after 60% of the hidden units have been pruned, and meanwhile incurs no significant trade-off in accuracy. It significantly outperforms existing one-shot data-free approaches in terms of both robustness preservation and accuracy, with improvements of up to 50% and 30%, respectively. The evaluation also demonstrates that it generalizes to a wide range of neural network architectures, including fully connected (FC) multilayered perceptron (MLP) models and convolutional neural network (CNN) models.


In summary, the contributions of this work are as follows.


  • A robustness-preserving data-free pruning framework. We investigate the robustness-preserving neural network pruning in the data-free context. To the best of our knowledge, this is the first work of this kind.

  • A stochastic pruning method. We reduce the pruning problem to a stochastic optimization process, replacing the coarse-grained one-shot pruning strategy. The stochastic pruning is solved with the simulated annealing algorithm. This avoids mistakenly cutting off hidden units that play crucial roles in the neural network's decision making.

  • Implementation and evaluation. We implement our pruning method in Paoding and evaluate it with a series of experiments on representative datasets and models. To demonstrate the generalization of our method, our evaluation covers not only models trained on datasets commonly used in the research community, such as MNIST and CIFAR-10, but also models designed to solve real-world problems, such as credit card fraud detection and pneumonia diagnosis, both of which are robustness-sensitive.

We have released Paoding in the Python Package Index (PyPI) repository under the package ID paoding-dl. We have also made its source code available online to facilitate future research in the model pruning area.

2 Background

In this section, we present a brief overview of neural network pruning. We also recap the stochastic optimization and simulated annealing algorithm that are used in our work.

2.1 Neural Network Pruning

A typical deep neural network is a multilayered perceptron architecture that contains multiple fully connected layers Hastie et al. (2009). For this reason, deep neural networks are widely recognized as an over-parameterized and computationally intensive machine learning technique Denton et al. (2014); Ba and Caruana (2014). Neural network pruning was introduced in the early 1990s as an effective relief of the performance demand of running such networks within a limited computational budget LeCun et al. (1990). In recent years, as deep neural networks are increasingly applied to complex tasks such as image recognition and natural language processing, network pruning and quantization have been identified as two key model compression techniques and have been widely studied Choi et al. (2020); Tang et al. (2020); Liu et al. (2019); Han et al. (2015); He et al. (2017); Yu et al. (2018); Chin et al. (2018); Hu et al. (2016); Li et al. (2017); Suau et al. (2020); TensorFlow (2021).

Existing pruning techniques can be grouped into two genres. One genre prunes by selectively zeroing out weight parameters (also known as synapses). This type often does not really reduce the size and computational scale of a neural network model, but only increases its sparsity (i.e., the density of zero parameters) Wen et al. (2016). Therefore, this genre is categorized as unstructured pruning in the literature Liu et al. (2019); Han et al. (2015). In contrast, the other genre, called structured pruning, emphasizes cutting an entire hidden unit with all its synapses off from the layer where it is located, or removing a specific channel or filter from a convolutional layer Luo et al. (2017); Srinivas and Babu (2015); Tang et al. (2020).
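The two genres can be contrasted on a toy dense layer. The sketch below is an illustrative NumPy example only; the shapes, the 50% magnitude threshold, and the choice of unit to remove are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))       # dense layer: 4 inputs -> 3 hidden units
W_next = rng.normal(size=(3, 2))  # next layer: 3 hidden units -> 2 outputs

# Unstructured pruning: zero out the smallest-magnitude weights individually.
# The matrix keeps its shape; only its sparsity increases.
threshold = np.quantile(np.abs(W), 0.5)
W_sparse = np.where(np.abs(W) < threshold, 0.0, W)

# Structured pruning: remove hidden unit 1 entirely, i.e. its incoming
# column and its outgoing row, so the model actually shrinks.
W_small = np.delete(W, 1, axis=1)
W_next_small = np.delete(W_next, 1, axis=0)
```

Note that only the structured variant changes the matrix shapes, which is why it is the genre that genuinely reduces computation.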

Pruning target is a common metric to assess neural network pruning. It indicates the percentage of parameters or hidden units to be removed during the pruning process, and is therefore also known as sparsity in some literature on unstructured pruning. Fidelity is another metric that describes how well the pruned model mimics the behavior of its original counterpart, and is usually calculated through accuracy. An ideal pruning algorithm with promising fidelity should not incur a significant accuracy decline compared with the original model. However, the discussion of the impact of pruning on properties beyond fidelity, such as robustness, is still in its nascent phase Dinh et al. (2020); Denninnart et al. (2019); Liebenwein et al. (2021). As robustness is a representative property specification of a neural network model that concerns the security of its actual deployment, unveiling the influence of pruning on robustness could help guarantee the trustworthiness of pruning techniques.

2.2 Stochastic Optimization

Stochastic optimization refers to solving an optimization problem when randomness is present. In recent years, stochastic optimization has been increasingly used in solving software engineering problems such as testing Su et al. (2017); Whittaker (1997) and debugging Le et al. (2015); Littlewood (1981). The stochastic process offers an efficient way to find the optimum in a dynamic system when it is too complex for traditional deterministic algorithms. The core of stochastic optimization is the probabilistic decision of its transition function in determining whether and how the system moves to the next state. Due to the presence of randomness, stochastic optimization has an advantage in escaping a local optimum and eventually approaching the global optimum.

The simulated annealing algorithm Van Laarhoven and Aarts (1987) is an extensively used method for stochastic optimization. It was originally proposed as a Monte Carlo method that adapts the Metropolis-Hastings algorithm Chib and Greenberg (1995) to generate new states of a thermodynamic system. At each step, simulated annealing calculates an acceptance rate based on the current temperature, generates a random probability, and then makes a decision based on these two variables. If the generated probability is less than the acceptance rate, the system accepts the currently available neighboring state and accordingly moves to the next state; otherwise, it stays at the current state and considers the next available neighboring candidate. In general, the simulated annealing algorithm provides an efficient approach to drawing samples from a complex distribution.
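The acceptance decision described above can be sketched as a short Metropolis-style helper. This is a generic illustration of the algorithm, not Paoding's implementation; the exponential acceptance form and the `temperature` argument are the textbook convention.

```python
import math
import random

def sa_accept(e_new, e_cur, temperature, rng=None):
    """Metropolis-style acceptance test used in simulated annealing.

    A state that improves (lowers) the energy is always accepted; a worse
    state is accepted with probability exp(-(e_new - e_cur) / temperature),
    so worse moves become rarer as the temperature cools toward zero.
    """
    if e_new <= e_cur:
        return True
    if temperature <= 0:
        return False  # frozen system: never accept a worse state
    rng = rng or random
    acceptance_rate = math.exp(-(e_new - e_cur) / temperature)
    return rng.random() < acceptance_rate
```

At a high temperature even a clearly worse state is accepted fairly often, which is what lets the search escape local optima early on.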

3 Problem Definition

In this section, we present the definition of neural network robustness and robustness-preserving pruning.

3.1 Robustness of Neural Networks

Unlike traditional metrics such as accuracy and loss that mainly focus on the prediction performance during testing, robustness is a feature representing the trustworthiness of the model against real-world inputs. Real-world inputs may come from an undesired distribution Zhou et al. (2020), and often carry distortions or perturbations, either intentional (e.g., adversarial perturbations Goodfellow et al. (2015); Szegedy et al. (2014)) or unintentional (e.g., blur, weather conditions, and signal noise Hendrycks and Dietterich (2019); Guo et al. (2020)). For this reason, robustness is particularly crucial in the open-world deployment of a neural network model.

Figure 1: Sample of an adversarial perturbation generated by FGSM, illustrated with a successful attack on the prediction of an MNIST sample

The evaluation of robustness is discussed against adversarial models, such as the projected gradient descent (PGD) attack Madry et al. (2018) and the fast gradient sign method (FGSM) Goodfellow et al. (2015). Taking FGSM as an example, the adversary can generate an $L_\infty$-norm untargeted perturbation for an arbitrary test sample. The untargeted perturbation is calculated with the sign of the loss function's gradient and then multiplied by $\epsilon$ before being added to the benign input. The $\epsilon$ is usually a very small fraction, to ensure the adversarial samples are visually indistinguishable from benign ones Goodfellow et al. (2015). By doing so, an adversarial input tends to maximize the loss function of the victim neural network model and thereby leads the model to misclassify. Fig. 1 illustrates a successful attack that causes the victim model to misclassify an image from the MNIST dataset. FGSM is regarded as a strong attack model for evaluating the robustness preservation of a neural network model and has been extensively applied in both the literature Moosavi-Dezfooli et al. (2016); Huang et al. (2017); S. et al. (2019) and mainstream toolkits TensorFlow (2021); Inkawhich (2017). Accordingly, we adopt FGSM as the default attack model and assume input perturbations are measured in the $L_\infty$-norm in this work.
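To make the computation concrete, the sketch below crafts an $L_\infty$-bounded FGSM perturbation for a toy logistic-regression victim. The model, its weights, and the `eps` value are made up for the example; a real attack would use the victim network's own loss gradient.

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, eps):
    """FGSM step: move each feature by eps in the direction that increases
    the loss, yielding a perturbation bounded by eps in the L-infinity norm."""
    return x + eps * np.sign(grad_wrt_x)

# Toy victim: logistic regression p = sigmoid(w . x) with true label y = 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
p = 1.0 / (1.0 + np.exp(-w @ x))
grad_x = (p - 1.0) * w  # gradient of the cross-entropy loss w.r.t. the input

x_adv = fgsm_perturb(x, grad_x, eps=0.05)
# Every feature moves by exactly eps, so the perturbation is tiny per
# feature but systematically pushes the loss on the true label upward.
```

After the step, the victim's confidence in the true label drops, which is exactly the misclassification pressure the text describes.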

Given an adversarial strategy, the attacker can modify an arbitrary benign input with a crafted perturbation to produce an adversarial input. We formalize the robustness property of the neural network model as follows.

Definition 3.1 (Robustness against adversarial perturbations).

Given a neural network model $f$, an arbitrary benign instance $x$ sampled from the input distribution (e.g., a dataset) $\mathcal{D}$, and an adversarial input $x'$ which is produced by a specific adversarial strategy $\mathcal{A}$ based on $x$, written as $x' = \mathcal{A}(x)$. The model $f$ satisfies the robustness property with respect to $x$ if it makes consistent predictions on both $x$ and $x'$, i.e., $f(x) = f(x')$.

3.2 Robustness-preserving Pruning

Our pruning method aims to preserve the robustness of a given neural network model. Following a previous study Bastani et al. (2016), we define this preservation as the extent to which the pruned model obtains the maximum number of consistent predictions on both benign and adversarial inputs; accordingly, we name those inputs with consistent predictions robust instances. Thus, we propose an objective function specifying the number of robust instances from a given distribution. Robustness-preserving pruning then becomes an optimization problem that aims to identify a pruning strategy that maximizes the objective function. We formalize our goal of robustness-preserving pruning as follows.

Definition 3.2 (Robustness-preserving pruning).

Given a neural network model $f$ that takes inputs $x$ and labels $y$ from a distribution $\mathcal{D}$. Each input $x$ has a corresponding label $y$, written as $(x, y) \sim \mathcal{D}$. Let $x' = x + \delta$ be the adversarial input that adds a perturbation $\delta$ to a benign input $x$. Our goal is to find a pruning method $\mathcal{P}$ that transforms the original neural network model $f$ into a pruned one $f_{\mathcal{P}}$, which maximizes the objective function that counts the occurrence of robust input instances from the distribution $\mathcal{D}$, written as:

$$\arg\max_{\mathcal{P}} \; \sum_{(x, y) \sim \mathcal{D}} \mathbb{1}\left[ f_{\mathcal{P}}(x) = f_{\mathcal{P}}(x') \right]$$

4 Approach Overview

In this section, we introduce our primitive pruning operation to show how an individual unit is pruned, and then give a brief overview of our approach.

Figure 2: The workflow of our pruning method

4.1 Saliency-based Primitive Pruning Operation

When attempting to prune a hidden unit (denoted the nominee, i.e., the one chosen to be pruned), our method uses a pair-wise strategy rather than simply deleting the nominee. In particular, our primitive pruning operation considers another unit (denoted the delegate, i.e., the one to cover the nominee's duty) from the nominee's layer that tends to play a similar role in making a prediction. It removes the nominee and adjusts the parameters of the delegate so that the impact of a single pruning operation on the subsequent layers can be reduced. Given a nominee and delegate pair $(h_i^l, h_j^l)$, which are the $i$-th and $j$-th hidden units at layer $l$, the primitive pruning operation performs the following two steps.

  1. The nominee $h_i^l$ is pruned. To this end, we zero out all parameters connecting from and to $h_i^l$;

  2. We modify the parameters connecting from the delegate $h_j^l$ to the next layer with the sum of the parameters of both $h_i^l$ and $h_j^l$.

The parameter update in Step (2) is carried out to offset the impact caused by pruning the nominee. Fig. 3 illustrates our primitive pruning operation.

Figure 3: An illustration of the primitive pruning operation on a nominee and delegate pair
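The two steps can be sketched on plain weight matrices as follows. This is a NumPy illustration of the pair-wise operation for a dense layer; the function name and shapes are ours, not Paoding's API.

```python
import numpy as np

def prune_pair(W_in, W_out, nominee, delegate):
    """Pair-wise primitive pruning on one hidden layer.

    W_in:  weights into the layer, shape (prev_units, units)
    W_out: weights out of the layer, shape (units, next_units)
    Step 1: zero all parameters connecting from and to the nominee.
    Step 2: fold the nominee's outgoing weights into the delegate's, so the
            delegate covers the nominee's contribution to the next layer.
    """
    W_in, W_out = W_in.copy(), W_out.copy()
    W_out[delegate, :] += W_out[nominee, :]  # step 2: delegate absorbs duty
    W_in[:, nominee] = 0.0                   # step 1: cut incoming synapses
    W_out[nominee, :] = 0.0                  # step 1: cut outgoing synapses
    return W_in, W_out
```

When the nominee and the delegate happen to produce the same activation, the next layer's pre-activations are unchanged by the operation, which is the intuition behind choosing a delegate that plays a similar role.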

To find the delegate, we use a metric called saliency, proposed in a previous study Srinivas and Babu (2015) to assess the "importance" of a unit when it is to be replaced by another unit in its layer. A lower saliency means that the nominee can be replaced by the delegate with less impact on the network. Let $w_{i,k}^{l}$ be the weight parameter connecting the $i$-th hidden unit at layer $l$ with the $k$-th hidden unit at layer $l+1$, and $b_i^l$ be the bias parameter of the $i$-th hidden unit at layer $l$. Given the nominee $h_i^l$, its saliency with respect to the delegate $h_j^l$ is measured as follows.


4.2 Workflow of the Pruning Method

Fig. 2 shows the workflow of our pruning method. It begins with reading a pre-trained model and loading its architecture and parameters (Stage ❶). Then it traverses the model layer by layer and iteratively performs hidden unit pruning. The pruning process (Stages ❷-❹) might be executed over multiple epochs, depending on the pruning target and the pruning batch size per epoch. Once the pruning target has been reached, our method saves the pruned model (Stage ❺).

Below we describe each component in the pruning process. The outer loop specifies an epoch, in which a fixed portion of fully connected units (i.e., the batch size) is cut off from the model. The inner loop represents the iteration over all layers in a forward direction. A pruning iteration is composed of three stages, i.e., candidates prioritizing (Stage ❷), stochastic sampling (Stage ❸), and pruning and model updating (Stage ❹). The former two stages identify the units to be pruned at each iteration, and then our method invokes the primitive pruning operation to prune each of them.

  • The candidates prioritizing stage evaluates the saliency for every pair of hidden units at the beginning of each iteration and generates a saliency matrix. Considering that it is unfavorable to prune a nominee for which a proper delegate is hard to find (i.e., the nominee has high saliency with respect to every hidden unit in its layer), we sort the list of hidden unit pairs by their saliency values in ascending order and pass that list to the next stage. The candidates with the least saliency values are given priority for processing in the next stage.

  • The stochastic sampling stage takes the list of pruning candidates as input and identifies the units to be pruned. The basic idea is to estimate how pruning a unit impacts the prediction at the output layer, in order to decide whether to keep or discard it. A naive way is to evaluate the impact of each candidate, but this is too costly since calculating each impact requires a forward propagation to the output layer. We thus employ a stochastic sampling strategy with the estimated impact as a guide in this process. Our impact estimation and the corresponding sampling strategy are detailed in Section 5.


5 Supervised Data-free Pruning

In this section, we introduce our supervised pruning method. We detail our approach to estimating how pruning a unit impacts the prediction at the output layer (Section 5.1). With this, we can approximate the cumulative impact on the robustness of the final model, and thus we embed it into our sampling criterion (Section 5.2). To prevent the sampling method from being stuck at a local optimum, we employ the simulated annealing algorithm to determine which candidate(s) to prune (Section 5.3).

5.1 Estimation of Pruning Impact

Because the primitive pruning operation prunes the nominee and modifies the values of the parameters connecting from the delegate to the next layer, it affects the computation of the hidden units in subsequent layers. Such impact eventually propagates to the output layer. In this section, we discuss how we estimate this impact.

For an original model $f$ that performs an $n$-class classification, its output, when given a sample $x$, is a vector of $n$ numbers denoted by $f(x)$. The model $f'$, which is derived by pruning a unit of $f$, outputs another vector of the same size, denoted by $f'(x)$. We aim to estimate the impact at the output layer as a vector of $n$ items, i.e., $f'(x) - f(x)$, for any legitimate input. To achieve this, we first approximate the valuation of the hidden units involved in the candidate pair (i.e., nominee and delegate) by interval arithmetic based on the bounds of the normalized input. Next, we assess the impact caused by a primitive pruning operation on the layer subsequent to the one where the pruning operation is performed. In this process, we quantify the impact as an interval. After obtaining the impact on the subsequent layer, we apply forward propagation until the output layer, so that the pruning impact on the output layer can be derived.

We adopt the abstract interpretation that is commonly used in the neural network verification literature Wang et al. (2018); Singh et al. (2019); Wang (2019) to estimate the upper and lower bounds of an arbitrary hidden unit. To achieve that, we need to define the scope of a legitimate input as an interval. As input normalization is a common preprocessing practice prior to training a neural network, the value of an input feature is usually restricted to a fixed range (e.g., $[0, 1]$). With a vector of intervals provided as the input, we perform forward propagation to approximate the valuation of the involved hidden units. This propagation simulates the computation within a neural network model for a specific input. During the propagation, we leverage the interval arithmetic rules Wang (2019) to calculate the upper and lower bounds. In the actual implementation, we build a map of intervals for all hidden units of the neural network at the beginning of pruning. Because each primitive pruning operation modifies parameters at the next layer, we update the map with the latest estimation after each iteration that performs batch pruning at the same layer.
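The propagation step can be sketched with standard interval-arithmetic rules for a dense layer followed by ReLU. This is a simplified illustration of the technique, not Paoding's implementation; the function name and the example weights are ours.

```python
import numpy as np

def propagate_interval(lo, up, W, b):
    """Push input bounds [lo, up] through a dense layer and ReLU.

    Interval arithmetic for y = x @ W + b: positive weights map lower bounds
    to lower bounds, while negative weights swap the roles of lo and up.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = lo @ W_pos + up @ W_neg + b
    new_up = up @ W_pos + lo @ W_neg + b
    # ReLU is monotone, so it can be applied to the bounds directly.
    return np.maximum(new_lo, 0.0), np.maximum(new_up, 0.0)

# Normalized inputs lie in [0, 1] feature-wise.
lo0, up0 = np.zeros(2), np.ones(2)
W = np.array([[1.0, -1.0],
              [2.0,  0.5]])
b = np.zeros(2)
lo1, up1 = propagate_interval(lo0, up0, W, b)
```

Applying this layer by layer yields the map of intervals mentioned above, one sound bound per hidden unit.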

For an arbitrary hidden unit $h_k^{l+1}$ at the $(l{+}1)$-th layer, the impact caused by a primitive pruning of the pair $(h_i^l, h_j^l)$ can be formulated as Eq. 3 below, since the unit's pre-activation changes from $w_{i,k}^{l} h_i^l + w_{j,k}^{l} h_j^l$ to $(w_{i,k}^{l} + w_{j,k}^{l})\, h_j^l$:

$$\mathit{impact}(h_k^{l+1}) = w_{i,k}^{l} \cdot \left( h_j^l - h_i^l \right)$$

We can obtain the latest estimation of both $h_i^l$ and $h_j^l$ from the map of intervals that we built at the beginning. Since all weight parameters are known in the white-box setting, we can also quantify the impact as an interval.

Next, we perform another round of forward propagation to simulate the impact of the affected hidden units from the layer $l+1$ to the output layer. The value to be propagated in this round is no longer the interval of the input, but the impact of the affected hidden units as intervals. The propagated impact at the output layer can be treated as the estimated result of $f'(x) - f(x)$ for the current pruning operation. The propagated impact on the output for each pruning operation is accumulated as pruning progresses. We call it the cumulative impact on the output layer in the remainder of this section.

5.2 Sampling Criterion

Our sampling criterion is based on the insight that a small and uniformly distributed cumulative impact is less likely to drive the pruned model to generate an output different from the original one, even when the input carries an adversarial perturbation. On the contrary, a pruning impact with a variety of scales and values is considered to impair robustness, because it makes the pruned model so sensitive that its prediction may flip when it encounters a perturbation in the input. Our proposed criterion is composed of two sampling metrics.

  • One metric accounts for the scale of cumulative impact on the output layer. A greater scale means the current pruning operation generates a larger magnitude of impact on the output layer.

  • The other is based on entropy, which assesses the degree of similarity of the cumulative impact on each output unit. A greater entropy implies that the pruning impact on the output nodes shows a lower similarity.

Our sampling strategy jointly considers both metrics and favors both to be small.

Metric #1: Scale

As we can obtain the cumulative impact as a vector of intervals, we adopt the $L_1$-norm to assess the scale of the cumulative impact. Here we use $[lo_i, up_i]$ to represent the impact bounds of an arbitrary node $i$ at the output layer and let the term $\xi$ denote the $L_1$-norm of the intervals. The formula to calculate $\xi$ is shown in Eq. 4.


Metric #2: Entropy

We apply Shannon’s information entropy Shannon (1948) to measure the similarity of cumulative impact on each output unit. Our measurement of the similarity for a pair of intervals is adopted from existing literature Zhang and Fu (2006); Chen (1995); Dai et al. (2016), as defined below.

Definition 5.1 (Similarity of interval-valued data).

Given a list of intervals $A = \{A_1, A_2, \ldots, A_m\}$, where each interval $A_i = [l_i, u_i]$ is composed of its lower and upper bounds. Let $L$ be the global minimum of $A$, i.e., the minimum of the lower bounds, and similarly, $U$ be the global maximum. The similarity degree of relative bound difference between two intervals $A_i$ and $A_j$ is defined as:

$$s(A_i, A_j) = 1 - \frac{|l_i - l_j| + |u_i - u_j|}{2\,(U - L)}$$

With this definition, we say two intervals $A_i$ and $A_j$ are $\alpha$-similar if their similarity degree is at least $\alpha$, for a given similarity threshold $\alpha$.

Next, we measure the overall similarity of an interval with all other intervals in a list, which we call the density of similarity. We adopt the calculation of the density of $\alpha$-similarity for an interval from an existing study Dai et al. (2016), defined as follows.

Definition 5.2 (Density of similarity).

For an interval $A_i$ from a list of intervals $A = \{A_1, \ldots, A_m\}$, its density of $\alpha$-similarity among $A$ is measured by the probability that an arbitrary interval $A_j$ (other than $A_i$) is $\alpha$-similar with it, calculated as:

$$D_\alpha(A_i) = \frac{\left|\{\, A_j \in A \mid j \neq i,\ A_j \text{ is } \alpha\text{-similar with } A_i \,\}\right|}{m - 1}$$

With the density of similarity, we define the metric as the entropy of the cumulative impact on the output layer, written as $\eta$. The formula of calculation is presented in Eq. 7.


The similarity threshold $\alpha$ is in the range $(0, 1]$. With the same set of intervals, a higher $\alpha$ results in a lower density of similarity, making the entropy calculation more sensitive to the differences among the intervals. In our work, the cumulative impact is obtained after forward propagation through several layers and therefore might be of a large magnitude. Accordingly, we set 0.9 as the default value of $\alpha$ to maintain a variety of similarity densities rather than having them all equal to one (a comparably greater value of $\alpha$ is needed to maintain a favorable degree of distinguishability among the hidden units' outputs, rather than always producing a similarity density equal to 1).
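To make the metric concrete, here is one plausible instantiation in Python. The relative-bound-difference similarity and the binary-entropy-of-density form are our assumptions based on the cited interval-data literature, not necessarily the paper's exact equations.

```python
import numpy as np

def similarity(a, b, g_min, g_max):
    """Relative-bound-difference similarity of intervals a = [l, u], b = [l, u]
    (one common form from the interval-valued-data literature; an assumption
    for illustration)."""
    return 1.0 - (abs(a[0] - b[0]) + abs(a[1] - b[1])) / (2.0 * (g_max - g_min))

def impact_entropy(intervals, alpha=0.9):
    """Entropy over the densities of alpha-similarity: near-identical impact
    intervals give densities close to 1 and hence a small entropy."""
    g_min = min(l for l, _ in intervals)
    g_max = max(u for _, u in intervals)
    m = len(intervals)
    density = np.array([
        sum(similarity(a, b, g_min, g_max) >= alpha
            for j, b in enumerate(intervals) if j != i) / (m - 1)
        for i, a in enumerate(intervals)
    ])
    p = np.clip(density, 1e-12, 1.0 - 1e-12)
    return float(np.sum(-(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))))
```

With identical impact intervals the densities are all 1 and the entropy vanishes; dissimilar impacts push the densities toward the middle and the entropy up, which is exactly the case the criterion penalizes.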


We introduce a pair of parameters $(\gamma_1, \gamma_2)$ to specify the weights of these two metrics. Considering that the two metrics may have different magnitudes, and in particular that the scale metric is unbounded (i.e., it has no upper bound), we use a sigmoid function $\sigma(v) = 1/(1 + e^{-v})$ to normalize the two metrics in the final criterion. Due to the concave and monotonic nature of the sigmoid logistic function for values greater than zero, it outputs bounded results within $(0.5, 1)$ that preserve the order of the input values. On the whole, the definition of our sampling criterion is given in Eq. 8 below, where $\xi$ and $\eta$ denote the scale and entropy metrics, respectively:

$$e = \gamma_1 \cdot \sigma(\xi) + \gamma_2 \cdot \sigma(\eta)$$

We use the term energy to represent our sampling criterion to echo the simulated annealing algorithm used in our guided stochastic sampling strategy, which will be presented in the next subsection.
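A minimal sketch of the criterion, assuming the weighted-sum-of-sigmoids form described above (the default weight values are placeholders, not the paper's settings):

```python
import math

def energy(scale, entropy, gamma1=0.5, gamma2=0.5):
    """Sampling criterion: weighted sum of the sigmoid-normalized scale and
    entropy metrics. Both inputs are non-negative, so each sigmoid term lies
    in (0.5, 1) and the relative order of candidates is preserved."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    return gamma1 * sigmoid(scale) + gamma2 * sigmoid(entropy)
```

A lower energy marks a candidate whose pruning impact is both small and evenly spread, which is the kind of candidate the sampler prefers to accept.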

5.3 Guided Stochastic Sampling

Since the sampling criterion can reflect the impact of the unit pruning on the model robustness, a naive way is to calculate the energy $e$ (Eq. 8) for every pair and prune the unit with the least value. However, this is too expensive because each calculation requires a forward propagation in a fully connected manner. To address this, we use a stochastic sampling guided by the energy-based heuristic to identify the candidates to be pruned. Our method is presented in Algorithm 1, and we discuss it in the remainder of this section.

0:  An $n$-layer neural network model to be pruned ($f$), cumulative impact of all previous pruning ($\Delta$), weights used in the sampling criterion ($\gamma_1$, $\gamma_2$), batch size ($m$), current temperature ($t$)
0:  A pruned deep learning model $f'$, updated cumulative impact of pruning $\Delta'$, the list of hidden units pruned $H$, the updated temperature $t'$
1:  for layer $l$ in all hidden layers do
2:     load parameters of the current layer
3:     build a saliency matrix $S$ for the unit pairs
4:     sort the saliency matrix $S$ in ascending order
5:     set $e_{last} \leftarrow$ null
6:     for hidden unit pair $(h_i, h_j)$ in the first $m$ values in $S$ do
7:        simulate a pruning of $(h_i, h_j)$ and calculate the impact on the output layer
8:        calculate the temporary cumulative impact $\Delta_{tmp}$
9:        calculate $\xi$ and $\eta$ with $\Delta_{tmp}$
10:       calculate the sampling criterion $e$
11:       if ($e_{last}$ is not null) and ($e > e_{last}$) then
12:          calculate acceptance rate $r$ based on temperature $t$ and $e$
13:          generate a random probability $p$
14:          if ($p > r$) then
15:             reject the current $(h_i, h_j)$ and go to the next one
16:          end if
17:       end if
18:       accept and perform pruning of $(h_i, h_j)$
19:       add $(h_i, h_j)$ into a pruning list $H$
20:       update $e_{last} \leftarrow e$
21:    end for
22:    update $\Delta$ with the pruning
23:    update temperature $t$
24: end for
Algorithm 1 Supervised pruning with a stochastic heuristic

We exploit the idea of simulated annealing to implement our sampling strategy through the lens of stochastic optimization. In particular, our method traverses the hidden unit pairs from the prioritized candidate list one by one. In the beginning, it accepts the first candidate by default and records its energy as the evaluation of the current state. Upon receiving a new pruning candidate, the method calculates the energy of that candidate, compares it with the current state, and decides whether to prune it in the current iteration according to an acceptance rate calculated from a temperature variable. The temperature variable is adopted from the thermodynamic model: its descent reflects the progress of the optimization problem, so as the temperature decreases, our method becomes less likely to accept a pruning candidate with an energy greater than the current state's. We define the temperature as the portion of the pruning task remaining, which equals 1 at first and approaches 0 when the pruning target is reached. Given the temperature of the current iteration and the energy of the last accepted candidate, we can obtain the acceptance rate of the next candidate (line 12 of Algorithm 1) once we calculate its energy (line 10) according to Eq. 8. The formula of the acceptance rate is provided as follows.


As Eq. 9 shows, our method automatically accepts a candidate if its energy is lower than that of the current state; otherwise, a random probability is generated and tested against the acceptance rate to determine whether we accept or discard the candidate. This procedure corresponds to lines 11-20 of Algorithm 1.
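The acceptance test described above can be sketched with the standard Metropolis rule (our reconstruction from the prose; the paper's exact Eq. 9 may differ): accept automatically when the new energy is lower, otherwise accept with a probability that shrinks as the energy gap grows and the temperature drops.

```python
import math
import random

def acceptance_rate(e_new, e_current, temperature):
    """Metropolis-style acceptance: 1 if the candidate improves (lower energy),
    otherwise exp(-(e_new - e_current) / temperature)."""
    if e_new <= e_current:
        return 1.0
    return math.exp(-(e_new - e_current) / temperature)

def accept(e_new, e_current, temperature, rng=random.random):
    """Draw a random probability and test it against the acceptance rate."""
    return rng() <= acceptance_rate(e_new, e_current, temperature)

# Early on (temperature near 1) worse candidates are often accepted;
# near the end (temperature near 0) they are almost always rejected.
early = acceptance_rate(0.8, 0.5, temperature=1.0)
late = acceptance_rate(0.8, 0.5, temperature=0.05)
```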

This stochastic process has two clear advantages. First, randomized sampling is less expensive than computing the energy of all candidates and sorting them. Second, the simulated annealing process allows us to probabilistically accept a candidate that does not have the lowest energy at the current step. This prevents our pruning method from getting stuck at a local optimum and ultimately helps achieve our objective.

6 Evaluation

This section presents the evaluation of our pruning method. We aim to answer the following three research questions.

  • RQ1: Robustness Preservation. How effective is our pruning method in terms of robustness preservation? Does it cause significant decay in model accuracy? Does our method generalize to diverse neural network models?

  • RQ2: Pruning Efficiency. Can our method complete the pruning within an acceptable time?

  • RQ3: Benchmarking. Can our method outperform one-shot strategies in terms of robustness preservation?

6.1 Implementation and Experiment Settings

We implement our pruning method in a toolkit named Paoding using Python v3.8 and release it as a public package on the Python Package Index (PyPI). All neural network models are trained, pruned, and evaluated with TensorFlow v2.3.0. Our toolkit accepts any legitimate format of neural network model trained by TensorFlow. It also allows the user to configure the pruning target and the number of units pruned per epoch. Given a model as input, it automatically identifies the fully connected hidden layers, prunes hidden units from those layers, and stops once the pruning has reached the configured threshold (e.g., 80% of hidden units have been cut off). Our source code is made available online to facilitate future research on similar topics.
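The stopping logic described above can be sketched in plain Python (a simplification of the toolkit's configurable pruning target and per-epoch step; the function and parameter names are ours, not Paoding's documented API):

```python
def pruning_schedule(total_units, target_ratio, per_epoch_ratio):
    """Progressive schedule sketch: prune a fixed fraction of units each
    epoch and stop once the overall target (e.g., 80%) is reached."""
    pruned = 0
    epochs = 0
    target = int(total_units * target_ratio)
    step = max(1, int(total_units * per_epoch_ratio))
    while pruned < target:
        pruned = min(target, pruned + step)  # never overshoot the target
        epochs += 1
    return pruned, epochs

# Pruning 80% of 1000 units at 5% per epoch takes 16 epochs.
pruned, epochs = pruning_schedule(total_units=1000, target_ratio=0.8,
                                  per_epoch_ratio=0.05)
```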

To evaluate our method on diverse mainstream neural network applications, we select four representative datasets ranging from structured tabular data to images, with labels for both binary and multi-class classification. For each dataset, we select a unique neural network architecture to fit the classification task, referring to the most popular example models on Kaggle (accessed in July 2021).

We have trained four models with different architectures, covering both purely fully connected MLPs and CNNs. All models are trained with a 0.001 learning rate for 20 epochs. The number of fully connected layers and hidden units per layer varies among these models. Besides that, we also test our method with models that use different activation functions: for each of the MNIST and CIFAR-10 datasets, we train two models, one with ReLU and one with sigmoid. This diversification of models is meant to evaluate the generalizability of our method (RQ1). The details of the four datasets and six pre-trained models are listed in Table 1.

No. | Dataset | Type & Input Size | Model Architecture | Activation
1 | Credit Card Fraud Detection ULB Machine Learning Group (2018) | Tabular data (30 columns) | 4-layer MLP | ReLU
2 | Chest X-ray Images (Pneumonia) Mooney (2018) | Colored images (various sizes) | 19-layer CNN (w. 2 FC layers) | ReLU
3 | MNIST Handwritten Digits LeCun and Cortes (2010) | Greyscale images (28×28) | 5-layer MLP | ReLU
4 | MNIST Handwritten Digits LeCun and Cortes (2010) | Greyscale images (28×28) | 5-layer MLP | sigmoid
5 | CIFAR-10 Images Krizhevsky et al. (2009) | Colored images (32×32×3) | 9-layer CNN (w. 3 FC layers) | ReLU
6 | CIFAR-10 Images Krizhevsky et al. (2009) | Colored images (32×32×3) | 9-layer CNN (w. 3 FC layers) | sigmoid
Table 1: Datasets and models used in evaluation

We empirically select the values of the two weight parameters in the sampling criterion through a tuning process (this parameter tuning should not be confused with the fine-tuning of model weights during training). We observe that the pruning of multi-class models like those for MNIST and CIFAR-10 is more sensitive to these values than that of binary classification models. We also find that one weighting suits the models that use ReLU as activation, while a different weighting suits the sigmoid models. The reason for this difference lies in the importance of the norm of the pruning impact on the output layer. Because the sigmoid activation function always converges to a fixed interval, different sampling decisions make little difference in the norm of the cumulative pruning impact at the output layer. Thus, we give more weight to the entropy when pruning sigmoid models. Our experiments run on an Ubuntu 20.04 LTS machine with an 8-core Intel CPU (2.9GHz Core(TM) i7-10700F), 64GB RAM, and an NVIDIA GeForce RTX 3060 Ti GPU.

Figure 4: Robustness decay of six models when applying our supervised pruning method (up to 80% pruning)
Figure 5: Accuracy decay of six models when applying our supervised pruning method (up to 80% pruning)

6.2 RQ1: Robustness Preservation

Our first set of experiments investigates the robustness preservation of our pruning method. We apply the method to all six models. The metric of robustness preservation is based on Definition 3.2: our evaluation counts the adversarial inputs that are consistently and correctly classified by the original and the pruned model, and compares these two counts to determine the degree of robustness preservation. In addition, we notice that models trained on CIFAR-10 tend to be more sensitive to adversarial inputs than their counterparts trained on simpler datasets like MNIST, as revealed by previous studies Bastani et al. (2016); Madry et al. (2018). Therefore, for the models trained on it, i.e., models #5 and #6, we also evaluate the preservation of the top-k prediction results, another metric commonly adopted in machine learning evaluation Krizhevsky et al. (2012); Chatfield et al. (2014); He and Sun (2015).
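The counting just described can be sketched as follows (our reading of Definition 3.2, which is not reproduced in this section; names are ours): an input counts as robust when its adversarial variant is classified both consistently with the benign input and correctly, and preservation is the ratio of the pruned model's robust count to the original model's.

```python
def robust_count(preds_benign, preds_adv, labels):
    """Number of inputs whose adversarial variant is classified consistently
    with the benign input and correctly (assumed reading of Definition 3.2)."""
    return sum(1 for pb, pa, y in zip(preds_benign, preds_adv, labels)
               if pb == pa == y)

def robustness_preservation(orig_robust, pruned_robust):
    # Robust instances on the pruned model relative to the original model.
    return pruned_robust / orig_robust if orig_robust else 1.0

labels        = [0, 1, 2, 1]
orig_benign   = [0, 1, 2, 1]
orig_adv      = [0, 1, 2, 0]   # original model: 3 robust instances
pruned_benign = [0, 1, 2, 1]
pruned_adv    = [0, 1, 0, 0]   # pruned model: 2 robust instances

preserved = robustness_preservation(
    robust_count(orig_benign, orig_adv, labels),
    robust_count(pruned_benign, pruned_adv, labels))
```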

As previous studies have shown that pruning may cause decay of classification accuracy Srinivas and Babu (2015); Liu et al. (2019), we also test the accuracy of the pruned models to explore the impact of our method on it. This is crucial because poor accuracy would undermine the validity of robustness, which only requires the model not to produce inconsistent outputs for a given benign input and its adversarial variant, regardless of whether the benign input is correctly predicted. With the overall pruning target set to 80% of hidden units, our method prunes the same proportion of units per layer at each iteration. After each pruning epoch, we evaluate the robustness and accuracy of the models.

Fig. 4 shows the robustness preservation of our method on all six models listed in Table 1, against untargeted FGSM adversaries with different epsilon options; this parameter controls the magnitude of the variation between the adversarial and benign samples. We refer to the literature Bastani et al. (2016); Goodfellow et al. (2015); TensorFlow (2021) to find proper values for our experiment: a larger epsilon for models #1 and #2, whose classification tasks are comparatively simple, and two smaller values for models #3-#6 to examine their robustness against perturbations of different sizes. We perform 15 rounds of experiments on all six models and plot the range (the shaded area) and the median (the curve) of the results in the figure. We also track the change of accuracy for each model during the pruning process and present the results in Fig. 5. In general, our method performs well on all six models. On the binary classification models, i.e., models #1 and #2, our method imposes almost no impact on robustness and accuracy, even when 80% of units are pruned. For the models with more complex classification tasks, i.e., models #3-#6, our method still achieves favorable results: all four preserve at least 50% of their original robustness even after 60% of hidden units are pruned. The change of classification accuracy generally shares the same trend as robustness (see Fig. 5); all models still preserve 50% of their original accuracy when 70% of their units have been pruned.
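FGSM itself is easy to state: perturb each input feature by epsilon in the direction of the sign of the loss gradient. A minimal numpy sketch on a toy linear logistic model (illustrative only; the experiments use TensorFlow models, and the toy weights here are ours):

```python
import numpy as np

def fgsm(x, grad, eps):
    """Untargeted FGSM: step of size eps along the sign of the input gradient."""
    return x + eps * np.sign(grad)

# Toy logistic model p = sigmoid(w . x); the gradient of the cross-entropy
# loss w.r.t. x is (p - y) * w (standard derivation).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad_x = (p - y) * w
x_adv = fgsm(x, grad_x, eps=0.1)  # each feature moves by at most eps
```

By construction the perturbation increases the loss: for the true label 1, the adversarial input's score `w @ x_adv` is lower than the benign one's.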

In some cases, e.g., model #6 with sigmoid, we observe that the robustness slightly grows as the number of pruned units increases. This is because these models are not trained with robustness preservation as part of their objective functions, so our pruning, guided by metrics that incorporate robustness preservation, may enhance their robustness. This further demonstrates the effectiveness of our method's robustness preservation.

Our first set of experiments answers RQ1. In summary, our pruning method shows favorable robustness preservation against adversarial perturbations. Although our method is not designed to preserve model accuracy, it does not exhibit any drastic accuracy decay as the pruning progresses. Our method also generalizes to many types of models, as shown by the evaluation outcomes on all six representative models.

Model | Number of parameters | Batch size per layer | Elapsed time (×10² seconds)
#1 (Credit card fraud) | 6,145 | 3.13% | 0.350
#2 (Chest x-ray) | 131,329 | 1.56% | 1.840
#3 (MNIST, ReLU) | 125,898 | 1.56% | 4.354
#4 (MNIST, sigmoid) | 125,898 | 1.56% | 4.371
#5 (CIFAR-10, ReLU) | 140,106 | 1.56% | 2.653
#6 (CIFAR-10, sigmoid) | 140,106 | 1.56% | 2.693
Table 2: Time consumed in pruning 80% of fully connected parameters (average of 10 executions)

6.3 RQ2: Pruning Efficiency

To explore the efficiency of our method, we run it on the six models with a "worst-case setting". Specifically, we examine the case of pruning a large proportion (80%) of the entire model at the slowest pace (one or two units per layer per epoch). This setting disadvantages our method; applied in practice, it could be much more efficient.

Table 2 details the time consumed by our method on each of the six models. In general, our method can prune a model within an acceptable time. For the multi-class prediction models, the pruning process completes within 8 minutes, while for the binary classification models it completes much faster.

Figure 6: Improvement of our method against saliency-based one-shot pruning on six models

6.4 RQ3: Benchmarking

Our second set of experiments explores whether our method can outperform existing one-shot data-free pruning methods. We compare its performance with that of saliency-based one-shot pruning, a commonly used approach to data-free neural network pruning Srinivas and Babu (2015); Liu et al. (2019). Note that our evaluation focuses on the comparison of data-free pruning techniques; we refer the reader to the existing study Han et al. (2015) for a comparison between data-driven and data-free pruning techniques.
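For context, the baseline can be sketched as follows (our condensed reading of Srinivas and Babu's data-free approach; the exact saliency form and merging step here are assumptions): compute a saliency for every unit pair from the weights alone, then remove the lowest-saliency unit and fold its outgoing weights into its most similar peer, with no data involved.

```python
import numpy as np

def one_shot_saliency_prune(W_in, W_out, num_prune):
    """Data-free one-shot pruning sketch: repeatedly find the unit pair (i, j)
    with the smallest saliency ||w_i - w_j||^2 * ||a_j||^2, fold unit j's
    outgoing weights into unit i, and delete j (illustrative, not the exact
    published algorithm)."""
    W_in = W_in.copy(); W_out = W_out.copy()
    for _ in range(num_prune):
        n = W_in.shape[1]
        best, best_s = None, np.inf
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                s = np.sum((W_in[:, i] - W_in[:, j]) ** 2) * np.sum(W_out[j] ** 2)
                if s < best_s:
                    best, best_s = (i, j), s
        i, j = best
        W_out[i] += W_out[j]               # compensate with the similar unit
        W_in = np.delete(W_in, j, axis=1)  # remove unit j's incoming weights
        W_out = np.delete(W_out, j, axis=0)  # and its outgoing weights
    return W_in, W_out

W_in = np.array([[1.0, 1.01, -2.0], [0.5, 0.49, 3.0]])  # 2 inputs -> 3 units
W_out = np.array([[0.2], [0.3], [1.5]])                  # 3 units -> 1 output
W_in2, W_out2 = one_shot_saliency_prune(W_in, W_out, num_prune=1)
```

The two nearly identical units are merged, so the pruned layer keeps two units whose outgoing weights absorb the removed unit's contribution.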

We reuse the same six models and take saliency-based one-shot pruning as the baseline. The improvement in robustness is calculated as the growth in the number of robust instances observed when running our method relative to the count observed when running the baseline. The improvement in accuracy is the growth in accuracy of the model pruned by our method relative to the one pruned by the baseline.
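Both improvement figures reduce to relative growth (straightforward, but stated here for precision; the function name and sample values are ours):

```python
def relative_improvement(ours, baseline):
    """Growth of our method's figure (robust-instance count or accuracy)
    relative to the baseline's figure."""
    return (ours - baseline) / baseline

robust_gain = relative_improvement(ours=600, baseline=400)      # +50%
accuracy_gain = relative_improvement(ours=0.78, baseline=0.60)  # +30%
```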

The experimental results show our method overall outperforms the saliency-based one-shot pruning. In the experiments on the MNIST and CIFAR-10 models (i.e., models #3-#6) with ReLU activation, as shown in Fig. 6, our method achieves a significant improvement in both robustness preservation (up to 50%) and accuracy (up to 30%). Compared with the binary classification models (i.e., models #1 and #2), our method achieves more significant improvement on the models with complicated structures that are sensitive to adversarial inputs. Five of the six models can be pruned with higher accuracy and robustness preservation than the baseline. The exception is model #4, where our method fails to outperform the baseline after 40% of units have been pruned, although it still preserves robustness well during the pruning process, as shown in Fig. 4 (the subplot at row 2, column 3).

Table 3: Demonstration of the robustness of original and pruned models (misclassification results are shown in bold italic font)

To demonstrate our improvement, we randomly select two samples from each of the MNIST and CIFAR-10 datasets, apply the FGSM attack to them, and test them with our method and the baseline method. The classification outcomes of those adversarial inputs, compared with the original labels, are depicted in Table 3. The models pruned by our method show better robustness against adversarial perturbations than those pruned by the baseline method: all four instances are correctly classified after 25% pruning, and three out of four are still correctly classified even after 50% pruning.

We also observe that the change in a model's robustness preservation depends on the utilization of its hidden units. In particular, the improvement of our pruning starts declining after 38% and 25% of units are pruned in the two CIFAR-10 models, but this phenomenon does not appear in the remaining models. Models #1-#4 are trained as fully connected MLPs, while the CIFAR-10 models are trained as CNNs with only a small portion of their units fully connected. The former models therefore contain more computationally negligible parameters than the CIFAR-10 models, so pruning the former has less impact on robustness than pruning the latter.

7 Threats to Validity

Our work focuses on robustness-preserving data-free pruning, which has not been well studied by the research community compared with pruning that includes a retraining option. To the best of our knowledge, this is the first work that uses a stochastic approach to address the pruning problem. However, it carries several limitations that should be addressed in future work.

First, our method is primarily designed for the fully connected components of a neural network model. Fully connected layers are fundamental components of deep learning and have been increasingly used in state-of-the-art designs such as MLP-Mixer Tolstikhin et al. (2021). Nevertheless, our method may be limited when applied to models in which convolutional and related layers (e.g., pooling and normalization) play a major role. More exploration is needed on how to effectively prune diverse models that are not built on a conventional fully connected architecture, such as transformer models.

Second, our pruning relies heavily on interval arithmetic to approximate the valuation of hidden units and the pruning impact, so the precision of those intervals determines both the effectiveness and correctness of our method. When it is applied to a ReLU-only fully connected multilayer perceptron, a magnitude explosion issue may arise during our evaluation of the propagated impact on the output layer. Besides, pruning a model that mixes convergent (e.g., sigmoid) and non-convergent (e.g., ReLU) activations may be challenging for our method, because a convergent activation can shrink the quantitative differences from the previous assessment and output similar results for different candidates, which may reduce the effectiveness of our sampling criterion.
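The magnitude issue is easy to reproduce in isolation (a toy sketch of scalar interval arithmetic, our own illustration): pushing an interval through successive layers whose weights exceed 1 multiplies its width at every step, so widths grow exponentially with depth under ReLU.

```python
def interval_affine(lo, hi, w, b):
    """Interval bounds for y = w*x + b on a scalar: which endpoint gives the
    min/max depends on the sign of w."""
    if w >= 0:
        return w * lo + b, w * hi + b
    return w * hi + b, w * lo + b

def interval_relu(lo, hi):
    return max(0.0, lo), max(0.0, hi)

# Push the unit interval [0, 1] through 10 ReLU layers with weight 2:
lo, hi = 0.0, 1.0
for _ in range(10):
    lo, hi = interval_relu(*interval_affine(lo, hi, w=2.0, b=0.0))
width = hi - lo  # 2**10: the interval width explodes with depth
```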

We share our insights for future work to mitigate these limitations in two aspects. First, data-free pruning could be extended to more layer types, especially over-parameterized ones like the 2D convolutional layer; additional pruning criteria may address the first limitation. Second, a more precise interval approximation or refinement technique could be applied to optimize the pruning criteria, which may relieve the magnitude explosion issue of the propagated impact on the output layer.

8 Related Work

Unstructured Pruning & Structured Pruning

Existing pruning approaches can be classified into two classes: unstructured pruning and structured pruning Liu et al. (2019). Unstructured pruning, also known as individual weight pruning, cuts one specific (redundant) parameter off the target neural network model at a time. It dates back to the early era when network pruning was first introduced and covers well-known representative studies such as Optimal Brain Damage LeCun et al. (1990) and Optimal Brain Surgeon Hassibi and Stork (1993), which typically prune weight parameters based on the Hessian of the loss function. Other studies in this category include Han et al. (2015); Molchanov et al. (2017a).

Structured pruning prunes a neural network model at the hidden unit, channel, or even layer level. Hu et al. Hu et al. (2016) proposed a channel pruning technique based on the average percentage of zero outputs of each channel, while Li et al. Li et al. (2017) presented a similar channel pruning based on the filter weight norm. Besides pruning the channel or layer with the smallest magnitude, another common approach, discussed in Yu et al. (2018); Molchanov et al. (2017b), prunes the hidden unit, channel, or layer with the least influence on the final loss. He et al. He et al. (2017) and Luo et al. Luo et al. (2017) proposed channel pruning based on the consequent feature reconstruction error at the next layer. Srinivas and Babu Srinivas and Babu (2015) introduced a data-free parameter pruning method based on saliency, which prunes hidden units independently of the training process and, as a result, does not need access to training data. The latest work also includes Suau et al. (2020), which considers the inter-correlation between channels in the same layer, and Chin et al. Chin et al. (2018), which proposes a layer-by-layer compensated filter pruning algorithm.

In-training Pruning & Post-training Pruning

On the other hand, depending on when the pruning of parameters is performed, existing pruning strategies can be categorized as either in-training pruning or post-training pruning (also known as data-free pruning) Tanaka et al. (2020). Apart from a few papers that discuss post-training pruning Ashouri et al. (2019); Srinivas and Babu (2015), most existing studies, such as Hu et al. (2016); Renda et al. (2020); LeCun et al. (1990); Hassibi and Stork (1993); Lee et al. (2018), are implemented as in-training pruning.

One representative in-training pruning approach named SNIP Lee et al. (2018) achieves single-shot pruning based on connection sensitivity and has been exhaustively compared with existing techniques. Tanaka et al. Tanaka et al. (2020) investigated data-agnostic in-training pruning, proposing a saliency-guided iterative approach to address the layer-collapse issue. In-training pruning offers the chance to fine-tune or even retrain the pruned network with the original dataset, and is therefore capable of pruning a larger portion of the neural network without a severe impact on performance (e.g., accuracy and loss). A recent empirical study by Liebenwein et al. Liebenwein et al. (2021) reveals that robustness can be well preserved by mainstream in-training pruning. Even so, post-training pruning such as Srinivas and Babu (2015); Ashouri et al. (2019) remains valuable for reducing the size of a pre-trained, ready-to-use neural network model from a user's perspective. Its effectiveness beyond accuracy, such as robustness preservation, has yet to be well studied.

9 Conclusions

In this work, we propose Paoding, a supervised pruning method that achieves data-free neural network pruning with robustness preservation. Our work aims to enrich the application scenarios of neural network pruning, as a supplement to state-of-the-art pruning techniques that require data for retraining and fine-tuning. With the sampling criterion we propose, we take advantage of simulated annealing to address data-free pruning as a stochastic optimization problem. Through a series of experiments, we demonstrate that our method can preserve robustness while substantially reducing the size of a neural network model, and, most importantly, without a significant compromise in accuracy. It also generalizes to diverse types of models and datasets, including credit card fraud prediction and pneumonia diagnosis from chest x-ray images, two typical use cases of AI technologies for real-world problems. We remark that model pruning in the data-free context is a practical problem, and more future studies are desirable to cope with the challenges we report in this work.


  • A. H. Ashouri, T. S. Abdelrahman, and A. Dos Remedios (2019) Retraining-free methods for fast on-the-fly pruning of convolutional neural networks. Neurocomputing 370, pp. 56–69. Cited by: §8, §8.
  • J. Ba and R. Caruana (2014) Do deep nets really need to be deep?. In Advances in Neural Information Processing Systems, pp. 2654–2662. Cited by: §2.1.
  • O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi (2016) Measuring neural net robustness with constraints. In Advances in Neural Information Processing Systems, pp. 2613–2621. Cited by: §1, §3.2, §6.2, §6.2.
  • D. Blalock, J. J. Gonzalez Ortiz, J. Frankle, and J. Guttag (2020) What is the state of neural network pruning?. Proceedings of machine learning and systems 2, pp. 129–146. Cited by: §1.
  • K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman (2014) Return of the devil in the details: delving deep into convolutional nets. In British Machine Vision Conference, BMVC, Cited by: §6.2.
  • S. Chen (1995) Measures of similarity between vague sets. Fuzzy sets and Systems 74 (2), pp. 217–223. Cited by: §5.2.
  • S. Chib and E. Greenberg (1995) Understanding the metropolis-hastings algorithm. The american statistician 49 (4), pp. 327–335. Cited by: §2.2.
  • T. Chin, C. Zhang, and D. Marculescu (2018) Layer-compensated pruning for resource-constrained convolutional neural networks. arXiv, pp. arXiv–1810. Cited by: §2.1, §8.
  • Y. Choi, M. El-Khamy, and J. Lee (2020) Universal deep neural network compression. IEEE Journal of Selected Topics in Signal Processing. Cited by: §2.1.
  • J. Dai, H. Hu, G. Zheng, Q. Hu, H. Han, and H. Shi (2016) Attribute reduction in interval-valued information systems based on information entropies. Frontiers of Information Technology & Electronic Engineering 17 (9), pp. 919–928. Cited by: §5.2, §5.2.
  • C. Denninnart, J. Gentry, and M. A. Salehi (2019) Improving robustness of heterogeneous serverless computing systems via probabilistic task pruning. In 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 6–15. Cited by: §2.1.
  • E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus (2014) Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pp. 1269–1277. Cited by: §2.1.
  • T. Dinh, B. Wang, A. Bertozzi, S. Osher, and J. Xin (2020) Sparsity meets robustness: channel pruning for the feynman-kac formalism principled robust deep neural nets. In International Conference on Machine Learning, Optimization, and Data Science, pp. 362–381. Cited by: §2.1.
  • T. Gale, E. Elsen, and S. Hooker (2019) The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574. Cited by: §1.
  • I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT Press. Note: Cited by: §1.
  • I. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In 3rd International Conference on Learning Representations, ICLR, Cited by: §1, §3.1, §3.1, §6.2.
  • Q. Guo, S. Chen, X. Xie, L. Ma, Q. Hu, H. Liu, Y. Liu, J. Zhao, and X. Li (2019) An empirical study towards characterizing deep learning development and deployment across different frameworks and platforms. In 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 810–822. Cited by: §1.
  • Q. Guo, F. Juefei-Xu, X. Xie, L. Ma, J. Wang, B. Yu, W. Feng, and Y. Liu (2020) Watch out! motion is blurring the vision of your deep neural networks. Advances in Neural Information Processing Systems 33, pp. 975–985. Cited by: §3.1.
  • S. Han, J. Pool, J. Tran, and W. Dally (2015) Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pp. 1135–1143. Cited by: §2.1, §6.4, §8.
  • B. Hassibi and D. G. Stork (1993) Second order derivatives for network pruning: optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 164–171. Cited by: §8, §8.
  • T. Hastie, R. Tibshirani, and J. Friedman (2009) The elements of statistical learning: data mining, inference, and prediction. Springer Science & Business Media. Cited by: §2.1.
  • K. He and J. Sun (2015) Convolutional neural networks at constrained time cost. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5353–5360. Cited by: §6.2.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. Cited by: §1.
  • Y. He, X. Zhang, and J. Sun (2017) Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397. Cited by: §2.1, §8.
  • D. Hendrycks and T. G. Dietterich (2019) Benchmarking neural network robustness to common corruptions and perturbations. In 7th International Conference on Learning Representations, ICLR, Cited by: §3.1.
  • K. Hornik, M. Stinchcombe, and H. White (1989) Multilayer feedforward networks are universal approximators. Neural networks 2 (5), pp. 359–366. Cited by: §1, §8.
  • H. Hu, R. Peng, Y. Tai, and C. Tang (2016) Network trimming: a data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250. Cited by: §2.1, §8, §8.
  • S. H. Huang, N. Papernot, I. J. Goodfellow, Y. Duan, and P. Abbeel (2017) Adversarial attacks on neural network policies. In 5th International Conference on Learning Representations, ICLR, Cited by: §3.1.
  • N. Inkawhich (2017) Adversarial example generation. Note: (Accessed 8 February 2022) External Links: Link Cited by: §3.1.
  • A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: Table 1.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25, pp. 1097–1105. Cited by: §1, §6.2.
  • V. Le, C. Sun, and Z. Su (2015) Finding deep compiler bugs via guided stochastic program mutation. ACM SIGPLAN Notices 50 (10), pp. 386–399. Cited by: §2.2.
  • Y. LeCun and C. Cortes (2010) MNIST handwritten digit database. Note: External Links: Link Cited by: Table 1.
  • Y. LeCun, J. S. Denker, and S. A. Solla (1990) Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598–605. Cited by: §1, §2.1, §8, §8.
  • N. Lee, T. Ajanthan, and P. Torr (2018) SNIP: single-shot network pruning based on connection sensitivity. In 6th International Conference on Learning Representations, ICLR, Cited by: §8, §8.
  • H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf (2017) Pruning filters for efficient convnets. In 5th International Conference on Learning Representations, ICLR, Cited by: §2.1, §8.
  • Y. Li, J. Hua, H. Wang, C. Chen, and Y. Liu (2021) Deeppayload: black-box backdoor attack on deep learning models through neural payload injection. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pp. 263–274. Cited by: §1.
  • L. Liebenwein, C. Baykal, B. Carter, D. Gifford, and D. Rus (2021) Lost in pruning: the effects of pruning neural networks beyond test accuracy. Proceedings of Machine Learning and Systems 3. Cited by: §2.1, §8.
  • B. Littlewood (1981) Stochastic reliability-growth: a model for fault-removal in computer-programs and hardware-designs. IEEE Transactions on Reliability 30 (4), pp. 313–320. Cited by: §2.2.
  • Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell (2019) Rethinking the value of network pruning. Cited by: §2.1, §6.2, §6.4, §8.
  • J. Luo, J. Wu, and W. Lin (2017) Thinet: a filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5058–5066. Cited by: §1, §2.1, §8.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In 6th International Conference on Learning Representations, ICLR, Cited by: §1, §3.1, §6.2.
  • D. Molchanov, A. Ashukha, and D. Vetrov (2017a) Variational dropout sparsifies deep neural networks. pp. 2498–2507. Cited by: §8.
  • P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz (2017b) Pruning convolutional neural networks for resource efficient inference. In 5th International Conference on Learning Representations, ICLR, Cited by: §1, §8.
  • P. Mooney (2018) Chest X-Ray Images (Pneumonia). Dataset. External Links: Link Cited by: Table 1.
  • S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3.1.
  • N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar (2017) Semi-supervised knowledge transfer for deep learning from private training data. In 5th International Conference on Learning Representations, ICLR, Cited by: §1.
  • A. Renda, J. Frankle, and M. Carbin (2020) Comparing rewinding and fine-tuning in neural network pruning. In 8th International Conference on Learning Representations, ICLR, Cited by: §8.
  • V. B. S., A. Baburaj, and R. V. Babu (2019) Regularizer to mitigate gradient masking effect during single-step adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 66–73. Cited by: §3.1.
  • C. E. Shannon (1948) A mathematical theory of communication. The Bell System Technical Journal 27 (3), pp. 379–423. Cited by: §5.2.
  • G. Singh, T. Gehr, M. Püschel, and M. Vechev (2019) An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages 3 (POPL), pp. 1–30. Cited by: §5.1.
  • S. Srinivas and R. V. Babu (2015) Data-free parameter pruning for deep neural networks. In Proceedings of the British Machine Vision Conference 2015, pp. 31.1–31.12. Cited by: §2.1, §4.1, §6.2, §6.4, §8, §8, §8.
  • T. Su, G. Meng, Y. Chen, K. Wu, W. Yang, Y. Yao, G. Pu, Y. Liu, and Z. Su (2017) Guided, stochastic model-based gui testing of android apps. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (FSE), pp. 245–256. Cited by: §2.2.
  • X. Suau, N. Apostoloff, et al. (2020) Filter distillation for network compression. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 3129–3138. Cited by: §1, §2.1, §8.
  • C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna (2016) Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826. Cited by: §1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2014) Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR, Cited by: §3.1.
  • Y. Taigman, M. Yang, M. Ranzato, and L. Wolf (2014) DeepFace: closing the gap to human-level performance in face verification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1701–1708. Cited by: §1.
  • H. Tanaka, D. Kunin, D. L. Yamins, and S. Ganguli (2020) Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in Neural Information Processing Systems 33, pp. 6377–6389. Cited by: §8.
  • Y. Tang, Y. Wang, Y. Xu, D. Tao, C. Xu, C. Xu, and C. Xu (2020) SCOP: scientific control for reliable neural network pruning. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin (Eds.), Cited by: §2.1.
  • TensorFlow (2021) Pruning in Keras example. Note: (Accessed 7 February 2022) External Links: Link Cited by: §1, §2.1.
  • TensorFlow (2021) Adversarial example using FGSM. Note: (Accessed 1 February 2022) External Links: Link Cited by: §3.1, §6.2.
  • The European Parliament (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (text with EEA relevance). Official Journal of the European Union. Cited by: §1.
  • I. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. P. Steiner, D. Keysers, J. Uszkoreit, M. Lucic, and A. Dosovitskiy (2021) MLP-mixer: an all-MLP architecture for vision. In Advances in Neural Information Processing Systems, A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan (Eds.), Cited by: §1, §7.
  • ULB Machine Learning Group (2018) Credit Card Fraud Detection. Dataset. External Links: Link Cited by: Table 1.
  • P. J. Van Laarhoven and E. H. Aarts (1987) Simulated annealing. In Simulated annealing: Theory and applications, pp. 7–15. Cited by: §1, §2.2.
  • J. Wang (2019) Formal methods in computer science. CRC Press. Cited by: §5.1.
  • S. Wang, K. Pei, J. Whitehouse, J. Yang, and S. Jana (2018) Efficient formal safety analysis of neural networks. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (Eds.), pp. 6367–6377. Cited by: §5.1.
  • Y. Wang, X. Zhang, L. Xie, J. Zhou, H. Su, B. Zhang, and X. Hu (2020) Pruning from scratch. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, pp. 12273–12280. Cited by: §1.
  • W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li (2016) Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 2074–2082. Cited by: §2.1.
  • J. A. Whittaker (1997) Stochastic software testing. Annals of Software Engineering 4 (1), pp. 115–131. Cited by: §2.2.
  • N. Yoshioka, J. H. Husen, H. T. Tun, Z. Chen, H. Washizaki, and Y. Fukazawa (2021) Landscape of requirements engineering for machine learning-based ai systems. In 2021 28th Asia-Pacific Software Engineering Conference Workshops (APSEC Workshops), pp. 5–8. Cited by: §1.
  • Y. You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer, and C. Hsieh (2020) Large batch optimization for deep learning: training bert in 76 minutes. In 8th International Conference on Learning Representations, ICLR, Cited by: §1.
  • R. Yu, A. Li, C. Chen, J. Lai, V. I. Morariu, X. Han, M. Gao, C. Lin, and L. S. Davis (2018) NISP: pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9194–9203. Cited by: §2.1, §8.
  • C. Zhang and H. Fu (2006) Similarity measures on three kinds of fuzzy sets. Pattern Recognition Letters 27 (12), pp. 1307–1317. Cited by: §5.2.
  • Z. Zhang, Y. Li, Y. Guo, X. Chen, and Y. Liu (2020) Dynamic slicing for deep neural networks. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), pp. 838–850. Cited by: §1.
  • L. Zhou, B. Yu, D. Berend, X. Xie, X. Li, J. Zhao, and X. Liu (2020) An empirical study on robustness of dnns with out-of-distribution awareness. In 2020 27th Asia-Pacific Software Engineering Conference (APSEC), pp. 266–275. Cited by: §3.1.