Despite the tremendous success (LeCun et al., 2015) of deep neural networks in recent years, their application in safety-critical areas remains a concern.
Can we trust neural networks? This question arose when people found it hard to explain or interpret neural networks (Castelvecchi, 2016), and it has drawn more attention since the discovery of adversarial examples (Szegedy et al., 2014). An adversarial example is an input that is obtained by adding a small, imperceptible perturbation to a valid input (i.e., a correctly classified input), and that is designed to be misclassified. Recent studies (Ilyas et al., 2019) demonstrate that adversarial examples arise from features that widely exist in common datasets and thus can hardly be avoided. This means neural networks inherently lack robustness and are vulnerable to malicious attacks.
A considerable number of works (Yuan et al., 2019) have been proposed to improve the robustness of neural networks against adversarial examples. One method is adversarial training (Goodfellow et al., 2014; Kurakin et al., 2016), which feeds adversarial examples to neural networks in the training stage. Adversarial training works well on the types of adversarial examples considered in the training dataset, but provides no guarantee on other types. Some works (Bradshaw et al., 2017; Abbasi and Gagné, 2017) focus on designing robust architectures of neural networks. However, similar to adversarial training, these methods do not guarantee robustness against all adversarial examples.
One promising solution is formal verification, which can prove that a network satisfies some formally defined specifications. To give a formal specification of robustness, we first define a network as a function $f: X \to C$, where $X$ is a vector space of inputs (e.g., images) and $C$ is a set of class labels. Then we define the robustness radius (Wang et al., 2017) of a network $f$ on an input $x$ as

$r(f, x) = \max\{\epsilon \mid \forall x'.\ \|x' - x\|_p \le \epsilon \Rightarrow f(x') = f(x)\}$

where $\|\cdot\|_p$ denotes the $L^p$ norm distance. The robustness radius measures the region in which a network is robust against perturbations. An equivalent definition is the minimal distortion (Weng et al., 2018), i.e., the minimal distance required to craft an adversarial example. We prefer the term robustness radius since it is defined from a defensive perspective. With the robustness radius, we can define the global robustness property of a network as

$\forall x \in X.\ f(x) = O(x) \neq \bot \Rightarrow r(f, x) \ge \theta$

where $\theta$ is a user-provided threshold and $O(x)$ denotes the oracle for the classification of $x$. Note that $O$ outputs $\bot$ if an input in $X$ is not classified into any class in $C$. This property ensures that for any input that can be recognized by a human and is correctly classified, the neural network is robust to perturbations to some extent (i.e., up to $\theta$ in $L^p$ norm distance). Unfortunately, the global robustness property under this definition can hardly be verified because of the huge input space and the absence of the oracle $O$. Some researchers tried to verify a weaker definition of global robustness (Katz et al., 2017) (without the oracle $O$), but only succeeded on very small networks (i.e., consisting of a few dozen neurons).
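To make the definitions concrete, the toy sketch below estimates a robustness radius by random sampling on a hypothetical two-class classifier. Sampling can only falsify robustness, so (unlike the verifiers discussed later, which under-approximate) it yields an over-estimate; all names here are illustrative.

```python
import random

def toy_classifier(x):
    # Hypothetical stand-in for a network f: X -> C on 2-D inputs:
    # the label is decided by which coordinate dominates.
    return 0 if x[0] >= x[1] else 1

def sampled_radius(f, x, eps_max=1.0, steps=100, samples=200):
    """Estimate the robustness radius r(f, x) under the L-inf norm by
    random sampling: return the largest eps on a grid for which no
    sampled perturbation within eps flips the label.  Sampling can only
    falsify robustness, so this is an over-estimate of the true radius."""
    label = f(x)
    for i in range(1, steps + 1):
        eps = eps_max * i / steps
        for _ in range(samples):
            x2 = [xi + random.uniform(-eps, eps) for xi in x]
            if f(x2) != label:
                # A label flip was found at this eps: report the
                # previous grid value as the estimated radius.
                return eps_max * (i - 1) / steps
    return eps_max

random.seed(0)
# A point far from the decision boundary x[0] == x[1] has a large radius,
r_far = sampled_radius(toy_classifier, [0.9, 0.1])
# while a point close to the boundary has a small one.
r_near = sampled_radius(toy_classifier, [0.51, 0.49])
```

This sampling view also explains why verification is needed: no finite number of samples can prove robustness, only disprove it.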
Given the difficulties in verifying global robustness properties, many researchers turned to local robustness properties, i.e., $\forall x \in T.\ f(x) = O(x) \neq \bot \Rightarrow r(f, x) \ge \theta$. Instead of the whole input space, a local robustness property only considers a set of inputs (denoted as $T$ in the formula), for instance, the training dataset. Various techniques (Huang et al., 2017; Gehr et al., 2018; Singh et al., 2018b; Ehlers, 2017) have been successfully applied to this kind of verification. However, local robustness properties are currently only used to evaluate the robustness of a given network or of a defense technique, since they provide no guarantee of robustness on inputs outside of the set $T$.
Can we trust neural networks on a specific runtime input? Although this question is a compromise to the unfortunate fact that global robustness properties can hardly be guaranteed, it is still practically useful if we can know whether a neural network gives the expected output on an input at runtime. Adversarial detection (Lu et al., 2017; Grosse et al., 2017) rejects inputs that are suspected of being adversarial examples, based on characteristics observed on known adversarial attacks. Input reconstruction (Meng and Chen, 2017) tries to transform adversarial examples into inputs that can be correctly classified. Runtime verification (Desai et al., 2018) checks whether an output satisfies some safety specifications at runtime and drops the output if not (traditional software is used as a backup). This method, however, needs to know the constraints on outputs, which are not available in tasks like image classification.
In this paper, we propose to validate inputs at runtime in a new way, i.e., via local robustness verification, which can compute the robustness radius of any input (as opposed to correctly classified inputs only). We utilize the robustness radius as a characteristic of inputs to distinguish correctly classified inputs from misclassified (possibly adversarial) inputs. Although it is known that adversarial examples themselves are often not robust to small perturbations (Luo et al., 2018; Wang et al., 2018a), to the best of our knowledge, we are the first to validate inputs by observing the robustness radius. To be specific, we have two observations. The first is that the average robustness radius of valid inputs (i.e., correctly classified inputs) is much larger than that of misclassified inputs, no matter whether adversarial or not. Formally, given a neural network $f$ and a set of inputs $T$ at runtime (which may include adversarial examples), let $V = \{x \in T \mid f(x) = O(x)\}$ and $M = \{x \in T \mid f(x) \neq O(x) \wedge O(x) \neq \bot\}$; then we have

$\frac{\sum_{x \in V} r(f, x)}{|V|} \gg \frac{\sum_{x \in M} r(f, x)}{|M|}$  (1)

where $|\cdot|$ denotes cardinality and $\gg$ denotes "much larger than". Note that we only consider inputs that can be classified into labels, which excludes randomly generated inputs mapping to no label (i.e., mapped by $O$ to $\bot$). We believe that this assumption is practical. Our experiments show that Equation 1 holds on adversarial examples from all attacks we have tried, especially those strong attacks which seek the smallest perturbations.
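A tiny numeric illustration of Equation 1, with hypothetical radii chosen only to mimic the shape we observe (the real values come from a verifier):

```python
# Hypothetical robustness radii: valid (correctly classified) inputs in
# set V vs. misclassified / adversarial inputs in set M.
radii_valid = [0.062, 0.054, 0.071, 0.048, 0.066]   # set V
radii_mis = [0.004, 0.009, 0.002, 0.006]            # set M

# Equation 1: the sum of radii over each set divided by its cardinality.
avg = lambda rs: sum(rs) / len(rs)
avg_valid, avg_mis = avg(radii_valid), avg(radii_mis)

# Here avg_valid is roughly an order of magnitude above avg_mis,
# which is the ">>" relation the observation asserts.
ratio = avg_valid / avg_mis
```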
Another observation is that the robustness radii of valid inputs (i.e., $\{r(f, x) \mid x \in V\}$) follow a normal distribution.
Based on these two observations, we propose a new way of validating inputs for neural networks. It can reject both adversarial examples and misclassified clean data (i.e., natural inputs without crafted perturbations). Thus it not only protects neural networks from adversarial attacks, but also improves their accuracy. More importantly, this method does not need knowledge of the classification scenario and is not specific to any attack. We have conducted experiments on Feedforward Neural Networks (FNN) and Convolutional Neural Networks (CNN) with three representative attacks, i.e., FGSM (fast, white-box) (Goodfellow et al., 2014), C&W (strong, white-box) (Carlini and Wagner, 2017), and HOP (i.e., HopSkipJump, black-box) (Chen et al., 2019). The results demonstrate the effectiveness of our method. To be more specific, on a random CNN for MNIST (LeCun et al., 1998), our method can reject 75% of misclassified natural inputs, 95% and 100% of FGSM adversarial examples (with the two parameter settings, respectively), 100% of C&W adversarial examples and 100% of HOP adversarial examples, with only a 3% false alarm rate.
It is worth mentioning that the two observations hold not only for the exact robustness radius computed by complete verification, but also for the under-approximated robustness radius computed by incomplete verification, which is fast enough to be deployed at runtime.
We make the following contributions:
We observed that, on FNNs and CNNs, the average robustness radius of the valid inputs is much larger than that of the misclassified inputs (no matter whether adversarial or not);
We observed that, on most FNNs and CNNs, the robustness radii of the valid inputs follow a normal distribution;
Based on these two observations, we propose a new input validation method based on local robustness verification (which in existing work is only used to evaluate the robustness of a given network, as opposed to validating inputs); it can protect neural networks from adversarial examples, especially from strong attacks, and improve their accuracy on clean data.
2 Observation on Robustness Radii of Inputs from Different Categories
In this section, we present our observations on the robustness radii of valid (i.e., correctly classified) data, misclassified clean data, and adversarial examples.
2.1 Background and Experimental Setup
Local Robustness Verification. A local robustness property ensures that a neural network is immune to adversarial examples within $\epsilon$ in $L^p$ norm distance around a given set of inputs. To prove it, we only need to prove that, for given $x$ and $\epsilon$,

$\forall x'.\ \|x' - x\|_p \le \epsilon \Rightarrow f(x') = f(x)$  (2)

In this paper, we only consider the case $p = \infty$. Current verifiers for this property can be categorized as complete and incomplete. Complete verifiers give an exact answer on whether the property is satisfied. Most complete verifiers are based on Mixed Integer Linear Programming (MILP) (Dutta et al., 2018; Fischetti and Jo, 2018) or Satisfiability Modulo Theories (SMT) (Ehlers, 2017; Katz et al., 2017). These methods are NP-hard and thus can hardly be applied to large networks.
Incomplete verifiers only provide conservative answers; that is, they may return unknown even if the property holds. Thus an incomplete verifier can usually only establish an under-approximation of the robustness radius. Typical incomplete verification methods for neural networks include symbolic intervals (Wang et al., 2018b) and abstract interpretation (Singh et al., 2019). These methods are much more scalable than complete ones.
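As a minimal sketch of how such an incomplete verifier works, the following propagates an $L^\infty$ box through a tiny hand-made ReLU network using interval arithmetic, the simplest instance of the symbolic-interval / abstract-interpretation idea (the domains in real verifiers are far more precise). The network and inputs are our own toy choices; a `False` answer only means "unknown".

```python
import numpy as np

def forward(layers, x):
    # Plain forward pass; ReLU on all but the last (output) layer.
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0)
    return x

def interval_verify(layers, x, eps):
    """Incomplete local-robustness check by interval bound propagation:
    True means proven robust for all x' with ||x' - x||_inf <= eps;
    False only means 'unknown' (bounds too loose), not 'not robust'."""
    label = int(np.argmax(forward(layers, x)))
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        # Sound bounds for an affine layer applied to the box [l, u].
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b
        if i < len(layers) - 1:
            l, u = np.maximum(l, 0), np.maximum(u, 0)  # ReLU is monotone
    # Robust if the label's lower bound beats every other upper bound.
    return bool(l[label] > np.delete(u, label).max())

# A tiny hand-made network: 2 inputs, one hidden ReLU layer, 2 classes.
layers = [(np.array([[1.0, -1.0], [-1.0, 1.0]]), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 2.0]]), np.zeros(2))]
x = np.array([1.0, 0.0])
robust_small = interval_verify(layers, x, 0.1)   # provable
robust_large = interval_verify(layers, x, 0.6)   # too loose: unknown
```

The loss of precision at larger radii (the second call) is exactly why incomplete verification under-approximates the robustness radius.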
Experimental Setup. We take MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky et al., 2009) as our input datasets and use IBM's Adversarial Robustness Toolbox (Nicolae et al., 2018) to generate FGSM, C&W and HOP adversarial examples with default parameters, except for FGSM, where we set the attack parameter $\epsilon$ (Goodfellow et al., 2014) to 0.1 (the default) and to 0.05 (which makes the attack stronger, in the sense of seeking smaller perturbations), respectively.
We use ERAN (Singh et al., 2018a) as the verifier, which supports both complete and incomplete robustness verification. ERAN does not compute the robustness radius directly, but can judge whether the robustness radius is larger than a given $\epsilon$ (i.e., whether the network is robust on all inputs within $\epsilon$ in $L^\infty$ norm distance of $x$, as in Equation 2); we refer to this check as the verification call. ERAN supports two versions of it: a complete one and an incomplete one. Applying binary search on the complete (resp. incomplete) check, we can find a value arbitrarily close to the robustness radius (resp. an under-approximation of the robustness radius). This procedure is described in Algorithm 1. In the following, we call the value computed with complete verification the (asymptotically) exact robustness radius, and the one computed with incomplete verification the approximate robustness radius. All experiments are conducted on Ubuntu 18.04 on a desktop with an Intel i9-9900K CPU and 32GB of memory.
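The binary search in Algorithm 1 can be sketched as follows. The `fake_verify` oracle below is a hypothetical stand-in for the verifier's robustness check on a single input; in reality each call runs a (complete or incomplete) verification.

```python
def radius_by_binary_search(verify, eps_max=0.256, precision=0.001):
    """Sketch of Algorithm 1: binary-search the largest eps that the
    verifier still proves robust.  With a complete verifier this comes
    within `precision` of the exact robustness radius; with an
    incomplete one it yields an under-approximation."""
    if verify(eps_max):
        return eps_max
    lo, hi = 0.0, eps_max        # invariant: lo verified, hi not
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if verify(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical oracle standing in for the verifier's check on one input:
# here the network is "robust" exactly up to radius 0.037.
calls = []
def fake_verify(eps):
    calls.append(eps)
    return eps <= 0.037

r = radius_by_binary_search(fake_verify)
# r ends within `precision` of 0.037 after about log2(0.256/0.001)
# verification calls, which is why the cost per call dominates.
```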
2.2 Observation on Exact Robustness Radius
ERAN combines abstract interpretation, linear programming and MILP to completely verify a network. To make the verification terminate in a reasonable time, we trained a small FNN (denoted as FNN-MNIST) on MNIST (its test accuracy is reported in Table 1), which consists of 5 layers: the input layer, three fully connected layers with 30 neurons each, and one output layer.
We run ERAN with the RefineZono (Singh et al., 2018b) domain, setting the search upper bound to 0.256 and the precision to 0.001 in Algorithm 1. We computed the robustness radii of the first 100 samples from each of the following six categories in the MNIST test dataset:
samples that can be correctly classified by the network
samples that are misclassified by the network
adversarial examples from successful FGSM attacks with $\epsilon = 0.1$
adversarial examples from successful FGSM attacks with $\epsilon = 0.05$
adversarial examples from successful C&W attack
adversarial examples from successful HOP attack
Figure 1(a) shows, for each category, the number of inputs whose exact robustness radius is above a given value (the x-axis). We can see that the robustness radii of valid inputs are much larger than those of the other categories of data, especially the adversarial examples from the strong white-box C&W attack and the black-box HOP attack. The robustness radii of adversarial examples from the FGSM attack with $\epsilon = 0.1$ are significantly larger than those with $\epsilon = 0.05$.
Our experiments suggest that we can use the robustness radius to evaluate to what extent we should trust the output of a neural network on a given input. By setting a threshold and rejecting any input whose robustness radius is lower, we can protect the neural network from adversarial examples and improve its accuracy. However, complete verification is time-consuming. In our experiments, each verification call takes 11s on average, even though our network contains only 100 neurons. Complete verification can thus hardly be deployed at runtime, especially considering that its running time increases exponentially with the number of neurons.
2.3 Observation on Approximate Robustness Radius
Incomplete verification usually runs much faster than complete verification and has the potential to be deployed at runtime. However, Algorithm 1 with incomplete verification can only give an approximate robustness radius. We would like to know (1) whether the approximate robustness radius from incomplete verification is close to the exact robustness radius; and (2) whether the approximate robustness radii of valid inputs are significantly larger than those of misclassified inputs. The second question is actually the more important one, since it decides whether we can use incomplete verification to validate inputs.
Observation on FNN. We utilize ERAN with the DeepZono domain (Singh et al., 2018a) (which is incomplete) to compute the approximate robustness radii of the same inputs on the network FNN-MNIST. The results are shown in Figure 1(b). Comparing Figure 1(a) and Figure 1(b), we can see that the approximate and exact robustness radii of the same inputs are very close (compare the x-axes). In fact, the approximate robustness radii (except those equal to 0) of all inputs are between 44% and 100% of their exact robustness radii. Moreover, similar to the exact robustness radii, the approximate robustness radii of valid inputs are significantly larger than those of misclassified inputs. This means we can utilize the approximate robustness radius to protect neural networks. Furthermore, each incomplete verification call costs less than 1s on the given network and has polynomial time complexity w.r.t. the number of neurons, which means it has the potential to be deployed at runtime.
Observation on CNN. We have also conducted experiments on Convolutional Neural Networks. These networks are significantly larger than FNN-MNIST, and complete verification methods can hardly compute their robustness radii in a reasonable time; thus we only tried incomplete verification. Our experiments on CNNs are conducted on two datasets: MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky et al., 2009).
We trained a CNN (denoted as CNN-MNIST) on MNIST with 7 layers: the input layer, a convolutional layer with 6 filters, a max-pooling layer, a convolutional layer with 16 filters, a max-pooling layer, a fully connected layer of 128 neurons, and an output layer with 10 labels. Its test accuracy is reported in Table 1.
As in Section 2.2, we utilize ERAN with the DeepZono domain (Singh et al., 2018a) to compute the approximate robustness radii of the first 100 inputs from each of the six categories. Figure 1(c) shows the results on CNN-MNIST. We can see that the computed approximate robustness radii of all inputs are much smaller than those computed on the small network FNN-MNIST. We do not know whether the approximate robustness radii are close to the exact robustness radii, which we could not obtain even after several days of computation. Most importantly, however, the approximate robustness radii of the different categories show the same characteristics as the exact robustness radii: those of the valid inputs (the red line) are much larger than those of the other inputs. In fact, if we set the threshold to 0.01, we can reject most of the misclassified clean data and of the adversarial examples from the FGSM (with both $\epsilon$ settings), C&W and HOP attacks, while rejecting only a small fraction of the valid inputs.
We trained a LeNet-5 (LeCun et al., 1998) CNN (denoted as CNN-CIFAR in this paper) on CIFAR10 with 8 layers: the input layer, a convolutional layer with 6 filters, a max-pooling layer, a convolutional layer with 16 filters, a max-pooling layer, two fully connected layers of 120 and 84 neurons respectively, and an output layer with 10 labels. Its test accuracy is reported in Table 1.
Figure 1(d) shows the results for the first 100 inputs of each category in the CIFAR10 test dataset. The approximate robustness radii of the valid inputs are significantly larger than those of misclassified inputs and of adversarial examples from the C&W and HOP attacks, but are almost indistinguishable from those of FGSM adversarial examples. We believe the reason is that the accuracy of the network is too low, which leaves big "holes" for adversarial examples in the input space.
To validate our observations, we trained more FNNs and CNNs of various structures and conducted the same measurements. Table 1 shows the results. For each network, the table gives its training dataset (column Dataset) and network structure (column Network), where the structures of FNN-MNIST, CNN-MNIST and CNN-CIFAR are as described above. For each network structure, we adopted different activation functions (column Activation). The table also shows the accuracy on the test dataset (column Acc.) and the average approximate robustness radii of the first 10 inputs (we chose 10 because we believe 10 inputs suffice to compare the averages, and generating adversarial attacks, e.g., HOP, can be very time-consuming) from the test dataset for the six categories: correctly classified inputs (column Valid), misclassified inputs (column Mis.), and adversarial examples from FGSM attacks with $\epsilon = 0.1$, FGSM attacks with $\epsilon = 0.05$, C&W attacks and HOP attacks. The average running time of each verification call is recorded in column Time(s). The column P-value will be explained later (see Section 4). From the table, we can see that our observation holds on all trained networks. Our experiments can be easily reproduced, since we only use open-source tools with small modifications (e.g., Algorithm 1). The modified code and all trained networks in this paper have been uploaded to an online repository at https://www.github.com/jiangchaoliu/eran.git. We also believe that our experiments can easily be reproduced with readers' own trained networks.
There are CNNs (Hu et al., 2018; Xie et al., 2017) that achieve high accuracy on CIFAR10. However, these networks usually adopt layers other than fully connected, convolutional and max-pooling layers, which are out of the scope of this paper and beyond the ability of current local robustness verifiers (Katz et al., 2017; Ehlers, 2017; Wang et al., 2018b; Dutta et al., 2018; Singh et al., 2018a).
We also tried another tool, CLEVER (Weng et al., 2018), which estimates the minimal distortion needed to craft an adversarial example (which should equal the robustness radius). Figure 2 illustrates the estimated minimal distortions on the first 100 inputs of the six categories on the networks FNN-MNIST and CNN-MNIST. We can see that the difference between the minimal distortions of valid inputs and those of the other categories is less pronounced than the corresponding difference between robustness radii in Figure 1. Figure 3 shows histograms of the density of the minimal distortions from CLEVER and of the exact/approximate robustness radii from ERAN for the first 100 valid inputs on the network FNN-MNIST. Compared with the approximate robustness radii, both the values and the distribution of the minimal distortions are much farther from the exact robustness radii.
3 Input Validation with Observation I
Based on our first observation, we can design an algorithm to validate the inputs of a neural network at runtime to protect it from adversarial examples and improve its accuracy.
A naive idea is to set a threshold and reject all inputs whose approximate robustness radii are below it. However, choosing the threshold is non-trivial. One solution is to set it according to the maximal false alarm rate that can be tolerated, which depends on the application. An ROC curve plots the true alarm rate against the false alarm rate at various threshold settings. Figure 4 shows the ROC curves of the networks CNN-MNIST and CNN-CIFAR on the first 100 inputs from each category. The results on MNIST are good for all kinds of adversarial examples. On CIFAR10, however, our method is not very helpful against FGSM attacks. The reason, we believe, is that the accuracy of our CNN on CIFAR10 is not high enough.
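The ROC construction used here can be sketched as follows: sweep the rejection threshold over the observed radii, counting a rejection as an alarm. The radii below are hypothetical values for illustration.

```python
def roc_points(radii_valid, radii_invalid):
    """Sweep the rejection threshold over all observed radii and record
    (false alarm rate, true alarm rate) pairs: an input raises an alarm
    (is rejected) when its approximate robustness radius falls below
    the threshold."""
    points = []
    for th in sorted(set(radii_valid + radii_invalid)) + [float("inf")]:
        far = sum(r < th for r in radii_valid) / len(radii_valid)
        tar = sum(r < th for r in radii_invalid) / len(radii_invalid)
        points.append((far, tar))
    return points

# Hypothetical radii: valid inputs tend to have larger radii than
# misclassified/adversarial ones, so the curve passes near (0, 1).
valid = [0.05, 0.06, 0.04, 0.07, 0.05]
invalid = [0.005, 0.01, 0.02, 0.008]
curve = roc_points(valid, invalid)
```

Picking the threshold for a tolerated false alarm rate amounts to choosing the right-most point on this curve whose first coordinate stays below that rate.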
Until now, we have only studied the first 100 inputs in each category. In Table 2, we show the effect of different threshold choices on the network CNN-MNIST, on both the first 100 and a random 100 inputs of each category. To be more specific, we show, for different thresholds (column Th.), the percentage of rejected valid inputs (column Vic.), the percentage of rejected misclassified inputs (column W.), and the percentages of rejected adversarial examples from the FGSM attack with $\epsilon = 0.1$ (column F(0.1)), the FGSM attack with $\epsilon = 0.05$ (column F(0.05)), the C&W attack and the HOP attack. The results on the first 100 inputs and on the random 100 inputs are given on the left and right sides of "/" respectively in each cell. This table shows that the observations made on the first 100 inputs of each category also hold on the whole test dataset.
The benefit of validation by threshold is that, once the threshold is decided, a single verification call suffices to test whether the approximate robustness radius of an input is above the threshold.
4 Distribution of the Approximate Robustness Radius of Valid Inputs
One thing concerns us: if attackers have knowledge of our neural network and our detection method, they can generate adversarial examples with large approximate robustness radii on purpose (even though we believe that such adversarial examples can hardly be found on a neural network with high accuracy). To counter this, we further study whether the approximate robustness radii of valid inputs follow a certain distribution. If they do, then attackers not only need to generate adversarial examples with large enough robustness radii, but also need to make sure that those radii follow the distribution, which is much harder. Observing Figure 3, one hypothesis is that the exact/approximate robustness radii follow a normal distribution. To test it, we compute the approximate robustness radii of the first 100 valid inputs for all networks in Table 1 and test whether they follow a normal distribution with D'Agostino and Pearson's test (D'Agostino and Pearson, 1973), which returns a p-value (shown in column P-value). If the p-value is larger than the significance threshold, the radii are believed to follow a normal distribution. We can see that 25 networks pass the test, but 9 do not. We cannot draw a conclusion on which factors make the difference, but it seems that a medium-sized network with high accuracy usually enjoys this property. It is worth mentioning that the exact robustness radii of the FNN-MNIST network do not pass D'Agostino and Pearson's test either, just like its approximate robustness radii, even though they look normally distributed in Figure 3.
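The normality check can be reproduced with SciPy's implementation of D'Agostino and Pearson's test; the radii below are synthetic stand-ins for the measured ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for approximate robustness radii of valid inputs.
radii_normal = rng.normal(loc=0.05, scale=0.01, size=100)   # normal-ish
radii_skewed = rng.exponential(scale=0.05, size=100)        # clearly not

# D'Agostino and Pearson's K^2 test combines skewness and kurtosis;
# a small p-value is evidence against normality.
_, p_normal = stats.normaltest(radii_normal)
_, p_skewed = stats.normaltest(radii_skewed)
```

A sample drawn from a genuinely normal distribution keeps a comparatively large p-value, while a skewed sample of the same size is rejected decisively.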
5 Input Validation with Observation II
If the approximate robustness radii of the valid inputs on a network follow a normal distribution, we can utilize this to improve our input validation method.
The new algorithm is shown in Algorithm 2. It maintains a sliding window (Rebbapragada et al., 2009) of a fixed size containing the inputs believed to be valid. When a new input comes, the algorithm checks whether it breaks the original normal distribution. If it does, the new input is rejected; otherwise, the first element of the window is deleted and the new input is appended, so the window slides. The design of this check is heuristic, and we propose the following: accept the new input if its approximate robustness radius is larger than a threshold (from Observation I), or if the p-value of the new sliding window does not drop sharply from that of the previous one (from Observation II). The check thus combines our two observations. However, we do not have a principled method to decide the two parameters (the radius threshold and what counts as a sharp drop); on the network CNN-MNIST, we set them empirically. To test this algorithm, we take 100 valid inputs, 1 misclassified input (because the accuracy is 98.62%), and 100 adversarial examples from each of the four types of attacks (400 in total) as the inputs. If the inputs come in sequence, our algorithm rejects all adversarial examples and the misclassified input, with only 3 valid inputs rejected. However, if we shuffle the inputs randomly, on average 5 valid inputs are rejected and 28 adversarial examples are accepted (averaged over 10 runs). Actually, in both cases, the first condition (the threshold check) accepts 87 valid inputs and never accepts an invalid input; the second condition accounts for the other accepted valid inputs and for the false positives.
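A minimal sketch of this sliding-window validator, under stated assumptions: the p-value function is injected, and we substitute a toy stand-in for D'Agostino and Pearson's test so the example is self-contained; the names and the `drop_ratio` parameter are our own illustrative choices, not the paper's exact settings.

```python
import statistics
from collections import deque

def make_validator(pvalue, seed_window, radius_th, drop_ratio=0.5):
    """Sketch of Algorithm 2.  `seed_window` holds radii of inputs
    believed valid.  A new input is accepted if its radius clears the
    threshold (Observation I), or if appending it does not make the
    window's normality p-value collapse (Observation II)."""
    win = deque(seed_window, maxlen=len(seed_window))
    def accept(radius):
        if radius >= radius_th:                    # Observation I
            win.append(radius)                     # window slides
            return True
        p_old = pvalue(list(win))
        p_new = pvalue(list(win)[1:] + [radius])
        if p_new >= drop_ratio * p_old:            # Observation II
            win.append(radius)
            return True
        return False                               # input rejected
    return accept

# Toy stand-in for the normality p-value (higher = more plausible);
# the real method uses D'Agostino and Pearson's test instead.
def toy_pvalue(xs):
    return 1.0 / (1.0 + 100.0 * statistics.pstdev(xs))

validate = make_validator(toy_pvalue, [0.05] * 20, radius_th=0.04)
ok_outlier = validate(0.001)   # tiny radius breaks the window: rejected
ok_typical = validate(0.045)   # clears the threshold: accepted
```

Injecting the p-value function keeps the window logic independent of the statistical test, so the toy stand-in can be swapped for the real test without touching the algorithm.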
The disadvantage of validation by distribution is that it needs to compute the approximate robustness radius with Algorithm 1, which requires several iterative verification calls and thus takes more time. However, the time complexity of incomplete verification is polynomial w.r.t. the number of neurons, and its potential speed is far from fully explored (e.g., GPUs are not utilized).
6 Related Work
Some researchers focus on finding new adversarial attacks. According to whether the attackers have full knowledge of the target neural network, adversarial attacks can be divided into two types: white-box and black-box attacks. Most adversarial attacks, including the first one (i.e., L-BFGS (Szegedy et al., 2014)), are white-box attacks. White-box attacks can be fast (e.g., FGSM (Goodfellow et al., 2014)) or strong (i.e., finding adversarial examples close to the original inputs, e.g., DeepFool (Moosavi-Dezfooli et al., 2016), C&W (Carlini and Wagner, 2017), the Ground-truth attack (Carlini et al., 2018)). Black-box attacks usually need more computational power (e.g., the HopSkipJump attack (Chen et al., 2019), ZOO (Chen et al., 2017)). Because of transferability (Papernot et al., 2016), white-box attacks can be transferred to the black-box setting.
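As an illustration of the white-box setting, FGSM amounts to a single gradient-sign step. The sketch below applies it to logistic regression (a one-layer "network"), where the input gradient of the loss has a closed form; the weights and input are hypothetical.

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """Fast Gradient Sign Method: one eps-sized step per coordinate in
    the direction that increases the loss (an L-inf perturbation)."""
    return x + eps * np.sign(grad_x)

# Illustration on logistic regression, where the input gradient of the
# cross-entropy loss has a closed form: for y in {0, 1} and
# p = sigmoid(w.x + b), dloss/dx = (p - y) * w.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.array([2.0, -3.0]), 0.5
x, y = np.array([1.0, 0.3]), 1

p = sigmoid(w @ x + b)                    # confidently predicts class 1
x_adv = fgsm(x, (p - y) * w, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)            # prediction flips to class 0
```

For a deep network, the same step is taken on the gradient obtained by backpropagation instead of this closed form; this is exactly why FGSM is fast but not strong, since the perturbation size is fixed at eps rather than minimized.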
There are many countermeasures for adversarial attacks, among which, verification and adversarial detection are mostly related to our work.
Verification methods check whether a neural network satisfies a given formally defined property before it is deployed. Such properties include safety constraints (Katz et al., 2017) and robustness (Ehlers, 2017; Dutta et al., 2018). However, due to the non-linearity of activation functions, complete verification is NP-hard and thus can hardly scale. Incomplete verification sacrifices the ability to falsify a property so as to gain performance. Current incomplete verifiers (Wang et al., 2018b; Singh et al., 2019) can deal with neural networks of thousands of neurons in seconds. However, both kinds of verification can only prove local robustness properties (Huang et al., 2017), rather than global robustness properties. Thus these verifiers can only provide metrics for evaluating how robust a neural network is, rather than proving that a neural network is robust.
Adversarial detection methods make use of characteristics of adversarial examples. Feinman et al. (2017) found the uncertainty of adversarial examples to be higher than that of clean data, and utilized a Bayesian neural network to estimate it. Song et al. (2017) found that the distribution of adversarial examples differs from that of clean data. Compared to their methods, which focus on the inputs themselves, our method captures the accumulated gradient information of the neural network in the regions around the inputs. Wang et al. (2018a) proposed to detect adversarial examples by mutation testing, based on the belief that they are not robust to mutations. Their method shares a similar intuition with ours, namely that adversarial examples must be corner cases in the input space. However, we utilize local robustness verification, which takes the whole region around an input into account, instead of testing, which only considers some points near an input. Lu et al. (2017) distinguish adversarial examples from clean data by thresholding their values on each ReLU layer. Henzinger et al. (2019) proposed to detect novel inputs by observing the hidden layers, i.e., whether their values fall outside the ranges seen during training. Given that these works are not open source and the results in their papers are often given in the form of graphs (like ROC curves), a fair comparison with their results is hard. However, judging from the results in their papers, our method is comparable with (if not better than) their work, especially on strong attacks. Our method utilizes only the value of the robustness radius to validate inputs, and can also be seen as an anomaly detection technique (Chandola et al., 2009).
7 Conclusion
The exact/approximate robustness radius reflects the accumulated gradient information of the neural network in the region around an input. We believe that adversarial examples often lie in regions with high accumulated gradients. Based on this belief, we studied the exact/approximate robustness radii of valid inputs and of misclassified (possibly adversarial) inputs. We found that (1) the exact/approximate robustness radii of valid inputs are much larger than those of misclassified inputs and adversarial examples; and (2) the approximate robustness radii of valid inputs can follow a normal distribution. Based on these two observations, we proposed a new method for input validation. Our experiments showed that the method is very effective in improving the accuracy of neural networks and protecting them from adversarial examples. Moreover, we believe that even if attackers know our method, they can hardly attack the protected neural networks, since they would have to generate adversarial examples whose approximate robustness radii are both large enough and normally distributed.
References
- Abbasi and Gagné (2017). Robustness to adversarial examples through an ensemble of specialists. arXiv preprint arXiv:1702.06856.
- Bradshaw et al. (2017). Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks. arXiv preprint arXiv:1707.02476.
- Carlini et al. (2018). Ground-truth adversarial examples.
- Carlini and Wagner (2017). Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57.
- Castelvecchi (2016). Can we open the black box of AI? Nature News 538(7623), pp. 20.
- Chandola et al. (2009). Anomaly detection: a survey. ACM Computing Surveys (CSUR) 41(3), pp. 1–58.
- Chen et al. (2019). HopSkipJumpAttack: a query-efficient decision-based attack. arXiv preprint arXiv:1904.02144.
- Chen et al. (2017). ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15–26.
- D'Agostino and Pearson (1973). Tests for departure from normality. Empirical results for the distributions of b2 and √b1. Biometrika 60(3), pp. 613–622.
- Desai et al. (2018). SOTER: programming safe robotics system using runtime assurance. arXiv preprint arXiv:1808.07921.
- Dutta et al. (2018). Output range analysis for deep feedforward neural networks. In NASA Formal Methods Symposium, pp. 121–138.
- Ehlers (2017). Formal verification of piece-wise linear feed-forward neural networks. In Automated Technology for Verification and Analysis, D. D'Souza and K. Narayan Kumar (Eds.), Cham, pp. 269–286.
- Feinman et al. (2017). Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
- Fischetti and Jo (2018). Deep neural networks and mixed integer linear optimization. Constraints 23(3), pp. 296–309.
- Gehr et al. (2018). AI2: safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pp. 3–18.
- Goodfellow et al. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Grosse et al. (2017). On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280.
- Henzinger et al. (2019). Outside the box: abstraction-based monitoring of neural networks. arXiv preprint arXiv:1911.09032.
- Hu et al. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141.
- Huang et al. (2017). Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pp. 3–29.
- Ilyas et al. (2019). Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175.
- Katz et al. (2017). Reluplex: an efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97–117.
- Krizhevsky et al. (2009). CIFAR-10 (Canadian Institute for Advanced Research).
- Kurakin et al. (2016). Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
- LeCun et al. (2015). Deep learning. Nature 521(7553), pp. 436–444.
- LeCun et al. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), pp. 2278–2324.
- Lu et al. (2017). SafetyNet: detecting and rejecting adversarial examples robustly. In Proceedings of the IEEE International Conference on Computer Vision, pp. 446–454.
- Luo et al. (2018). Towards imperceptible and robust adversarial example attacks against neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
- Meng and Chen (2017). MagNet: a two-pronged defense against adversarial examples.
- Moosavi-Dezfooli et al. (2016). DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582.
- Nicolae et al. (2018). Adversarial Robustness Toolbox v1.0.1. CoRR 1807.01069.
- Papernot et al. (2016). Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
- Rebbapragada et al. (2009). Finding anomalous periodic time series. Machine Learning 74(3), pp. 281–313.
- Singh et al. (2018a). Fast and effective robustness certification. In Advances in Neural Information Processing Systems, pp. 10802–10813.
- Singh et al. (2018b). Boosting robustness certification of neural networks.
- Singh et al. (2019). An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages 3 (POPL), pp. 41.
- Song et al. (2017). PixelDefend: leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766.
- Szegedy et al. (2014). Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16, 2014, Conference Track Proceedings.
- Wang et al. (2018a). Detecting adversarial samples for deep neural networks through mutation testing. arXiv preprint arXiv:1805.05010.
- Wang et al. (2018b). Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1599–1614.
- Wang et al. (2017). Analyzing the robustness of nearest neighbors to adversarial examples. arXiv preprint arXiv:1706.03922.
- Weng et al. (2018). Evaluating the robustness of neural networks: an extreme value theory approach. arXiv preprint arXiv:1801.10578.
- Xie et al. (2017). Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492–1500.
- Yuan et al. (2019). Adversarial examples: attacks and defenses for deep learning. IEEE Transactions on Neural Networks and Learning Systems.