1 Introduction
In the domain of image recognition, DNN-based approaches have outperformed traditional image processing techniques, achieving even human-competitive results [25]. However, several studies have revealed that artificial perturbations on natural images can easily make DNNs misclassify, and have accordingly proposed effective algorithms for generating such samples, called "adversarial images" [18, 11, 24, 7]. A common idea for creating adversarial images is to add a tiny amount of well-tuned additive perturbation, expected to be imperceptible to human eyes, to a correctly classified natural image. Such a modification can cause the classifier to label the modified image as a completely different class. Unfortunately, most previous attacks did not consider extremely limited scenarios for adversarial attacks; that is, the modifications might be excessive (i.e., the number of modified pixels is fairly large) such that they may be perceptible to human eyes (see Figure 3 for an example). Additionally, investigating adversarial images created under extremely limited scenarios might give new insights about the geometrical characteristics and overall behavior of DNN models in high-dimensional space [9]. For example, the characteristics of adversarial samples close to the decision boundaries can help describe the boundaries' shape.
In this paper, by perturbing only one pixel with differential evolution, we propose a black-box DNN attack in a scenario where the only information available is the probability labels (Figure 2). Compared with previous works, our proposal has the following main advantages:
Effectiveness - On the CIFAR-10 dataset, non-targeted attacks that modify only one pixel succeed on three common deep neural network structures, and each natural image can on average be perturbed to several other classes. On the ImageNet dataset, a non-targeted one-pixel attack on the BVLC AlexNet model likewise shows that a substantial fraction of the validation images can be attacked.

Semi-Black-Box Attack - Requires only black-box feedback (probability labels) and no inner information about the target DNNs, such as gradients or network structures. Our method is also simpler than many existing approaches since it does not abstract the problem of searching for perturbations into any explicit target function but directly focuses on increasing the probability label values of the target classes.

Flexibility - Can attack more types of DNNs (e.g., networks that are not differentiable or for which gradient calculation is difficult).
There are two main reasons why we consider the extremely limited one-pixel attack scenario:

Analyze the Vicinity of Natural Images - Geometrically, several previous works have analyzed the vicinity of natural images by limiting the length of the perturbation vector. For example, the universal perturbation adds a small value to each pixel such that it searches for adversarial images in a sphere region around the natural image [14]. On the other hand, the proposed few-pixel perturbations can be regarded as cutting the input space with very low-dimensional slices, which is a different way of exploring the features of the high-dimensional DNN input space.

A Measure of Perceptiveness - The attack can be effective for hiding adversarial modification in practice. To the best of our knowledge, none of the previous works can guarantee that the perturbations made are completely imperceptible. A direct way of mitigating this problem is to limit the number of modifications to as few as possible. Specifically, instead of theoretically proposing additional constraints or considering more complex cost functions for conducting the perturbation, we propose an empirical solution that limits the number of pixels that can be modified. In other words, we use the number of pixels as the unit, instead of the length of the perturbation vector, to measure perturbation strength, and we consider the worst case, one-pixel modification, as well as two other scenarios (3 and 5 pixels) for comparison.
2 Related works
The security problem of DNNs has become a critical topic [2] [1]. C. Szegedy et al. first revealed the sensitivity of DNNs to well-tuned artificial perturbations [24], which can be crafted by several gradient-based algorithms that use back-propagation to obtain gradient information [11, 24]. Specifically, I. J. Goodfellow et al. proposed the "fast gradient sign" algorithm for calculating effective perturbations, based on the hypothesis that the linearity and high dimensionality of inputs are the main reasons a broad class of networks is sensitive to small perturbations [11]. S.-M. Moosavi-Dezfooli et al. proposed a greedy perturbation-searching method by assuming the linearity of DNN decision boundaries [7]. In addition, N. Papernot et al. utilize the Jacobian matrix to build an "Adversarial Saliency Map", which indicates the effectiveness of conducting a fixed-length perturbation along the direction of each axis [18, 20]. Another kind of adversarial image was proposed by A. Nguyen et al. [16]: images that can hardly be recognized by human eyes but are nevertheless classified by the network with high confidence.
Several black-box attacks that require no internal knowledge about the target systems, such as gradients, have also been proposed [15, 17, 5]. In particular, to the best of our knowledge, the only work before ours that mentioned using one-pixel modification to change class labels was carried out by N. Narodytska et al. [15]. However, differently from our work, they only utilized it as a starting point for deriving a further semi-black-box attack that needs to modify more pixels (e.g., about 30 pixels out of 1024) and did not consider the scenario of a one-pixel attack. In addition, they neither measured the effectiveness of the attack systematically nor obtained quantitative results for evaluation. An analysis of the one-pixel attack's geometrical features, as well as further discussion about its implications, is also lacking.
There have been many efforts to understand DNNs by visualizing the activations of network nodes [30, 29, 28], while the geometrical characteristics of DNN boundaries have gained less attention due to the difficulty of understanding high-dimensional space. However, the robustness evaluation of DNNs with respect to adversarial perturbation might shed light on this complex problem [9]. For example, both natural and random images were found to be vulnerable to adversarial perturbation. Assuming these images are evenly distributed, this suggests that most data points in the input space are gathered near the boundaries [9]. In addition, A. Fawzi et al. revealed more clues by conducting a curvature analysis; their conclusion is that the region along most directions around natural images is flat, with only a few directions in which the space is curved and the images are sensitive to perturbation [10]. Interestingly, universal perturbations (i.e., perturbations that, when added to any natural image, can generate adversarial samples with high effectiveness) were shown to be possible and to achieve high effectiveness compared to random perturbations. This indicates that the diversity of boundaries might be low and that the boundaries' shapes near different data points are similar [14].
3 Methodology
3.1 Problem Description
Generating adversarial images can be formalized as an optimization problem with constraints. We assume an input image can be represented by a vector in which each scalar element represents one pixel. Let $f$ be the target image classifier that receives $n$-dimensional inputs, and let $\mathbf{x} = (x_1, \ldots, x_n)$ be the original natural image, correctly classified as class $t$. The probability of $\mathbf{x}$ belonging to class $t$ is therefore $f_t(\mathbf{x})$. The vector $e(\mathbf{x}) = (e_1, \ldots, e_n)$ is an additive adversarial perturbation determined by $\mathbf{x}$, the target class $\mathrm{adv}$ and the limit $L$ on maximum modification. Note that $L$ is always measured by the length of the vector $e(\mathbf{x})$. The goal of adversaries in the case of targeted attacks is to find the optimized solution $e(\mathbf{x})^*$ for the following problem:

$$\max_{e(\mathbf{x})^*} \; f_{\mathrm{adv}}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \|e(\mathbf{x})\| \le L$$
The problem involves finding two values: (a) which dimensions need to be perturbed and (b) the corresponding strength of the modification for each dimension. In our approach, the equation is slightly different:
$$\max_{e(\mathbf{x})^*} \; f_{\mathrm{adv}}(\mathbf{x} + e(\mathbf{x})) \quad \text{subject to} \quad \|e(\mathbf{x})\|_0 \le d$$

where $d$ is a small number; in the case of the one-pixel attack, $d = 1$. Previous works commonly modify a part of all dimensions, while in our approach only $d$ dimensions are modified and the other dimensions of $e(\mathbf{x})$ are left at zero.
The one-pixel modification can be seen as perturbing the data point along a direction parallel to the axis of one of the $n$ dimensions. Similarly, the 3 (5)-pixel modification moves the data point within a 3 (5)-dimensional cube. Overall, the few-pixel attack conducts perturbations on low-dimensional slices of the input space. In fact, a one-pixel perturbation allows the modification of an image towards a chosen direction out of $n$ possible directions, with arbitrary strength. This is illustrated in Figure 4 for the case when $n = 3$.
Thus, usual adversarial samples are constructed by perturbing all pixels under an overall constraint on the strength of the accumulated modification [14, 8], while the few-pixel attack considered in this paper is the opposite: it specifically focuses on a few pixels but does not limit the strength of the modification.
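To make this geometry concrete, the following minimal NumPy sketch (illustrative code, not from the paper) flattens an image into a point in the input space and checks that a one-pixel modification moves that point only along axis-parallel directions:

```python
import numpy as np

# A CIFAR-10-sized image is a point in a 32*32*3 = 3072-dimensional input space.
image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
x = image.reshape(-1).astype(float)

# Modify exactly one pixel (row 5, column 7) with arbitrary strength.
perturbed = image.copy()
perturbed[5, 7] = [255, 0, 0]

# The perturbation vector e(x) is zero everywhere except at the scalar
# elements belonging to the modified pixel.
e = perturbed.reshape(-1).astype(float) - x
print(np.count_nonzero(e))  # at most 3 (the R, G, B values of one pixel)
```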
3.2 Differential Evolution
Differential evolution (DE) is a population-based optimization algorithm for solving complex multi-modal optimization problems [23, 6]. DE belongs to the general class of evolutionary algorithms (EAs). Moreover, it has mechanisms in the population-selection phase that keep diversity, such that in practice it is expected to find higher-quality solutions more efficiently than gradient-based methods or even other kinds of EAs [4]. Specifically, during each iteration a set of candidate solutions (children) is generated from the current population (parents). The children are then compared with their corresponding parents, surviving if they are fitter (possess a higher fitness value). In this way, comparing only each parent with its child, the goals of keeping diversity and improving fitness values can be achieved simultaneously.
DE does not use gradient information for optimization and therefore does not require the objective function to be differentiable or previously known. Thus, it can be utilized on a wider range of optimization problems than gradient-based methods (e.g., non-differentiable, dynamic, or noisy problems, among others). The use of DE for generating adversarial images has the following main advantages:

Higher Probability of Finding Global Optima - DE is a metaheuristic and is relatively less susceptible to local minima than gradient descent or greedy search algorithms (in part due to its diversity-keeping mechanisms and its use of a set of candidate solutions). Moreover, the problem considered in this article has a strict constraint (only one pixel can be modified), making it relatively harder.

Requires Less Information from the Target System - DE does not require the optimization problem to be differentiable, as is required by classical optimization methods such as gradient descent and quasi-Newton methods. This is critical for generating adversarial images since (1) there are networks that are not differentiable, for instance [26], and (2) calculating gradients requires much more information about the target system, which can hardly be obtained in many realistic cases.

Simplicity - The approach proposed here is independent of the classifier used. For the attack to take place it is sufficient to know the probability labels.
3.3 Method and Settings
We encode the perturbation into an array (a candidate solution) that is optimized (evolved) by differential evolution. One candidate solution contains a fixed number of perturbations, and each perturbation is a tuple holding five elements: the x-y coordinates and the RGB values of the perturbation. One perturbation modifies one pixel. An initial population of candidate solutions is generated, and at each iteration the same number of new candidate solutions (children) is produced by the usual DE formula:

$$x_i(g+1) = x_{r_1}(g) + F\,\big(x_{r_2}(g) - x_{r_3}(g)\big), \qquad r_1 \ne r_2 \ne r_3,$$

where $x_i$ is an element of a candidate solution, $r_1, r_2, r_3$ are random indices, $F$ is the scale parameter (set to 0.5), and $g$ is the current generation index. Once generated, each candidate solution competes with its corresponding parent according to its index in the population, and the winner survives to the next iteration. The maximum number of iterations is fixed, and an early-stop criterion is triggered when the probability label of the target class exceeds a preset threshold, in the case of targeted attacks on CIFAR-10, or when the probability label of the true class falls below a preset threshold, in the case of non-targeted attacks on ImageNet. The label of the true class is then compared with the highest non-true class to evaluate whether the attack succeeded. The initial population is initialized using uniform distributions U(1, 32) for CIFAR-10 images and U(1, 227) for ImageNet images when generating the x-y coordinates (the images have a size of 32×32 in CIFAR-10, and for ImageNet we resize the original images of various resolutions to 227×227), and Gaussian distributions N(μ=128, σ=127) for the RGB values. The fitness function is simply the probability label of the target class in the case of CIFAR-10, and that of the true class in the case of ImageNet.
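As a concrete illustration, the following minimal sketch implements the loop above, assuming a hypothetical black-box interface `predict_probs(image)` that returns the classifier's probability vector. The encoding, the DE formula with F = 0.5, the index-wise parent-child selection, and the initialization distributions follow the description above; the population size, iteration budget and early-stop threshold are illustrative defaults rather than the exact experimental settings:

```python
import numpy as np

def predict_probs(image):
    """Hypothetical black-box interface to the target classifier:
    takes an HxWx3 image, returns a vector of class probabilities."""
    raise NotImplementedError  # replace with the actual model under attack

def apply_perturbation(image, candidate, n_pixels):
    """A candidate solution is a flat array of n_pixels 5-tuples
    (x, y, R, G, B); each tuple modifies exactly one pixel."""
    adv = image.copy()
    h, w, _ = image.shape
    for px in candidate.reshape(n_pixels, 5):
        x, y = int(px[0]) % h, int(px[1]) % w
        adv[x, y] = np.clip(px[2:], 0, 255).astype(adv.dtype)
    return adv

def one_pixel_attack(image, target_class, n_pixels=1,
                     pop_size=400, max_iter=100, f_scale=0.5, stop_prob=0.9):
    h, w, _ = image.shape
    # Initialization: x-y coordinates from uniform distributions over the
    # image size, RGB values from a Gaussian N(mu=128, sigma=127).
    coords = np.random.uniform(0, [h, w], size=(pop_size, n_pixels, 2))
    colors = np.random.normal(128, 127, size=(pop_size, n_pixels, 3))
    pop = np.concatenate([coords, colors], axis=2).reshape(pop_size, -1)
    fitness = np.array([
        predict_probs(apply_perturbation(image, c, n_pixels))[target_class]
        for c in pop])
    for _ in range(max_iter):
        for i in range(pop_size):
            # DE mutation: x_i(g+1) = x_r1(g) + F * (x_r2(g) - x_r3(g))
            r1, r2, r3 = np.random.choice(pop_size, size=3, replace=False)
            child = pop[r1] + f_scale * (pop[r2] - pop[r3])
            child_fit = predict_probs(
                apply_perturbation(image, child, n_pixels))[target_class]
            # Index-wise selection: the child survives only if it is fitter
            # (higher target-class probability) than its parent.
            if child_fit > fitness[i]:
                pop[i], fitness[i] = child, child_fit
        if fitness.max() > stop_prob:  # early stop (threshold is illustrative)
            break
    best = pop[fitness.argmax()]
    return apply_perturbation(image, best, n_pixels), fitness.max()
```

Because only probability labels are queried, the same loop applies unchanged to any classifier that exposes such an interface.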
4 Evaluation and Results
The evaluation of the proposed attack method is based on the CIFAR-10 and ImageNet datasets. We introduce several metrics to measure the effectiveness of the attacks (a sketch of how they can be computed follows the list):

Success Rate - In the case of non-targeted attacks, it is defined as the percentage of adversarial images that were successfully classified by the target system as an arbitrary class other than the true one. In the case of targeted attacks, it is defined as the probability of perturbing a natural image to a specific target class.

Adversarial Probability Labels (Confidence) - Accumulates the probability label values of the target class for each successful perturbation, then divides by the total number of successful perturbations. This measure indicates the average confidence given by the target system when misclassifying adversarial samples.

Number of Target Classes - Counts the number of natural images that are successfully perturbed to a certain number (from 0 to 9) of target classes. In particular, by counting the number of images that cannot be perturbed to any other class, the effectiveness of the non-targeted attack can be evaluated.

Number of Original-Target Class Pairs - Counts the number of times each original-destination class pair was attacked.
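As a rough sketch of how these metrics can be computed from per-attack outcomes (the record format below is an assumption made for illustration, not the paper's code):

```python
from collections import Counter

def summarize(records):
    """records: one dict per targeted attack, with keys 'image_id',
    'original', 'target' (ints), 'success' (bool), and 'target_prob'
    (the probability label the target system gave the target class)."""
    wins = [r for r in records if r['success']]
    # Success rate (targeted): the fraction of attacks reaching their target.
    success_rate = len(wins) / len(records)
    # Adversarial probability label: average confidence over successes.
    confidence = sum(r['target_prob'] for r in wins) / max(len(wins), 1)
    # Number of target classes reached by each natural image; images reaching
    # zero classes resist the non-targeted attack.
    reached = Counter(r['image_id'] for r in wins)
    # Original-target class pairs, as plotted in the heatmaps.
    pairs = Counter((r['original'], r['target']) for r in wins)
    return success_rate, confidence, reached, pairs
```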
4.1 CIFAR-10
We train three common networks as target image classifiers on the CIFAR-10 dataset [12]: the all convolution network [22], Network in Network [13] and the VGG16 network [21]. The structures of the networks are described in Tables 1, 2 and 3. The network settings were kept as close as possible to the originals, with a few modifications made to obtain the highest classification accuracy. Both the targeted and non-targeted attack scenarios are considered. For each attack on each of the three networks, natural image samples are randomly selected from the CIFAR-10 test dataset. In addition, an experiment is conducted on the all convolution network [22] by generating adversarial samples with three- and five-pixel modifications, in order to compare the one-pixel attack with three- and five-pixel attacks. For each natural image, nine targeted attacks are launched, trying to perturb it to each of the other nine target classes. Note that we only launch targeted attacks, and the effectiveness of the non-targeted attack is evaluated from the targeted attack results: if an image can be perturbed to at least one of the nine target classes, the non-targeted attack on this image succeeds.
Table 1: All convolution network
conv2d layer (kernel=3, stride=1, depth=96)
conv2d layer (kernel=3, stride=1, depth=96)
conv2d layer (kernel=3, stride=2, depth=96)
conv2d layer (kernel=3, stride=1, depth=192)
conv2d layer (kernel=3, stride=1, depth=192)
dropout (0.3)
conv2d layer (kernel=3, stride=2, depth=192)
conv2d layer (kernel=3, stride=2, depth=192)
conv2d layer (kernel=1, stride=1, depth=192)
conv2d layer (kernel=1, stride=1, depth=10)
average pooling layer (kernel=6, stride=1)
flatten layer
softmax classifier
Table 2: Network in Network
conv2d layer (kernel=5, stride=1, depth=192)
conv2d layer (kernel=1, stride=1, depth=160)
conv2d layer (kernel=1, stride=1, depth=96)
max pooling layer (kernel=3, stride=2)
dropout (0.5)
conv2d layer (kernel=5, stride=1, depth=192)
conv2d layer (kernel=5, stride=1, depth=192)
conv2d layer (kernel=5, stride=1, depth=192)
average pooling layer (kernel=3, stride=2)
dropout (0.5)
conv2d layer (kernel=3, stride=1, depth=192)
conv2d layer (kernel=1, stride=1, depth=192)
conv2d layer (kernel=1, stride=1, depth=10)
flatten layer
softmax classifier
Table 3: VGG16 network
conv2d layer (kernel=3, stride=1, depth=64)
conv2d layer (kernel=3, stride=1, depth=64)
max pooling layer (kernel=2, stride=2)
conv2d layer (kernel=3, stride=1, depth=128)
conv2d layer (kernel=3, stride=1, depth=128)
max pooling layer (kernel=2, stride=2)
conv2d layer (kernel=3, stride=1, depth=256)
conv2d layer (kernel=3, stride=1, depth=256)
conv2d layer (kernel=3, stride=1, depth=256)
max pooling layer (kernel=2, stride=2)
conv2d layer (kernel=3, stride=1, depth=512)
conv2d layer (kernel=3, stride=1, depth=512)
conv2d layer (kernel=3, stride=1, depth=512)
max pooling layer (kernel=2, stride=2)
conv2d layer (kernel=3, stride=1, depth=512)
conv2d layer (kernel=3, stride=1, depth=512)
conv2d layer (kernel=3, stride=1, depth=512)
max pooling layer (kernel=2, stride=2)
flatten layer
fully connected (size=2048)
fully connected (size=2048)
softmax classifier
4.2 ImageNet
For ImageNet we applied a non-targeted attack with the same DE parameter settings used on the CIFAR-10 dataset, even though ImageNet has a search space 50 times larger than CIFAR-10. Note that for ImageNet we launch the non-targeted attack directly, using a fitness function that aims to decrease the probability label of the true class; this differs from CIFAR-10, where the effectiveness of the non-targeted attack is calculated from targeted attack results obtained with a fitness function that increases the probability of the target classes. Given the time constraints, we conduct the experiment without proportionally increasing the number of evaluations, i.e., we keep the same number of evaluations. Our tests are run on the BVLC AlexNet using 600 samples randomly selected from the ILSVRC 2012 validation set. For ImageNet we only conduct the one-pixel attack, because we want to verify whether such a tiny modification can fool the network on larger images and whether conducting such attacks is computationally tractable.
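In terms of the sketch in Section 3.3, this change amounts to swapping the fitness function. A minimal variant, reusing the hypothetical `predict_probs` and `apply_perturbation` helpers from that sketch, could look like:

```python
def nontargeted_fitness(image, candidate, true_class, n_pixels=1):
    # A lower true-class probability means a fitter candidate, so the value
    # is negated for a maximization-style DE loop.
    return -predict_probs(
        apply_perturbation(image, candidate, n_pixels))[true_class]
```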
4.3 Results
The success rates and adversarial probability labels for one-pixel perturbations on the three CIFAR-10 networks and the BVLC network are shown in Table 4, and those for three- and five-pixel perturbations on CIFAR-10 are shown in Table 5. The number of target classes is shown in Figure 5. The number of original-target class pairs is shown by the heatmaps of Figures 6 and 7. In addition, the total number of times each class was involved in an attack, either as origin or as target, is shown in Figure 8. Since only non-targeted attacks are launched on ImageNet, the "number of target classes" and "number of original-target class pairs" metrics are not included in the ImageNet results.
4.3.1 Success Rate and Adversarial Probability Labels (Targeted Attack Results)
On CIFAR-10, the success rates of one-pixel attacks on the three types of networks show the generalized effectiveness of the proposed attack across different network structures. On average, each image can be perturbed to about two target classes for each network. In addition, by increasing the number of pixels that can be modified to three and five, the number of target classes that can be reached increases significantly. By dividing the adversarial probability labels by the success rates, the confidence values (i.e., probability labels of the target classes) are obtained: 79.39%, 79.17% and 77.09% for the one-, three- and five-pixel attacks, respectively.
On ImageNet, the results show that the one-pixel attack generalizes well to large images and fools the corresponding neural networks. In particular, there is a measurable chance that an arbitrary ImageNet validation image can be perturbed to a target class, albeit with low confidence. Note that the ImageNet results use the same settings as CIFAR-10, while the resolution of the images we use for the ImageNet test is 227×227, about 50 times larger than CIFAR-10 (32×32). Notice that in each successful attack the probability label of the target class is the highest; therefore, although the confidence value is relatively low, it tells us that the probabilities of the remaining classes are even lower, approaching an almost uniform soft-label distribution. Thus, the one-pixel attack can reduce the confidence of AlexNet to a nearly uniform soft-label distribution. The low confidence is caused by the fact that we utilized a non-targeted evaluation that only focuses on decreasing the probability of the true class; other fitness functions should give different results.
Table 4: Success rates and adversarial probability labels (confidence) of the one-pixel attack.

             | AllConv | NiN | VGG16 | BVLC
OriginAcc    |         |     |       |
Targeted     |         |     |       | –
Non-targeted |         |     |       |
Confidence   |         |     |       |
Table 5: Success rates and adversarial probability labels of the three- and five-pixel attacks on the all convolution network.

                            | 3 pixels | 5 pixels
Success rate (targeted)     |          |
Success rate (non-targeted) |          |
Rate/Labels                 |          |
4.3.2 Number of Target Classes (Non-targeted Attack Results)
From the results shown in Figure 5, we find that with only a one-pixel modification a fair number of natural images can be perturbed to two, three or four target classes. By increasing the number of modified pixels, perturbation to more target classes becomes highly probable. In the case of the non-targeted one-pixel attack, the VGG16 network has slightly higher robustness against the proposed attack. Nevertheless, all three types of networks (the AllConv network, NiN and VGG16) are vulnerable to this type of attack.
The results of the attacks are competitive with previous non-targeted attack methods that require much larger distortions (Table 6). This shows that one-dimensional perturbation vectors are enough to find adversarial images for most natural images. In fact, by increasing the number of pixels to five, a considerable number of images can be simultaneously perturbed to eight target classes. In some rare cases, an image can be perturbed to every other target class with a one-pixel modification, as illustrated in Figure 9.
Table 6: Comparison of the non-targeted attack effectiveness with previous works.

Method     | Success rate | Confidence | Number of pixels | Network
Our method |              |            | 1 (0.098%)       | NiN
Our method |              |            | 1 (0.098%)       | VGG
Our method |              |            | 1 (0.002%)       | AlexNet
LSA [15]   |              |            | 33 (3.22%)       | NiN
LSA [15]   |              |            | 30 (2.93%)       | VGG
FGSM [11]  |              |            | 1024 (100%)      | NiN
FGSM [11]  |              |            | 1024 (100%)      | VGG
4.3.3 Original-Target Class Pairs
Some specific original-target class pairs are much more vulnerable than others (Figures 6 and 7). For example, images of cat (class 3) can be much more easily perturbed to dog (class 5) but can hardly reach automobile (class 1). This indicates that the vulnerable target classes (directions) are shared by different data points belonging to the same class. Moreover, in the case of the one-pixel attack, some classes are more robust than others, since their data points are relatively hard to perturb to other classes. Among these data points, there are points that cannot be perturbed to any other class. This indicates that the labels of these points rarely change when moving across the input space along directions parallel to the axes. Therefore, the corresponding original classes remain robust along these directions. However, such robustness can rather easily be broken by merely increasing the dimensionality of the perturbation from one to three and five, because both the success rates and the number of reachable target classes increase when conducting higher-dimensional perturbations.
Additionally, each heatmap matrix is approximately symmetric, indicating that each class has a similar number of adversarial samples crafted from it as crafted to it (Figure 8). That said, there are some exceptions, for example class 8 (ship) when attacking NiN and class 4 (deer) when attacking the AllConv network with one pixel, among others. For the ship class when attacking NiN, for example, it is relatively easy to craft adversarial samples from ship images but relatively hard to craft adversarial samples to the ship class. Such an imbalance is intriguing since it indicates that the ship class is similar to most of the other classes, like truck and airplane, but not vice versa. This might be due to (a) the boundary shape and (b) how close natural images are to the boundary. In other words, if the boundary shape is wide enough, it is possible to have natural images far away from the boundary such that it is hard to craft adversarial images from them. On the contrary, if the boundary shape is mostly long and thin, with natural images close to the border, it is easy to craft adversarial images from them but hard to craft adversarial images to them.
In practice, classes from which it is easy to craft adversarial images may be exploited by malicious users, potentially making the whole system vulnerable. In the case studied here, however, the exceptions are not shared between the networks, revealing that whatever causes the phenomenon is not shared either. Therefore, for the current systems under the given attacks, such a vulnerability seems hard to exploit.
4.3.4 Time complexity and average distortion
To evaluate time complexity we use the number of evaluations, a common metric in optimization. In the DE case, the number of evaluations equals the population size multiplied by the number of generations. We also calculate the average distortion of the single attacked pixel by taking the average modification over the three color channels. The results for the two metrics are shown in Table 7; a small sketch of both computations follows the table.
Table 7: Average number of evaluations and average distortion of the attacked pixel.

              | AllConv | NiN   | VGG16 | AlexNet
AvgEvaluation | 16000   | 12400 | 20000 | 25600
AvgDistortion | 100     | 114   | 115   | 101
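Both metrics are straightforward to compute; the helper names below are illustrative, not taken from the experimental code:

```python
def n_evaluations(pop_size, n_generations):
    # Each DE generation evaluates one child per population member, so the
    # total number of fitness evaluations is pop_size * n_generations.
    return pop_size * n_generations

def avg_distortion(original_pixel, adversarial_pixel):
    # Mean absolute modification over the three color channels of the
    # single attacked pixel.
    return sum(abs(int(o) - int(a))
               for o, a in zip(original_pixel, adversarial_pixel)) / 3.0
```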
5 Discussion and Future Work
Previous results have shown that many data points might be located near decision boundaries [9]. In that analysis, the data points were moved by small steps in the input space while the frequency of change in the class labels was quantitatively analyzed. In this paper, we showed that it is also possible to move data points along only a few dimensions and find points where the class labels change. Our results also suggest that the assumption made by I. J. Goodfellow et al., that small additive perturbations on the values of many dimensions accumulate and cause a large change to the output [11], might not be necessary for explaining why natural images are sensitive to small perturbations, since we changed only one pixel to successfully perturb a considerable number of images.
According to the experimental results, the vulnerability of CNNs exploited by the proposed one-pixel attack generalizes across different network structures as well as different image sizes. In addition, the results shown here mimic an attacker and therefore use a low number of DE iterations with a relatively small set of initial candidate solutions; the perturbation success rates should improve further with either more iterations or a bigger set of initial candidate solutions. Additionally, the proposed algorithm and the collected widely vulnerable samples (i.e., natural images that can be used to craft adversarial samples to most of the other classes) might be useful for generating better artificial adversarial samples with which to augment the training data set. This would aid the development of more robust models [19], which is left as future work.
6 Acknowledgment
This research was partially supported by Collaboration Hubs for International Program (CHIRP) of SICORP, Japan Science and Technology Agency (JST).
References
 [1] M. Barreno, B. Nelson, A. D. Joseph, and J. Tygar. The security of machine learning. Machine Learning, 81(2):121–148, 2010.
 [2] M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar. Can machine learning be secure? In Proceedings of the 2006 ACM Symposium on Information, computer and communications security, pages 16–25. ACM, 2006.

[3] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation, 10(6):646–657, 2006.
[4] P. Civicioglu and E. Besdok. A conceptual comparison of the cuckoo search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artificial Intelligence Review, pages 1–32, 2013.
[5] H. Dang, Y. Huang, and E.-C. Chang. Evading classifiers by morphing in the dark. 2017.
 [6] S. Das and P. N. Suganthan. Differential evolution: A survey of the stateoftheart. IEEE transactions on evolutionary computation, 15(1):4–31, 2011.

[7] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
[8] S.-M. Moosavi-Dezfooli et al. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554, 2017.
 [9] A. Fawzi, S. M. Moosavi Dezfooli, and P. Frossard. A geometric perspective on the robustness of deep networks. Technical report, Institute of Electrical and Electronics Engineers, 2017.
 [10] A. Fawzi, S.M. MoosaviDezfooli, P. Frossard, and S. Soatto. Classification regions of deep neural networks. arXiv preprint arXiv:1705.09552, 2017.
 [11] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
 [12] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. 2009.
 [13] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
 [14] S. M. Moosavi Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), number EPFLCONF226156, 2017.
 [15] N. Narodytska and S. Kasiviswanathan. Simple blackbox adversarial attacks on deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1310–1318. IEEE, 2017.
 [16] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
 [17] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical blackbox attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.

[18] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pages 372–387. IEEE, 2016.
[19] A. Rozsa, E. M. Rudd, and T. E. Boult. Adversarial diversity and hard positive generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 25–32, 2016.
 [20] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
 [21] K. Simonyan and A. Zisserman. Very deep convolutional networks for largescale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[22] J. Springenberg, A. Dosovitskiy, T. Brox, and M. Riedmiller. Striving for simplicity: The all convolutional net. In ICLR (workshop track), 2015.
 [23] R. Storn and K. Price. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization, 11(4):341–359, 1997.
[24] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
 [25] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to humanlevel performance in face verification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1701–1708, 2014.
 [26] D. V. Vargas and J. Murata. Spectrumdiverse neuroevolution with unified neural models. IEEE transactions on neural networks and learning systems, 28(8):1759–1773, 2017.
 [27] D. V. Vargas, J. Murata, H. Takano, and A. C. B. Delbem. General subpopulation framework and taming the conflict inside populations. Evolutionary computation, 23(1):1–36, 2015.
 [28] D. Wei, B. Zhou, A. Torrabla, and W. Freeman. Understanding intraclass knowledge inside cnn. arXiv preprint arXiv:1507.02379, 2015.
[29] J. Yosinski, J. Clune, T. Fuchs, and H. Lipson. Understanding neural networks through deep visualization. In ICML Workshop on Deep Learning, 2015.
 [30] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014.