and face recognition Zhao et al. (2018). However, users of such models are mostly concerned with the performance of the models and largely ignore their vulnerability and robustness. In fact, most existing models are easily misled by adversarial samples deliberately designed by attackers, enabling the attackers to bypass detection Szegedy et al. (2013). For example, in an image classification system, by adding disturbance information to an original image, attackers can change the image classification result with high probability Moosavi-Dezfooli et al. (2017b). The generated adversarial samples can even be classified with an arbitrary label chosen by the attacker, making this type of attack a tremendous threat to image classification systems Barreno et al. (2010). More seriously, if the generated adversarial images are printed and then photographed with a camera, the captured images are still misclassified, confirming the presence of adversarial samples in the real world Kurakin et al. (2016). These vulnerability problems raise the question of whether neural networks can be applied to security-critical areas.
Several papers have studied related security issues Liu et al. (2018a, b); Li et al. (2007). Unfortunately, in most previous approaches to generating adversarial samples, once the perturbation parameter is fixed, the similarity of the sample is fixed: it does not change dynamically during the algorithm's calculation. As a result, the image may be disturbed so much that the changes are visually distinguishable Moosavi-Dezfooli et al. (2017a). Moreover, the existing approaches mainly use gradient information to transform the original samples into the required adversarial samples; if the parameters of the model are unknown, the attackers cannot generate effective adversarial samples Goodfellow et al. (2014a); Hu & Tan (2017). Others have proposed black-box attack approaches Papernot et al. (2017); Narodytska & Kasiviswanathan (2017). However, Papernot et al. (2017) rely on the transferability assumption: if the transferability of the model under attack is reduced, the effectiveness of the attack is reduced. LSA Narodytska & Kasiviswanathan (2017) cannot easily switch among distance metrics such as L0, L2, and Lmax: in most cases, the disturbance is only guaranteed to succeed under Lmax, with no guarantee that it remains minimal under other distance functions.
In this paper, we propose a new approach that performs a black-box attack on deep neural networks. Our approach is named BANA, denoting A (B)lack-box (A)ttack on (N)eural Networks Based on Swarm Evolutionary (A)lgorithm. Compared with the previous approaches Szegedy et al. (2013); Goodfellow et al. (2014b); Papernot et al. (2016b); Carlini & Wagner (2017), our approach has the following main advantages:
Effectiveness. The adversarial samples generated by our approach mislead the neural networks with 100% probability in both non-targeted and targeted attacks. The distance between the adversarial samples and the original images is less than 10 on average, indicating that the images can be disturbed with changes small enough to be undetectable. If we continue to increase the number of iterations of our proposed algorithm, we expect to achieve even better results.
Black-box Attack. Adversarial samples can be generated without the knowledge of the internal parameters of the target network, such as gradients and structures. Existing attacks such as Carlini and Wagner’s attacks Carlini & Wagner (2017) usually require such information.
Generality. Our proposed attack is a general attack on neural networks: it generates effective adversarial samples for DNNs, CNNs, etc. We have also tested the attack on a wider range of machine learning algorithms, and it still misleads the models with 100% probability.
Randomness. Benefiting from the characteristics of evolutionary algorithms, the adversarial samples generated each time are different for the same input image, so they are able to resist defensive mechanisms such as defensive distillation.
In particular, our proposed attack is based on the swarm evolutionary algorithm Coello et al. (2007), a population-based optimization algorithm for solving complex multi-modal optimization problems. It transforms an optimization problem into an individual fitness function and provides a mechanism to gradually improve individual fitness. Evolutionary algorithms do not use gradient information for optimization and do not require the objective function to be differentiable or deterministic. Different from another approach also based on an evolutionary algorithm Su et al. (2017), our approach focuses on the optimization of the results rather than on the number of disturbed pixels; therefore, our optimization function and iterative process are completely different from those of the one pixel attack. Without knowing the parameters of the model, our approach takes the original sample as input and generates an adversarial sample for a specific label. The only information used is the probabilities of the various labels produced by the model.
Our attack also addresses technical challenges when applying the swarm evolutionary algorithm to generate the adversarial samples. The improvements made in our approach include the optimization of calculation results and convergence speed (see more details in Section 3).
The rest of the paper is organized as follows. Section 2 introduces the related work on adversarial samples. Section 3 presents BANA, our swarm-evolutionary-algorithm-based approach for black-box attacks on deep neural networks. Section 4 presents and discusses our experimental results. Section 5 concludes.
2 Related Work
The adversarial samples of deep neural networks have drawn the attention of many researchers in recent years. Szegedy et al. (2013) used a constrained L-BFGS algorithm to generate adversarial samples. L-BFGS requires that the gradient of the model can be solved, limiting the diversity of the model and the objective function and making the approach computationally expensive. Goodfellow et al. (2014b) proposed the fast gradient sign method (FGSM). However, this approach is designed without considering the similarity of the adversarial samples: the similarity of the generated adversarial samples may be low. The consequence is that the generated adversarial samples may be detected by defensive approaches or directly visually distinguished. An adversarial sample attack named the Jacobian-based Saliency Map Attack (JSMA) was proposed by Papernot et al. (2016b). JSMA also requires the gradient of the model to be solved; moreover, the approach is limited to the L0 distance and cannot generate samples under other distance metrics Carlini & Wagner (2017). These approaches all assume that the attackers have full access to the parameters of the model. Moosavi-Dezfooli et al. (2016) proposed a non-targeted attack approach named Deepfool. This approach assumes that the neural network is linear and makes a contribution to the generation of adversarial samples, while actual neural networks may not be linear. Besides, this approach does not apply to non-neural-network models. Some previous research focused on generating adversarial samples against malware detection models Yang et al. (2017); Demontis et al. (2017); Grosse et al. (2016). These adversarial samples also successfully disrupted the models' discriminant results, showing that common machine learning models are vulnerable to attacks.
Some recent research aimed to defend against adversarial sample attacks and proposed approaches such as defensive distillation Hendrycks & Gimpel (2016); Papernot et al. (2016a); Feinman et al. (2017); Metzen et al. (2017). However, experimental results show that these approaches do not perform well in particular situations because they cannot defend against adversarial samples of high quality He et al. (2017).
3.1 Problem Description
The generation of adversarial samples can be considered as a constrained optimization problem. We use a distance metric (an Lp norm) to describe the similarity between the original images and the adversarial images, and consider a classifier that maps an input image to a vector of class probabilities. Different from L-BFGS Szegedy et al. (2013), FGSM Goodfellow et al. (2014b), JSMA Papernot et al. (2016b), Deepfool Moosavi-Dezfooli et al. (2016), and Carlini and Wagner's attack Carlini & Wagner (2017), our approach is a black-box attack that does not use gradient information. This optimization problem is formalized as follows:
where, for a non-targeted attack (whose purpose is to mislead the classifier into classifying the adversarial sample as any incorrect category), the penalty term is defined as
and, for a targeted attack (whose purpose is to mislead the classifier into classifying the adversarial sample as a specified category), the penalty term is defined as
Here, x is the original image, x' is the adversarial sample to be produced, and D(x, x') is the distance between them. The constant c is a positive number much larger than D(x, x'), l is the real label, and t is the target label. F_t(x') denotes the probability that the sample is recognized as label t, and F_i(x') (i ≠ t) denotes the probabilities that the sample is recognized as the other labels. For the targeted attack, we discuss two cases according to whether F_t(x') is the maximum of the output probabilities; the non-targeted attack is analogous.
(1) When F_t(x') is not the maximum in Equation 3, the adversarial sample is not yet classified as the target label. Since c is much larger than D(x, x'), the objective function in Equation 1 is dominated by the penalty term. In this case, the optimization is equivalent to minimizing the penalty term, i.e., increasing the probability that the classifier identifies the sample as class t.
(2) When F_t(x') is the maximum, the adversarial sample has been classified as the target label. In this case, the optimization is equivalent to minimizing the value of D(x, x'), i.e., improving the similarity between the adversarial sample and the original sample as much as possible.
Through the preceding objective function, the population is effectively divided into two sections, as shown in Fig. 1. The whole optimization process can be divided into three steps.
Step 1. At this stage the adversarial samples cannot yet mislead the classifier. Individuals at the top of Section A gradually approach the bottom through the crossover and mutation operators.
Step 2. Individuals move from Section A to Section B, indicating that the generated adversarial samples can now successfully mislead the classifier.
Step 3. Individuals at the top of Section B gradually approach the bottom, indicating an improvement in the similarity between the adversarial image and the original image.
Eventually, the bottom individual of Section B becomes the optimal individual in the population, and the information it carries is the adversarial sample being sought.
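The two-section behavior above can be sketched as a single fitness function. The following is a minimal sketch in Python; the exact penalty form c * (1 − F_t(x')) and the constant C are our assumptions for illustration, not necessarily the paper's Equation 1:

```python
import numpy as np

# Hypothetical penalty weight: a constant much larger than any attainable distance.
C = 1e6

def fitness(x_adv, x_orig, predict_proba, target):
    """Fitness for a targeted attack; lower is better.

    `predict_proba` is the black-box classifier: only its output
    probability vector is used, never its gradients or parameters.
    """
    probs = predict_proba(x_adv)
    dist = float(np.linalg.norm(x_adv - x_orig))  # similarity term D(x, x')
    if int(np.argmax(probs)) == target:
        # Section B: the attack already succeeds, so only similarity matters.
        return dist
    # Section A: the penalty term dominates, pushing up the target-label
    # probability until the sample crosses into Section B.
    return C * (1.0 - probs[target]) + dist
```

Under this sketch, every individual in Section A scores worse than every individual in Section B, reproducing the two-section split of Fig. 1.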
3.2 Our BANA Approach
As the generation of adversarial samples has been formalized as the optimization problem in Equation 1, we solve it with a swarm evolutionary algorithm. In this algorithm, the fitness value of an individual is the value of the objective function, and the population is a collection of candidate adversarial samples. By repeatedly simulating the process of biological evolution, the well-adapted individuals with small fitness values are selected to form a subpopulation, which then undergoes a similar evolutionary process, until the optimal solution is found or the algorithm reaches the maximum number of iterations. After the iterations, the optimal individual obtained is the adversarial sample. As a widely applied swarm evolutionary algorithm, the genetic algorithm is flexible in coding, fitness evaluation, selection, crossover, and mutation. Therefore, in the algorithm design and simulation experiments, we use the following improved genetic algorithm as an example to demonstrate the effectiveness of our BANA approach. The advantages of the approach are not limited to the genetic algorithm; we leave as future work the investigation of the effects of different types of swarm evolutionary algorithms on our approach.
3.2.1 Algorithm Workflow
The whole algorithm workflow is shown in Fig. 2. The classifier can be logistic regression, a deep neural network, or another classification model. We do not need to know the model parameters; we only use its input and output interfaces. Each individual is transformed into an adversarial sample and sent to the classifier to obtain the classification result. The individual's fitness value is then obtained by evaluating the objective function. The individuals in the population are optimized by the genetic algorithm to find a feasible solution of the objective function (i.e., an adversarial sample of the image).
The workflow of our BANA approach is as follows:
Step 1. Population Initialization. One gene corresponds to one pixel value; for the (28, 28) grayscale images there are a total of 784 genes, and for the (32, 32) color images with three channels there are 3,072 genes.
Step 2. Calculate the Fitness Value. Calculate the value of the fitness function according to the approach described in Section 3.1 and take this value as the fitness of the individual. Since this problem is a minimization problem, the smaller the value, the better the individual’s fitness. After that, the best individual with the minimum fitness value in the current population is saved as the optimal solution.
Step 3. Selection Operation. According to the fitness of the individuals in the population, individuals with better (i.e., smaller) fitness values are selected from the current population through the tournament algorithm.
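This selection step can be sketched as follows (a minimal illustration; the tournament size k = 3 is an assumed value, not one specified by the paper):

```python
import random

def tournament_select(population, fitness_values, k=3):
    """Tournament selection for a minimization problem: draw k individuals
    at random and return the one with the smallest fitness value."""
    contenders = random.sample(range(len(population)), k)
    winner = min(contenders, key=lambda i: fitness_values[i])
    return population[winner]
```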
Step 4. Crossover Operation. Common crossover operators include single-point crossover, multi-point crossover, and uniform crossover. Our algorithm uses uniform crossover: for two random individuals, each gene is exchanged independently with the crossover probability. Given the large number of genes that each individual carries, uniform crossover offers a greater probability of generating new gene combinations and is expected to combine more beneficial genes, improving the searching ability of the genetic algorithm.
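A minimal sketch of the uniform crossover described above (the default p_cross = 0.5 is an illustrative value):

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, p_cross=0.5, rng=None):
    """Uniform crossover: each gene (pixel) is exchanged between the two
    parents independently with probability p_cross, yielding two children."""
    rng = rng or np.random.default_rng()
    swap = rng.random(parent_a.shape) < p_cross  # per-gene exchange mask
    child_a = np.where(swap, parent_b, parent_a)
    child_b = np.where(swap, parent_a, parent_b)
    return child_a, child_b
```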
Step 5. Mutation Operation. To strengthen the search ability of the genetic algorithm while exploiting the characteristics of the problem to be solved, the mutation operator adopts a self-defined Gaussian mutation algorithm. In the process of mutation, Gaussian noise is randomly added to the individual (shown in Equation 6 below), where μ is the mean and σ is the standard deviation of the Gaussian noise:
The reason for adopting this mutation operation is that the resulting adversarial sample must have a high degree of similarity with the input sample, so a feasible solution must also lie in the vicinity of the input sample. This technique effectively reduces the number of iterations required to solve the problem.
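The Gaussian mutation step can be sketched as follows (p_mut, mu, and sigma are illustrative values, not the settings of Table 1):

```python
import numpy as np

def gaussian_mutate(individual, p_mut=0.05, mu=0.0, sigma=0.1, rng=None):
    """Self-defined Gaussian mutation: each gene mutates independently with
    probability p_mut by adding noise drawn from N(mu, sigma). Because the
    noise is centered near the current value, mutated individuals remain in
    the vicinity of the input sample."""
    rng = rng or np.random.default_rng()
    mutate = rng.random(individual.shape) < p_mut   # which genes mutate
    noise = rng.normal(mu, sigma, individual.shape)
    return np.clip(individual + mutate * noise, 0.0, 1.0)  # valid pixel range
```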
Step 6. Termination Judgment. The algorithm terminates if the exit condition is satisfied; otherwise, it returns to Step 2.
There are two major technical improvements made in our approach.
Improvement of results. In order to improve the optimization effect of BANA, we adopt a new initialization technique. Since the problem to be solved requires the highest possible degree of similarity, this technique does not use random numbers but instead uses numerical values related to the original pixel values. Let x be the original image and x' be the initialized adversarial image; then x' = x + δ, where δ is a very small value.
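This initialization technique can be sketched as follows (pop_size and delta are illustrative values):

```python
import numpy as np

def init_population(x_orig, pop_size=50, delta=0.01, rng=None):
    """Initialization near the original image: each individual is
    x' = x + a small random perturbation bounded by delta, rather than a
    random point, so the search starts in a region of high similarity."""
    rng = rng or np.random.default_rng()
    perturb = rng.uniform(-delta, delta, (pop_size,) + x_orig.shape)
    return np.clip(x_orig + perturb, 0.0, 1.0)
```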
Improvement of speed. In order to speed up the convergence of BANA, on one hand, we constrain the variation step of each iteration in the mutation stage; on the other hand, we tend to keep unchanged the points with a pixel value of 0, because such points are likely to lie in the background of the picture. These improvements help the algorithm converge faster to the optimal solution.
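These two speed heuristics can be sketched as follows (max_step is an assumed bound; the paper does not specify its value):

```python
import numpy as np

def constrained_step(individual, noise, max_step=0.1):
    """Apply a mutation step under the two speed heuristics: clip each
    iteration's variation step, and leave zero-valued (likely background)
    pixels untouched."""
    step = np.clip(noise, -max_step, max_step)          # bound the variation
    step = np.where(individual == 0.0, 0.0, step)       # preserve background
    return np.clip(individual + step, 0.0, 1.0)
```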
and ImageNet Russakovsky et al. (2015). 80% of the data are used as the training set and the remaining 20% as the test set. In order to assess the effectiveness of the adversarial samples, we attack a number of different classifier models: logistic regression (LR), a fully connected deep neural network (DNN), and a convolutional neural network (CNN). We evaluate our BANA approach by generating adversarial samples from the MNIST and CIFAR-10 test sets.
The parameters used by BANA are shown in Table 1. The experimental results show that different parameters affect the convergence rate of BANA; however, as the iterations increase, the results eventually become close. The parameters listed in Table 1 are our empirical values.
| Database | Population | Genes Number | Cross Probability | Mutation Probability | Iterations | Gaussian Mean |
4.1 Adversarial Sample Generation on MNIST
In the first experiment, the used dataset is MNIST and the used classification models are LR, DNN, and CNN. We train each of these models separately and then test the accuracy of each model on the test set. Logistic regression (LR), DNN, and CNN achieve accuracies of 92.46%, 98.49%, and 99.40%, respectively. In the generation of adversarial samples, we set the number of iterations of the genetic algorithm to 200, and the sample with the smallest objective-function value generated in each iteration is selected as the optimal sample. For targeted attacks, we select the first 100 samples of the test set that are initially correctly classified. Each of these samples generates adversarial samples for 9 different target labels, resulting in 100 * 9 = 900 targeted adversarial samples. For non-targeted attacks, we select the first 900 samples of the test set that are initially correctly classified. Each sample generates a corresponding adversarial sample, resulting in 900 non-targeted adversarial samples.
(* The details of the distilled model are shown in Section 4.3. There is no distilled LR model. ** This model is attacked by the approach proposed by Carlini and Wagner.)
The results are shown in Table 2 and Fig. 3. For each model, our attack finds adversarial samples with a distance of less than 10 and succeeds with 100% probability. Compared with the results generated by Carlini and Wagner's attack Carlini & Wagner (2017), our perturbations are slightly larger. However, both attacks succeed with 100% probability, and BANA is a black-box attack. Besides, there is no visual difference between the adversarial samples and the original images. Fig. 3(a) and Fig. 3(b) show that as the model becomes more complex, the number of iterations required to produce an effective adversarial sample increases. The distribution of the 900 best fitness values after 200 iterations is shown in Fig. 3(c). The figure indicates that the more complex the model, the larger the mean and standard deviation; the reason is that simple classification models do not have good decision boundaries. For the same classification model, non-targeted attacks require fewer iterations than targeted attacks, resulting in lower distortion and better stability. This result indicates that targeted adversarial samples come at a higher cost for the attacker. However, as the number of iterations increases, all the best fitness values tend to 0, so the difficulty of the targeted attack can be overcome by increasing the number of iterations. Overall, BANA is able to generate effective adversarial samples for LR, DNN, and CNN on MNIST.
By comparing the trends of the success rate and the best fitness values for targeted and non-targeted attacks, we can see that the robustness of a classification model against adversarial samples is related to the complexity of the model: the more complex the model, the better the robustness of the corresponding classification model.
4.2 Adversarial Sample Generation on CIFAR-10 and ImageNet
In the second experiment, the used dataset is CIFAR-10. Our purpose is to find out whether BANA is able to generate effective adversarial samples on CIFAR-10. Considering the conclusion in Section 4.1, we choose CNN as the classification model to be attacked; our CNN achieves an accuracy of 77.82% on CIFAR-10. After generating the adversarial samples with BANA, we obtain the results shown in Fig. 4 and Table 3. Our attack finds adversarial samples with a distance of less than 2 and succeeds with 100% probability, and we can draw the same conclusions as in Section 4.1.
Fig. 6 shows a case study of BANA on ImageNet. As shown in Fig. 6, there is no visual difference between the original images and the perturbed images, indicating that our attack is able to generate adversarial samples with small, visually invisible perturbations even on complex datasets.
| Non-targeted Attack | Targeted Attack |
(* The details of the distilled model are shown in Section 4.3. ** This model is attacked by the approach proposed by Carlini and Wagner.)
More importantly, by comparing the experimental results for CNN on MNIST and CIFAR, it can be seen that the average best fitness value and the standard deviation on CIFAR are smaller than those on MNIST, indicating that the adversarial samples generated on the CIFAR dataset are more likely to be misleading and more similar to the original data. We find that the robustness of a classification model against adversarial samples is related not only to the complexity of the model but also to the training dataset; however, it is not the case that the more complex the dataset, the more robust the resulting classification model.
4.3 Defensive Distillation
We train the distilled DNN and CNN using defensive distillation at a fixed temperature. The experimental results are shown in Tables 2 and 3. We observe that the average fitness value and standard deviation of the undistilled models are smaller than those of the distilled models for both targeted and non-targeted attacks. However, the attack success rate of the adversarial samples produced by BANA on the distilled models is still 100% or close to 100%: our attack is able to break defensive distillation. The reason may be related to the randomness of the swarm evolutionary algorithm. Even with the same model and data, BANA produces a different adversarial sample each time, making it effective against defensive distillation.
4.4 Sample Analysis
The perturbations of targeted attacks for an MNIST digit are shown in Fig. 7. The first column contains the adversarial samples, the second column shows the original samples, and the last column shows the perturbations of the targeted samples. The first row is an example of a targeted attack turning digit 1 into digit 2. The figure shows that the disturbance in the negative direction is more obvious at the features of digit 1, while the disturbance in the positive direction is obvious at the features of digit 2, with the disturbed area approximating the contour of digit 2. The negative-direction perturbation reduces the probability of the sample being predicted as the real label, and the positive-direction perturbation increases the probability of it being predicted as the target label. This result indicates that the adversarial samples to some extent reproduce the characteristics of the sample data learned by the neural network model.
In this paper, we have presented a new approach that performs a black-box attack on neural networks based on the swarm evolutionary algorithm. Our experimental results show that our approach generates high-quality adversarial samples for LR, DNN, and CNN, and that it is resistant to defensive distillation. Finally, our results indicate that the robustness of an artificial intelligence algorithm is related to the complexity of the model and the complexity of the dataset. Our future work includes designing an effective defense against our proposed attack.
- Barreno et al.  Barreno, Marco, Nelson, Blaine, Joseph, Anthony D, & Tygar, J Doug. 2010. The security of machine learning. Machine Learning, 81(2), 121–148.
- Carlini & Wagner  Carlini, Nicholas, & Wagner, David. 2017. Towards evaluating the robustness of neural networks. Pages 39–57 of: 2017 IEEE Symposium on Security and Privacy (SP).
- Coello et al.  Coello Coello, Carlos A., Lamont, Gary B., Van Veldhuizen, David A., et al. 2007. Evolutionary algorithms for solving multi-objective problems. Vol. 5.
- Demontis et al.  Demontis, Ambra, Melis, Marco, Biggio, Battista, Maiorca, Davide, Arp, Daniel, Rieck, Konrad, Corona, Igino, Giacinto, Giorgio, & Roli, Fabio. 2017. Yes, machine learning can be more secure! a case study on android malware detection. IEEE Transactions on Dependable and Secure Computing.
- Feinman et al.  Feinman, Reuben, Curtin, Ryan R, Shintre, Saurabh, & Gardner, Andrew B. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
- Goodfellow et al. [2014a] Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, & Bengio, Yoshua. 2014a. Generative adversarial nets. Pages 2672–2680 of: Advances in neural information processing systems.
- Goodfellow et al. [2014b] Goodfellow, Ian J, Shlens, Jonathon, & Szegedy, Christian. 2014b. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Grosse et al.  Grosse, Kathrin, Papernot, Nicolas, Manoharan, Praveen, Backes, Michael, & McDaniel, Patrick. 2016. Adversarial perturbations against deep neural networks for malware classification. arXiv preprint arXiv:1606.04435.
- He et al.  He, Warren, Wei, James, Chen, Xinyun, Carlini, Nicholas, & Song, Dawn. 2017. Adversarial example defenses: Ensembles of weak defenses are not strong. arXiv preprint arXiv:1706.04701.
- Hendrycks & Gimpel  Hendrycks, Dan, & Gimpel, Kevin. 2016. Early methods for detecting adversarial images. arXiv preprint arXiv:1608.00530.
- Hu & Tan  Hu, Weiwei, & Tan, Ying. 2017. Generating adversarial malware examples for black-box attacks based on GAN. arXiv preprint arXiv:1702.05983.
- Krizhevsky & Hinton  Krizhevsky, Alex, & Hinton, Geoffrey. 2009. Learning multiple layers of features from tiny images. Tech. rept.
- Kurakin et al.  Kurakin, Alexey, Goodfellow, Ian, & Bengio, Samy. 2016. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- LeCun et al.  LeCun, Yann, Bottou, Léon, Bengio, Yoshua, & Haffner, Patrick. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.
- Li et al.  Li, Shijin, Wu, Hao, Wan, Dingsheng, & Zhu, Jiali. 2011. Knowledge-Based Systems, 24(1), 40–48.
- Li et al.  Li, Tianrui, Ruan, Da, Geert, Wets, Song, Jing, & Xu, Yang. 2007. A rough sets based characteristic relation approach for dynamic attribute generalization in data mining. Knowledge-Based Systems, 20(5), 485–494.
- Liu et al. [2018a] Liu, Wei, Luo, Zhiming, & Li, Shaozi. 2018a. Improving deep ensemble vehicle classification by using selected adversarial samples. Knowledge-Based Systems.
- Liu et al. [2018b] Liu, Xiaolei, Zhang, Xiaosong, Guizani, Nadra, Lu, Jiazhong, Zhu, Qingxin, & Du, Xiaojiang. 2018b. TLTD: a testing framework for learning-based IoT traffic detection systems. Sensors, 18(8), 2630.
- Metzen et al.  Metzen, Jan Hendrik, Genewein, Tim, Fischer, Volker, & Bischoff, Bastian. 2017. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267.
- Moosavi-Dezfooli et al.  Moosavi-Dezfooli, Seyed-Mohsen, Fawzi, Alhussein, & Frossard, Pascal. 2016. Deepfool: a simple and accurate method to fool deep neural networks. Pages 2574–2582 of: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Moosavi-Dezfooli et al. [2017a] Moosavi-Dezfooli, Seyed-Mohsen, Fawzi, Alhussein, Fawzi, Omar, Frossard, Pascal, & Soatto, Stefano. 2017a. Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554.
- Moosavi-Dezfooli et al. [2017b] Moosavi-Dezfooli, Seyed-Mohsen, Fawzi, Alhussein, Fawzi, Omar, & Frossard, Pascal. 2017b. Universal adversarial perturbations. arXiv preprint.
- Narodytska & Kasiviswanathan  Narodytska, Nina, & Kasiviswanathan, Shiva Prasad. 2017. Simple Black-Box Adversarial Attacks on Deep Neural Networks. Pages 1310–1318 of: CVPR Workshops.
- Papernot et al. [2016a] Papernot, Nicolas, McDaniel, Patrick, Wu, Xi, Jha, Somesh, & Swami, Ananthram. 2016a. Distillation as a defense to adversarial perturbations against deep neural networks. Pages 582–597 of: 2016 IEEE Symposium on Security and Privacy (SP).
- Papernot et al. [2016b] Papernot, Nicolas, McDaniel, Patrick, Jha, Somesh, Fredrikson, Matt, Celik, Z Berkay, & Swami, Ananthram. 2016b. The limitations of deep learning in adversarial settings. Pages 372–387 of: Security and Privacy (EuroS&P), 2016 IEEE European Symposium on.
- Papernot et al.  Papernot, Nicolas, McDaniel, Patrick, Goodfellow, Ian, Jha, Somesh, Celik, Z Berkay, & Swami, Ananthram. 2017. Practical black-box attacks against machine learning. Pages 506–519 of: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security.
- Russakovsky et al.  Russakovsky, Olga, Deng, Jia, Su, Hao, Krause, Jonathan, Satheesh, Sanjeev, Ma, Sean, Huang, Zhiheng, Karpathy, Andrej, Khosla, Aditya, Bernstein, Michael, et al. . 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211–252.
- Su et al.  Su, Jiawei, Vargas, Danilo Vasconcellos, & Kouichi, Sakurai. 2017. One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864.
- Szegedy et al.  Szegedy, Christian, Zaremba, Wojciech, Sutskever, Ilya, Bruna, Joan, Erhan, Dumitru, Goodfellow, Ian, & Fergus, Rob. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- Yang et al.  Yang, Wei, Kong, Deguang, Xie, Tao, & Gunter, Carl A. 2017. Malware detection in adversarial settings: Exploiting feature evolutions and confusions in android apps. Pages 288–302 of: Proceedings of the 33rd Annual Computer Security Applications Conference.
- Yao et al.  Yao, Yuan, Li, Xutao, Ye, Yunming, Liu, Feng, Ng, Michael K, Huang, Zhichao, & Zhang, Yu. 2018. Low-resolution image categorization via heterogeneous domain adaptation. Knowledge-Based Systems.
- Zhao et al.  Zhao, Zhong, Feng, Guocan, Zhang, Lifang, Zhu, Jiehua, & Shen, Qi. 2018. Novel orthogonal based collaborative dictionary learning for efficient face recognition. Knowledge-Based Systems.