Deep neural networks (DNNs) have achieved remarkable success in many artificial intelligence tasks such as image classification [16, 33, 12] and object detection [8, 29, 20, 35, 7]. However, recent studies [32, 23, 21] showed that DNNs are vulnerable to adversarial samples. By adding a small perturbation to an image, an attacker can craft an adversarial sample that is visually indistinguishable from the original image yet easily misleads a modern classification model into a completely incorrect category, posing a severe security threat for real application systems such as autonomous driving and face verification. Adversarial samples also possess the characteristic of transferability: an adversarial sample generated to attack one model can also mislead another model. Much attention has therefore been devoted to this phenomenon, both to better understand the weaknesses of DNNs and to strengthen the robustness of deep networks.
The pioneering work used an optimization-based method to generate adversarial samples. Many different types of methods [23, 21, 26, 31] were subsequently proposed to improve attack performance and speed. For example, Goodfellow et al. proposed the first gradient-based method, which attacks DNNs effectively by using the back-propagated gradient to update the input image with small perturbations iteratively. Moosavi-Dezfooli et al. designed an iterative linearization of the classifier to generate minimal perturbations that are sufficient to change the classification label. Although these optimization-based and gradient-based methods produce very good attack results, they often rely on a time-consuming iterative procedure, which imposes a great computational burden on real attack systems.
Recently, generation-based methods [1, 28, 36] have received much attention in the literature. They directly train a generation model to learn how to transform input images into adversarial samples. These methods can be viewed as accelerated versions of the above optimization-based and gradient-based methods. By training on a large set of images with a predefined attack objective, these models do not need to access the target model again and can generate adversarial samples with just a fast forward pass at runtime. However, one critical problem of current generation-based methods is that they can only realize single-target attack: adversarial samples generated by one model can only mislead the attacked model toward one specific category predefined during training. To mislead the attacked model toward another category, a new model must be trained, which is both time and storage consuming. Such methods are therefore not feasible for real classification systems, which contain hundreds or thousands of categories.
Motivated by these shortcomings, we propose a novel framework called Multi-target Adversarial Network (MAN), which generates multi-target attack samples by training the adversarial model only once. As illustrated in Figure 1, MAN does not require a specific attack target during training and can produce targeted adversarial samples for all categories in a dataset. Compared with existing single-target attack methods, MAN trains once for all categories rather than training a different model for each category.
As shown in Figure 2 (a), MAN adopts an encoder-decoder network to embed the target information from incorrect categories and the appearance information from input images to generate adversarial samples. It accomplishes target-independent adversarial training with a simple framework and intuitive loss functions. Moreover, through extensive experiments, we find that adversarial samples generated by such a model have strong transferability. Figure 3 illustrates the adversarial samples generated by MAN. When we adopt different target information, the attack samples effectively inherit the appearance of the original image while fooling the pretrained model with high fooling rates.
Our contributions are summarized as follows.
To the best of our knowledge, this is the first work to present the task of multi-target attack, which generates targeted adversarial samples for all categories in a dataset using a single model. We accomplish this with a novel Multi-target Adversarial Network (MAN), which produces multi-target adversarial samples with a single trained model and significantly reduces training cost and model storage.
In the single-target scenario, the proposed method achieves competitive or better attack performance and generalization ability compared to state-of-the-art methods [1, 28] on various popular deep architectures [30, 12]. For example, MAN achieves 98.55% attack accuracy on the CIFAR10 dataset against VGG16 and 88.95% when transferring to ResNet32, outperforming GAP by 4.39% and 12.49%, respectively. In the multi-target attack scenario, our model maintains these advantages while training the multi-target adversarial network only once.
The attack samples generated by MAN effectively improve the robustness and adversarial defense ability of attacked models. Taking a model pretrained on CIFAR10 as an example, when the strongest attack samples generated by MAN are used to finetune ResNet32, classification accuracy improves from 10.58% to 81.14%, outperforming other counterparts by 17.74%.
2 Related Work
2.1 Adversarial Attack Methods
Current methods for generating adversarial samples can be divided into three categories: optimization-based [32, 2], gradient-based [9, 17, 5], and generation-based [1, 28]. Optimization-based methods model the generation of adversarial samples as an optimization problem and use optimizers such as box-constrained L-BFGS or Adam to solve it; they are powerful but quite slow. Goodfellow et al. proposed the first gradient-based method, the Fast Gradient Sign Method (FGSM), and Kurakin et al. applied small step sizes iteratively to obtain better attack performance (I-FGSM). The speed of gradient-based methods is linearly related to the number of parameters of the target model and the number of iterations, and is often faster than optimization-based methods. However, due to their inherent model-specific mechanism, earlier gradient-based methods have relatively weak transferability. Dong et al. further introduced momentum into the iterative steps (MI-FGSM) to obtain better transferability.
Different from the iterative procedure used in the above two types of methods, generation-based methods directly use generation models to transform input images into adversarial samples. For example, an auto-encoder-like network was adopted in one approach, and U-Net and ResNet were leveraged in another. Although these methods need extra time to train, they generate adversarial samples at a constant speed and do not need to access the target model again at inference time. The biggest limitation of all current generation-based methods is that they can only attack one specific category with one model, which is impractical for real attack systems. By contrast, our proposed multi-target adversarial network can attack any category of a classification model with a similar attack success rate, and also brings better transferability for the adversarial attack task as well as model robustness for the adversarial training task.
2.2 Adversarial Defence Methods
Many methods have also been proposed to defend against adversarial attacks. Adversarial training, used in [10, 18, 34], trains a robust network by adding adversarial samples to the training set and jointly training with the original samples. In [11, 24, 3, 19], preprocessing procedures remove the adversarial noise of input images before feeding them into the target model. Other methods [34, 25, 27] use regularizers or smoothed labels to make the target model more robust to perturbations of the input. Among the methods above, adversarial training is the most effective, but it requires enough adversarial samples, so the speed of generating adversarial samples determines the effectiveness of adversarial training. In this paper, we focus on how to generate adversarial samples that attack all categories at high speed, which is helpful for training more robust models.
3 Multi-Target Attack
Non-target attack vs. single-target attack. Let f be a deep neural network to be attacked, trained on a dataset with K classes. For an image x with category label y, the predicted probability of the network f for the i-th category is p_i(x), where i ∈ {1, ..., K}. A non-target adversarial attack tries to generate a new image x' such that argmax_i p_i(x') ≠ y, i.e., the predicted label of x' should differ from the ground-truth label of x. By contrast, a single-target attack aims to generate an adversarial sample x' such that argmax_i p_i(x') = t, where t is a predefined class label specified at testing time. By these definitions, the non-target attack is much easier to realize than the single-target attack, since it only requires the attacked model to make different predictions on the adversarial image and the corresponding original image. Beyond the single-target attack, this work focuses on a more challenging task, multi-target attack, whose goal is to generate targeted adversarial samples for every class with a single network.
White-box attack vs. black-box attack. If we have knowledge of the attacked model f, including its architecture and parameters, we call such an attack a white-box attack. In this case, the attacker can generate adversarial samples directly by accessing the attacked model. On the contrary, in the black-box scenario, the attacker knows nothing about the target model, so the attack model needs stronger transferability, meaning that adversarial samples generated to attack one model can also mislead another model.
Problem definition. To generate a targeted adversarial sample x', a transformation function G_θ with parameters θ is defined to map the input image to a target domain, x' = G_θ(x, t). In practice, we approximate this function by a deep neural network, where θ represents the network parameters. The purpose of multi-target attack is to train a network G such that argmax_i p_i(G_θ(x, t)) = t for any t ∈ {1, ..., K}. Unlike a traditional single-target attack, which trains a specific network for a predefined label t, a multi-target attack trains only a single network for all target labels.
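As a concrete illustration of these success criteria, the following sketch checks the non-target and targeted conditions on hypothetical probability vectors; all names and numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical predicted probability vectors from an attacked classifier
# with K = 4 classes; values are illustrative only.
p_adv = np.array([0.05, 0.8, 0.1, 0.05])  # probabilities for x' = G(x, t)

y = 0  # ground-truth label of the original image x
t = 1  # attack target label

# A non-target attack succeeds if the prediction merely differs from y.
non_target_success = np.argmax(p_adv) != y

# A single- or multi-target attack succeeds only if the prediction equals t.
target_success = np.argmax(p_adv) == t

print(non_target_success, target_success)  # True True
```

Note that target success implies non-target success whenever t ≠ y, which is why the targeted task is strictly harder.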
3.1 Multi-Target Adversarial Network (MAN)
Architecture overview. The overall architecture of MAN is shown in Figure 2 (a). It takes an image x and a target label t as input and generates an adversarial sample corresponding to the desired target. Compared with traditional single-target adversarial networks, the target label t is regarded in MAN as a discrete variable rather than a constant, ranging from 1 to K. Accordingly, the encoding network of MAN includes two branches: one extracts appearance features from the input image, as shown in Figure 2 (d), and the other encodes the target label information. A feature integration module is introduced to integrate the feature representations from these two modalities. As shown in Figure 2 (e), a decoder network with six residual blocks and two deconvolutional layers is adopted to generate the final adversarial samples.
Two intuitive losses are employed to optimize the adversarial network: a reconstruction loss that preserves the appearance similarity between the original image and the adversarial sample, and a classification loss that embeds the target label information into the attack sample. To calculate the classification loss, the generated attack sample is fed into the attacked model, i.e., the pretrained classification network, to predict the label. Although the parameters of the attacked model are fixed during MAN training, gradients can still be passed through it to guide the generative subnetwork.
Variants of MAN. Next, we describe two feature integration schemes for MAN. The first, illustrated in Figure 2 (b), is a straightforward scheme that integrates the image feature and label feature by concatenating along the channel dimension. Given an input image x, the image encoder first computes the image feature map F_x ∈ R^{C×H×W}, where C, H and W denote the number of channels, height and width of the feature map, respectively. The target label t is represented as a one-hot vector, which is expanded along the height and width directions to obtain the label feature map F_t ∈ R^{K×H×W}. The two sets of feature maps are then concatenated along the channels into a mixed map with C + K channels, and an additional convolutional layer reshapes the mixed feature back to C channels. Finally, the mixed feature is fed into the decoder to generate the adversarial sample x'. We denote this model the multi-target adversarial concatenate network (MANc).
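A minimal sketch of this concatenation scheme, with the mixing convolution written as a per-pixel 1×1 linear map over channels; the shapes and weights below are illustrative, not the paper's actual implementation.

```python
import numpy as np

def manc_integrate(feat_img, target, num_classes, w_mix):
    """Concatenate image features with a spatially tiled one-hot label map,
    then mix back to C channels with a 1x1 convolution (here written as a
    matmul over channels). feat_img has shape (C, H, W)."""
    C, H, W = feat_img.shape
    one_hot = np.zeros(num_classes)
    one_hot[target] = 1.0
    # Expand the one-hot vector along height and width: shape (K, H, W)
    feat_lbl = np.broadcast_to(one_hot[:, None, None], (num_classes, H, W))
    mixed = np.concatenate([feat_img, feat_lbl], axis=0)  # (C + K, H, W)
    # 1x1 conv == per-pixel linear map over channels: w_mix is (C, C + K)
    return np.einsum('oc,chw->ohw', w_mix, mixed)

C, K, H, W = 8, 10, 4, 4
feat = np.random.randn(C, H, W)
w = np.random.randn(C, C + K) * 0.1
out = manc_integrate(feat, target=3, num_classes=K, w_mix=w)
print(out.shape)  # (8, 4, 4)
```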
For the second scheme, we claim that different channels of a feature representation can capture diverse features for different classes. Thus a channel-wise product, i.e., a recalibration operation, is used to integrate the label feature and the image feature. In this case, the label encoder is a two-layer multi-layer perceptron (MLP) whose output is activated by a sigmoid function. We denote the output of the MLP as v = σ(W_2 δ(W_1 t)), where δ is the ReLU activation function, W_1 and W_2 are fully connected layers, d is the number of intermediate units, and the sigmoid σ limits every element of v to (0, 1). The final integrated feature representation is F = v ⊙ F_x, where ⊙ indicates the channel-wise product on the tensor: F^c = v_c · F_x^c, in which F_x^c is the c-th channel of the image feature F_x. We refer to this structure as the multi-target adversarial recalibrating network (MANr).
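The recalibration scheme can be sketched as follows, assuming a two-layer MLP with ReLU then sigmoid as described; the weight shapes here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def manr_integrate(feat_img, target, num_classes, w1, w2):
    """Channel-wise recalibration: a two-layer MLP (ReLU then sigmoid) maps
    the one-hot target label to a per-channel gate v in (0, 1), which scales
    each channel of the image feature map."""
    one_hot = np.zeros(num_classes)
    one_hot[target] = 1.0
    hidden = np.maximum(w1 @ one_hot, 0.0)   # ReLU over d intermediate units
    v = sigmoid(w2 @ hidden)                 # shape (C,), every element in (0, 1)
    return v[:, None, None] * feat_img       # channel-wise product

C, K, d, H, W = 8, 10, 16, 4, 4
feat = np.random.randn(C, H, W)
w1 = np.random.randn(d, K) * 0.1
w2 = np.random.randn(C, d) * 0.1
out = manr_integrate(feat, target=2, num_classes=K, w1=w1, w2=w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate lies in (0, 1), recalibration can only attenuate channels, never amplify them, which matches the intuition of selecting class-relevant channels.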
To optimize the network parameters, we define the objective function of MAN for target label t as
L(x, t) = L_cls(p(x'), t) + α · L_rec(x', x), with x' = G_θ(x, t),
where p(x') is the predicted probability of the target model on the adversarial sample, t is the discrete label variable, the weight factor α decides the relative importance of the two loss terms, and L_cls is the classification loss, which encourages the attacked model to predict the target label. The concrete form of the classification loss has a large impact on the final result. Previous studies [1, 28, 2] tried different classification losses; we found that the standard cross-entropy loss works well in all of our experiments.
The reconstruction loss L_rec measures the appearance difference between the adversarial sample and the original image. It is usually measured by a vector norm ℓ_p with p ∈ {1, 2, ∞}; in this paper we adopt the Euclidean distance (ℓ2 norm) as our measurement. To ensure the difference between the adversarial sample and the original image stays small, the perturbation is restricted to a range: given a fixed distance upper bound ε, we enforce the constraint ||x' − x||_2 ≤ ε. In practice, we use α to balance the two loss terms: a larger α encourages better reconstruction quality, while a smaller one yields a higher attack success rate.
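A sketch of how the two loss terms and the ε-ball constraint might be combined, assuming cross-entropy for the classification loss as the text suggests; the function names and values are ours, not the paper's.

```python
import numpy as np

def man_loss(p_adv, target, x_adv, x_orig, alpha):
    """Sketch of the combined objective: cross-entropy toward the target
    label plus an alpha-weighted L2 reconstruction term."""
    l_cls = -np.log(p_adv[target] + 1e-12)
    l_rec = np.linalg.norm((x_adv - x_orig).ravel(), ord=2)
    return l_cls + alpha * l_rec

def project_perturbation(x_adv, x_orig, eps):
    """Enforce ||x' - x||_2 <= eps by rescaling the perturbation if needed."""
    delta = x_adv - x_orig
    norm = np.linalg.norm(delta.ravel(), ord=2)
    if norm > eps:
        delta = delta * (eps / norm)
    return x_orig + delta

x = np.zeros((3, 4, 4))
x_adv = x + 0.5                      # perturbation with L2 norm 0.5 * sqrt(48)
x_proj = project_perturbation(x_adv, x, eps=1.0)
print(round(float(np.linalg.norm((x_proj - x).ravel())), 6))  # 1.0
```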
4.1 Adversarial Attack
In this section, we evaluate the attack effectiveness of MAN on the ImageNet and CIFAR10 datasets.
General setup. On the ImageNet dataset, we conduct experiments using the popular network architectures VGG16 and ResNet152 (Res152) as attacked models, and use VGG19 and Res101 as "black-box" models. For a fair comparison, in the testing stage we scale the ℓ2 norm of all perturbations to a certain threshold ε: x̃ = x + ε · (x' − x) / ||x' − x||_2, where x̃ is the scaled adversarial sample for testing. The threshold of the ℓ2 norm in our experiments grows with the dimension of the input images; further exploration of this threshold is given in the ablation study. We use the Adam optimizer to train all adversarial networks, with a batch size of 32. For CIFAR10, the attacked networks are VGG16 and ResNet32 (Res32), and the "black-box" models are VGG19 and Res14. We set the batch size to 128; the other settings are the same as for ImageNet.
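The test-time rescaling can be sketched as follows, assuming the common form x̃ = x + ε·(x′ − x)/‖x′ − x‖₂ implied by "scale the norm of all perturbations to a certain threshold"; shapes and values are illustrative.

```python
import numpy as np

def scale_perturbation(x_orig, x_adv, eps):
    """Rescale the perturbation so its L2 norm equals the threshold eps,
    giving a fair comparison between attack methods at test time."""
    delta = x_adv - x_orig
    norm = np.linalg.norm(delta.ravel(), ord=2)
    return x_orig + eps * delta / (norm + 1e-12)

x = np.random.rand(3, 32, 32)
x_adv = x + np.random.randn(3, 32, 32) * 0.1
x_scaled = scale_perturbation(x, x_adv, eps=2.0)
print(round(float(np.linalg.norm((x_scaled - x).ravel())), 4))  # 2.0
```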
Single-target attack. In this part, we evaluate performance on the single-target attack task. We fix the input label during single-target training to obtain a degraded version of our model, and compare the proposed MANc and MANr with two other state-of-the-art methods, ATN and GAP. For all methods, we train adversarial models for 120K iterations on ImageNet, with an initial learning rate of 0.002 decreased by 10× after 80K iterations. On the CIFAR10 training set, we train for 20K iterations with an initial learning rate of 0.001 decreased by 10× after 16K iterations. The hyperparameter α is set to 100 (1 for ATN) on ImageNet and 800 (80 for ATN) on CIFAR10. During testing, adversarial samples are generated from the 50K images of the ImageNet validation set and 5K images of the CIFAR10 validation set. We use the success rate as the evaluation metric, defined as the ratio of adversarial samples that the attacked model classifies as the target class.
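The success-rate metric is straightforward to compute; the sketch below uses invented predictions purely for illustration.

```python
import numpy as np

def attack_success_rate(pred_labels, target_labels):
    """Fraction of adversarial samples that the attacked model classifies
    as the intended target class."""
    pred_labels = np.asarray(pred_labels)
    target_labels = np.asarray(target_labels)
    return float(np.mean(pred_labels == target_labels))

# Illustrative predictions vs. assigned targets (3 of 5 hit the target):
preds   = [3, 1, 7, 7, 2]
targets = [3, 1, 7, 5, 0]
print(attack_success_rate(preds, targets))  # 0.6
```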
Table 1 and Table 2 report the attack success rates on the two datasets. We first evaluate white-box attack performance. From the results, we find that ATN performs quite weakly in all cases. Compared with GAP, both MANc and MANr achieve better results in most cases; for example, MANr achieves 98.55% attack accuracy against VGG16 on CIFAR10, outperforming GAP by 4.39%.
For the black-box attack, ATN shows very weak transferability. When transferring to a model with a similar architecture, from VGG16 to VGG19, our MANc gets an 85% success rate on ImageNet and nearly 90% on CIFAR10, outperforming GAP by over 10%.
When transferring to a model with a totally different architecture, from VGG to ResNet, the results on CIFAR10 decrease a bit but remain higher than 88% in most cases. On ImageNet, however, the success rates of all methods are lower than 25%. We attribute this to the large number of categories in ImageNet, which makes it hard to transfer to another model with a specific attack target.
According to the above results, our method performs much better in most cases. We believe this is because our method splits the task into two branches: one branch guides the generation model to produce adversarial samples similar to the input, as previous methods do, while the other encodes the target label and provides extra guidance that helps generate more powerful adversarial samples.
Multi-target attack. In the training phase for multi-target attack, a random target label is assigned to each training image. We train models for 280K iterations on the ImageNet training set, with an initial learning rate of 0.002 decreased by 10× after 240K iterations. On the CIFAR10 training set, we train for 80K iterations with an initial learning rate of 0.001 decreased by 10× after 64K iterations. In the evaluation phase, we randomly pick 5000 samples from the ImageNet validation set and randomly assign ten labels to each sample. For CIFAR10, the validation set contains 5000 samples and we assign all ten labels to each sample. In total, 50K adversarial samples are generated to attack the pretrained models on each dataset.
Table 3 and Table 4 report the attack success rates of the MAN variants on ImageNet and CIFAR10, respectively. In the white-box case, our models maintain strong performance, with attack accuracy over 98%. When evaluating black-box performance on both datasets, our models also retain high transferability. Since the evaluation metrics in Tables 1-2 and Tables 3-4 differ, the results cannot be compared directly; we make a further comparison under the same evaluation metric in the ablation study.
These results show the feasibility and effectiveness of our method for the multi-target attack task. Even on a large dataset with numerous categories, our method can still attack an arbitrary category with a single model.
There is also an interesting phenomenon: MANr performs better on CIFAR10 while MANc has better results on ImageNet. MANr recalibrates each channel of the feature map, which works well when the feature map is small; when the feature map is large, recalibration becomes harder. MANc concatenates feature maps to obtain a mixed one, which is not influenced by the feature map size and thus performs evenly on both datasets. Figure 3 shows original samples and adversarial samples generated with different target labels.
Parameter comparison. Table 5 lists the number of parameters of each model, the number of models needed, and the total training iterations needed to attack all categories in ImageNet. Each previous model has about 8.3% fewer parameters than MANr, but 1K such models, with 7840M parameters and 120M iterations in total, are required to attack all categories in ImageNet. By contrast, our method uses a single model with far fewer total parameters and 0.3M iterations to achieve the same goal, and is therefore significantly more time and storage efficient.
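A quick sanity check of the totals discussed here; the per-model parameter count (7.84M) is inferred from the stated total of 7840M over 1K models and is an assumption, not a figure from Table 5.

```python
# 1K single-target models, each trained for 120K iterations; the text states
# totals of 7840M parameters and 120M iterations across all of them.
num_models = 1000
params_per_model_m = 7.84          # assumed: 7840M total / 1000 models
iters_per_model = 120_000

total_params_m = num_models * params_per_model_m
total_iters_m = num_models * iters_per_model / 1_000_000
print(total_params_m, total_iters_m)
```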
4.2 Adversarial Training
Adversarial training is one of the most effective methods to improve the robustness of models against adversarial attack. It finetunes the network on adversarial samples paired with the ground-truth labels of the images used to generate them. In this part, we evaluate how much the classification model improves against adversarial attack when adversarial samples are used for finetuning.
Setups. All of the following experiments are conducted on the CIFAR10 dataset. To improve the robustness of the classification model, we use adversarial samples and their ground-truth labels to finetune the pretrained networks. In the rest of the paper, we use "adv-model" to denote a model finetuned on adversarial samples. The attacked model architecture is ResNet32, and the adversarial samples are generated by MANc, MANr and GAP, respectively.
For MANc and MANr, we generate adversarial samples with random target labels and finetune the attacked model with them. We also train ten GAP models, one per attack target, each a single-target attack model, to produce adversarial samples. We finetune the attacked model with adversarial samples generated by one, five and ten GAP models to explore the influence of the number of classes used during adversarial training; the corresponding settings are denoted GAP1, GAP5, and GAP10, respectively. All adv-models are finetuned for 40K iterations with batch size 128 (64 × 2: each image is coupled with its adversarial sample). The learning rate is set to 0.001. To further evaluate the robustness of these adv-models, we use MI-FGSM and ATN as attack methods. In practice, we use MI-FGSM to generate adversarial samples with random target labels, following the optimal settings of the original work; for ATN, we follow the generation procedure stated in Sec. 2. Note that all attack samples for testing are generated using the pretrained ResNet32 architecture on the CIFAR10 test set.
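The batch construction for adversarial finetuning, in which each clean image is coupled with its adversarial counterpart under the ground-truth label, can be sketched as follows; attack_fn is a stand-in for a trained generator and is our own placeholder name.

```python
import numpy as np

def make_adv_training_batch(images, labels, attack_fn):
    """Couple each clean image with its adversarial counterpart, keeping the
    ground-truth label for both, so a clean half-batch of 64 becomes a full
    finetuning batch of 128."""
    adv = np.stack([attack_fn(img) for img in images])
    batch_x = np.concatenate([images, adv], axis=0)
    batch_y = np.concatenate([labels, labels], axis=0)
    return batch_x, batch_y

imgs = np.random.rand(64, 3, 32, 32)
lbls = np.random.randint(0, 10, size=64)
# A trivial stand-in "attack" that just perturbs the image slightly:
bx, by = make_adv_training_batch(imgs, lbls, lambda im: im + 0.01)
print(bx.shape[0], by.shape[0])  # 128 128
```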
We consider the robustness of a classification model in two aspects. On the one hand, we define attack success robustness by the attack success rate of adversarial samples against the adv-models; a lower success rate indicates a more robust model. On the other hand, classification success robustness evaluates whether the adv-models can classify adversarial samples into their ground-truth classes. In this subsection, we evaluate several attack strengths (thresholds), two of which are larger than the one used in the last part, because the adv-models have already been finetuned on adversarial samples; a larger threshold can also reflect the robustness of adv-models against stronger attacks.
Attack success robustness. We show the attack success rates in Table 6. According to the results, the attack success rate grows with the attack strength. When the attack strength is small, it is quite hard to fool any of the adv-models with adversarial samples, although GAP1 is slightly worse than the others. When the strength becomes larger, GAP1 and GAP5 tend to be vulnerable, with attack success rates increasing by around 5%, whereas GAP10 and our models remain robust, with success rates increasing by only around 2%. At the largest attack strength, GAP1 and GAP5 allow over 20% of adversarial samples to succeed. The other models also become more vulnerable in this case, but our MANs still slightly outperform GAP10: MANr defends against 0.56% more adversarial samples under the MI-FGSM attack.
Classification success robustness. We report the results in Table 7. At a small attack strength, all adv-models classify most adversarial samples correctly and achieve high classification accuracy (above 80%). When the attack strength becomes larger, the accuracy of GAP1 drops noticeably, while the other adv-models still maintain a promising accuracy over 80%. At the largest attack strength, only GAP10, MANc and MANr keep the accuracy around 80%, and our methods obtain better results: against the MI-FGSM attack, MANc achieves 81.14% accuracy, outperforming GAP10 by 2.06%.
In summary, comparing the adversarial training results among GAP1, GAP5 and GAP10, we find that as the number of target models increases, the adv-models gain more ability to resist adversarial attack, which demonstrates the necessity of retraining the attacked model with multiple single-target attack models. At the same time, compared with single-target methods, our models show better performance in almost all cases, demonstrating that our methods generate diverse adversarial samples that improve the retraining efficiency of adv-models. Our methods use less training time and storage to produce richer adversarial samples and make the attacked models more robust.
4.3 Ablation Study
Weight factor α. In this part, we explore the influence of the weight factor α on the attack result. As stated above, a larger α encourages better reconstruction quality while a smaller one leads to a higher attack rate. When evaluating the results, we compare the attack rate under a certain reconstruction threshold, so the influence of α becomes more complicated.
Figure 4 (a) shows the attack success rate on the CIFAR10 dataset with different weight factors against a VGG16 model. We find that a smaller α performs well when the permitted threshold is large. A larger α often performs better when the threshold is small, but sometimes it depresses performance across all cases.
Threshold ε. In this part, we explore the influence of different thresholds ε. A larger ε means a more severe change to the original image. We test the attack accuracy under a range of thresholds.
Figure 4 (b) shows the attack success rate on the CIFAR10 dataset. We find that a small ε limits the success rate; as ε increases, the success rate grows, and the middle settings give the best performance, so we use that value in our main experiments. When ε is too large, however, performance decreases a bit; we believe an overly large perturbation impedes the representation of adversarial features.
Single-target vs. multi-target transferability. In this part, we further explore the transferability of the single-target and multi-target attack models. Unlike the last section, where the multi-target model was trained for more iterations, here we use the same number of training iterations for all models. For single-target models, we fix the attack target label during training, while multi-target models accept random input labels. In the testing phase, we use the same attack target for multi-target models as for the single-target ones.
The results are listed in Table 8. Even when the two models are trained for the same number of iterations, the multi-target model shows better transferability than the single-target model in all black-box cases; it obtains an 8.98% gain when transferring from ResNet32 to VGG16. We conjecture that the competition among different target labels promotes the model to learn features with stronger generalization ability, and this result supports our conjecture.
Model Ensemble. We also try combining different attacked models during training; the last row of Table 9 shows the success rate when jointly training on VGG16 and ResNet32. Ensembling different attacked models clearly yields much better transferability than a single architecture: when transferring from ResNet32 to VGG19, the ensemble method exceeds the single architecture by over 10%. This is reasonable, since the network attempts to fit various architectures and thus generalizes well to other unseen models.
In this paper, we propose a novel adversarial model named Multi-target Adversarial Network (MAN) to deal with the multi-target attack problem. It produces multi-target adversarial samples with a single trained model and significantly reduces training cost and model storage. By embedding the target label information into the generated adversarial samples, these samples show greatly improved attack ability and transferability. MAN also produces diverse adversarial samples efficiently for adversarial training, which greatly improves model robustness. Future work may lie in adding random noise or introducing additional regularization terms to generate adversarial samples with better black-box transferability, and in developing more applications of adversarial samples, e.g., adversarial attacks under more realistic constraints.
This work is supported in part by SenseTime Group Limited, in part by the General Research Fund through the Research Grants Council of Hong Kong under Grants CUHK14202217, CUHK14203118, CUHK14205615, CUHK14207814, CUHK14213616, in part by the Natural Science Foundation of China under Grant U1636201 and 61572452, and by Anhui Initiative in Quantum Information Technologies under Grant AHY150400.
-  Shumeet Baluja and Ian Fischer. Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387, 2017.
-  Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
-  Nilaksh Das, Madhuri Shanbhogue, Shang-Tse Chen, Fred Hohman, Li Chen, Michael E. Kounavis, and Duen Horng Chau. Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression. arXiv e-prints, page arXiv:1705.02900, May 2017.
-  Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. Ieee, 2009.
-  Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Xiaolin Hu, Jianguo Li, and Jun Zhu. Boosting adversarial attacks with momentum. arXiv preprint arXiv:1710.06081, 2017.
-  Roger Fletcher. Practical methods of optimization. John Wiley & Sons, 2013.
-  Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
-  Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015.
-  Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
-  Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. arXiv e-prints, page arXiv:1412.6572, Dec 2014.
-  Shixiang Gu and Luca Rigazio. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068, 2014.
-  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
-  Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507, 7, 2017.
-  Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
-  Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
-  Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. arXiv preprint arXiv:1712.02976, 2017.
-  Tsung-Yi Lin, Piotr Dollár, Ross B Girshick, Kaiming He, Bharath Hariharan, and Serge J Belongie. Feature pyramid networks for object detection. In CVPR, volume 1, page 4, 2017.
-  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
-  Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814, 2010.
-  Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 427–436, 2015.
-  M. Osadchy, J. Hernandez-Castro, S. Gibson, O. Dunkelman, and D. Pérez-Cabo. No bot expects the deepcaptcha! introducing immutable adversarial examples, with applications to captcha generation. IEEE Transactions on Information Forensics and Security, 12(11):2640–2653, Nov 2017.
-  Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pages 506–519. ACM, 2017.
-  Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
-  Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative adversarial perturbations. arXiv preprint arXiv:1712.02328, 2017.
-  Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91–99, 2015.
-  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
-  Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019.
-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
-  Yaniv Taigman, Ming Yang, Marc’Aurelio Ranzato, and Lior Wolf. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1701–1708, 2014.
-  Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
-  Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
-  Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018.