also show that adversarial examples can be physically realized, which can lead to serious safety issues. The design of robust models, which correctly classify adversarial examples, is an active research area [7, 17, 25, 11, 34], with adversarial training being one of the most effective methods. It formulates training as a game between adversarial attacks and the model: the stronger the adversarial examples generated to train the model, the more robust the model becomes.
To generate strong adversarial examples, iterative attacks, which use multiple attack iterations per example, are widely adopted in adversarial training methods [21, 39, 4, 32]. Since adversarial perturbations are usually bounded by a constrained space $S$, and perturbations that fall outside $S$ need to be projected back into it, the $k$-step projected gradient descent method [16, 21] (PGD-$k$) has been widely adopted to generate adversarial examples. Typically, using more attack iterations (a higher value of $k$) produces stronger adversarial examples. However, each attack iteration must compute the gradient with respect to the input, which causes a large computational overhead. As shown in Table 1, the training time of adversarial training can be many times that of natural training.
Recent works [24, 20, 23] show that adversarial examples can transfer between models: adversarial examples generated for one model can remain adversarial for another model. The key insight in our work, which we verify experimentally, is that, because of high transferability between models (i.e., checkpoints) from neighboring training epochs, attack strength can be accumulated across epochs by repeatedly reusing the adversarial perturbation from the previous epoch.
We take advantage of this insight in a novel adversarial training method called ATTA (Adversarial Training with Transferable Adversarial examples) that can be significantly faster than state-of-the-art methods while achieving similar model robustness. In traditional adversarial training, when a new epoch begins, the attack algorithm generates adversarial examples from natural images, ignoring the fact that perturbations from earlier epochs can be reused effectively. In contrast, we show that it is advantageous to reuse these adversarial perturbations across epochs. Even using only one attack iteration to generate adversarial examples, ATTA-1 can still achieve robustness comparable to traditional multi-step PGD training.
We apply our technique to Madry's Adversarial Training (MAT) and TRADES and evaluate the performance on both the MNIST and CIFAR10 datasets. Compared with the traditional PGD attack, our method substantially improves training efficiency on both MNIST and CIFAR10 with comparable model robustness, and training with ATTA also improves the adversarial accuracy of MAT. Notably, for MNIST, with one attack iteration, our method achieves strong adversarial accuracy within minutes. For CIFAR10, compared to MAT, whose training time is more than one day, our method achieves comparable adversarial accuracy in a few hours.
Contribution. To the best of our knowledge, our work is the first to enhance the efficiency and effectiveness of adversarial training by taking advantage of high transferability between models from different epochs. In summary, we make the following contributions:
We are the first to reveal the high transferability between models of neighboring epochs in adversarial training. With this property, we verify that the attack strength can be accumulated across epochs by reusing adversarial perturbations from the previous epoch.
We propose a novel method (ATTA) for iterative attack based adversarial training with the objectives of both efficiency and effectiveness. It can generate equally strong (or even stronger) adversarial examples with far fewer attack iterations by accumulating adversarial perturbations across epochs.
Evaluation results show that, with comparable model robustness, ATTA is substantially faster than traditional adversarial training methods on both MNIST and CIFAR10. ATTA can also enhance the adversarial accuracy of MAT on CIFAR10.
2 Adversarial training and transferability
In this section, we introduce relevant background on adversarial training and transferability of adversarial examples. We also discuss the trade-off between training time and model robustness.
2.1 Adversarial Training
Adversarial training is an effective defense method to train robust models against adversarial attacks. By using adversarial attacks as a form of data augmentation, a model trained with adversarial examples achieves considerable robustness. Recently, many works [21, 39, 38, 2, 14, 12, 26] have focused on analyzing and improving adversarial machine learning. Madry et al. first formulated adversarial training as the min-max optimization problem

$$\min_{f \in \mathcal{H}} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{x' \in S(x)} \ell\big(f(x'), y\big) \Big] \qquad \text{(1)}$$

where $\mathcal{H}$ is the hypothesis space, $\mathcal{D}$ is the distribution of the training dataset, $\ell$ is a loss function, and $S(x)$ is the allowed perturbation space, usually chosen as an $L_\infty$ norm ball around the natural image $x$. The basic strategy in adversarial training is, given a natural image $x$, to find a perturbed image $x'$ that maximizes the loss with respect to correct classification. The model is then trained on the generated adversarial examples. In this work, we consider adversarial examples with a higher loss to have higher attack strength.
PGD-$k$ attack based adversarial training: Unfortunately, solving the inner maximization problem exactly is hard. Iterative attacks are commonly used to generate strong adversarial examples as an approximate solution to the inner maximization problem of Equation 1. Since adversarial perturbations are bounded by the allowed perturbation space $S(x)$, PGD-$k$ ($k$-step projected gradient descent) is widely adopted to conduct the iterative attack [21, 39, 37, 27]. The PGD-$k$ attack performs multi-step projected gradient descent on the negative loss function:

$$x'^{(t+1)} = \Pi_{S(x)}\Big(x'^{(t)} + \alpha \cdot \mathrm{sign}\big(\nabla_{x'^{(t)}} \ell(f(x'^{(t)}), y)\big)\Big)$$

In the above, $x'^{(t)}$ is the adversarial example at the $t$-th attack iteration, $\alpha$ is the attack step size, and $\Pi_{S(x)}$ is the projection function that maps adversarial examples back into the allowed perturbation space $S(x)$.
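As a concrete illustration, the PGD-$k$ update can be sketched in a few lines of NumPy. The logistic-regression model, its weights, and all numeric values below are hypothetical stand-ins for $f$, chosen only to make the example self-contained:

```python
import numpy as np

def ce_loss(x, y, w, b):
    """Cross-entropy loss of a toy logistic-regression model (stand-in for f)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.1, k=10):
    """PGD-k: k signed-gradient ascent steps, each projected back into the
    L-infinity ball of radius eps around the natural image x."""
    x_adv = x.copy()
    for _ in range(k):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                        # analytic dL/dx for this toy model
        x_adv = x_adv + alpha * np.sign(grad)     # gradient-ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection into the eps-ball
    return x_adv

w, b = np.array([1.0, -2.0, 0.5]), 0.1
x, y = np.array([0.2, 0.4, 0.6]), 1.0
x_adv = pgd_attack(x, y, w, b)  # loss on x_adv is higher than on x
```

For a real network the analytic gradient would be replaced by automatic differentiation, but the structure of the loop (gradient, signed step, projection) is the same.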
With a higher value of $k$ (more attack iterations), PGD-$k$ generates adversarial examples with higher loss. However, there is a trade-off between training time and model robustness in adversarial training. On one hand, since each attack iteration must calculate the gradient with respect to the input, using more attack iterations requires more time to generate adversarial examples, causing a large computational overhead for adversarial training. As shown in Table 1, compared to natural training, adversarial training may need many times more training time until the model converges, and most of that time is spent generating adversarial examples (attack time). On the other hand, reducing the number of attack iterations reduces the training time, but negatively impacts the robustness of the trained model (Table 2).
2.2 Transferability of Adversarial Examples.
Szegedy et al. show that adversarial examples generated for one model can stay adversarial for other models. This property is called transferability, and it is usually leveraged to perform black-box attacks [23, 24, 19, 20]. To attack a targeted model, the attacker generates transferable adversarial examples from a source model; the higher the transferability between the source and targeted models, the higher the success rate of the attack.
Substitute model training is a commonly used method to obtain a source model. Rather than with the benchmark labels, the substitute model is trained with the prediction results of the targeted model, which yields a higher black-box attack success rate. While our work does not use black-box attacks, we rely on an intuition similar to the one behind substitute model training: two models with similar behavior and decision boundaries are likely to have high transferability between each other. We use this intuition to show high transferability between models from neighboring training epochs, as discussed in the next section.
3 Attack strength accumulation
In this section, we first conduct a study and find that models from neighboring epochs show very high transferability and are naturally good substitute models for each other. Based on this observation, we design an accumulative PGD-$k$ attack that accumulates attack strength by reusing adversarial perturbations from one epoch to the next. Compared to the traditional PGD-$k$ attack, the accumulative attack achieves much higher attack strength with fewer attack iterations in each epoch.
3.1 High transferability between epochs
Transferability between models from different training epochs of the same training run has not been studied before. Because model parameters fluctuate only slightly between neighboring epochs, we expect these models to have similar behavior and similar decision boundaries, which should lead to high transferability between them.
To evaluate the transferability between models from neighboring training epochs, we adversarially train a model as the targeted model, while saving the intermediate models from the three immediately preceding epochs. We measure the transferability of adversarial examples generated on each of these saved models against the targeted model. For comparison, we also train three additional models with exactly the same training method as the targeted model but with different random seeds, and measure the transferability between each of them and the targeted model.
To measure transferability from a source model to the targeted model, we use two metrics. The first is error rate transferability, used in [1, 23]: the ratio of the error rate that the adversarial examples cause on the targeted model to the error rate they cause on the source model. The second is loss transferability: the ratio of the loss value that the adversarial examples cause on the targeted model to the loss value they cause on the source model.
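The two metrics can be computed as follows. The prediction arrays and loss values below are made-up numbers for illustration, and the ratio direction follows the convention that a value near 1 means the attack strength is fully retained on the targeted model:

```python
import numpy as np

def transferability_metrics(y_true, pred_src, pred_tgt, loss_src, loss_tgt):
    """Error-rate and loss transferability of adversarial examples generated
    on the source model and evaluated on the targeted model."""
    err_src = np.mean(pred_src != y_true)   # error rate on the source model
    err_tgt = np.mean(pred_tgt != y_true)   # error rate on the targeted model
    return err_tgt / err_src, np.mean(loss_tgt) / np.mean(loss_src)

y_true   = np.array([0, 1, 1, 0, 1])
pred_src = np.array([1, 0, 1, 1, 0])            # source misclassifies 4 of 5
pred_tgt = np.array([1, 0, 1, 0, 0])            # target misclassifies 3 of 5
loss_src = np.array([1.0, 1.0, 1.0, 1.0, 1.0])  # mean adversarial loss: 1.0
loss_tgt = np.array([0.8, 0.8, 0.8, 0.8, 0.8])  # mean adversarial loss: 0.8
err_t, loss_t = transferability_metrics(y_true, pred_src, pred_tgt,
                                        loss_src, loss_tgt)
# err_t = 0.75, loss_t = 0.8
```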
We conduct experiments on both the MNIST and CIFAR10 datasets, and the results are shown in Figure 1. We find that, compared to the baseline models trained with different random seeds, the models from neighboring epochs have higher transferability under both metrics, and both metrics remain high for all neighboring-epoch models. This provides strong empirical evidence that adversarial examples generated in one epoch retain some strength in subsequent epochs.
Inspired by the above result, we state the following hypothesis. Hypothesis: Repeatedly reusing perturbations from the previous epoch can accumulate attack strength epoch by epoch. Compared to current methods that restart from natural examples in each epoch, this allows us to use fewer attack iterations to generate equally strong adversarial examples.
3.2 Accumulative PGD-$k$ attack
To validate the aforementioned hypothesis, we design an accumulative PGD-$k$ attack. As shown in Figure 1(b), we longitudinally connect the models across epochs by directly reusing the attack perturbation from the previous epoch. The accumulative PGD-$k$ attack first generates adversarial examples for the earliest model; in each following epoch, the attack continues from the perturbations accumulated in previous epochs.
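A minimal sketch of the idea, using a toy logistic-regression model in place of the saved checkpoints (all weights and constants below are invented for illustration): each "epoch model" applies one PGD step starting from the perturbation accumulated so far, versus a plain one-step attack that restarts from the natural image.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_step_attack(x_start, x_nat, y, w, b, eps=0.3, alpha=0.1):
    """A single PGD step, projected into the eps-ball around the natural image."""
    p = sigmoid(w @ x_start + b)
    x_new = x_start + alpha * np.sign((p - y) * w)
    return np.clip(x_new, x_nat - eps, x_nat + eps)

def ce(x, w, b, y):
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Stand-in for checkpoints from neighboring epochs: nearly identical weights.
epoch_models = [(np.array([1.0, -2.0, 0.5]) + 0.01 * e, 0.1) for e in range(8)]
x_nat, y = np.array([0.2, 0.4, 0.6]), 1.0

# Accumulative PGD-1: reuse the perturbation from the previous epoch.
x_acc = x_nat.copy()
for w, b in epoch_models:
    x_acc = one_step_attack(x_acc, x_nat, y, w, b)

# Plain PGD-1: restart from the natural image in the final epoch.
w_last, b_last = epoch_models[-1]
x_fresh = one_step_attack(x_nat.copy(), x_nat, y, w_last, b_last)
```

On the final model, the accumulated example reaches a strictly higher loss than the freshly restarted one-step example, while still lying inside the perturbation ball around the natural image.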
To compare the attack strength of the two attacks, we use Madry's method to adversarially train two models on MNIST and CIFAR10 and evaluate the loss value of adversarial examples generated by each attack. Figure 3 summarises the evaluation results. We find that, with more epochs involved in the attack, the accumulative PGD-$k$ attack achieves a higher loss value with the same number of attack iterations $k$.
In particular, when adversarial examples are transferred through a large number of epochs, even an accumulative attack with a single iteration per epoch (accumulative PGD-1) causes high attack loss. On both MNIST and CIFAR10, the accumulative PGD-1 attack matches the attack loss of the traditional multi-step PGD attack once enough epochs are involved.
This result indicates that, with high transferability between epochs, adversarial perturbations can be reused effectively, which allows us to use fewer attack iterations to generate equally strong or stronger adversarial examples. Reusing perturbations across epochs thus reduces the number of attack iterations needed in PGD-$k$, leading to more efficient adversarial training. The next section describes our proposed algorithm, ATTA (Adversarial Training with Transferable Adversarial examples), which is based on this property.
4 Adversarial training with transferable adversarial examples
The discussion of transferability in Section 3 suggests that adversarial examples can retain attack strength in subsequent training epochs, and the accumulative-attack results in Section 3 suggest that stronger adversarial examples can be generated by accumulating attack strength. This property inspires us to link adversarial examples between adjacent epochs, as shown in Figure 4. Instead of starting from a natural image to generate an adversarial example in each epoch (Figure 4(a)), we start from the adversarial example saved in the previous epoch (Figure 4(b)). To improve transferability between epochs, we use a connection function to link adversarial examples, which transforms the previous epoch's adversarial example into a starting point for the next epoch. During training, by repeatedly reusing adversarial examples between epochs, attack strength is accumulated epoch by epoch:
$$x'_{e} = \mathcal{A}\big(\mathcal{C}(x'_{e-1}),\, f_{e},\, x,\, y\big)$$

where $\mathcal{A}$ is the attack algorithm, $\mathcal{C}$ is a connection function (described in the next section), $f_e$ is the model in the $e$-th epoch, $x'_e$ is the adversarial example generated in the $e$-th epoch, and $x$ and $y$ are the natural image and benchmark label. (Note that $x'_e$ is still bounded by the perturbation space around the natural image $x$, rather than around the reused adversarial example.)
As shown in the previous section, adversarial examples can achieve high attack strength as they are transferred across epochs via the above linking process, rather than starting from natural images. This, in turn, should allow us to train a more robust model with fewer attack iterations.
4.1 Connection function design
The design of the connection function helps us overcome two challenges that we encountered in achieving high transferability of adversarial examples from one epoch to the next:
Data augmentation problem: Data augmentation is a commonly used technique to improve the performance of deep learning. It applies random transformations to the original images so that the model is trained on different variants of each image in different epochs. This difference causes a mismatch of images between epochs. Since the perturbation is calculated from the gradient with respect to a particular image, directly reusing the perturbation on a differently augmented image decreases attack strength. Simply removing data augmentation also hurts robustness. We experimentally show the negative influence of both alternatives in Section 5.3.
Drastic model parameter change problem: As discussed in Section 3, similar parameters between models tend to produce similar decision boundaries and thus high transferability. Unfortunately, model parameters tend to change drastically in the early stages of training, so adversarial perturbations from early epochs tend to be useless for subsequent epochs.
Overcoming challenges: Inverse data augmentation. To address the first issue, we propose a technique called inverse data augmentation, which lets adversarial examples retain high transferability between training epochs despite data augmentation. Figure 5 shows the workflow with inverse data augmentation. Some transformations (like cropping and rotation) pad augmented images with background pixels. To carry perturbations on those background pixels forward, our method transfers the padded image rather than the standard image, so that the background pixels introduced by data augmentation are covered by the padding.
After the adversarial perturbation of the augmented image is generated, we perform the inverse transformation (in adversarial training, most data augmentation methods are linear transformations that are easy to invert) to calculate the inverse perturbation on the padded image. By adding this inverse perturbation to the padded image, we can store and transfer all perturbation information to the next epoch. (Note that when we perform the adversarial attack on the augmented image, the perturbation is still bounded relative to the natural image, not the padded image.)
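For a cropping augmentation, inverse data augmentation can be sketched as follows. The shapes, crop offsets, padding size, and perturbation values are hypothetical: the perturbation computed on the cropped image is mapped back to the coordinates of the padded image, where it is stored for the next epoch.

```python
import numpy as np

def crop(padded, i, j, h, w):
    """The augmentation: take an h-by-w crop of the padded image at (i, j)."""
    return padded[i:i+h, j:j+w]

def inverse_crop(delta_aug, i, j, padded_shape):
    """Map a perturbation computed on the cropped image back onto the
    padded image's coordinates (zeros everywhere outside the crop)."""
    delta_pad = np.zeros(padded_shape)
    h, w = delta_aug.shape
    delta_pad[i:i+h, j:j+w] = delta_aug
    return delta_pad

rng = np.random.default_rng(0)
x_pad = rng.random((8, 8))      # padded image carried across epochs
i, j = 1, 3                     # crop offsets chosen by the augmentation
x_aug = crop(x_pad, i, j, 4, 4) # augmented training image for this epoch
delta_aug = 0.01 * np.sign(rng.standard_normal((4, 4)))  # perturbation on x_aug
x_pad = x_pad + inverse_crop(delta_aug, i, j, x_pad.shape)  # store for next epoch
```

Cropping the updated padded image at the same offsets recovers exactly the perturbed augmented image, so no perturbation information is lost between epochs.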
Periodic perturbation reset. To solve the second issue, we propose a straightforward but effective solution: our method periodically resets the perturbation and lets adversarial perturbations accumulate again from scratch, which mitigates the impact of perturbations from early epochs.
4.2 Attack Loss
TRADES generates adversarial examples with the attack loss

$$\ell\big(f(x), f(x')\big) \qquad \text{(2)}$$

where $\ell$ is the loss function, $f$ is the model, and $x$ and $x'$ are the natural and adversarial example, respectively. This loss represents how much the adversarial example diverges from the natural image. Zhang et al. show that this loss performs better in the TRADES algorithm.
In our method, we use the following loss function:

$$\ell\big(f(x'), y\big) \qquad \text{(3)}$$

It represents how much the adversarial example diverges from the benchmark label $y$.
The objective of Equation 2, $f(x)$, varies across epochs as the model is updated, which may weaken the transferability between epochs. Equation 3 applies a fixed objective, the label $y$, which does not have this concern. In addition, Equation 3 has a smaller computational graph, which reduces computational overhead during training.
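The two attack losses can be contrasted on toy softmax outputs (the logits below are made up): Equation 2 compares two model outputs, both of which change as $f$ is updated, while Equation 3 compares the model output on the adversarial example against the fixed label.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    """KL divergence between two discrete distributions (Eq. 2 style loss)."""
    return float(np.sum(p * np.log(p / q)))

def ce(q, y):
    """Cross-entropy of predicted distribution q against a fixed label y (Eq. 3)."""
    return float(-np.log(q[y]))

logits_nat = np.array([2.0, 0.5, -1.0])  # f(x): moves every epoch as f is updated
logits_adv = np.array([1.2, 1.0, -0.5])  # f(x')
y = 0

eq2_loss = kl(softmax(logits_nat), softmax(logits_adv))  # both arguments depend on f
eq3_loss = ce(softmax(logits_adv), y)                    # the target y never moves
```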
The overall training method is described in Algorithm 1.
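A compressed sketch of the overall training loop, combining the one-iteration attack, the stored per-example perturbations, and the periodic reset. The toy logistic-regression model, dataset, and hyper-parameters are placeholders, not the paper's Algorithm 1:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_step_attack(x_start, x_nat, y, w, b, eps=0.3, alpha=0.1):
    """ATTA-1 style attack: one signed-gradient step, projected into the
    eps-ball around the natural image."""
    p = sigmoid(w @ x_start + b)
    x_new = x_start + alpha * np.sign((p - y) * w)
    return np.clip(x_new, x_nat - eps, x_nat + eps)

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 3))                        # toy dataset
Y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(float)
w, b = np.zeros(3), 0.0                                 # toy model parameters
stored = np.zeros_like(X)   # adversarial perturbations carried across epochs
RESET_PERIOD = 10           # hypothetical reset period (Section 4.1)

for epoch in range(30):
    if epoch % RESET_PERIOD == 0:
        stored[:] = 0.0     # periodically reset accumulated perturbations
    for i in range(len(X)):
        x_start = X[i] + stored[i]            # reuse last epoch's perturbation
        x_adv = one_step_attack(x_start, X[i], Y[i], w, b)
        stored[i] = x_adv - X[i]              # save for the next epoch
        p = sigmoid(w @ x_adv + b)            # SGD step on the adversarial example
        w -= 0.1 * (p - Y[i]) * x_adv
        b -= 0.1 * (p - Y[i])
```

In a real implementation the stored perturbations would live alongside the dataset (or on disk, as discussed later for large datasets) and the connection function would also undo data augmentation.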
5 Evaluation
In this section, we integrate ATTA with two popular adversarial training methods: Madry's Adversarial Training (MAT) and TRADES. By evaluating training time and robustness, we show that ATTA provides a better trade-off than other adversarial training methods. To understand the contribution of each component to robustness, we also conduct an ablation study.
5.1 Experiment setup
For the MNIST dataset, the model has four convolutional layers followed by three fully-connected layers, which is the same architecture used in [21, 39]. The adversarial perturbation is bounded by an $L_\infty$ ball.
5.2 Efficiency and effectiveness of ATTA
In this section, we evaluate the training efficiency and robustness of ATTA, comparing it to state-of-the-art adversarial training algorithms. To further verify the effectiveness of ATTA, we also evaluate it under various attacks.
5.2.1 Training efficiency
We select four state-of-the-art adversarial training methods as baselines: MAT, TRADES, YOPO, and Free. For both MNIST and CIFAR10, the model trained with ATTA achieves comparable robustness with a substantially shorter training time. Compared to MAT trained with PGD, our method also improves accuracy while training faster.
MNIST. The MNIST results are summarised in Table 3. For MAT, to achieve comparable robustness, ATTA is several times faster than the traditional PGD training method. Even with one attack iteration, the model trained with ATTA-1 reaches high adversarial accuracy within seconds. For TRADES, we obtain a similar result: with one attack iteration, ATTA-1 reaches high adversarial accuracy within seconds, training far faster than TRADES with multi-step PGD. Compared to YOPO, another fast adversarial training method, ATTA-1 trains faster and achieves higher robustness.
(The author-implemented YOPO could not converge in our experiment; we take its accuracy numbers from the YOPO paper.)
(Figure 6 legend: ATTA, PGD, Free, YOPO.)
CIFAR10. We summarise the CIFAR10 results in Table 4. For MAT, compared to multi-step PGD, ATTA achieves higher adversarial accuracy with a fraction of the training time, and adding more ATTA attack iterations improves adversarial accuracy further. For TRADES, ATTA achieves adversarial accuracy comparable to multi-step PGD while training several times faster. Compared with YOPO and Free on MAT, our method trains faster and achieves better adversarial accuracy.
To better understand the performance of ATTA, we present the trade-off between training time and robustness for different methods in Figure 6. Our method (indicated by the solid markers in the top-left corner) offers a better trade-off between efficiency and effectiveness in adversarial training.
5.2.2 Defense under other attacks
As shown in Table 5, models trained with ATTA remain robust to other attacks. Compared to the baselines, our method achieves comparable or better robustness under these attacks. We find that, although ATTA-trained models have robustness similar to PGD-trained models under weaker PGD attacks, under stronger attacks with more iterations the ATTA-trained models show better robustness (higher adversarial accuracy).
5.3 Ablation study
To study the contribution of each component to robustness, we conduct an ablation study on inverse data augmentation and the two attack loss functions.
Inverse data augmentation. To study the robustness gain from inverse data augmentation (i.d.a.), we use ATTA to adversarially train models that reuse the adversarial perturbation directly. As shown in Table 6, for both MAT and TRADES, models trained with inverse data augmentation achieve noticeably higher accuracy, which indicates that inverse data augmentation does help improve transferability between training epochs. As discussed in Section 4.1, another alternative is to remove data augmentation entirely. However, Table 6 shows that removing data augmentation hurts both natural accuracy and robustness.
(Table 6 compares six configurations: MAT and TRADES, each trained without data augmentation, with data augmentation but without inverse data augmentation, and with both data augmentation and inverse data augmentation.)
Attack loss. Zhang et al. show that, for TRADES, using Equation 2 leads to better robustness. However, with this attack loss, both inputs to the loss function depend on the model. Since the model is updated every epoch, compared to Equation 3, whose target label is fixed, this instability may have a larger influence on transferability. To analyze the performance difference between the two attack losses in ATTA, we train two TRADES (ATTA) models with the different attack losses. In Table 7, we find that Equation 3 leads to higher accuracy against the PGD attack. This result suggests that the higher stability of Equation 3 helps ATTA increase transferability between training epochs.
6 Related work
Adversarial training was first proposed as a defense and formulated as a min-max optimization problem. As one of the most effective defense methods, it has attracted many works [2, 32, 28, 4, 12, 21, 18, 37, 27] that focus on enhancing either its efficiency or effectiveness. YOPO finds that the adversary update is mainly coupled with the first layer and speeds up training by updating only the first layer during attack iterations. Shafahi et al. improve training efficiency by recycling the gradient information computed when updating model parameters to generate adversarial examples. TRADES improves the robustness of an adversarially trained model by adding a robustness regularizer to the loss function.
Transferability of adversarial examples. Szegedy et al. first described the transferability of adversarial examples. This property is usually used to perform black-box attacks between models [24, 19, 35, 5, 20]. Liu et al. show that adversarial examples generated by an ensemble of multiple models are more transferable to a targeted model.
Scalability to large datasets.
One downside of our method is the extra space needed to store the adversarial perturbation for each image, which may limit the scalability of ATTA when models are trained on larger datasets (e.g., ImageNet). However, we believe this will not be a serious issue: asynchronous I/O pipelines are widely implemented in existing DL frameworks, which should allow ATTA to store perturbation data on disk without significantly impacting efficiency.
Transferability between training epochs. Adversarial attacks augment the training data to improve model robustness. Our work points out that, unlike images produced by traditional data augmentation methods, which are independent between epochs, adversarial examples generated by adversarial attacks show high transferability between epochs. We hope this finding inspires other researchers to enhance adversarial training from a new perspective (e.g., improving transferability between epochs).
Acknowledgment. This work is supported by NSF Grant No. 1646392.
7 Conclusion
ATTA is a new method for iterative attack based adversarial training that significantly reduces training time while maintaining or improving model robustness. The key insight behind ATTA is the high transferability between models from neighboring epochs, which is first revealed in this paper. Based on this property, the attack strength in ATTA is accumulated across epochs by repeatedly reusing adversarial perturbations from the previous epoch. This allows ATTA to generate equally strong (or even stronger) adversarial examples with fewer attack iterations. We evaluate ATTA and compare it with state-of-the-art adversarial training methods; it greatly shortens training time with comparable or even better model robustness. More importantly, ATTA is a generic method and can be applied to enhance other iterative attack based adversarial training methods.
-  Shumeet Baluja and Ian Fischer. Learning to attack: Adversarial transformation networks. In AAAI, pages 2687–2695, 2018.
-  Qi-Zhi Cai, Min Du, Chang Liu, and Dawn Song. Curriculum adversarial training. In International Joint Conference on Artificial Intelligence (IJCAI), 2018.
-  Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3–14. ACM, 2017.
-  Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C Duchi. Unlabeled data improves adversarial robustness. arXiv preprint arXiv:1905.13736, 2019.
-  Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15–26. ACM, 2017.
-  Steven WD Chien, Stefano Markidis, Chaitanya Prasad Sishtla, Luis Santos, Pawel Herman, Sai Narasimhamurthy, and Erwin Laure. Characterizing deep-learning I/O workloads in TensorFlow. In 2018 IEEE/ACM 3rd International Workshop on Parallel Data Storage & Data Intensive Scalable Computing Systems (PDSW-DISCS), pages 54–63. IEEE, 2018.
-  Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning (ICML), 2019.
-  Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9185–9193, 2018.
-  Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. Robust physical-world attacks on deep learning models. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
-  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR), 2014.
-  Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems, pages 2266–2276, 2017.
-  Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In International Conference on Machine Learning (ICML), 2019.
-  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019.
-  Yunseok Jang, Tianchen Zhao, Seunghoon Hong, and Honglak Lee. Adversarial defense via learning to generate diverse attacks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2740–2749, 2019.
-  Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
-  Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
-  Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
-  Hyeungill Lee, Sungyeob Han, and Jungwoo Lee. Generative adversarial trainer: Defense to adversarial perturbations with gan. arXiv preprint arXiv:1705.03387, 2017.
-  Yingwei Li, Song Bai, Yuyin Zhou, Cihang Xie, Zhishuai Zhang, and Alan Yuille. Learning transferable adversarial examples via ghost networks. arXiv preprint arXiv:1812.03413, 2018.
-  Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations (ICLR), 2017.
-  Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations (ICLR), 2018.
-  Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian Molloy, and Ben Edwards. Adversarial robustness toolbox v1.0.1. CoRR, 1807.01069, 2018.
-  Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
-  Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506–519. ACM, 2017.
-  Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations (ICLR), 2018.
-  Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pages 5014–5026, 2018.
-  Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019.
-  Ali Shafahi, Mahyar Najibi, Zheng Xu, John Dickerson, Larry S Davis, and Tom Goldstein. Universal adversarial training. arXiv preprint arXiv:1811.11304, 2018.
-  Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016.
-  Dawn Song, Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, and Tadayoshi Kohno. Physical adversarial examples for object detectors. In 12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018.
-  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
-  Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICLR), 2018.
-  Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations (ICLR), 2019.
-  Eric Wong and J Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning (ICML), 2018.
-  Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L Yuille. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2730–2739, 2019.
-  Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
-  Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerating adversarial training via maximal principle. In Neural Information Processing Systems (NeurIPS), 2019.
-  Huan Zhang, Hongge Chen, Zhao Song, Duane Boning, Inderjit S Dhillon, and Cho-Jui Hsieh. The limitations of adversarial training and the blind-spot attack. In International Conference on Learning Representations (ICLR), 2019.
-  Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning (ICML), 2019.
Appendix A Overview
This supplementary material provides details on our experiments and additional evaluation results. In Section B, we describe the detailed setup of our experiments. In Section C, we compare adversarial examples generated by ATTA and PGD and show that, even with one attack iteration per epoch, ATTA- can generate perturbations similar to those of PGD- (PGD-) on MNIST (CIFAR10). We also provide the complete evaluation results in Section C.2.
Appendix B Experiment setup
We provide additional details on the implementation, model architecture, and hyper-parameters used in this work.
MNIST. We use the same model architecture as [21, 39, 37], which has four convolutional layers followed by three fully-connected layers. The adversarial examples used to train the model are bounded by a ball of size , and the step size for each attack iteration is . We do not apply any data augmentation (or inverse data augmentation) on MNIST, and we set the epoch period for resetting perturbations to infinity, which means that perturbations are never reset during training. The model is trained for epochs with an initial learning rate of and a learning rate of after epochs, the same schedule as . To evaluate model robustness, we perform the PGD , M-PGD (momentum PGD) , and CW  attacks with a step size of , and set the decay factor to for M-PGD.
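For reference, the bounded iterative attack described above can be sketched as follows. This is a minimal numpy illustration of the k-step PGD loop (random start, signed gradient step, projection back into the eps-ball and the valid pixel range); `pgd_attack` and `grad_fn` are illustrative names, not the paper's implementation.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps, step_size, k, rng=None):
    """k-step PGD under an l_inf ball of radius eps around x.

    grad_fn(x_adv) returns the gradient of the loss w.r.t. the input.
    Each iteration moves in the sign of the gradient, then projects the
    perturbation back into the eps-ball; the final image is clipped to
    the valid pixel range [0, 1].
    """
    rng = rng or np.random.default_rng(0)
    # Random start inside the eps-ball, as in standard PGD.
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(k):
        x_adv = np.clip(x + delta, 0.0, 1.0)
        delta = delta + step_size * np.sign(grad_fn(x_adv))
        delta = np.clip(delta, -eps, eps)  # project back into the ball
    return np.clip(x + delta, 0.0, 1.0)
```

Each extra iteration requires one more gradient computation on the input, which is the source of the k-fold training overhead discussed in the main paper.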
CIFAR10. Following other works [21, 39, 37, 27], we use Wide-ResNet-34-10  as the model architecture. The adversarial examples used to train the model are bounded by a ball of size . For ATTA-, we use as the step size, respectively. For ATTA- (), we use as the step size. The data augmentation used is a random flip and a -pixel padding crop, the same as in other works [21, 39, 37, 27]. We set the epoch period for resetting perturbations to epochs. Following YOPO , the model is trained for epochs with an initial learning rate of , a learning rate of after epochs, and a learning rate of after epochs. To evaluate model robustness, we perform the PGD, M-PGD (momentum PGD), and CW attacks with a step size of , and set the decay factor to for M-PGD.
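The cross-epoch perturbation reuse with a periodic reset, as configured above, can be sketched as follows. This is a simplified numpy illustration of the idea (one attack iteration per epoch resumed from the stored perturbation, reset every `reset_period` epochs); `PerturbationStore` and all names here are illustrative, not the paper's code, and inverse data augmentation is omitted for brevity.

```python
import numpy as np

class PerturbationStore:
    """Per-image perturbations carried across epochs (ATTA-style reuse).

    Perturbations persist between epochs so attack strength accumulates;
    they are reset to zero every `reset_period` epochs, and
    reset_period=None means they are never reset (as on MNIST).
    """
    def __init__(self, n_images, shape, eps, reset_period=None):
        self.delta = np.zeros((n_images,) + shape)
        self.eps = eps
        self.reset_period = reset_period

    def start_epoch(self, epoch):
        if self.reset_period is not None and epoch > 0 and epoch % self.reset_period == 0:
            self.delta[:] = 0.0  # periodic reset of accumulated perturbations

    def attack_step(self, idx, x, grad_fn, step_size):
        """One attack iteration, resumed from the stored perturbation."""
        d = self.delta[idx]
        x_adv = np.clip(x + d, 0.0, 1.0)
        d = np.clip(d + step_size * np.sign(grad_fn(x_adv)), -self.eps, self.eps)
        self.delta[idx] = d
        return np.clip(x + d, 0.0, 1.0)
```

With this structure, ATTA-1 pays for only one gradient computation per image per epoch, while the stored perturbation keeps growing in strength across epochs until the next reset.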
For the baselines, we use the author implementations of MAT (https://github.com/MadryLab/cifar10_challenge), TRADES (https://github.com/yaodongyu/TRADES), YOPO (https://github.com/a1600012888/YOPO-You-Only-Propagate-Once), and Free (https://github.com/ashafahi/free_adv_train) with the hyper-parameters recommended in their works, and we select as for TRADES (both ATTA and PGD).
In Section 3, which analyzes the transferability between training epochs, we use MAT with PGD- to train the models and PGD- to calculate the loss values and error rates.
Each experiment is run on a single idle NVIDIA GeForce RTX 2080 Ti GPU. Except for the PGD attack, we implement the other attacks with the Adversarial Robustness Toolbox .
Appendix C Experiment details
C.1 Qualitative study on training images
To compare the quality of adversarial examples generated by PGD and ATTA, we visualize some adversarial examples produced by both methods. For MNIST, we choose the model checkpoint trained by MAT-ATTA- at epoch . For CIFAR10, we choose the model checkpoint trained by MAT-ATTA- at epoch . Figure 7 shows the adversarial examples and perturbations used to train the model (ATTA-) and generated by the PGD- (PGD-) attack on the MNIST (CIFAR10) model for each class. To better visualize a perturbation, we re-scale it by calculating $(\delta+\epsilon)/2\epsilon$ (where $\delta$ is the perturbation and $\epsilon$ is the bound of the adversarial attack). This shifts the $[-\epsilon, \epsilon]$ ball to the scale of $[0, 1]$.
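The rescaling used for visualization can be sketched in a few lines; this numpy snippet simply maps a perturbation bounded in [-eps, eps] linearly onto [0, 1] (the function name is illustrative).

```python
import numpy as np

def rescale_perturbation(delta, eps):
    """Map a perturbation in [-eps, eps] linearly onto [0, 1] for display."""
    return (delta + eps) / (2.0 * eps)
```

Under this mapping, a zero perturbation renders as mid-gray (0.5), while perturbations at either bound render as black (0) or white (1).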
We find that, although ATTA- performs just one attack iteration per epoch, it generates perturbations similar to those of PGD- (PGD-) on MNIST (CIFAR10). The effect of inverse data augmentation is shown in Figure 6(b): there are some perturbations on the padded pixels in the third row (ATTA-), whereas perturbations generated by PGD- (shown in the fifth row) appear only on the cropped pixels.
C.2 Complete evaluation results
We present the complete evaluation results in this section as a supplement to Section 5.2.
We evaluate the defense methods under additional attacks; the results are shown in Table 8 and Table 9. Consistent with the conclusion in Section 5.2, ATTA achieves robustness comparable to other methods with much less training time, i.e., a better trade-off between training efficiency and robustness. With the same number of attack iterations, ATTA needs less time to train the model. As discussed in Section 3.2, by accumulating adversarial perturbations across epochs, ATTA achieves a higher attack strength with the same number of attack iterations, which helps the model converge faster.
Natural accuracy vs. adversarial accuracy. In this paper, we find that higher adversarial accuracy can come at the cost of lower natural accuracy. This trade-off has been observed and explained in [33, 39]. A recent work  points out that features used by naturally trained models are highly predictive but not robust, while adversarially trained models tend to use robust features rather than highly predictive ones, which may cause this trade-off. Table 9 also shows that models trained with stronger attacks (more attack iterations) tend to have higher adversarial accuracy but lower natural accuracy.