Output Diversified Initialization for Adversarial Attacks

by   Yusuke Tashiro, et al.

Adversarial examples are often constructed by iteratively refining a randomly perturbed input. To improve diversity and thus also the success rates of attacks, we propose Output Diversified Initialization (ODI), a novel random initialization strategy that can be combined with most existing white-box adversarial attacks. Instead of using uniform perturbations in the input space, we seek diversity in the output logits space of the target model. Empirically, we demonstrate that existing ℓ_∞ and ℓ_2 adversarial attacks with ODI become much more efficient on several datasets including MNIST, CIFAR-10 and ImageNet, reducing the accuracy of recently proposed defense models by 1–17%. Moreover, PGD attack with ODI outperforms current state-of-the-art attacks against robust models, while also being roughly 50 times faster on CIFAR-10. The code is available at https://github.com/ermongroup/ODI/.





1 Introduction

Deep neural networks have achieved great success in image classification. However, it is known that they are vulnerable to adversarial examples (Szegedy et al., 2013)—images perturbed by visually undetectable noise can cause classification models to output wrong predictions. Several researchers have focused on improving model robustness against these malicious perturbations. Examples include adversarial training (Madry et al., 2018; Goodfellow et al., 2015), where models are trained on adversarial images for better robustness, input purification using generative models (Song et al., 2018; Samangouei et al., 2018), regularization of the training loss (Ross and Doshi-Velez, 2018; Zhang and Wang, 2018; Moosavi-Dezfooli et al., 2019; Qin et al., 2019), and certified defenses (Wong and Kolter, 2017; Raghunathan et al., 2018; Cohen et al., 2019).

Strong attacking methods are crucial for evaluating the robustness of different classifiers and defense methods. One of the most popular attacks for such purposes is the Projected Gradient Descent (PGD) attack (Madry et al., 2018), which initializes an adversarial example from a random perturbation of a clean image and iteratively updates it by following the gradient directions of the classification loss. For better performance, PGD attacks often restart multiple times with different random perturbations of clean images to increase the likelihood of successfully finding an adversarial example. These perturbations are typically sampled from a uniform distribution in the input pixel space. This random restart strategy is also widely adopted by other attacking methods (Zheng et al., 2019; Croce and Hein, 2019; Gowal et al., 2019).

Figure 1: Illustration of the differences between naïve uniform initialization and output diversified initialization. In each pair of panels, the left plot shows the input space and the right plot the corresponding output space. The black ‘o’ corresponds to an original image, and white ‘o’s represent the initial image samples. Sampled images in the input space are restricted within the dotted circle (i.e., the ε-radius ball around an original image).

However, we argue that the standard initialization strategy is not very suitable for generating adversarial examples. Because deep neural networks are often highly non-linear functions, diversity in the input pixel space does not directly translate to diversity in the output (logits) space of the target model. As a result, we may not be able to explore the output space sufficiently by doing random restarts in the input space. We illustrate this intuition in the left panel of Figure 1. When we randomly perturb initial data points in the input space (see the leftmost plot of Figure 1), the corresponding output logits could be very similar to each other, in which case iterative attack schemes (such as PGD) will often get stuck in similar local optima and fail to find difficult adversarial examples effectively (as illustrated by the second plot of Figure 1). Empirically, we observe that this phenomenon can negatively impact practical attack methods, especially for adversarially trained models, because their outputs have been explicitly trained to be insensitive to changes in the input space.

To improve the success rate of finding difficult adversarial examples, we propose to utilize random restarts in the output space. In particular, we move an input away from the original image as measured by distances in the output space (see the rightmost plot in Figure 1) before starting an attack. In order to generate sufficiently diverse starting points in the output space, we randomly choose the moving direction for every restart. We call this new initialization strategy Output Diversified Initialization (ODI). With ODI (illustrated by the third plot in Figure 1), we typically obtain a much more diverse (and effective) set of initializations for adversarial attacks. Moreover, since this initialization strategy is agnostic to the underlying attack method, we can easily incorporate ODI into most existing white-box attack strategies.

Empirically, we demonstrate the success of ℓ_∞ and ℓ_2 attacks with ODI in both untargeted and targeted settings on the MNIST, CIFAR-10, and ImageNet datasets. We show that ODI reduces the accuracy of state-of-the-art robust models by 1–17% compared to naïve initialization. Moreover, the PGD attack with ODI outperforms the current state-of-the-art attack against pre-trained defense models, while being roughly 50 times faster on CIFAR-10. In addition, we find that adversarial examples generated by attacks with ODI are more transferable than those with naïve restarts.

2 Preliminaries

We denote an image classifier as f : ℝ^D → ℝ^C, where x ∈ ℝ^D is an input image, f(x) represents the logits, and C is the number of classes. We use F(x) = argmax_c f_c(x) to denote the model prediction, where f_c(x) is the c-th element of f(x).

2.1 Adversarial attacks

There are both targeted and untargeted attacks. Given an image x, a label y, and a classifier f, the purpose of untargeted attacks is to find an adversarial example x_adv that is similar to x but causes misclassification, i.e., F(x_adv) ≠ y. In targeted settings, attackers aim to change the model prediction to a particular target label t, i.e., F(x_adv) = t.

Two settings for constructing adversarial examples exist. The most common setting of ℓ_p-attacks is to find an adversarial example within B_ε(x), the ε-radius ℓ_p-ball around an original image x. The other setting is to find a valid adversarial example with the smallest perturbation from the original image.
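As a concrete illustration, the ε-ball constraint can be enforced with a simple projection step. The following is a minimal NumPy sketch for the ℓ_∞ case (the function name and the [0, 1] pixel range are our own assumptions, not taken from the paper):

```python
import numpy as np

def project_linf(x_adv, x_orig, eps):
    """Project x_adv onto the L-infinity eps-ball around x_orig,
    then clip to the assumed valid pixel range [0, 1]."""
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0)
```

An ℓ_2 projection would instead rescale the perturbation whenever its norm exceeds ε.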

The Projected Gradient Descent (PGD) attack (Madry et al., 2018) is a popular attack with strong performance, typically used for ℓ_p-attacks (most commonly p = ∞). It iteratively applies the following update rule to find an adversarial example within B_ε(x):

x^{(k+1)} = Proj_{B_ε(x)} ( x^{(k)} + η sign(∇_x L(f(x^{(k)}), y)) ),     (1)

where Proj_{B_ε(x)} denotes the projection onto B_ε(x), η is the step size, and L is a loss function. To increase the likelihood of finding an adversarial example, the procedure is commonly restarted multiple times by uniformly sampling an initial input x^{(0)} from B_ε(x). Our goal is to improve this naïve input diversification approach, which is also common in many other attack methods.

Multiple loss functions can be used for PGD attacks, including the cross-entropy loss and the margin loss defined as L(f(x), y) = max_{i≠y} f_i(x) − f_y(x) (cf. CW attacks (Carlini and Wagner, 2017)). In our experiments, we use the margin loss for untargeted PGD attacks and the cross-entropy loss for targeted PGD attacks to make the considered attacking methods stronger.
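To make these definitions concrete, here is a minimal NumPy sketch of the margin loss and of a single PGD step under an ℓ_∞ constraint (the function names are ours; a real implementation would obtain the gradient from the model via automatic differentiation):

```python
import numpy as np

def margin_loss(logits, y):
    """CW-style margin loss: max_{i != y} f_i(x) - f_y(x).
    A positive value means the input is already misclassified."""
    other = np.delete(logits, y)
    return other.max() - logits[y]

def pgd_step(x, x_orig, grad, eta, eps):
    """One PGD step for an untargeted L-infinity attack: ascend along
    the sign of the loss gradient, then project back onto the
    eps-ball around the original image (Equation (1))."""
    x_new = x + eta * np.sign(grad)
    return np.clip(x_new, x_orig - eps, x_orig + eps)
```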

3 Output Diversified Initialization

As intuitively presented in Figure 1, naïve random restarts in the input space do not necessarily produce diverse outputs in the logit space, and could cause multiple restarts of an attacking method to generate very similar adversarial examples. To address this problem, we propose Output Diversified Initialization (ODI). Below we give a detailed introduction to this method, followed by experimental comparisons on the diversity of ODI vs. naïve restarts.

3.1 Method

ODI is a new random initialization strategy that directly encourages diversity in the output space. Specifically, we generate a restart point x_ODI from a given input x_org by solving the following optimization problem:

x_ODI = argmax_{x ∈ S} w_d^⊤ f(x),     (2)

where w_d ∈ ℝ^C defines the direction of the diversification, and S is the set of allowed perturbations for an original input x_org, which is typically an ε-ball in ℓ_p norm. By optimizing Equation (2), we can find an initial image that is sufficiently far away from the original image along the direction of w_d in the output space. The direction vector w_d is diversified by sampling it from the uniform distribution over [−1, 1]^C. To maximize Equation (2), we adopt the following gradient-based iterative update, analogous to Equation (1):

x^{(k+1)} = Proj_S ( x^{(k)} + η_ODI sign(∇_x w_d^⊤ f(x^{(k)})) ),     (3)

where η_ODI is the step size and Proj_S denotes the projection onto the set S. When applying ODI to ℓ_2 attacks, we replace the sign function in Equation (3) with the normalized gradient ∇_x w_d^⊤ f(x) / ‖∇_x w_d^⊤ f(x)‖_2. When combining with PGD attacks, we use the same ε-ball for the projected gradient descent optimization in both Equation (3) and Equation (1).

  Input: A target image x_org, a classifier f, perturbation set S, number of ODI steps N_ODI, step size η_ODI, number of restarts N_R
  Output: Initial inputs for adversarial attacks
  for i = 1 to N_R do
     sample w_d uniformly from [−1, 1]^C
     sample x^{(0)} uniformly from S
     for k = 0 to N_ODI − 1 do
        x^{(k+1)} ← Proj_S ( x^{(k)} + η_ODI sign(∇_x w_d^⊤ f(x^{(k)})) )
     end for
     run any attack (e.g., the PGD attack) from x^{(N_ODI)}
  end for
Algorithm 1 Output Diversified Initialization (ODI)

The pseudo-code for ODI is provided in Algorithm 1. Because we compute the gradient once per iteration, the computation time per step of ODI is roughly the same as one step of most gradient-based attacks (e.g., PGD). Additionally, we emphasize that ODI is attack-agnostic, i.e., we can combine ODI with any white-box attack as long as the gradient in Equation (3) can be estimated.

ODI has two hyperparameters: the number of ODI steps N_ODI and the step size η_ODI. While a large N_ODI will yield more diversified inputs, it also incurs additional computational cost. We confirm that a small N_ODI is enough to obtain diversified inputs, and thus we fix N_ODI = 2 and η_ODI = ε for all experiments, unless otherwise stated. We describe the sensitivity analysis to these hyperparameters in the Appendix.

3.2 ODI Improves Output Diversity

In this section, we empirically demonstrate that ODI can find a more diverse set of initialization points, as well as improving the diversity of adversarial examples obtained from existing attack methods. As an example, we focus on PGD attacks with ODI (called ODI-PGD) on the CIFAR-10 dataset, and quantitatively evaluate the diversity of a set of random initializations using three metrics.


We train a robust classification model using adversarial training (Madry et al., 2018) with the untargeted PGD attack on CIFAR-10. We use ResNet18 (He et al., 2016) and set the perturbation size to ε = 8/255 (details of training in the Appendix). We evaluate both untargeted ODI-PGD and naïve-PGD (PGD with naïve restarts) against this classifier. For the hyperparameters of PGD, we set the perturbation size to ε = 8/255, the step size to 2/255, and the number of steps to 20.

Diversity Measured by Loss Values

To demonstrate the increased diversity of adversarial examples due to ODI, we pick an image from the CIFAR-10 test dataset and run both ODI-PGD and naïve-PGD with 100 restarts. Then, we calculate loss values for initial images (the initial values obtained after restarts) and attack results (generated adversarial examples) to evaluate their diversity in the output space. We use the margin loss defined in Section 2.1 as the loss function. Note that positive loss values represent successful adversarial examples. The left panel of Figure 2 shows the histogram of loss values for initial images. We can easily observe that images from naïve initialization concentrate around similar loss values, whereas images from ODI are much more diverse. Moreover, ODI-PGD finds successful adversarial examples, as indicated by positive loss values (the right panel of Figure 2), while most runs of naïve-PGD converge to negative loss values, suggesting failure to find valid adversarial examples.

Figure 2: (Left) Distribution of loss values evaluated at the initial inputs from ODI and naïve uniform sampling. (Right) Distribution of loss values evaluated at the final attack results from ODI-PGD and naïve-PGD. All attacks are restarted 100 times for each image in CIFAR-10.
attack      data            pairwise distance (input space)   pairwise distance (output space)   cosine similarity of gradients
naïve-PGD   initial inputs  1.419 (0.002)                     0.376 (0.008)                      0.900 (0.005)
ODI-PGD     initial inputs  2.178 (0.032)                     6.406 (0.211)                      0.672 (0.019)
naïve-PGD   attack results  1.152 (0.016)                     0.572 (0.061)                      0.939 (0.007)
ODI-PGD     attack results  1.302 (0.022)                     1.091 (0.121)                      0.881 (0.014)
Table 1: Average pairwise distances and cosine similarities from 10 different runs on the first 100 test images of CIFAR-10 (standard deviation in parentheses).

Diversity Measured by Pairwise Distances

We additionally evaluate the diversity of adversarial examples in terms of average pairwise distances measured in both the input and output spaces. Results are reported in Table 1. In the output space, the average pairwise distance obtained from ODI (6.406) is much larger than that from naïve initialization (0.376), which corroborates our intuition that initial inputs obtained by ODI are more diverse than naïve random restarts. As expected, the average pairwise distance of attack results for ODI-PGD is also larger.

Diversity Measured by Gradients

If the gradients of the loss function at two initial inputs are similar, different runs of the attack method may converge to similar results. Therefore, we also compare diversity by measuring the cosine similarity of ∇_x L(f(x), y) evaluated at multiple initial inputs and attack results, where L stands for the loss function for crafting adversarial examples against the classifier f. We calculate the average pairwise cosine similarity of gradient directions over 10 different runs of the attack method. Results are reported in the rightmost column of Table 1. We observe that ODI reduces the cosine similarity for both initial inputs and final attack results.
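This gradient-diversity metric can be sketched as a small NumPy helper (the function name is ours) that averages cosine similarity over all pairs of flattened gradient vectors:

```python
import numpy as np

def mean_pairwise_cosine(grads):
    """Average pairwise cosine similarity for an (n_runs, dim) array of
    flattened gradients. Lower values indicate more diverse gradient
    directions across restarts."""
    g = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    sims = g @ g.T                      # all pairwise cosine similarities
    iu = np.triu_indices(len(g), k=1)   # strict upper triangle: each pair once
    return sims[iu].mean()
```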

4 Improving PGD with ODI

In this section, we show that the diversity offered by ODI can lead to stronger PGD attacks. We demonstrate that a simple combination of PGD and ODI (named ODI-PGD) can give tighter estimation of model robustness for various robustified models, as well as achieving new state-of-the-art attack success rates for many tasks.

4.1 Diversity Improves Attack Success Rates

In Section 3.2, we empirically demonstrated that ODI gives more diverse initial inputs and leads to more diverse attack results compared to the naïve initialization. A natural follow-up question is whether this added diversity helps improve attack success rates. To investigate this, we conduct experiments under the same setup as described in Section 3.2, where ODI-PGD and naïve-PGD are executed with 20 restarts on 10000 test images.

Figure 3: Accuracy curves for ODI-PGD and naïve-PGD (lower is better). We plot the average accuracy over 5 runs, and report the corresponding maximum and minimum values as the error bars.
model                                    (1) PGD without restarts  (2) naïve-PGD  (3) ODI-PGD  (1)−(2)  (2)−(3)
UAT (Uesato et al., 2019)                62.63%                    61.93%         57.43%       0.70%    4.50%
RST (Carmon et al., 2019)                61.17%                    60.77%         59.93%       0.40%    0.84%
Feature-scatter (Zhang and Wang, 2019)   59.69%                    56.49%         39.52%       3.20%    16.97%
Metric learning (Mao et al., 2019)       50.57%                    49.91%         47.64%       0.56%    2.27%
Free (Shafahi et al., 2019)              47.19%                    46.39%         44.20%       0.80%    2.19%
YOPO (Zhang et al., 2019a)               47.70%                    47.07%         45.09%       0.63%    1.98%
Table 2: Accuracy of recently proposed defense models under the PGD attack without restarts, naïve-PGD, and ODI-PGD.

We summarize our experimental results in Figure 3, which describes how the accuracy of the target model decreases with the number of restarts. As expected, when using more restarts, both initialization methods lead to higher success rates and bring the accuracy of the target model down to a lower level. Although the model accuracy under ODI-PGD is higher at the first few restarts, its curve dips faster and quickly overtakes naïve-PGD's, which exhibits the benefit of higher output diversity compared to naïve restarts.

4.2 Tighter Estimation of Model Robustness

One important application of powerful adversarial attacks is to evaluate and compare different defense methods. In many previous works on defending against adversarial examples, naïve-PGD is the prevailing benchmark and its attack success rate is commonly regarded as a tight estimate of (worst-case) model robustness. In this section, we conduct a case study on six published defense methods to show that ODI-PGD outperforms naïve-PGD in terms of upper bounding the worst-case model accuracy under all possible attacks. The evaluated defense methods include Uesato et al. (2019), Carmon et al. (2019), Zhang and Wang (2019), Mao et al. (2019), Shafahi et al. (2019) and Zhang et al. (2019a).


We used pre-trained models from four of those studies, and trained the other two models (Shafahi et al., 2019; Zhang et al., 2019a) using the settings and architectures described in their original papers. We run attacks with ε = 8/255 on all test images. Other attack settings are the same as those in Section 3.2. Apart from comparing ODI-PGD and naïve-PGD, we also evaluate the PGD attack without restarts (column (1) in Table 2), as it is adopted in several existing studies including Uesato et al. (2019), Carmon et al. (2019), Zhang and Wang (2019) and Zhang et al. (2019a).


As shown in Table 2, ODI-PGD uniformly outperforms naïve-PGD against all six recently proposed defense methods, lowering the estimated model accuracy by 1–17%. In other words, ODI-PGD provides uniformly tighter upper bounds on the worst-case model accuracy than naïve-PGD. Additionally, the improvements of naïve-PGD and ODI-PGD over the no-restart PGD baseline are positively correlated. This indicates that ODI-PGD might be a better benchmark for comparing and evaluating different defense methods than naïve-PGD or PGD without restarts.

4.3 Efficacy of ODI-PGD across Multiple Datasets

In preceding sections, we evaluated ODI-PGD only on CIFAR-10. To demonstrate that the success of ODI-PGD is not limited to a single dataset, we provide and compare the performance of ODI-PGD against naïve-PGD across three different datasets: MNIST, CIFAR-10, and ImageNet. In addition to untargeted attacks, we also test targeted ODI-PGD and naïve-PGD since model robustness on ImageNet is more commonly evaluated with targeted attacks.

untargeted targeted
(model accuracy) (success rate)
dataset model naïve ODI naïve ODI
MNIST MadryLab 90.31% 90.06% 1.63% 1.66%
CIFAR-10 MadryLab 46.03% 44.32% 23.67% 23.87%
ImageNet ResNet50 55.9% 53.7% 61.0% 63.0%
ImageNet ResNet152 (Denoise) 43.3% 42.3% 30.9% 36.5%
Table 3: ODI-PGD vs. naïve-PGD on various datasets. The values are model accuracy in untargeted settings (lower is better) and attack success rate in targeted settings (higher is better).



Figure 4: The attack performance against number of restarts for ODI-PGD and naïve-PGD attacks on various datasets. The top row provides the model accuracy in the untargeted setting and the bottom row shows the attack success rate in the targeted setting.


We perform attacks against four pre-trained models. For MNIST and CIFAR-10, we use pre-trained models from MadryLab (Madry et al., 2018) (https://github.com/MadryLab/mnist_challenge and https://github.com/MadryLab/cifar10_challenge; we use their secret models). Both models are adversarially trained using an untargeted PGD attack, with ε = 0.3 on MNIST and ε = 8/255 on CIFAR-10. For ImageNet, we evaluate attacks against two pre-trained models: 1) the vanilla ResNet50 model trained on original images and 2) the Feature Denoising ResNet152 network (Xie et al., 2019) (https://github.com/facebookresearch/ImageNet-Adversarial-Training; we use the ResNet152 Denoise model), which is adversarially trained using a targeted PGD attack with ε = 16/255. We report the results of ODI-PGD and naïve-PGD in both untargeted and targeted settings. All attacks are evaluated over the whole test set with 20 restarts, except for ImageNet, where the first 1000 test images are used. For untargeted attacks, we compare ODI-PGD against naïve-PGD based on the model accuracy. For targeted attacks, we randomly fix a target label per test image and evaluate the attack success rate, i.e., the percentage of images for which at least one of the trials generates a valid adversarial example (classified as the target label).

A key parameter for the PGD attack is the perturbation size ε. For the ResNet50 model, we chose ε values for the untargeted and targeted attacks such that neither ODI-PGD nor naïve-PGD drives the model accuracy to zero, in which case we could draw no decisive conclusion on which one performs better. For the other models, we chose the same ε values used in previous research, in both untargeted and targeted settings, for MadryLab (MNIST), MadryLab (CIFAR-10), and ResNet152 (Denoise). We describe other parameters in the Appendix.


In Figure 4, we show how the attack performance improves as the number of restarts increases. These curves further corroborate that restarts help attack algorithms, and that ODI restarts are more effective than naïve ones.

We summarize all quantitative results in Table 3. For untargeted attacks, the improvement of ODI-PGD on CIFAR-10 and ImageNet models is more significant than that of the MNIST model. We hypothesize that this difference in improvement results from the difference in model non-linearity. When the non-linearity of a target model is strong, the difference in diversity between the input and output space could be large, in which case ODI will be more effective in providing a diverse set of restarts to facilitate attack algorithms. Highly performant models on CIFAR-10 and ImageNet datasets are arguably more “non-linear” than MNIST models, which explains why ODI-PGD achieves greater improvement over naïve-PGD on CIFAR-10 and ImageNet. In addition, the superior performance of ODI-PGD for the naturally trained ResNet50 model implies that ODI-PGD can work well for models trained on original clean images as well. For the targeted attacks, the improvement of ODI-PGD is particularly significant on the ImageNet dataset. We hypothesize that this is because ImageNet has 1000 classes, where diversity in the output space is more beneficial to finding an adversarial example classified as the target label.

model                          ODI-PGD   tuned ODI-PGD   tuned PGD (Gowal et al., 2019)   MultiTargeted (Gowal et al., 2019)
MadryLab (MNIST) accuracy      90.06%    88.13%          88.21%                           88.36%
MadryLab (CIFAR-10) accuracy   44.32%    43.99%          44.51%                           44.03%
TRADES (CIFAR-10) accuracy     53.26%    53.01%          53.70%                           53.07%
Table 4: Comparison of ODI-PGD with current state-of-the-art attacks against pre-trained defense models (model accuracy under attack; lower is better). For ODI-PGD, the number of steps is the sum of ODI steps and PGD attack steps.

4.4 ODI-PGD in Attack Leaderboards

To further demonstrate the power of ODI-PGD, we test its attack performance against three robust models: MadryLab's defense models on MNIST and CIFAR-10 (see Section 4.3) and TRADES (Zhang et al., 2019b) on CIFAR-10 (https://github.com/yaodongyu/TRADES), which is trained using an untargeted PGD attack and empirically exhibits better robustness than MadryLab's model. These studies publish attack leaderboards for untargeted attacks with the same ε as used in training. We demonstrate that ODI-PGD achieves new state-of-the-art performance across all of these leaderboards. In addition, we show the superior computational efficiency of ODI-PGD against several state-of-the-art attacks, and report confidence intervals for its performance.


One state-of-the-art attack we compare with is the well-tuned PGD attack from Gowal et al. (2019), which achieved 88.21% accuracy for MadryLab’s MNIST model. The other state-of-the-art attack we focus on is the MultiTargeted attack (Gowal et al., 2019), which obtained 44.03% accuracy against MadryLab’s CIFAR-10 model and 53.07% accuracy against the TRADES CIFAR-10 model.

We use all test images on each dataset and perform ODI-PGD under two different settings. One is the same as Section 4.3, without much tuning. The other is ODI-PGD with tuned hyperparameters, similar to Gowal et al. (2019). We increase the number of ODI and PGD steps. Please see the Appendix for more details of tuning.


We summarize the comparison between ODI-PGD and state-of-the-art attacks in Table 4. Our tuned ODI-PGD reduces the accuracy to 88.13% for MadryLab's MNIST model, to 43.99% for MadryLab's CIFAR-10 model, and to 53.01% for the TRADES CIFAR-10 model. Our results outperform existing state-of-the-art attacks and ranked 1st on these leaderboards at the time of submission. Surprisingly, (untuned) ODI-PGD can even outperform the tuned PGD attack (Gowal et al., 2019) on the CIFAR-10 dataset, in spite of using far fewer steps and restarts.

We also compare the total number of steps (the product of the number of steps and the number of restarts) as a measure of computational complexity, because the computation time per step is comparable across gradient-based attacks. The complexity of tuned ODI-PGD is smaller than that of the state-of-the-art attacks. In particular, tuned ODI-PGD is roughly 50 times faster than the MultiTargeted attack against the CIFAR-10 models.

Leveraging the bootstrap, we also report confidence intervals for our results against MadryLab's MNIST and CIFAR-10 models. We run the tuned ODI-PGD attack with 3000 restarts on MNIST and 100 restarts on CIFAR-10. Then, we sample 1000 runs on MNIST and 20 runs on CIFAR-10 from these results to evaluate the model accuracy, and re-sample 100 times to calculate statistics. Figure 5 shows the accuracy curve under tuned ODI-PGD. We observe that the confidence intervals become tighter as the number of restarts grows, and tuned ODI-PGD consistently outperforms the state-of-the-art attack after 1000 restarts on MNIST and 20 restarts on CIFAR-10.
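The bootstrap procedure above can be sketched roughly as follows; the array layout and function name are our own assumptions (a boolean per-restart success indicator for each test image):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_accuracy_ci(success, n_sample, n_boot=100, alpha=0.05):
    """success: (n_runs, n_images) boolean array, True where a restart found
    a valid adversarial example for that image. Resamples n_sample runs with
    replacement and returns a percentile interval for the model accuracy
    (an image survives only if no sampled restart fooled the model)."""
    accs = []
    n_runs = success.shape[0]
    for _ in range(n_boot):
        idx = rng.integers(0, n_runs, size=n_sample)
        fooled = success[idx].any(axis=0)   # fooled by any sampled restart
        accs.append(1.0 - fooled.mean())    # surviving accuracy
    lo = np.percentile(accs, 100 * alpha / 2)
    hi = np.percentile(accs, 100 * (1 - alpha / 2))
    return lo, hi
```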

Figure 5: Model accuracy under tuned ODI-PGD and the current state-of-the-art attack, for MadryLab (MNIST, left) and MadryLab (CIFAR-10, right). The solid lines represent values from Table 4 and the gray shadows show 95% confidence intervals. The accuracy of state-of-the-art attacks is from Gowal et al. (2019).
untargeted targeted
(mean perturbation distance)
dataset model naïve ODI naïve ODI
MNIST MadryLab 2.218 2.238 2.964 3.395
CIFAR-10 MadryLab 0.712 0.669 1.156 1.133
ImageNet ResNet50 0.262 0.245 0.738 0.691
ImageNet ResNet152 (Denoise) 1.562 1.377 6.722 5.567
Table 5: Performance of the C&W attack with various initialization schemes. Each value represents the average of the minimum perturbation distances (lower is better).



Figure 6: The attack performance against number of restarts for C&W attacks with ODI and naïve restarts. The y-axis corresponds to the average of minimum perturbation distances.

5 General Applicability of ODI

In this section, we show that the efficacy of ODI is not limited to improving PGD for white-box attacks. In particular, we combine ODI with another popular attack, the C&W (Carlini and Wagner, 2017) attack, to show that ODI is agnostic to the underlying attack method. Furthermore, we demonstrate that ODI is also helpful in black-box settings.

5.1 Improving C&W Attacks with ODI

We give a comparison between C&W attacks with ODI and with naïve random restarts. As mentioned in Section 3.1, initial inputs generated by ODI lie within an ε-ball. We define naïve random restarts for C&W attacks so that the initial inputs are also within an ε-ball: we first sample Gaussian noise and then add the clipped noise to the original image.
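This naïve restart scheme can be sketched in a few lines of NumPy (the function name and the Gaussian scale are our own assumptions; the text does not specify the noise variance):

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_cw_restart(x_org, eps, sigma=None):
    """Naive restart for the C&W attack: draw Gaussian noise, clip it to the
    L-infinity eps-ball, and add it to the original image, so the starting
    point lies in the same ball as an ODI initialization."""
    if sigma is None:
        sigma = eps          # assumed scale; not specified in the text
    noise = rng.normal(0.0, sigma, size=x_org.shape)
    return x_org + np.clip(noise, -eps, eps)
```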


We perform C&W attacks against the same models as in Section 4.3: MadryLab's robust models on MNIST and CIFAR-10, the ResNet50 model on ImageNet, and the Feature Denoising ResNet152 model on ImageNet. The C&W attack is performed on the first 1000 images (MNIST and CIFAR-10) or the first 500 images (ImageNet) with 10 restarts. For each image, we calculate the minimum perturbation distance that yields a valid adversarial example among the 10 restarts, and report the average of these minimum distances.

We set the perturbation radius ε to be around the mean perturbation distance for each of MadryLab (MNIST), MadryLab (CIFAR-10), ResNet50, and ResNet152 (Denoise), in both untargeted and targeted settings. We relegate the hyperparameters of the C&W attacks to the Appendix.


All results are summarized in Table 5 and Figure 6. We observe that ODI outperforms naïve restarts for C&W attacks on both the CIFAR-10 and ImageNet datasets. ODI works particularly well against ImageNet models in the targeted setting, where it decreases the mean perturbation distance by up to 18%. Moreover, for the CIFAR-10 and ImageNet models, as shown in Figure 6, the performance gap between ODI and naïve restarts widens as the number of restarts grows. These results are consistent with those for PGD attacks in Section 4.3.

Interestingly, ODI for the C&W attack is comparatively less effective against the MNIST model. We hypothesize that this is because of gradient masking. As reported in Madry et al. (2018), gradient-based attacks can overestimate the robustness of their MNIST model due to gradient masking. Since gradient updates are crucial for ODI, gradient masking might negatively impact its performance.

5.2 Black-Box Attacks with ODI

Adversarial examples can be transferred to other models (Liu et al., 2017; Papernot et al., 2017)—one adversarial example generated against a source model might easily fool another model (the target model). This fact can be leveraged to construct black-box adversarial examples, where the exact architecture and weights of the target model are unknown. In this section, we show ODI is also effective when performing black-box attacks.

We consider three pre-trained models on CIFAR-10. Two of them are MadryLab’s adversarially trained model and the TRADES model, which are introduced in Section 4.4. The other model is also from MadryLab, but trained with original natural images. We call this model a natural model. All models use the WideResNet34 architecture. When performing black-box attacks against a target model, we use the other two models as source models.

target source naïve-PGD ODI-PGD
natural MadryLab 69.95% 63.98%
natural TRADES 62.25% 57.92%
MadryLab natural 84.44% 83.83%
MadryLab TRADES 68.04% 65.24%
TRADES natural 82.02% 81.71%
TRADES MadryLab 70.22% 67.39%
Table 6: Performance of black-box attacks on CIFAR-10. Each value represents the model accuracy on adversarial examples crafted using 20 restarts.

We compare the transferability of ODI-PGD versus naïve-PGD. Both attacks are performed with the same settings as the untargeted PGD attack in Section 4.3. We compute the accuracy of the target model on adversarial examples crafted against the source model using 20 restarts. Table 6 shows that the accuracy under ODI-PGD is lower than that under naïve-PGD for all pairs of source and target models. This indicates that ODI-PGD produces more transferable adversarial examples, regardless of whether target models are adversarially trained or not.

6 Conclusion

We propose ODI, a novel initialization strategy for gradient-based adversarial attacks. By generating inputs that are more diverse as measured in the output space, ODI improves the optimization procedure of attack algorithms. We demonstrate that attacks initialized with ODI provide a tighter estimate of model robustness than attacks with naïve random restarts. Combining ODI with the prevailing PGD attack, we achieve new state-of-the-art performance in various settings; our results rank first on many leaderboards provided by the MadryLab and TRADES authors. Moreover, ODI is broadly useful for improving other attack methods, such as the C&W attack, and yields better transferability in black-box attacks.


Acknowledgments

This research was supported in part by AFOSR (FA9550-19-1-0024), NSF (#1651565, #1522054, #1733686), ONR, and FLI.


  • N. Carlini and D. Wagner (2017) Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symposium on Security and Privacy, Cited by: §2.1, §5.
  • Y. Carmon, A. Raghunathan, L. Schmidt, P. Liang, and J. C. Duchi (2019) Unlabeled data improves adversarial robustness. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, §4.2, Table 2.
  • J. Cohen, E. Rosenfeld, and Z. Kolter (2019) Certified adversarial robustness via randomized smoothing. In International Conference on Machine Learning, Cited by: §1.
  • F. Croce and M. Hein (2019) Minimally distorted adversarial examples with a fast adaptive boundary attack. arXiv preprint arXiv:1907.02044. Cited by: §1.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2015) Explaining and harnessing adversarial examples. In International Conference on Learning Representations, Cited by: §1.
  • S. Gowal, J. Uesato, C. Qin, P. Huang, T. Mann, and P. Kohli (2019) An alternative surrogate loss for pgd-based adversarial testing. arXiv preprint arXiv:1910.09338. Cited by: §A.3, §1, Figure 5, §4.4, §4.4, §4.4, Table 4.
  • K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §3.2.
  • D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In International Conference on Learning Representations, Cited by: §A.3.
  • Y. Liu, X. Chen, C. Liu, and D. Song (2017) Delving into transferable adversarial examples and black-box attacks. In International Conference on Learning Representations, Cited by: §5.2.
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu (2018) Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, Cited by: §1, §2.1, §3.2, §4.3, §5.1.
  • C. Mao, Z. Zhong, J. Yang, C. Vondrick, and B. Ray (2019) Metric learning for adversarial robustness. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, Table 2.
  • S. Moosavi-Dezfooli, A. Fawzi, J. Uesato, and P. Frossard (2019) Robustness via curvature regularization, and vice versa. In International Conference on Learning Representations, Cited by: §1.
  • N. Papernot, P. McDaniel, and I. Goodfellow (2017) Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. In Asia Conference on Computer and Communications Security, Cited by: §5.2.
  • C. Qin, J. Martens, S. Gowal, D. Krishnan, K. Dvijotham, A. Fawzi, S. De, R. Stanforth, and P. Kohli (2019) Adversarial robustness through local linearization. In Advances in Neural Information Processing Systems, Cited by: §1.
  • A. Raghunathan, J. Steinhardt, and P. S. Liang (2018) Certified defenses against adversarial examples. In International Conference on Learning Representations, Cited by: §1.
  • A. S. Ross and F. Doshi-Velez (2018) Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In AAAI Conference on Artificial Intelligence, Cited by: §1.
  • P. Samangouei, M. Kabkab, and R. Chellappa (2018) Defense-gan: protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations, Cited by: §1.
  • A. Shafahi, M. Najibi, A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, and T. Goldstein (2019) Adversarial training for free!. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, §4.2, Table 2.
  • Y. Song, T. Kim, S. Nowozin, S. Ermon, and N. Kushman (2018) Pixeldefend: leveraging generative models to understand and defend against adversarial examples. In International Conference on Learning Representations, Cited by: §1.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
  • J. Uesato, J. Alayrac, P. Huang, R. Stanforth, A. Fawzi, and P. Kohli (2019) Are labels required for improving adversarial robustness?. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, §4.2, Table 2.
  • J. Uesato, B. O’Donoghue, A. van den Oord, and P. Kohli (2018) Adversarial risk and the dangers of evaluating against weak attacks. In International Conference on Machine Learning, Cited by: §A.3.
  • L. van der Maaten and G. Hinton (2008) Visualizing data using t-sne. Journal of machine learning research 9. Cited by: §B.1.
  • E. Wong and J. Z. Kolter (2017) Provable defenses against adversarial examples via the convex outer adversarial polytope. In International Conference on Machine Learning, Cited by: §1.
  • C. Xie, Y. Wu, L. van der Maaten, A. Yuille, and K. He (2019) Feature denoising for improving adversarial robustness. In IEEE Conference on Computer Vision and Pattern Recognition, Cited by: §4.3.
  • D. Zhang, T. Zhang, Y. Lu, Z. Zhu, and B. Dong (2019a) You only propagate once: accelerating adversarial training via maximal principle. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, §4.2, Table 2.
  • H. Zhang and J. Wang (2018) Adversarially robust training through structured gradient regularization. In Advances in Neural Information Processing Systems, Cited by: §1.
  • H. Zhang and J. Wang (2019) Defense against adversarial attacks using feature scattering-based adversarial training. In Advances in Neural Information Processing Systems, Cited by: Table 9, §4.2, §4.2, Table 2.
  • H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan (2019b) Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, Cited by: §4.4.
  • T. Zheng, C. Chen, and K. Ren (2019) Distributionally adversarial attack. In AAAI Conference on Artificial Intelligence, Cited by: §1.

Appendix A Experiment Settings

a.1 Setting for adversarial training in Section 3.2

We describe the setting for adversarial training on CIFAR-10 in Section 3.2. We adopted popular hyperparameters for adversarial training under the PGD attack on CIFAR-10: perturbation size ε, step size α, and number of steps N. We train for 100 epochs, with the learning rate scheduled by epoch: 0.1 until epoch 75, 0.01 until epoch 90, and 0.001 until epoch 100. The batch size is 128 and the weight decay is 0.0002.

a.2 Hyperparameter of PGD attacks in Section 4.3

We describe the hyperparameters of the PGD attacks in Section 4.3. The PGD attack has three hyperparameters: perturbation size ε, step size α, and number of steps N. We already gave ε in Section 4.3. We chose N separately for the untargeted and targeted settings, and set α per model, for MadryLab (MNIST), MadryLab (CIFAR-10), ResNet50, and ResNet152 (Denoise), respectively.
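As a reference for how these three hyperparameters interact, a minimal ℓ_∞ PGD loop can be sketched as follows. This is a NumPy toy: `grad_fn` stands in for the gradient of the attack loss with respect to the input, and a [0, 1] pixel range is assumed.

```python
import numpy as np

def pgd_linf(grad_fn, x0, eps, alpha, n_steps, rng):
    """Untargeted l_inf PGD sketch: random start inside the eps-ball,
    then signed gradient ascent, projecting back after every step."""
    x = np.clip(x0 + rng.uniform(-eps, eps, size=x0.shape), 0.0, 1.0)
    for _ in range(n_steps):
        x = x + alpha * np.sign(grad_fn(x))   # ascent on the attack loss
        x = np.clip(x, x0 - eps, x0 + eps)    # project onto the eps-ball
        x = np.clip(x, 0.0, 1.0)              # keep valid pixel values
    return x
```

Each restart reruns this loop from a fresh random start; ODI changes only how that start is chosen, not the PGD updates themselves.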

a.3 Hyperparameter tuning for tuned ODI-PGD in Section 4.4

We describe hyperparameter tuning for our tuned ODI-PGD in Section 4.4. We summarize the setting in Table 7.

model | ODI steps | ODI step size | optimizer | PGD steps | PGD step size (learning rate)
MadryLab (MNIST) | 50 | 0.05 | Adam | 1000 | —
MadryLab (CIFAR-10) | 10 | 8/255 | sign | — | —
TRADES (CIFAR-10) | 10 | 0.031 | sign | — | —
Table 7: Hyperparameter setting for tuned ODI-PGD in Section 4.4.

For ODI, we increased the number of ODI steps to obtain more diversified inputs than with the default ODI setting. In addition, we made the step size smaller than ε on MNIST, because the ℓ_∞-ball on MNIST is large and a step size of ε is not suitable for seeking diversity within it. In summary, we set these values separately for MadryLab’s MNIST model, MadryLab’s CIFAR-10 model, and the TRADES CIFAR-10 model.
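For reference, the ODI initialization being tuned here can be sketched on a toy linear model, where the gradient of w·f(x) with respect to x has a closed form. The linear model and the omission of [0, 1] clipping are simplifying assumptions of this sketch.

```python
import numpy as np

def odi_init(W, x0, eps, n_steps, eta, rng):
    """ODI initialization sketch for a linear toy model f(x) = W @ x:
    push the logits along a random output direction w before the main
    attack. For linear logits, d/dx of w @ f(x) is W.T @ w."""
    w = rng.uniform(-1.0, 1.0, size=W.shape[0])   # random output direction
    x = x0.copy()
    for _ in range(n_steps):
        grad = W.T @ w                            # constant for a linear model
        x = np.clip(x + eta * np.sign(grad), x0 - eps, x0 + eps)
    return x
```

For a real network, `grad` would be recomputed by backpropagation at each step; more ODI steps (larger `n_steps`) and a larger `eta` let the start drift further through the ε-ball, which is the trade-off tuned in Table 7.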

We tuned the PGD hyperparameters based on Gowal et al. (2019). While several studies use the sign function to update images in the PGD attack, Uesato et al. (2018) and Gowal et al. (2019) reported that updates with the Adam optimizer (Kingma and Ba, 2015) give better results than the sign function. Following Gowal et al. (2019), we treat the sign function as an optimizer and the choice of optimizer as a hyperparameter. We used Adam for the PGD attack on the MNIST model and the sign function on the CIFAR-10 models.

We adopted a scheduled step size instead of a fixed one. Because we empirically found that starting from a large step size leads to better results, we set a large initial step size for the CIFAR-10 models. When we use Adam, the step size is interpreted as the learning rate.

a.4 Hyperparameter of C&W attacks in Section 5.1

We set the hyperparameters of the C&W attacks in Section 5.1 as follows: 1000 max iterations on MNIST and 100 on CIFAR-10 and ImageNet, 10 binary search steps, a learning rate of 0.1, and an initial constant of 0.01.

Appendix B Additional Experiments

b.1 Visualization of the diversity produced by ODI

In Section 3.2, we illustrated the diversity of attack results under ODI-PGD in the loss-diversity experiment. As an additional experiment, we apply t-SNE (van der Maaten and Hinton, 2008) to the results of that experiment to visualize this diversity. As input to t-SNE, we use the output logits of the initial inputs and of the attack results of ODI-PGD and naïve-PGD. Figure 7 visualizes the resulting embedding. As expected, the initial inputs produced by ODI (left) are more diverse than those from naïve restarts, and as a result ODI-PGD finds a different cluster of adversarial examples at the bottom of the right panel.

Figure 7: (Left) t-SNE embedding of the initial inputs sampled by each attack. (Right) Embedding of the attack results of each attack. The input to t-SNE is produced in the experiment in Section 3.2.
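t-SNE itself requires an external library; as a dependency-free stand-in, a 2-D PCA projection of the logit vectors illustrates the same idea of embedding high-dimensional outputs in the plane. PCA is linear and does not reproduce t-SNE's cluster structure; this is only a sketch of the visualization pipeline.

```python
import numpy as np

def project_2d(X):
    """Project rows of X (e.g. logit vectors, one per restart) onto their
    top two principal components: a linear analogue of a 2-D embedding."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: PCs
    return Xc @ Vt[:2].T
```

In the actual experiment, `X` would hold the logits of the initial inputs (or attack results) across restarts, and the t-SNE embedding would replace this projection.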

b.2 Analysis of the sensitivity to ODI hyperparameters

In this paper, we mainly use a small number of ODI steps and a step size on the order of ε. To validate this setting, we confirm that ODI-PGD is not sensitive to these hyperparameters. We adopt the same attack setup as in Section 4.1. We test several values of the number of ODI steps and the ODI step size, but exclude combinations whose total ODI step length is smaller than the diameter of the ℓ_∞-ball. As in the previous experiment, we calculate the mean accuracy over five repetitions of the attack, each with 20 restarts.

ODI steps mean max min
2 44.46% 44.50% 44.45%
4 44.47% 44.50% 44.42%
4 44.42% 44.48% 44.40%
8 44.47% 44.52% 44.44%
8 44.42% 44.48% 44.36%
8 44.46% 44.49% 44.42%
16 44.46% 44.50% 44.43%
16 44.46% 44.50% 44.40%
16 44.45% 44.48% 44.43%
16 44.44% 44.47% 44.41%
Table 8: The sensitivity to the number of ODI steps and ODI step size . We repeat each experiment 5 times to calculate statistics.

Table 8 shows the mean accuracy under ODI-PGD for different hyperparameters. The maximum difference in mean accuracy across ODI hyperparameters is only 0.05%. Although a larger number of steps and a larger step size can help find more diversified initial inputs, the performance of ODI is not very sensitive to these hyperparameters. We therefore restrict the number of ODI steps to a small value to keep the comparison fair in terms of computation time. Table 8 also shows that the difference between the maximum and minimum accuracy is about 0.1% for all hyperparameter pairs, which supports the stability of ODI.

b.3 Effect of the loss function for the PGD attack in Section 4.2

While we adopt the margin loss for the untargeted PGD attack, many studies evaluate their defense models with the PGD attack under the cross-entropy loss. To investigate the effect of the loss function, we run ODI-PGD and naïve-PGD with the cross-entropy loss against the recently proposed defense models from Section 4.2. The results are in Table 9. For all models, ODI-PGD with the margin loss achieves lower accuracy than with the cross-entropy loss, indicating that the margin loss is better suited to ODI. In addition, ODI-PGD with the cross-entropy loss outperforms naïve-PGD with the cross-entropy loss, so the effectiveness of ODI does not depend on the choice of loss function. We note that which loss works better for naïve-PGD depends on the model being attacked.

margin cross-entropy
model naïve-PGD ODI-PGD naïve-PGD ODI-PGD
UAT (Uesato et al., 2019) 61.93% 57.43% 61.44% 59.76%
RST (Carmon et al., 2019) 60.77% 59.93% 62.06% 61.28%
Feature-scatter (Zhang and Wang, 2019) 56.49% 39.52% 67.48% 42.50%
Metric learning (Mao et al., 2019) 49.91% 47.64% 50.05% 48.88%
Free (Shafahi et al., 2019) 46.39% 44.20% 47.06% 46.63%
YOPO (Zhang et al., 2019a) 47.07% 45.09% 46.12% 45.47%
Table 9: Effect of the choice of the loss function on model accuracy of recently proposed defense models under ODI-PGD and naïve-PGD.

b.4 Effect of the choice of the loss function for untargeted and targeted PGD attack

In this paper, we have adopted the margin loss for the untargeted PGD attack and the cross-entropy loss for the targeted PGD attack, for better performance. To validate this choice, we evaluate ODI-PGD with each loss function in both the untargeted and targeted settings, using the same defense models and setup as Section 4.4. We replace the loss function used in Section 4.4 with the other loss and compare attack performance. Table 10 shows the results.

The margin loss is better than or equal to the cross-entropy loss for untargeted attacks, as we saw in Appendix B.3. In contrast, the cross-entropy loss is better than or almost equal to the margin loss for targeted attacks. While minimizing the cross-entropy loss reduces all logit values except the target class, minimizing the margin loss only focuses on two classes at each step and may increase the logit values of other classes. Thus, the cross-entropy loss is better suited to targeted attacks.

untargeted targeted
(model accuracy) (success rate)
dataset model margin cross-entropy margin cross-entropy
MNIST MadryLab 90.06% 90.15% 1.67% 1.66%
CIFAR-10 MadryLab 44.32% 44.34% 22.88% 23.87%
ImageNet ResNet50 53.7% 56.2% 63.2% 63.0%
ImageNet ResNet152 (Denoise) 42.3% 42.3% 31.4% 36.5%
Table 10: Effect of the loss function for attack performance of untargeted and targeted PGD attack with ODI. Each value shows the model accuracy for the untargeted PGD attack (lower is better) and the attack success rate for the targeted PGD attack (higher is better).
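The two attack losses compared above can be written directly on the logits. A minimal NumPy version follows, with sign conventions chosen so that an untargeted attack decreases `margin` while increasing the true class's `cross_entropy`:

```python
import numpy as np

def cross_entropy(z, y):
    """Cross-entropy of logits z at class y, via a stable log-sum-exp."""
    m = z.max()
    return float(np.log(np.sum(np.exp(z - m))) + m - z[y])

def margin(z, y):
    """Margin loss: true logit minus the best wrong logit.
    Negative once the model is fooled in the untargeted sense."""
    wrong = np.delete(z, y)
    return float(z[y] - wrong.max())
```

Minimizing `margin` touches only the top competing class at each step, whereas a targeted attack minimizing `cross_entropy` of the target class pushes down all non-target logits, matching the discussion above.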

b.5 Diversity produced by ODI on MNIST and ImageNet

model method pair pairwise distance (input space) pairwise distance (output space) cosine similarity of gradients
naïve-PGD initial inputs 4.161 (0.010) 2.108 (0.089) 0.230 (0.022)
MadryLab ODI-PGD initial inputs 6.301 (0.259) 3.530 (0.305) 0.193 (0.027)
(MNIST) naïve-PGD attack results 3.837 (0.045) 2.184 (0.110) 0.436 (0.015)
ODI-PGD attack results 4.156 (0.071) 2.701 (0.161) 0.401 (0.020)
naïve-PGD initial inputs 0.202 (0.000) 0.180 (0.003) 0.963 (0.002)
ResNet50 ODI-PGD initial inputs 0.317 (0.002) 14.413 (0.325) 0.684 (0.013)
(ImageNet) naïve-PGD attack results 0.081 (0.001) 0.396 (0.120) 0.936 (0.004)
ODI-PGD attack results 0.172 (0.005) 6.091 (0.597) 0.774 (0.019)
naïve-PGD initial inputs 4.893 (0.001) 0.393 (0.009) 0.887 (0.004)
ResNet152 (Denoise) ODI-PGD initial inputs 7.641 (0.146) 8.568 (0.376) 0.727 (0.015)
(ImageNet) naïve-PGD attack results 2.732 (0.034) 0.337 (0.064) 0.970 (0.005)
ODI-PGD attack results 3.834 (0.102) 2.244 (0.253) 0.867 (0.018)
Table 11: Average pairwise distances among 10 different runs of attacks on the MNIST and ImageNet models (standard deviation in parenthesis).

We have shown that ODI produces more diversified inputs on CIFAR-10 in Section 3.2. We now evaluate the diversity of the initial inputs produced by ODI on the MNIST and ImageNet models, and show that the effectiveness of ODI depends on the model being attacked. We use the pre-trained models introduced in Section 4.3. The procedure for measuring diversity is the same as in Section 3.2: we run untargeted PGD attacks 10 times on the first 100 images and calculate the mean pairwise distances of the initial inputs and attack results in the input and output spaces. The attack parameters are the same as in Section 4.3.

The results are in Table 11. On the ImageNet models, the average pairwise distance between the initial inputs of ODI-PGD in the output space is more than 20 times that of naïve-PGD. On the MNIST model, by contrast, the difference is relatively small. In addition, ODI significantly reduces the average pairwise cosine similarity of the gradient directions at the initial inputs on the ImageNet models. The diversity of the initial inputs on each model translates into diversity of the attack results in the output space. These results are consistent with the effectiveness of ODI observed in Section 4.3.
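The two diversity statistics in Table 11 are straightforward to compute. A sketch of both metrics, mean pairwise ℓ_2 distance and mean pairwise cosine similarity over the rows of a matrix (one row per restart):

```python
import numpy as np

def mean_pairwise_dist(X):
    """Mean l_2 distance over all unordered pairs of rows of X."""
    n = len(X)
    d = [np.linalg.norm(X[i] - X[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))

def mean_pairwise_cosine(G):
    """Mean cosine similarity over all unordered pairs of rows of G
    (e.g. input gradients, one per restart)."""
    U = G / np.linalg.norm(G, axis=1, keepdims=True)
    n = len(U)
    s = [float(U[i] @ U[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(s))
```

Here `X` would hold the 10 restarts of one image, either in pixel space or in logit space, and the table reports these values averaged over the first 100 images.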

b.6 Additional result for the comparison with state-of-the-art attacks in Section 4.4

In Section 4.4, we described the accuracy curves with confidence intervals for MadryLab’s MNIST and CIFAR-10 models. Figure 8 shows the accuracy curve for the TRADES model; the result is similar to that for MadryLab’s CIFAR-10 model in Figure 5. We note that ODI-PGD reduces the accuracy to 88.04% after 3000 restarts on the MNIST model, to 43.97% after 100 restarts on MadryLab’s CIFAR-10 model, and to 52.98% after 100 restarts on the TRADES CIFAR-10 model. We do not report these results in Table 4 because we mainly focus on the efficiency of ODI-PGD.

Figure 8: Accuracy curve under tuned ODI-PGD on the TRADES model. The solid lines are the results in Table 4 and the gray shadows show 95% confidence intervals.

b.7 Effectiveness of restarts for the C&W attack

untargeted targeted
dataset model no restart naïve ODI no restart naïve ODI
MNIST MadryLab 2.787 2.218 2.238 4.178 2.964 3.395
CIFAR-10 MadryLab 0.930 0.712 0.669 1.271 1.156 1.133
ImageNet ResNet50 0.268 0.262 0.245 0.718 0.738 0.691
ImageNet ResNet152 (Denoise) 2.038 1.562 1.377 7.178 6.722 5.567
Table 12: Comparison between the C&W attack with ODI and the attack with no restart. Each value shows the average of the minimum perturbation distance. The distance for the C&W attack with naïve restarts is displayed as a reference.

In Section 5.1, we evaluated the C&W attack with ODI by comparing it with the C&W attack with naïve restarts. However, the C&W attack is typically performed from the original image with no restart. To confirm the effectiveness of restarts, we compare the C&W attack with ODI against the attack with no restart. Table 12 shows that the C&W attack with ODI outperforms the no-restart attack in all settings, indicating that ODI restarts are effective for the C&W attack.
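The restart protocol behind Table 12 reduces to keeping the smallest perturbation found across restarts. A hedged sketch follows; the `attack_fn(x, y, rng)` interface is an assumption of this sketch, standing in for one run of C&W from an ODI (or naïve) start.

```python
import numpy as np

def best_over_restarts(attack_fn, x, y, n_restarts, rng):
    """Keep the adversarial example with the smallest l_2 perturbation
    across random restarts, as in the minimum-distance comparison of
    Table 12."""
    best, best_dist = None, np.inf
    for _ in range(n_restarts):
        x_adv = attack_fn(x, y, rng)
        dist = np.linalg.norm(x_adv - x)
        if dist < best_dist:
            best, best_dist = x_adv, dist
    return best, best_dist
```

With `n_restarts=1` and a deterministic start from the original image, this degenerates to the no-restart baseline in the table.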