Reducing Adversarial Example Transferability Using Gradient Regularization

04/16/2019 · George Adam et al.

Deep learning algorithms have increasingly been shown to lack robustness to simple adversarial examples (AdvX). An equally troubling observation is that these adversarial examples transfer between different architectures trained on different datasets. We investigate the transferability of adversarial examples between models using the angle between the input-output Jacobians of different models. To demonstrate the relevance of this approach, we perform case studies that involve jointly training pairs of models. These case studies empirically justify the theoretical intuitions for why the angle between gradients is a fundamental quantity in AdvX transferability. Furthermore, we consider the asymmetry of AdvX transferability between two models of the same architecture and explain it in terms of differences in gradient norms between the models. Lastly, we provide a simple modification to existing training setups that reduces transferability of adversarial examples between pairs of models.


1 Introduction

Deep Neural Networks (DNNs) consistently define state-of-the-art performance in many machine learning areas such as image classification, segmentation, speech recognition, and language translation (Hinton et al., 2012; Krizhevsky et al., 2012; Sutskever et al., 2014). These results have led to DNNs being increasingly deployed in production settings, including self-driving cars, on-the-fly speech translation, and facial recognition for identification. However, like previous machine learning approaches, DNNs have been shown to be vulnerable to adversarial attacks at test time (Szegedy et al., 2013). The existence of such AdvX is of immense concern to the security community as human decision making is increasingly outsourced to machine learning. Moreover, the AdvX arms race has been mainly one-sided, with attackers able to fool a diverse variety of proposed defenses.

While numerous approaches have been designed to reduce the threat of AdvX, iterative methods that optimize the AdvX input by computing gradients have to date been successful in breaking proposed defenses (Carlini and Wagner, 2016). An unexpected result regarding adversarial examples is their high level of transferability between deep learning models with different architectures and objective functions. In fact, adversarial examples generated for a deep Convolutional Neural Network (CNN) can even transfer to random forests, SVMs, and logistic regression models (Papernot et al., 2016). This allows attackers to use surrogate models in a black-box scenario, estimating gradients to break black-box defenses. Understanding why these unexpectedly high transferability rates arise could inform future work on both black-box and ensemble defenses.

In our work, we focus on transferability between deep learning models under gradient-based attacks. Specifically, we consider the magnitudes of and cosine similarity between the input-output Jacobians of model pairs, and show that together they reliably predict transferability. By minimizing the cosine similarity, we are able to significantly reduce the transferability of adversarial examples for any two models we choose. We also explain why the transferability of adversarial examples between two models is asymmetric, and show that it can be made symmetric using simple regularization.

The contributions of this paper are:

  • Showing that small input-output Jacobian magnitudes create a false sense of security, and that these magnitudes should be reported along with attack parameters

  • Understanding the asymmetry of adversarial example transferability that still exists when controlling for architecture, hyperparameters, and performance

  • Introducing a novel regularization scheme for reducing the transferability of adversarial examples between two or more models

2 Related Work

Adversarial examples in the context of DNNs came into the spotlight after Szegedy et al. (2013) showed that imperceptible perturbations could fool state-of-the-art computer vision systems. Since then, adversarial examples have been demonstrated in many other domains, notably including speech recognition (Carlini and Wagner, 2018) and malware detection (Grosse et al., 2016). Nevertheless, CNNs in computer vision provide a convenient domain to explore adversarial attacks and defenses, due to the existence of standardized test datasets, high-performing CNN models reaching human or super-human accuracy on clean data, and the marked deterioration of their performance when subjected to adversarial examples to which human vision is robust.

The intra-model and inter-model transferability of AdvX was investigated thoroughly by Papernot et al. (2016). It was found that adversarial examples transfer between fundamentally different MNIST classification models. This showed that neural networks are not special in their vulnerability, and that simply hiding model details from an attacker is bound to fail, since a substitute model can be trained whose AdvX are likely to transfer to the hidden model. Further work revealed the space of transferable AdvX to be large in dimensionality and posited sufficient conditions for transferability (Tramèr et al., 2017). These results support our assumption that defending against transfer attacks is possible even if the original model is vulnerable to direct attacks.

Transferable AdvX pose a security risk mainly if they cause a misclassification to the same class across models. Otherwise, a defender could use the disagreement between two models with similar performance as a way of detecting AdvX. Liu et al. (2017) visualized decision boundaries of several ImageNet classifiers and found that although targeted adversarial examples transfer to other models quite easily, they do not transfer with the same target label. However, a targeted attack on an ensemble of models significantly increases the same-target transfer rate.

It is possible to increase the transferability of AdvX by smoothing the loss function being optimized by an attack, as done by Wu et al. (2018). The authors use cosine similarity between the gradients of source and target models to justify the claim that increasing the robustness of AdvX to Gaussian perturbations via smoothing increases their transferability. Hence, we also consider cosine similarity between gradients as an important quantity, except that we optimize it directly in order to show that it alone is a significant cause of AdvX transferability. In contrast to Wu et al., we are not optimizing attacks to increase their strength, but rather regularizing models to be more resistant to transfer attacks. We stress that this is not the same strategy as obfuscating gradients, which was shown to give a false sense of security by Athalye et al. (2018). Instead, we are constraining models to be fundamentally diverse in their robustness to different AdvX: an AdvX that fools one model will not fool the other, and vice versa. This relationship is shown to be attainable empirically via gradient regularization.

3 Methods

3.1 Attacks

We consider three attacks in this paper for completeness. For each attack, we work with the output of the classifier at the ground truth entry (gt), denoted $f_{gt}(x)$, instead of taking gradients w.r.t. loss functions, since it is notationally more convenient.

Fast Gradient Sign (FGS)

We use the following definition of the FGS attack introduced by Goodfellow et al. (2014):

$x_{adv} = x - \epsilon \cdot \mathrm{sign}(\nabla_x f_{gt}(x)),$

where $\epsilon$ is a hyperparameter that controls the $\ell_\infty$ norm of the perturbation. $\epsilon$ is typically chosen in such a way as to fool the model while keeping the adversarial perturbation unnoticeable to humans, so there is subjectivity involved here as with all attacks. Intuitively, this takes a single step in the direction that minimizes the prediction confidence of the classifier for the ground truth class of input $x$. However, the step is not exactly in the direction of $-\nabla_x f_{gt}(x)$, since the $\mathrm{sign}$ function can cause a difference in angle between $\mathrm{sign}(\nabla_x f_{gt}(x))$ and $\nabla_x f_{gt}(x)$ of up to nearly $90°$ in high dimensions. This attack relies on the assumption that neural networks are more or less linear, so just a single step in the gradient direction should be sufficient to cross a decision boundary.
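To make the notation concrete, here is a minimal PyTorch sketch of a single-step FGS attack on the true-class probability $f_{gt}(x)$; the model, input batch, and label names are placeholders, and the clamp to [0, 1] assumes image pixels in that range.

import torch
import torch.nn.functional as F

def fgs_attack(model, x, y, eps):
    """Single-step FGS that descends on the softmax output at the true class."""
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)            # f(x) as probabilities
    f_gt = probs.gather(1, y.unsqueeze(1)).sum()  # sum of true-class confidences
    grad, = torch.autograd.grad(f_gt, x)
    x_adv = x - eps * grad.sign()                 # step against the true-class gradient
    return x_adv.clamp(0.0, 1.0).detach()         # assumption: pixels live in [0, 1]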

Iterative Gradient Sign (IGS)

The IGS attack (Kurakin et al., 2016) is very similar to the FGS attack, except that it performs many smaller steps rather than a single step. Two hyperparameters $\epsilon$ and $n$ are used, where $\epsilon$ bounds the $\ell_\infty$ norm between the final adversarial image and the initial image, and $n$ is the number of iterations. The following procedure is followed:

$x^{(0)} = x, \qquad x^{(t+1)} = x^{(t)} - \tfrac{\epsilon}{n} \cdot \mathrm{sign}\!\left(\nabla_x f_{gt}(x^{(t)})\right), \quad t = 0, \dots, n-1,$

so that the final image satisfies $\|x^{(n)} - x\|_\infty \leq \epsilon$.
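A corresponding PyTorch sketch of IGS, under the assumption (matching the reconstruction above) that the per-step size is $\epsilon / n$; as before, the names are placeholders and the final clamp to [0, 1] is an assumption about the pixel range.

import torch
import torch.nn.functional as F

def igs_attack(model, x, y, eps, n):
    """Iterative gradient sign: n steps of size eps / n, so the total l-inf change is at most eps."""
    x_adv = x.clone().detach()
    for _ in range(n):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        probs = F.softmax(model(x_adv), dim=1)
        f_gt = probs.gather(1, y.unsqueeze(1)).sum()
        grad, = torch.autograd.grad(f_gt, x_adv)
        x_adv = (x_adv - (eps / n) * grad.sign()).detach()
        x_adv = x_adv.clamp(0.0, 1.0)             # assumption: valid pixel range is [0, 1]
    return x_adv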

Carlini and Wagner (CW)

Carlini and Wagner (2016) introduced a very powerful attack that solves the following optimization problem using gradient descent:

$\min_{\delta} \; \|\delta\|_2^2 + c \cdot g(x + \delta),$

where $g(x + \delta)$ is a term that is minimized when $x + \delta$ is confidently misclassified. Here $c$ is a hyperparameter that is optimized using binary search and trades off the two simultaneous minimization objectives $\|\delta\|_2^2$ and $g(x + \delta)$. Plainly, it finds a minimal perturbation that will cause a misclassification.

3.2 Why Transferability Matters

AdvX transferability is a significant security concern since it allows adversaries to fool a machine learning service in a black-box scenario. Additionally, transfer attacks can be used in grey-box scenarios where a defense that obfuscates input-output gradients makes it difficult to attack the protected model directly, but simple to build a surrogate model that can be easily attacked and whose adversarial examples transfer to the protected model. Thus, we propose a technique that reduces the transferability between two given models by forcing them to be robust to different sets of AdvX. Given the ability to reduce the transferability between two models, a trivial detection method can be constructed as follows:

  • Replace the original model with two models $M_1$ and $M_2$ whose input-output gradients have been regularized to satisfy one of the geometric relationships described below (e.g. perpendicular or antiparallel gradients)

  • Compare the disagreement between $M_1(x)$ and $M_2(x)$ to a threshold in order to classify images as regular or adversarial

Intuitively, this is an agreement-based detection method that leverages the fact that $M_1$ and $M_2$ are robust to different AdvX in order to flag inputs on which the models differ significantly in prediction confidence; a minimal sketch of such a detector is given below. In the following sections, we discuss in detail how to reduce AdvX transferability between two models without reducing their performance on the original image classification task.
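A minimal sketch of such an agreement-based detector; the choice of the largest per-class probability gap as the disagreement score, as well as the threshold tau, are illustrative assumptions rather than a prescription from the paper.

import torch
import torch.nn.functional as F

def flag_adversarial(model_1, model_2, x, tau):
    """Flag inputs on which two gradient-regularized models disagree.

    Disagreement is measured as the largest per-class gap between the two
    models' softmax outputs; inputs with a gap above tau are flagged.
    """
    with torch.no_grad():
        p1 = F.softmax(model_1(x), dim=1)
        p2 = F.softmax(model_2(x), dim=1)
    disagreement = (p1 - p2).abs().max(dim=1).values
    return disagreement > tau   # boolean mask: True = likely adversarial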

3.3 Jacobian Magnitude Determines Attack Difficulty

Basic perturbation-based attacks such as FGS or IGS exploit the input-output gradient $\nabla_x f_{gt}(x)$ by making changes in input space that are in the opposite direction of $\nabla_x f_{gt}(x)$. More sophisticated attacks such as the CW attack can be intuitively thought of as implicitly using this quantity even though it does not appear in the attack objective itself. We begin by showing a simple result: for a single classifier $f$, $\|\nabla_x f_{gt}(x)\|$ determines the value of $\epsilon$ required for the FGS attack to fool the classifier. While this result is intuitively obvious and easy to demonstrate, it is often overlooked, and models with a small $\|\nabla_x f_{gt}(x)\|$ can provide a false sense of security against simple attacks like FGS.

The claim that $\|\nabla_x f_{gt}(x)\|$ controls how large of a perturbation must be made to an input in order to fool a classifier can be demonstrated through a Taylor series approximation. Let $x$ be the original input and assume that $f$ outputs probabilities. The most effective single step attack an adversary can perform is to move in the direction opposite of $\nabla_x f_{gt}(x)$. Thus, our potential adversarial example is $x_{adv} = x - \epsilon \nabla_x f_{gt}(x)$. The first-order Taylor series approximation of $f_{gt}$ around $x$ is

$f_{gt}(x_{adv}) \approx f_{gt}(x) + (x_{adv} - x)^\top \nabla_x f_{gt}(x) = f_{gt}(x) - \epsilon \|\nabla_x f_{gt}(x)\|_2^2.$

Thus, the success of such an attack heavily depends on $\|\nabla_x f_{gt}(x)\|_2$. FGS and IGS both apply the signum function to $\nabla_x f_{gt}(x)$ as a means of controlling the $\ell_\infty$ norm of the adversarial perturbation, but the same principle still applies even in this context. Let $x_{adv} = x - \epsilon \cdot \mathrm{sign}(\nabla_x f_{gt}(x))$. Then,

$f_{gt}(x_{adv}) \approx f_{gt}(x) - \epsilon \cdot \mathrm{sign}(\nabla_x f_{gt}(x))^\top \nabla_x f_{gt}(x) = f_{gt}(x) - \epsilon \|\nabla_x f_{gt}(x)\|_1 \leq f_{gt}(x) - \epsilon \|\nabla_x f_{gt}(x)\|_2.$

Here the last inequality is due to the triangle inequality. We use this bound instead of regularizing the $\ell_1$ norm directly since it seems to be more stable in practice. Given this observation, defense researchers should report $\|\nabla_x f_{gt}(x)\|$ when they publish new results, since reporting a low attack success rate (good defense) can provide a false sense of security against simpler attacks when the gradient magnitude is low.
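As an illustration of what reporting this quantity could look like, the following sketch computes the per-example $\ell_1$ and $\ell_2$ norms of $\nabla_x f_{gt}(x)$ in PyTorch (function and variable names are ours):

import torch
import torch.nn.functional as F

def input_gradient_norms(model, x, y):
    """Per-example l1 and l2 norms of the gradient of the true-class probability w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    f_gt = probs.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(f_gt, x)
    flat = grad.flatten(1)
    return flat.abs().sum(dim=1), flat.norm(dim=1)  # (l1 norms, l2 norms)

Averaging these norms over the test set, alongside attack success rates and attack parameters, makes it easier to judge whether a low FGS success rate reflects genuine robustness or merely a small gradient magnitude.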

Figure 1: Illustration of gradient directions for function pairs: (a) parallel, (b) orthogonal, (c) antiparallel.

3.4 Input-Output Jacobian Magnitude and Transferability Asymmetry

For two classifiers $f$ and $g$, assume that $f(x) = g(x)$ for all $x \in D_{train}$, where $D_{train}$ is some training set. The functions need not be equal when $x \notin D_{train}$. If $\|\nabla_x f_{gt}(x)\| = k \cdot \|\nabla_x g_{gt}(x)\|$ where $k > 1$, then by the result in the previous section the FGS budget required to fool $f$ is roughly $k$ times smaller than the budget required to fool $g$. Thus, if $k \gg 1$, adversarial examples crafted for $g$ (which require a larger $\epsilon$) will readily transfer to $f$, while examples crafted for $f$ will often be too small to fool $g$, so we must regularize the models to have $\|\nabla_x f_{gt}(x)\| \approx \|\nabla_x g_{gt}(x)\|$ in order to ensure symmetric transferability results. This may not be a sufficient condition for transfer attack symmetry, but violating this condition is sure to result in asymmetric transferability.
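As a rough worked example under the first-order approximation of Section 3.3 (here $m$ is a hypothetical confidence margin that must be erased to change the prediction, and the magnitudes 0.1 and 5.0 are borrowed from the settings later used in Table 4):

\[
f_{gt}\!\left(x - \epsilon \, \tfrac{\nabla_x f_{gt}(x)}{\|\nabla_x f_{gt}(x)\|_2}\right) \approx f_{gt}(x) - \epsilon\,\|\nabla_x f_{gt}(x)\|_2
\quad\Longrightarrow\quad
\epsilon_{\text{needed}} \approx \frac{m}{\|\nabla_x f_{gt}(x)\|_2},
\]
\[
\|\nabla_x f_{gt}(x)\|_2 = 5.0 \;\Rightarrow\; \epsilon_f \approx \frac{m}{5.0},
\qquad
\|\nabla_x g_{gt}(x)\|_2 = 0.1 \;\Rightarrow\; \epsilon_g \approx \frac{m}{0.1} = 50\,\epsilon_f .
\]

A perturbation large enough to fool the small-gradient model is then far larger than needed to fool the large-gradient model, but not vice versa, which is the asymmetry observed empirically in Section 4.2.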

3.5 Importance of Gradient Angles

To reduce the transferability of AdvX between two different models, we propose to enforce various geometric relationships between the gradients of the models. Such relationships are visualized in Figure 1. For example, for two models $f$ and $g$, if these models happen to be trained such that $\nabla_x f_{gt}(x)^\top \nabla_x g_{gt}(x) = 0$ and $\|\nabla_x f_{gt}(x)\|_2 = \|\nabla_x g_{gt}(x)\|_2$, then the perturbed input created for $f$, $x_{adv} = x - \epsilon \nabla_x f_{gt}(x)$, will not have an effect on $g$, as shown by the Taylor series approximation

$g_{gt}(x_{adv}) \approx g_{gt}(x) - \epsilon \nabla_x f_{gt}(x)^\top \nabla_x g_{gt}(x) = g_{gt}(x).$

This depends on the accuracy of the first-order Taylor series approximation around $x$, which can be difficult to determine for neural networks. Unfortunately, the orthogonality relationship $\nabla_x f_{gt}(x)^\top \nabla_x g_{gt}(x) = 0$ is not sufficient, since it is still possible to simultaneously attack both models by moving in the average direction of the gradients of the two models. More specifically, let $x_{adv} = x - \epsilon \left(0.5\, \nabla_x f_{gt}(x) + 0.5\, \nabla_x g_{gt}(x)\right)$. Then

$f_{gt}(x_{adv}) \approx f_{gt}(x) - 0.5\,\epsilon \|\nabla_x f_{gt}(x)\|_2^2 \quad \text{and} \quad g_{gt}(x_{adv}) \approx g_{gt}(x) - 0.5\,\epsilon \|\nabla_x g_{gt}(x)\|_2^2.$

So all the orthogonal direction can do against a clever adversary is double the magnitude of the perturbation required to fool the classifiers with a single-step attack like FGS. This is likely not sufficient in most cases, as the original perturbation is imperceptible to begin with. Thus, we investigate the antiparallel gradient relationship $\nabla_x g_{gt}(x) = -\nabla_x f_{gt}(x)$. We prove that an adversarial input that uses a convex combination of the gradients cannot simultaneously decrease the prediction confidence of both models.

Let $x_{adv} = x - \epsilon\left(\alpha \nabla_x f_{gt}(x) + (1 - \alpha) \nabla_x g_{gt}(x)\right)$ with $\alpha \in [0, 1]$, and suppose $\nabla_x g_{gt}(x) = -\nabla_x f_{gt}(x)$.

Case 1 ($\alpha > 0.5$): Assume $\|\nabla_x f_{gt}(x)\|_2 = \|\nabla_x g_{gt}(x)\|_2 = 1$ for convenience, which is something that can be attained via regularization but does not remove the generality of this argument. In fact, this is analogous to local Lipschitz-continuity, which says that

$|f_{gt}(x_1) - f_{gt}(x_2)| \leq L \|x_1 - x_2\|_2 \quad \forall\, x_1, x_2 \in B, \qquad (1)$

where $B$ is simply the union of epsilon balls around the training points $x_i$, and $x_i$ is the i-th element of the training set (note that $B$ is an open set). There exist techniques for making neural networks Lipschitz-continuous (Gouk et al., 2018), but we use simple regularization instead to satisfy the assumption that $\|\nabla_x f_{gt}(x)\|_2 = 1$, since this condition is far less restrictive than local Lipschitz-continuity. Thus,

$f_{gt}(x_{adv}) \approx f_{gt}(x) - \epsilon\left(\alpha - (1 - \alpha)\right)\|\nabla_x f_{gt}(x)\|_2^2 = f_{gt}(x) - \epsilon\,(2\alpha - 1) < f_{gt}(x).$

This implies that $f$'s prediction confidence is reduced for the ground truth class. As we can see though, if this input were given to $g$, then

$g_{gt}(x_{adv}) \approx g_{gt}(x) + \epsilon\,(2\alpha - 1) > g_{gt}(x),$

which implies that the prediction confidence of $g$ will actually be increased instead of decreased. We omit the other case of this proof as it is symmetric.

Based on the above proof, the antiparallel relationship $\nabla_x g_{gt}(x) = -\nabla_x f_{gt}(x)$ makes it impossible to craft an input via a one-step attack such as FGS that will fool both $f$ and $g$.
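In practice, enforcing or checking these relationships comes down to measuring the cosine similarity between $\nabla_x f_{gt}(x)$ and $\nabla_x g_{gt}(x)$. A small PyTorch sketch of that per-example measurement, with placeholder names of our own:

import torch
import torch.nn.functional as F

def true_class_input_grad(model, x, y):
    """Gradient of the softmax output at the true class w.r.t. the input, one row per example."""
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    f_gt = probs.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(f_gt, x)
    return grad.flatten(1)

def gradient_cosine(model_f, model_g, x, y):
    """Per-example cosine similarity between the two models' input gradients."""
    return F.cosine_similarity(true_class_input_grad(model_f, x, y),
                               true_class_input_grad(model_g, x, y), dim=1)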

3.6 Obtaining Arbitrary Gradient Relationships

Any gradient relationship between two models mentioned in the previous section can be obtained with Algorithm 1, where parameters are updated in the direction of the negative gradient so as to minimize the loss. It is important to note that the outputs of each model should be a vector of softmaxed probabilities rather than a vector of logits. For example, if the cosine similarity between $\nabla_x M_{1,gt}(x)$ and $\nabla_x M_{2,gt}(x)$ is close to 0 but the models output logits, then the output for some other class could still be increasing or decreasing. Controlling changes in probabilities, on the other hand, ensures that the probabilities of other classes will not change significantly enough to change the highest-probability class, since the probabilities have to sum to 1.

Note that a similar procedure can be used to train a second model once a first model has already been trained instead of training the two models jointly. We acknowledge that this training procedure could result in divergence when optimizing via SGD due to the alternating nature of the updates that switch from optimizing classification loss to gradient loss. Therefore, we also consider a version with a combined loss which updates the model parameters to simultaneously improve both classification and gradient loss. The importance of this consideration is shown in section 5.2.

Assume $M_1$ and $M_2$ are two randomly initialized neural networks.
$\cos(\cdot, \cdot)$ is cosine similarity.
$B$ samples unique minibatches from a dataset.
Let $X, y$ denote the training images and labels.
Let $L_{CE}(M, X, y)$ be the cross-entropy loss of model $M$.
Let $E$ be the number of epochs.

while epoch $< E$ do
     for $(X, y)$ in $B$ do
          Compute $M_1(X)$ and $M_2(X)$
          Compute $L_{CE}(M_1, X, y)$ and update $M_1$ params
          Compute $L_{CE}(M_2, X, y)$ and update $M_2$ params
          Compute $g_1 = \nabla_X M_{1,gt}(X)$ ▷ $M_{1,gt}(X)$ is a vector of outputs from $M_1$ at the true class
          Compute $g_2 = \nabla_X M_{2,gt}(X)$
          Compute $L_{grad} = \cos(g_1, g_2)$
          if goal = perpendicular then
               $L_{grad} \leftarrow |L_{grad}|$ ▷ Otherwise minimization gives -1 cosine similarity when we want 0
          else
               if goal = parallel then
                    $L_{grad} \leftarrow -L_{grad}$ ▷ To change the gradient update to maximize cosine similarity
               end if
          end if
          Compute $\nabla_{\theta_1} L_{grad}$ and update $M_1$ params
          Compute $\nabla_{\theta_2} L_{grad}$ and update $M_2$ params
     end for
     epoch $\leftarrow$ epoch $+ 1$
end while
Algorithm 1 Proposed Method for Preventing AdvX Transferability
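Below is a runnable PyTorch sketch of Algorithm 1 with alternating updates. The model, loader, and function names are ours, Adam with the learning rate from the supplementary material is an assumed choice of optimizer, and the goal-dependent handling of the cosine loss mirrors the pseudocode (absolute value for perpendicular, negation for parallel, raw value for antiparallel).

import torch
import torch.nn.functional as F

def true_class_input_grad(model, x, y):
    """Gradient of the softmax output at the true class w.r.t. the input,
    kept differentiable (create_graph) so the cosine loss can update model parameters."""
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    f_gt = probs.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(f_gt, x, create_graph=True)
    return grad.flatten(1)

def train_pair(model_1, model_2, loader, epochs, lr=1e-3, goal="perpendicular"):
    opt_1 = torch.optim.Adam(model_1.parameters(), lr=lr)
    opt_2 = torch.optim.Adam(model_2.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            # Classification step for each model.
            for model, opt in ((model_1, opt_1), (model_2, opt_2)):
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
            # Gradient-relationship step.
            opt_1.zero_grad()
            opt_2.zero_grad()
            cos = F.cosine_similarity(true_class_input_grad(model_1, x, y),
                                      true_class_input_grad(model_2, x, y),
                                      dim=1).mean()
            if goal == "perpendicular":
                grad_loss = cos.abs()   # minimizing |cos| drives it toward 0 rather than -1
            elif goal == "parallel":
                grad_loss = -cos        # maximize cosine similarity
            else:                       # "antiparallel"
                grad_loss = cos         # minimize cosine similarity toward -1
            grad_loss.backward()
            opt_1.step()
            opt_2.step()
    return model_1, model_2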

3.7 Verifying Regularization Effectiveness

Since it is analytically infeasible to verify where the enforced gradient relationships will hold for an arbitrary neural network, we show empirically that the gradient relationships (perpendicular, antiparallel) enforced between two given models on the training set also hold on the test set.

For Tables 1 and 2, 5 pairs of models were trained for each of the relevant gradient similarity scenarios, including one without gradient regularization. The mean column represents the mean cosine similarity across the entire dataset, averaged across the 5 model pairs; the std column represents the standard deviation computed for each model pair individually across the entire dataset, then averaged across all model pairs. It is clear that the gradient regularization works as intended, though achieving perfectly antiparallel gradients is non-trivial. Furthermore, there are some samples for which the cosine similarity between the two models is very far from the goal, but overall the standard deviation is within an acceptable range. Lastly, the L2-norm difference $\big|\,\|\nabla_x M_{1,gt}(x)\|_2 - \|\nabla_x M_{2,gt}(x)\|_2\,\big|$ for a given model pair $M_1$ and $M_2$ is an important metric to observe for transferability, since it can explain why transferability between two models might be asymmetric. This will be further investigated in Section 4.2.

Gradient Scenario | Mean Cosine Similarity | Std Cosine Similarity | L2-Norm Difference
No Regularization | 0.209 ± 0.017 | 0.001 ± 0.000 | 0.001 ± 0.000
Perpendicular | 0.002 ± 0.003 | 0.007 ± 0.003 | 0.007 ± 0.004
Antiparallel | -0.912 ± 0.023 | 0.006 ± 0.0082 | 0.006 ± 0.002
Table 1: Gradient statistics on the training set of LeNet models trained on MNIST. 5 pairs of models were trained for each gradient scenario; values are reported as mean ± standard deviation across the 5 pairs.
Gradient Scenario | Mean Cosine Similarity | Std Cosine Similarity | L2-Norm Difference
No Regularization | 0.209 ± 0.016 | 0.001 ± 0.000 | 0.001 ± 0.000
Perpendicular | 0.002 ± 0.003 | 0.007 ± 0.003 | 0.005 ± 0.004
Antiparallel | -0.910 ± 0.024 | 0.006 ± 0.002 | 0.006 ± 0.002
Table 2: Gradient statistics on the test set of LeNet models trained on MNIST. 5 pairs of models were trained for each gradient scenario; values are reported as mean ± standard deviation across the 5 pairs.

As a sanity check, we verify that there is minimal performance loss when using gradient regularization; otherwise, the regularization would defeat the purpose of the defended system. Table 3 shows that the decrease in accuracy for either gradient relation (perpendicular, antiparallel) is small (at most about one percentage point on the test set) compared to no regularization, so gradient regularization is a reasonable procedure.

Gradient Scenario | Training Accuracy M1 | Training Accuracy M2 | Test Accuracy M1 | Test Accuracy M2
No Regularization | 1.000 ± 0.000 | 1.000 ± 0.000 | 0.991 ± 0.001 | 0.992 ± 0.000
Perpendicular | 1.000 ± 0.000 | 1.000 ± 0.000 | 0.989 ± 0.001 | 0.990 ± 0.000
Antiparallel | 0.998 ± 0.001 | 0.998 ± 0.002 | 0.981 ± 0.003 | 0.980 ± 0.005
Table 3: Accuracy of LeNet models trained on MNIST. 5 pairs of models were trained for each gradient scenario; values are reported as mean ± standard deviation across the 5 pairs.

4 Experimental Results

4.1 Setup

For Section 4.2, 10 models with different random seeds were trained for each gradient magnitude value, and attacks were performed on a random subset of 1000 test images. For Section 4.3, 20 pairs of models were trained for each setting, and attacks were performed on the entire test set. Many random seeds were used to obtain confidence intervals and ensure that the observed results are not due to chance.

All experiments were performed on MNIST. While this could be of concern when demonstrating the effectiveness of a detection method, our principle of gradient regularization has no dependence on the simplicity or complexity of a dataset, nor do we claim that it is a defense for a single model.

Furthermore, we use the following two-part definition of an adversarial example

  1. It is classified as a different class than the image from which it was derived.

  2. The image from which it was derived was originally correctly classified.

We say that an adversarial example transfers if it satisfies the above definition for a different model than the one it was created for; a sketch of this check is given below.
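The two-part definition and the transfer criterion can be written down directly as a check; the helper below is a sketch with assumed batched tensors and integer labels.

import torch

def is_adversarial(model, x_orig, x_adv, y):
    """Per-example check of the two-part definition: the clean input was
    classified correctly and the perturbed input is classified differently."""
    with torch.no_grad():
        originally_correct = model(x_orig).argmax(dim=1) == y
        now_misclassified = model(x_adv).argmax(dim=1) != y
    return originally_correct & now_misclassified

# An example "transfers" when the same check passes on a model other than the
# one it was crafted for; a transfer rate can then be estimated as, e.g.:
#   fooled_src = is_adversarial(source_model, x, x_adv, y)
#   fooled_tgt = is_adversarial(target_model, x, x_adv, y)
#   transfer_rate = (fooled_src & fooled_tgt).float().sum() / fooled_src.float().sum()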

Additionally, we did not use any data augmentation when training our models since that can be considered as a defense and could further confound results.

All model pairs in Section 4.3 have been regularized such that both models in a pair have a similar input-output Jacobian L2 norm. This is done in order to allow a univariate analysis of the effect of cosine similarity between model gradients; otherwise, transferability results could easily be confounded by differences in L2 norms.

Lastly, all attacks performed are untargeted since that is the setting where transferability tends to be highest.

4.2 Asymmetric AdvX Transferability Explained

Since there is a clear relationship between $\|\nabla_x f_{gt}(x)\|$ and attack success rates (Section 3.3), we look at how predictive the difference in gradient magnitudes between two models is of transferability rates, i.e. whether a larger $\big|\,\|\nabla_x M_{1,gt}(x)\|_2 - \|\nabla_x M_{2,gt}(x)\|_2\,\big|$ results in more asymmetric transferability. To ensure a univariate analysis, we consider models that had the same random initialization but were trained to have different gradient magnitudes. Cosine similarity is not considered as a factor in this analysis because it is a symmetric metric, and we expect that a low or high cosine similarity would affect transferability between models in a more or less symmetric way. Table 4 shows how asymmetry in input-output Jacobian magnitude results in asymmetry of AdvX transferability. It is clear that when the models have gradients that are similar in magnitude, AdvX transfer much more symmetrically than when there is a significant difference in magnitude. Transferability decreases quadratically with the gradient magnitude of the second model, and the fitted quadratic has a high R², indicating a strong relationship between the two quantities.

Magnitudes | M1 | M2 | M1 to M2 | M2 to M1
0.1 & 0.5 | 0.984 ± 0.003 | 0.990 ± 0.001 | 0.824 ± 0.053 | 0.823 ± 0.040
0.1 & 1.0 | 0.984 ± 0.003 | 0.991 ± 0.001 | 0.834 ± 0.055 | 0.560 ± 0.047
0.1 & 3.0 | 0.984 ± 0.003 | 0.990 ± 0.001 | 0.783 ± 0.054 | 0.278 ± 0.034
0.1 & 5.0 | 0.984 ± 0.003 | 0.979 ± 0.010 | 0.757 ± 0.036 | 0.176 ± 0.034
0.1 & 7.0 | 0.984 ± 0.003 | 0.966 ± 0.021 | 0.768 ± 0.041 | 0.143 ± 0.025
Table 4: Asymmetry in AdvX transferability is at least partly explained by differences in input-output Jacobian magnitude. 10 pairs of LeNet models are considered for each scenario. The models were trained independently, and each model in a pair received the same random weight initialization. The attack considered is IGS with an epsilon of 1.0; values are reported as mean ± standard deviation.

4.3 Regularized Transferability Results

We consider the effects of the various types of gradient regularization mentioned in Section 3.6 using IGS-1.0 and CW-40. The unconventionally large epsilon value of 1 was selected to obtain large baseline attack success rates, such that there is an obvious order-of-magnitude difference between baseline success rates and transfer rates. The high-confidence CW attack was used instead of the low-confidence version, as per the authors' instructions for finding transferable AdvX. Models without gradient regularization were trained to establish what transfer rates are on average in a natural setting. We note that the lowest transferability for both attacks was achieved by the perpendicular gradients scenario. In fact, the average transfer rate, relative to no regularization, was cut by ~56% for the IGS attack (Table 5) and by ~47% for the CW attack (Table 6) when explicitly training for perpendicular gradients. The high transferability for the antiparallel direction is unexpected, but we consider two possible causes. First, the models trained were not able to achieve truly antiparallel gradients, since the mean cosine similarity was -0.892. Second, it is possible that the loss manifold has many critical points, so even if gradients were truly antiparallel locally on training data, moving in the direction that increases confidence for one model could eventually overshoot a peak and end up in a valley of low confidence.

Gradient Scenario | M1 | M2 | M1 to M2 | M2 to M1
No Regularization | 0.970 ± 0.038 | 0.981 ± 0.009 | 0.268 ± 0.108 | 0.311 ± 0.125
Parallel All | 0.863 ± 0.143 | 0.883 ± 0.101 | 0.720 ± 0.180 | 0.741 ± 0.167
Perpendicular All | 0.951 ± 0.077 | 0.946 ± 0.058 | 0.117 ± 0.064 | 0.133 ± 0.046
Antiparallel All | 0.869 ± 0.062 | 0.859 ± 0.069 | 0.435 ± 0.110 | 0.487 ± 0.164
Table 5: Transferability results on LeNet model pairs for MNIST attacked with IGS-1.0. Values are reported as mean ± standard deviation over 20 model pairs.
Gradient Scenario | M1 | M2 | M1 to M2 | M2 to M1
No Regularization | 0.991 ± 0.001 | 0.991 ± 0.001 | 0.044 ± 0.028 | 0.048 ± 0.029
Parallel All | 0.965 ± 0.020 | 0.970 ± 0.014 | 0.696 ± 0.119 | 0.687 ± 0.151
Perpendicular All | 0.987 ± 0.002 | 0.987 ± 0.002 | 0.023 ± 0.011 | 0.022 ± 0.009
Antiparallel All | 0.957 ± 0.030 | 0.955 ± 0.031 | 0.456 ± 0.154 | 0.493 ± 0.202
Table 6: Transferability results on LeNet model pairs for MNIST attacked with CW-40. Values are reported as mean ± standard deviation over 20 model pairs.

5 Discussion

Reducing AdvX transferability by using easy-to-implement regularization that is independent of model architecture provides new ways for making model ensembles more robust to AdvX. Attacking an ensemble when the individual models have orthogonal input-output Jacobians is a difficult task since making progress on reducing the confidence of a single model is likely to have little effect on the confidence of another model for the true class. A simple agreement-based detection method for AdvX can be created with just a two-model ensemble which provides an efficient defense that makes no assumptions about data distributions or model architectures. Although the transferability results presented in section 4.3 show a clear improvement in robustness to transfer attacks compared to when no regularization is used, even better results are likely possible. For example, we consider regularizing curvature to further reduce transferability as future work. Note that each model pair had identical architecture and hyperparameters, so it is encouraging that we were able to cut baseline transferability rates by half given this restriction. If different model architectures were used in a pair, in addition to gradient regularization, we believe that AdvX transfer attack rates would be even lower.

References

  • A. Athalye, N. Carlini, and D. Wagner (2018) Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. arXiv:1802.00420.
  • N. Carlini and D. Wagner (2016) Towards Evaluating the Robustness of Neural Networks. arXiv:1608.04644.
  • N. Carlini and D. Wagner (2018) Audio Adversarial Examples: Targeted Attacks on Speech-to-Text. arXiv:1801.01944.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and Harnessing Adversarial Examples. arXiv:1412.6572.
  • H. Gouk, E. Frank, B. Pfahringer, and M. Cree (2018) Regularisation of Neural Networks by Enforcing Lipschitz Continuity. arXiv:1804.04368.
  • K. Grosse, N. Papernot, P. Manoharan, M. Backes, and P. McDaniel (2016) Adversarial Perturbations Against Deep Neural Networks for Malware Classification. arXiv:1606.04435.
  • G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury (2012) Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Processing Magazine 29 (6), pp. 82–97.
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems 25, pp. 1097–1105.
  • A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial Examples in the Physical World.
  • Y. Liu, X. Chen, C. Liu, and D. Song (2017) Delving into Transferable Adversarial Examples and Black-box Attacks.
  • N. Papernot, P. McDaniel, and I. Goodfellow (2016) Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples. arXiv:1605.07277.
  • I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems.
  • C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing Properties of Neural Networks. arXiv:1312.6199.
  • F. Tramèr, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel (2017) The Space of Transferable Adversarial Examples. arXiv:1704.03453.
  • L. Wu, Z. Zhu, and C. Tai (2018) Understanding and Enhancing the Transferability of Adversarial Examples. arXiv:1802.09707.

Supplementary Material

5.1 Model Details

Individual Models

Each individual LeNet model trained with regularization that controls input-output Jacobian magnitude had the following architecture:

LeNet(
  (conv1): Conv2d(1, 6, kernel_size=(3, 3), stride=(1, 1))
  (relu): ReLU()
  (max1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (conv2): Conv2d(6, 16, kernel_size=(3, 3), stride=(1, 1))
  (relu): ReLU()
  (max2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (relu): ReLU()
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (relu): ReLU()
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)

and hyperparameters:

  • Epochs: 500

  • Learning Rate: 0.001

  • Optimizer: Adam

  • Batch Size: 100

Note that there was no shuffling of training data between epochs. There was also no data augmentation, batch normalization, early stopping, etc. This was all done to ensure a univariate analysis when running experiments.

The random seeds used were [100, 109].

Model Pairs

When training LeNet model pairs, the same architecture was used as when training individual models, though the number of epochs was lower. The hyperparameters were:

  • Epochs: 200

  • Learning Rate: 0.001

  • Optimizer: Adam

  • Batch Size: 100

Note that since the model weights were randomly initialized sequentially, the two models differed in initialization.

The random seeds used were [150, 169].

5.2 Importance of Gradient Updates

To determine the benefit of using simultaneous updates compared to alternating updates, we consider both model performance and gradient cosine similarity at the same time. We do this on FashionMNIST, a more complex dataset that allows for a more obvious distinction between the two update techniques. Table 7 shows the distinction between the update approaches: the simultaneous update is more effective, since it results in higher accuracy and a cosine similarity that is closer to the goal. However, we use the alternating update in our experiments since the benefit is not significant, and because the simultaneous update introduces a hyperparameter that must trade off classification loss against cosine similarity loss; a sketch of the simultaneous variant is given below.
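For reference, a sketch of one simultaneous (combined-loss) update step; the trade-off weight lam is exactly the extra hyperparameter mentioned above, and the function and variable names are ours (the input-gradient helper matches the one used for the Algorithm 1 sketch).

import torch
import torch.nn.functional as F

def true_class_input_grad(model, x, y):
    x = x.clone().detach().requires_grad_(True)
    probs = F.softmax(model(x), dim=1)
    f_gt = probs.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(f_gt, x, create_graph=True)
    return grad.flatten(1)

def combined_step(model_1, model_2, opt_1, opt_2, x, y, lam, goal="perpendicular"):
    """One simultaneous update: classification and gradient losses in a single backward pass."""
    opt_1.zero_grad()
    opt_2.zero_grad()
    ce = F.cross_entropy(model_1(x), y) + F.cross_entropy(model_2(x), y)
    cos = F.cosine_similarity(true_class_input_grad(model_1, x, y),
                              true_class_input_grad(model_2, x, y), dim=1).mean()
    grad_loss = cos.abs() if goal == "perpendicular" else cos  # raw cos drives toward -1 (antiparallel)
    (ce + lam * grad_loss).backward()
    opt_1.step()
    opt_2.step()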

Gradient Scenario | Accuracy M1 | Accuracy M2 | Cosine Similarity Mean | Cosine Similarity Std
Perpendicular -A | 0.996 ± 0.002 | 0.994 ± 0.002 | 0.008 ± 0.002 | 0.016 ± 0.001
Perpendicular -S | 0.997 ± 0.001 | 0.997 ± 0.001 | 0.004 ± 0.001 | 0.013 ± 0.002
Antiparallel -A | 0.965 ± 0.009 | 0.973 ± 0.007 | -0.908 ± 0.034 | 0.038 ± 0.012
Antiparallel -S | 0.982 ± 0.013 | 0.979 ± 0.011 | -0.931 ± 0.040 | 0.043 ± 0.015
Table 7: Comparison of alternating and simultaneous updates for 5 pairs of LeNet models trained on FashionMNIST. -A indicates alternating and -S indicates simultaneous updates; values are reported as mean ± standard deviation across the 5 pairs. The improvement in cosine similarity and performance from simultaneous updates is clear, even though it is small.