Art of singular vectors and universal adversarial perturbations

09/11/2017, by Valentin Khrulkov et al., Skoltech

Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has been attracting a lot of attention in recent studies. It has been shown that for many state-of-the-art DNNs performing image classification there exist universal adversarial perturbations: image-agnostic perturbations whose mere addition to a natural image leads, with high probability, to its misclassification. In this work we propose a new algorithm for constructing such universal perturbations. Our approach is based on computing the so-called (p, q)-singular vectors of the Jacobian matrices of hidden layers of a network. The resulting perturbations present interesting visual patterns, and by using only 64 images we were able to construct universal perturbations with more than a 60% fooling rate on a dataset consisting of 50 000 images. We also investigate the correlation between the maximal singular value of the Jacobian matrix and the fooling rate of the corresponding singular vector, and show that the constructed perturbations generalize across networks.

Code repository: singular-fool (Universal Adversarial Perturbations based on the (p, q)-singular vectors).
1 Introduction

Deep Neural Networks (DNNs) have been applied with great success to many practical problems in computer vision [11, 20, 9] and in audio and text processing [7, 13, 4]. However, it was discovered that many state-of-the-art DNNs are vulnerable to adversarial attacks [6, 14, 21] based on adding a perturbation of small magnitude to the image. Such perturbations are carefully constructed so as to lead to misclassification of the perturbed image, and may moreover attempt to force a specific predicted class (targeted attacks), as opposed to just any class different from the ground truth (untargeted attacks). Potential undesirable usage of adversarial perturbations in practical applications such as autonomous driving systems and malware detection has been studied in [10, 8]. This has also motivated research on defenses against various kinds of attack strategies [16, 5].

In the recent work of Moosavi et al. [14] it was shown that there exist universal adversarial perturbations: image-agnostic perturbations that cause most natural images to be misclassified. They were constructed by iterating over a dataset and recomputing the "worst" direction in the space of images by solving an optimization problem related to the geometry of the decision boundary. Universal adversarial perturbations exhibit many interesting properties, such as universality across networks, meaning that a perturbation constructed using one DNN performs relatively well against other DNNs.

We present a new algorithm for constructing universal perturbations based on solving simple optimization problems which correspond to finding the so-called (p, q)-singular vectors of the Jacobian matrices of feature maps of a DNN. Our idea is based on the observation that, since the norm of adversarial perturbations is typically very small, the resulting perturbations of the non-linear maps computed by the DNN can be reasonably well approximated by the Jacobian matrix. The (p, q)-singular vector of a matrix A is defined as the solution of the following optimization problem

‖Av‖_q → max,  ‖v‖_p = 1,    (1)

and if we desire ‖v‖_p = L instead, it is sufficient to multiply the solution of (1) by L. Universal adversarial perturbations are typically generated with a bound on the ∞-norm of the perturbation, which motivates the usage of such a general construction. To obtain the (p, q)-singular vectors we use a modification of the standard power method, adapted to arbitrary p- and q-norms. The main contributions of our paper are

  • We propose an algorithm for generating universal adversarial perturbations, using the generalized power method for computing the (p, q)-singular vectors of the Jacobian matrices of the feature maps.

  • Our method is able to produce good universal adversarial examples from a relatively small number of images in a dataset.

  • We investigate the correlation between the largest (p, q)-singular value and the fooling rate of the generated adversarial examples; this suggests that this singular value can be used as a quantitative measure of the robustness of a given neural network and can in principle be incorporated as a regularizer for DNNs.

  • We analyze various properties of the computed adversarial perturbations such as generalization across networks and dependence of the fooling rate on the number of images used for construction of the perturbation.

2 Problem statement

Suppose that we have a standard feed-forward DNN which takes a vector x as the input and outputs a vector of probabilities for the class labels; denote the predicted class label by g(x). Our goal, given parameters p and L, is to produce a vector ε such that

g(x + ε) ≠ g(x),  ‖ε‖_p = L,    (2)

for as many x in a dataset X as possible. The efficiency of a given universal adversarial perturbation ε for a dataset of size N is called the fooling rate and is defined as

FoolingRate(ε) = |{x ∈ X : g(x + ε) ≠ g(x)}| / N.    (3)
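As an illustration, here is a minimal numpy sketch of how the fooling rate (3) can be evaluated; the predict callable and the array shapes are our own placeholders and not part of the paper.

import numpy as np

def fooling_rate(predict, images, eps):
    """Fraction of images whose predicted label changes after adding eps.

    predict : callable mapping a batch of images to predicted class labels
    images  : array of shape (N, height, width, channels)
    eps     : universal perturbation broadcastable to a single image
    """
    clean = predict(images)          # g(x) for every x in the dataset
    fooled = predict(images + eps)   # g(x + eps)
    return np.mean(clean != fooled)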

Let us denote the output of the i-th hidden layer of the network by f_i(x). Then for a small vector ε we have

f_i(x + ε) − f_i(x) ≈ J_i(x) ε,

where

J_i(x) = ∂f_i(x) / ∂x

is the Jacobian matrix of f_i. Thus, for any q-norm

‖f_i(x + ε) − f_i(x)‖_q ≈ ‖J_i(x) ε‖_q.    (4)

We conclude that, for perturbations of small magnitude, in order to sufficiently perturb the output of a hidden layer it suffices to maximize the right-hand side of eq. 4. It seems reasonable to suggest that such a perturbation, propagating further through the network, will dramatically change the predicted label of x.

Thus, to construct an adversarial perturbation for an individual image x we need to solve

‖J_i(x) ε‖_q → max,  ‖ε‖_p = L,    (5)

and due to homogeneity of the problem defined by eq. 5 it is sufficient to solve it for L = 1. The solution of (5) is defined up to multiplication by −1 and is called the (p, q)-singular vector of J_i(x). Its computation in the general case is a well-known problem [2, 3]. In several cases, e.g. p = 1 or q = ∞, algorithms for finding the exact solution of problem (5) are known [22]; they are based on finding the element of maximal absolute value in each row (column) of the matrix. However, this approach requires iterating over all elements of the matrix and thus has O(nm) complexity for a matrix of size n × m. A typical matrix arising in our setting (e.g. the VGG-19 network, the output of the first pooling layer, and a batch of 64 images; the usage of a batch of images is explained further in the text) would require terabytes of memory to store, which makes these algorithms completely impractical. In order to avoid these problems, we switch to iterative methods. Instead of evaluating and storing the full matrix A we use only the matvec function of A, i.e. the function that, given an input vector v, computes the ordinary product Av without forming the full matrix A, and which typically has much lower complexity. In many applications that deal with extremely large matrices the use of matvec functions is essentially mandatory.
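To make the notion of a matvec function concrete, here is a tiny illustration of ours (not from the paper): a linear map on a million-dimensional space applied in O(n) time and memory, whereas forming its full matrix explicitly would require about 4 TB.

import numpy as np

n = 1_000_000   # the full n x n float32 matrix would need ~4 TB of memory

def matvec(v):
    # apply the linear map A (here a circular shift by one position) in O(n)
    return np.roll(v, 1)

def rmatvec(u):
    # apply A^T (the shift in the opposite direction) in O(n)
    return np.roll(u, -1)

v = np.random.randn(n)
print(np.allclose(rmatvec(matvec(v)), v))   # True: A^T A = I for this map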

For computing the (p, q)-singular vectors there exists a well-known power method algorithm originally developed by Boyd [3], which we explain in the next section. We also present a modification of this method in order to construct universal adversarial perturbations.

3 Generalized power method

Suppose that for some linear map A we are given the matvec functions of A and A^T. Given a parameter r we also define the function

ψ_r(z) = sign(z) |z|^(r−1),    (6)

which is applied to vectors element-wise. As usual, for p we also define the dual exponent p′ such that 1/p + 1/p′ = 1. Then, given some initial condition x, one can apply the following algorithm 1 to obtain a solution of (5).

1: Inputs: initial condition x with ‖x‖_p = 1, the matvec functions of A and A^T
2: Output: x — the approximate (p, q)-singular vector
3: Output: σ — the approximate (p, q)-singular value
4: while not converged do
5:     Sx ← ψ_{p′}(A^T ψ_q(Ax))
6:     x ← Sx / ‖Sx‖_p
7:     σ ← ‖Ax‖_q
8: return x, σ
Algorithm 1 Power method for generating the (p, q)-singular vectors of a linear map

In the case p = q = 2 it becomes the familiar power method for obtaining the largest eigenvalue and the corresponding eigenvector, applied to the matrix A^T A.
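Below is a minimal numpy sketch of algorithm 1, assuming the matvec functions of A and A^T are given as callables; the function and variable names are ours and the iteration count is arbitrary.

import numpy as np

def psi(z, r):
    # psi_r(z) = sign(z) * |z|^(r - 1), applied element-wise (eq. 6)
    return np.sign(z) * np.abs(z) ** (r - 1)

def power_method(matvec, rmatvec, n, p, q, iters=30, seed=0):
    """Approximate the (p, q)-singular vector and value of A.

    matvec(v) computes A v, rmatvec(u) computes A^T u, n is the input
    dimension of A.  Assumes p > 1 (p = inf gives the dual exponent 1).
    """
    p_dual = 1.0 if np.isinf(p) else p / (p - 1.0)   # 1/p + 1/p' = 1
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x, ord=p)
    for _ in range(iters):
        Sx = rmatvec(psi(matvec(x), q))   # A^T psi_q(A x)
        x = psi(Sx, p_dual)               # psi_{p'}( . )
        x /= np.linalg.norm(x, ord=p)     # renormalize in the p-norm
    sigma = np.linalg.norm(matvec(x), ord=q)
    return x, sigma

With p = q = 2 this reduces to the ordinary power method applied to A^T A, matching the remark above.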
The discussion so far applies to finding an adversarial perturbation for an individual instance x. To produce a universal adversarial perturbation we would like to maximize the left-hand side of (5) uniformly across all the images in the dataset X. For this we introduce a new optimization problem

Σ_{x_j ∈ X} ‖J_i(x_j) ε‖_q^q → max,  ‖ε‖_p = 1.    (7)

A solution of the problem defined by eq. 7 uniformly perturbs the output of the i-th layer of the DNN, and thus can serve as a universal adversarial perturbation for the reasons discussed in the introduction. Note that the problem given in eq. 7 is exactly equivalent to

‖Jε‖_q → max,  ‖ε‖_p = 1,

where J is the matrix obtained by stacking the Jacobians J_i(x_j) vertically for each x_j ∈ X. To make this optimization problem tractable, we apply the same procedure to a randomly chosen subset of images (batch) X_b, obtaining

Σ_{x_j ∈ X_b} ‖J_i(x_j) ε‖_q^q → max,  ‖ε‖_p = 1,    (8)

and hypothesize that the obtained solution will be a good approximation to the exact solution of (7). We present this approach in more detail in the next section.

4 Stochastic power method

Let us choose a fixed batch of images X_b = {x_1, …, x_b} from the dataset and fix a hidden layer of the DNN, defining the map f_i(x). Denote the sizes of x and f_i(x) by n and m correspondingly. Then, using the notation from section 2, we can compute J_i(x_j) for each x_j ∈ X_b. Let us now stack these Jacobian matrices vertically, obtaining the matrix J of size bm × n:

J = [J_i(x_1); J_i(x_2); …; J_i(x_b)].    (9)

Note that to compute the matvec functions of J and J^T it suffices to be able to compute the individual matvec functions of J_i(x_j) and J_i(x_j)^T. We present an algorithm for that in the next section; for now let us assume that these matvec functions are given. We can now apply algorithm 1 to the matrix J, obtaining the Stochastic Power Method (SPM).

1: Inputs: a batch of images X_b = {x_1, …, x_b}, f_i(x) — a fixed hidden layer of the DNN
2: for x_j in X_b do
3:     Construct the matvec functions of J_i(x_j) and J_i(x_j)^T
4: Construct the matvec functions of J and J^T defined in eq. 9
5: Run algorithm 1 with the desired p and q
6: return the universal perturbation ε
Algorithm 2 Stochastic Power Method for generating universal adversarial perturbations

Note that in algorithm 2 we could in principle change the batch between iterations of the power method to compute "more general" singular vectors. However, in our experiments we found that this barely affects the fooling rate of the generated universal perturbation.
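As a sketch of how algorithm 2 can be assembled (the names below are ours), suppose that for every image in the batch we already have a pair of callables computing J_i(x_j) v and J_i(x_j)^T u, e.g. produced as described in the next section. The matvec functions of the stacked matrix J from eq. 9 can then be built as follows and fed to the power method sketched above.

import numpy as np

def stacked_matvecs(per_image_matvecs):
    """matvec / rmatvec of the stacked matrix J from eq. 9.

    per_image_matvecs is a list of (matvec_j, rmatvec_j) pairs, one per
    image x_j, with matvec_j(v) = J_i(x_j) v and rmatvec_j(u) = J_i(x_j)^T u.
    All per-image outputs are assumed to have the same length m.
    """
    def matvec(v):
        # J v is the vertical stack of the per-image products J_i(x_j) v
        return np.concatenate([mv(v) for mv, _ in per_image_matvecs])

    def rmatvec(u):
        # J^T u is the sum of J_i(x_j)^T u_j over the batch,
        # where u_j is the j-th block of u
        blocks = np.split(u, len(per_image_matvecs))
        return sum(rmv(u_j) for (_, rmv), u_j in zip(per_image_matvecs, blocks))

    return matvec, rmatvec

# Example usage with the power_method sketch above (input_dim and q_value
# are placeholders):
# eps, sigma = power_method(*stacked_matvecs(per_image_matvecs),
#                           n=input_dim, p=np.inf, q=q_value, iters=30)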

5 Efficient implementation of the matvec functions

Matrices involved in algorithm 2 are too large to be formed explicitly for typical DNNs. However, using the automatic differentiation available in most deep learning packages it is possible to construct matvec functions which are then evaluated in a fraction of a second. To compute the matvecs we follow the well-known approach based on Pearlmutter's R-operator [17], which can be briefly explained as follows. Suppose that we are given an operation grad_x which computes the gradient of a scalar function with respect to the vector variable x. Let f_i(x) be some fixed layer of the DNN with Jacobian matrix J_i(x), and suppose that for a vector v_1 of the size of the input and a vector v_2 of the size of the layer output we would like to compute J_i(x) v_1 and J_i(x)^T v_2 at some fixed point. These steps are presented in algorithm 3. For a given batch of images this algorithm is run only once.

1: Inputs: v_1, v_2 — vectors to compute the matvec functions of, f_i(x) — a fixed hidden layer of the DNN
2: g ← grad_x ⟨f_i(x), v_2⟩    (so that J_i(x)^T v_2 = g)
3: h ← ⟨g, v_1⟩
4: J_i(x) v_1 ← grad_{v_2} h
5: return J_i(x) v_1, J_i(x)^T v_2
Algorithm 3 Constructing the matvec functions of the Jacobian matrix of a hidden layer of a DNN
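For completeness, here is a minimal sketch of algorithm 3 using the jvp/vjp primitives of PyTorch's autograd; the paper's implementation used TensorFlow, and the helper names and the PyTorch choice are ours. The vjp call computes J_i(x_0)^T v_2 directly, while the jvp call computes J_i(x_0) v_1 through a double application of reverse mode, i.e. the R-operator trick described above.

import torch
from torch.autograd.functional import jvp, vjp

def jacobian_matvecs(layer, x0):
    """Matvec functions of J_i(x) = d layer(x) / d x evaluated at x0.

    layer : callable mapping an input tensor to the hidden-layer output f_i(x)
    x0    : fixed input point (e.g. one image from the batch)
    """
    def matvec(v1):
        # J_i(x0) v1 (forward-mode product, computed via double backward)
        _, jv = jvp(layer, x0, v1)
        return jv

    def rmatvec(v2):
        # J_i(x0)^T v2 (plain reverse-mode vector-Jacobian product)
        _, jtv = vjp(layer, x0, v2)
        return jtv

    return matvec, rmatvec

In practice the inputs and outputs are flattened to plain vectors so that these callables can be combined across the batch and passed to the power method sketched earlier.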

Let us summarize our approach for generating universal perturbations. Suppose that we have some dataset X of natural images and a fixed deep neural network trained to perform image classification. First we choose a fixed random batch of images from X and specify a hidden layer of the DNN. Then, using algorithm 3, we construct the matvec functions of the matrix J defined by eq. 9. Finally, we run algorithm 2 to obtain the perturbation and rescale it if necessary.

6 Experiments

In this section we analyze various adversarial perturbations constructed as discussed in section 5. For testing purposes we use the ILSVRC 2012 validation dataset [18] (50 000 images).

6.1 Adversarial perturbations

In our experiments we chose p = ∞ and computed the (∞, q)-singular vectors for various layers of VGG-16, VGG-19 [19] and ResNet50 [9]. A large but finite value of q was chosen to smooth the optimization problem; it effectively serves as a replacement for q = ∞, for which the highest fooling rates were reported in [14]. We also investigate other values of q in section 6.3. The batch size in algorithm 2 was chosen to be 64, and we used the same images to construct all the adversarial perturbations.

Figure 1: Universal adversarial perturbations constructed using various layers of various DNNs: (a) VGG-16; (b) VGG-19; (c) ResNet50.

Some of the computed singular vectors are presented in fig. 1. We observe that the computed singular vectors look visually appealing and present interesting visual patterns. A possible interpretation of these patterns can be given if we note that extremely similar images were computed in [15] in relation to feature visualization. Namely, for various layers of GoogLeNet [20] the images which activate a particular neuron were computed. In particular, the visualization of the layer conv2d0, which corresponds to edge detection, looks surprisingly similar to several of our adversarial perturbations. Informally speaking, this might indicate that adversarial perturbations constructed as the (p, q)-singular vectors attack a network by ruining a certain level of image understanding, where in particular the first layers correspond to edge detection. This is partly supported by the fact that the approach used for feature visualization in [15] is based on computing the Jacobian matrix of a hidden layer and maximizing the response of a fixed neuron, which is in spirit related to our method.
To measure how strongly the (p, q)-singular vector disturbs the output of the hidden layer from which it was constructed, we evaluate the corresponding singular value. We have computed it for all the layers of VGG-16, VGG-19 and ResNet50. Results are given in fig. 2. Note that in general the singular values of the layers of ResNet50 are much smaller in magnitude than those of the VGG nets, which is further shown to roughly correspond to the obtained fooling rates.
Convergence of algorithm 2 is analyzed in fig. 3. We observe that a relatively low number of iterations is required to achieve good accuracy. In particular, if each evaluation of the matvec functions takes O(s) operations, the total complexity is O(Ts) for T iterations, which for a small number of iterations T is a big improvement compared to the O(nm) cost of the exact algorithm.

Figure 2: (p, q)-singular values for all the layers of various DNNs: (a) VGG-16; (b) VGG-19; (c) ResNet50.
Figure 3: Convergence of the (p, q)-singular value. The relative error of the singular value w.r.t. the iteration number is shown.

6.2 Fooling rate and singular values

As a next experiment we computed and compared the fooling rates of the perturbations given by various computed singular vectors. The ∞-norm of the perturbation was chosen to be small relative to the pixel range of the images, so that the adversarial perturbations are quasi-imperceptible to the human eye; recall that this is achieved simply by multiplying the computed singular vector by the desired factor. Results are given in tables 1, 2 and 3.

Layer name      block2_pool  block3_conv1  block3_conv2  block3_conv3
Singular value  1165.74      2200.08       3146.66       6282.64
Fooling rate    0.52         0.39          0.50          0.50
Table 1: Fooling rates for VGG-16

Layer name      block2_pool  block3_conv1  block3_conv2  block3_conv3
Singular value  784.82       1274.99       1600.77       3063.72
Fooling rate    0.60         0.33          0.50          0.52
Table 2: Fooling rates for VGG-19

Layer name      conv1  res3c_branch2a  bn5a_branch2c  activation_8
Singular value  59.69  19.21           138.81         15.55
Fooling rate    0.44   0.35            0.34           0.34
Table 3: Fooling rates for ResNet50

We see that using only 64 images allowed us to achieve fooling rates above 0.4 for all the investigated networks on a dataset containing 50 000 images from 1000 different classes. This means that by analyzing only a tiny fraction of the dataset it is possible to design strong universal adversarial attacks that generalize to many unseen classes and images. Similar fooling rates reported in [14, Figure 6] required several thousand images to achieve (see section 6.4 for further comparison). Examples of images after addition of the adversarial perturbation with the highest fooling rate for VGG-19 are given in fig. 6, and their predicted classes for various adversarial attacks (for each network we choose the adversarial perturbation with the highest fooling rate) are reported in tables 5, 6 and 7. We note that the top-1 class probability for images after the adversarial attack is relatively low in most cases, which might indicate that the images are moved away from the decision boundary. We test this behavior by computing the top-5 probabilities for several values of the ∞-norm of the adversarial perturbation. Results are given in fig. 4. We see that the top-1 probability decreases significantly and becomes roughly equal to the remaining top-5 probabilities. Similar behavior was noticed in some of the cases when the adversarial example failed to fool the DNN: the top-1 probability still decreased significantly. It is also interesting to note that such an adversarial attack indeed introduces many new edges in the image, which supports the claim made in the previous section.

Figure 4: Top-5 probabilities predicted by VGG-19 w.r.t. the ∞-norm of the universal adversarial perturbation. Tests were run for image 3 from fig. 6. The universal adversarial perturbation with the highest fooling rate in table 2 was chosen.

As a next experiment we investigate the dependence of the achieved fooling rate on the batch size used in algorithm 2. Some of the results are given in fig. 5. Surprisingly, increasing the batch size does not significantly affect the fooling rate, and even with a handful of images it is possible to construct adversarial perturbations with a comparable fooling rate. This suggests that the singular vector constructed using the Stochastic Power Method reasonably well approximates the solution of the general optimization problem (7).

Figure 5: Dependence of the fooling rate on the batch size. block2_pool layer in VGG-19 was used for the experiment.

It appears that a higher singular value of a layer does not necessarily indicate a higher fooling rate of the corresponding singular vector. However, as shown in fig. 2, the singular values of the layers of VGG-19 are in general larger than those of VGG-16, and those of VGG-16 are in general larger than the singular values of ResNet50, which roughly corresponds to the maximal fooling rates we obtained for these networks. Moreover, layers closer to the input of the DNN seem to produce better adversarial perturbations than those closer to the output.
Based on this observation we hypothesize that to defend the DNN against this kind of adversarial attack one can choose some subset of the layers of the DNN (preferably closer to the input) and include the corresponding (p, q)-singular values of their Jacobian matrices, computed on the current learning batch, as a regularization term. We plan to analyze this approach in future work.
Finally, we investigate whether our adversarial perturbations generalize across different networks. For each DNN we have chosen the adversarial perturbation with the highest fooling rate from tables 1, 2 and 3 and tested it against the other networks. Results are given in table 4. We see that these adversarial perturbations are indeed doubly universal, generalizing reasonably well to other architectures. Surprisingly, in some cases the fooling rate of the adversarial perturbation constructed using another network was higher than that of the network's own adversarial perturbation. This universality might be explained by the fact that if Deep Neural Networks, independently of the specifics of their architecture, indeed learn to detect low-level patterns such as edges, then adding edge-like noise has a high chance of ruining the prediction. It is interesting to note that the adversarial perturbation obtained using the block2_pool layer of VGG-19 is the most efficient one, in correspondence with its distinctive edge-like structure.

          VGG-16  VGG-19  ResNet50
VGG-16    0.52    0.60    0.39
VGG-19    0.48    0.60    0.38
ResNet50  0.41    0.47    0.44
Table 4: Generalization of the adversarial perturbations across networks. Columns indicate the DNN for which the adversarial perturbation was computed, rows indicate the network on which it was tested. The adversarial perturbations with the highest fooling rates in tables 1, 2 and 3 were chosen.

6.3 Dependence of the perturbation on q

Figure 6: Examples of images misclassified after the adversarial attack (the attack based on the block2_pool layer of VGG-19 is shown). Predicted classes are given in tables 5, 6 and 7.

image_1        image_2        image_3   image_4
mashed_potato  pole           fountain  goblet
head_cabbage   rubber_eraser  carousel  bucket
Table 5: VGG-16

image_1        image_2        image_3   image_4
mashed_potato  flagpole       fountain  coffee_mug
flatworm       letter_opener  pillow    candle
Table 6: VGG-19

image_1        image_2        image_3   image_4
mashed_potato  totem_pole     flagpole  chocolate_sauce
stole          fountain_pen   monitor   goblet
Table 7: ResNet50

In the analysis so far we have used a fixed value of q as an approximation to q = ∞. However, any value of q can be used for constructing the adversarial perturbations, and in this subsection we investigate how the choice of q affects the fooling rate and the generated perturbations (while keeping p fixed). Perturbations computed for several different values of q are presented in fig. 7, and the corresponding fooling rates are reported in fig. 8. We observe that bigger values of q produce clearer edge-like patterns, which is reflected in an increase of the fooling rate. However, the maximal fooling rate seems to be achieved at an intermediate finite value of q, probably because it provides a 'smoother' substitute for q = ∞ than larger values, which might be important in such large-scale problems.

Figure 7: Adversarial perturbations constructed for various values of q. The presented images correspond to uniformly increasing values of q. The block2_pool layer of VGG-19 was used.
Figure 8: Dependence of the fooling rate on the value of q. As before, p = ∞, and the ∞-norm of the adversarial perturbation was set to the same value as in section 6.2.

6.4 Comparison of the algorithms

In this subsection we compare the algorithm presented in Moosavi et al. [14], which we refer to as UAP, with our method. For the former we use the Python implementation https://github.com/LTS4/universal/. Since one of the main features of our method is the extremely low number of images used for constructing the perturbation, we compare the fooling rates of universal perturbations constructed by the two methods for various batch sizes. Results are presented in fig. 9. Note that our method captures the universal attack vector relatively fast and its fooling rate quickly stabilizes, while the fooling rate of the perturbation constructed by the UAP method starts low and then gradually increases as more images are added. The running time of our algorithm depends on which hidden layer is used. As an example, for the block2_pool layer of VGG-19 the per-iteration running time of the power method for a batch of one image was on the order of seconds (one NVIDIA Tesla K80 GPU was used, and the algorithm was implemented using the TensorFlow [1] and numpy libraries). Since the running time per iteration depends linearly on the batch size, and only a modest number of iterations is required, the total running time to generate a universal perturbation with the fooling rates reported above on the whole dataset is roughly a minute (we did not include the time required to construct the symbolic Jacobian matvecs, since this is performed only once and is also required in the implementation of UAP, though a different layer is used). In our hardware setup the running time of the UAP algorithm with the same batch size was measured in minutes, and a noticeably lower fooling rate was achieved. According to [14, Figure 6], several thousand images would be required to obtain fooling rates of this order.

Figure 9: Dependence of the fooling rate on the number of images used for constructing the universal perturbation. SingularFool denotes the method proposed in the current paper; UAP denotes the algorithm presented in [14]. The same p and q as before and the block2_pool layer of VGG-19 were used.

7 Related work

Many different methods [6, 14, 10, 21, 12] have been proposed to perform adversarial attacks on Deep Neural Networks in the white-box setting, where the DNN is fully available to the attacker. Two works are especially relevant for the present paper. Goodfellow et al. [6] propose the fast gradient sign method, which is based on computing the gradient of the loss function at some image and taking its sign as the adversarial perturbation. This approach allows one to construct rather efficient adversarial perturbations for individual images and can be seen as a particular case of our method: if we take the batch size equal to one and the loss function as the hidden layer, then the sign of the gradient is exactly the solution of problem (5) with p = ∞ (since the output of this layer is just a number, the problem does not depend on q). The second work is Moosavi et al. [14], where universal adversarial perturbations were proposed. Their method is based on a sequential solution of nonlinear optimization problems followed by a projection onto the ℓ_p sphere, which iteratively computes the 'worst' possible direction towards the decision boundary. The optimization problems proposed in the current work are simpler in nature and well studied, and due to their homogeneity an adversarial perturbation with an arbitrary norm is obtained by simply rescaling the once-computed perturbation, in contrast with the algorithm in [14].

8 Conclusion

In this work we explored a new algorithm for generating universal adversarial perturbations and analyzed their main properties, such as generalization across networks, the dependence of the fooling rate on various hyperparameters, and their characteristic visual patterns. We have shown that by using only 64 images a single perturbation fooling the network on a large fraction of inputs (up to roughly 60%) can be constructed, while the previously known approach required several thousand images to obtain such fooling rates. In future work we plan to address the relation between feature visualization [15] and adversarial perturbations, as well as to analyze the defense approach discussed in section 6.2.

Acknowledgements

This study was supported by the Ministry of Education and Science of the Russian Federation (grant 14.756.31.0001), by RFBR grants 16-31-60095-mol-a-dk and 16-31-00372-mol-a, and by the Skoltech NGP program.

References