AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks

Chun-Chen Tu et al., May 30, 2018

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may lead to a false sense of model robustness due to inefficient query designs. To bridge this gap, we propose a generic framework for query-efficient black-box attacks. Our framework, AutoZOOM, which is short for Autoencoder-based Zeroth Order Optimization Method, has two novel building blocks towards efficient black-box attacks: (i) an adaptive random gradient estimation strategy to balance query counts and distortion, and (ii) an autoencoder trained offline with unlabeled data towards attack acceleration. Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate and the visual quality of the resulting adversarial examples. In particular, when compared to the standard ZOO method, AutoZOOM can consistently reduce the mean query counts in finding successful adversarial examples by at least 93%, and its post-success fine-tuning can further reduce attack distortion.


1 Introduction

In recent years, “machine learning as a service” has offered the world effortless access to powerful machine learning tools for a wide variety of tasks. For example, commercially available services such as Google Cloud Vision API and Clarifai.com provide well-trained image classifiers to the public: one can upload images and obtain class prediction results at a low price. However, the existing and emerging machine learning platforms and their low model-access costs raise ever-increasing security concerns, as they also offer an ideal environment for testing malicious attempts. Even worse, the risks can be amplified when these services are used to build derived products, so that their inherent security vulnerabilities can be leveraged by attackers.

In many computer vision tasks, DNN models achieve the state-of-the-art prediction accuracy and hence are widely deployed in modern machine learning services. Nonetheless, recent studies have highlighted DNNs’ vulnerability to adversarial perturbations. In the white-box setting, in which the target model is entirely transparent to an attacker, visually imperceptible adversarial images can be easily crafted to fool a target DNN model towards misclassification by leveraging the input gradient information [Szegedy et al.2014, Goodfellow, Shlens, and Szegedy2015]. However, in the black-box setting, in which the parameters of the deployed model are hidden and one can only observe the input-output correspondences of a queried example, crafting adversarial examples requires a gradient-free (zeroth order) optimization approach to gather the necessary attack information. Figure 1 displays a prediction-evasive adversarial example crafted via iterative model queries from a black-box DNN (the Inception-v3 model [Szegedy et al.2016]) trained on ImageNet.

Figure 1: AutoZOOM significantly reduces the number of queries required to generate a successful adversarial Bagel image from the black-box Inception-v3 model.
Figure 2: Illustration of attack dimension reduction through a “decoder” in AutoZOOM for improving query efficiency in black-box attacks. The decoder has two modes: (i) an autoencoder (AE) trained on unlabeled natural images that are different from the attacked images and training data; (ii) a simple bilinear image resizer (BiLIN) that is applied channel-wise to extrapolate low-dimensional features to the original image dimension (width × height). In the latter mode, no additional training is required.

Albeit achieving remarkable attack effectiveness by the use of gradient estimation, current black-box attack methods, such as [Chen et al.2017, Nitin Bhagoji et al.2018], are not query-efficient since they exploit coordinate-wise gradient estimation and value update, which inevitably incurs an excessive number of model queries and may give a false sense of model robustness due to inefficient query designs. In this paper, we propose to tackle the preceding problem by using AutoZOOM, an Autoencoder-based Zeroth Order Optimization Method. AutoZOOM has two novel building blocks: (i) a new and adaptive random gradient estimation strategy to balance the query counts and distortion when crafting adversarial examples, and (ii) an autoencoder that is either trained offline on other unlabeled data, or based on a simple bilinear resizing operation, in order to accelerate black-box attacks. As illustrated in Figure 2, AutoZOOM utilizes a “decoder” to craft a high-dimensional adversarial perturbation from the (learned) low-dimensional latent-space representation, and its query efficiency can be well explained by the dimension-dependent convergence rate in gradient-free optimization.

Contributions. We summarize our main contributions and new insights on adversarial robustness as follows:

  1. We propose AutoZOOM, a novel query-efficient black-box attack framework for generating adversarial examples. AutoZOOM features an adaptive random gradient estimation strategy and dimension reduction techniques (either an offline trained autoencoder or a bilinear resizer) to reduce attack query counts while maintaining attack effectiveness and visual similarity. To the best of our knowledge, AutoZOOM is the first black-box attack using random full gradient estimation and data-driven acceleration.

  2. We use the convergence rate of zeroth-order optimization to motivate the query efficiency of AutoZOOM, and we provide an error analysis of AutoZOOM's new gradient estimator relative to the true gradient, characterizing the trade-off between estimation error and query counts.

  3. When applied to a state-of-the-art black-box attack proposed in [Chen et al.2017], AutoZOOM attains a similar attack success rate while achieving a significant reduction (at least 93%) in the mean query counts required to attack the DNN image classifiers for MNIST, CIFAR-10 and ImageNet. It can also fine-tune the distortion in the post-success stage by performing finer gradient estimation.

  4. In the experiments, we also find that AutoZOOM with a simple bilinear resizer as the decoder (AutoZOOM-BiLIN) attains noticeable query efficiency, although it is still outperformed by AutoZOOM with an offline trained autoencoder (AutoZOOM-AE). However, AutoZOOM-BiLIN is easier to mount since no additional training is required. The results also suggest an interesting finding: while learning effective low-dimensional representations of legitimate images is still a challenging task, black-box attacks using significantly fewer degrees of freedom (i.e., reduced dimensions) are certainly plausible.

2 Related Work

Gradient-based adversarial attacks on DNNs fall within the white-box setting, since acquiring the gradient with respect to the input requires knowing the weights of the target DNN. As a first attempt towards black-box attacks, the authors in [Papernot et al.2017] proposed to train a substitute model using iterative model queries, perform white-box attacks on the substitute model, and mount transfer attacks on the target model [Papernot, McDaniel, and Goodfellow2016, Liu et al.2017]. However, the attack performance can be severely degraded due to poor attack transferability [Su et al.2018]. Although ZOO achieves an attack success rate and visual quality comparable to many white-box attack methods [Chen et al.2017], its coordinate-wise gradient estimation requires excessive target model evaluations and is hence not query-efficient. The same gradient estimation technique is also used in [Nitin Bhagoji et al.2018].

Beyond optimization-based approaches, the authors in [Ilyas et al.2018] proposed to use a natural evolution strategy (NES) to enhance query efficiency. Although there is a vector-wise gradient estimation step in the NES attack, we treat it as a parallel work since its natural evolutionary step is outside the scope of black-box attacks using zeroth-order gradient descent. We also note that, different from NES, our AutoZOOM framework uses a theory-driven, query-efficient random-vector based gradient estimation strategy. In addition, AutoZOOM could be applied to further improve the query efficiency of NES, since NES does not take into account attack dimension reduction, which is a novelty of AutoZOOM and the main focus of this paper.

Under a more restricted attack setting, where only the decision (top-1 prediction class) is known to an attacker, the authors in [Brendel, Rauber, and Bethge2018] proposed a random-walk based attack around the decision boundary. Such a black-box attack dispenses with class prediction scores and hence requires additional model queries. Due to space limitations, we provide more background and a table comparing existing black-box attacks in the supplementary material.

3 AutoZOOM: Background and Methods

3.1 Black-box Attack Formulation and Zeroth Order Optimization

Throughout this paper, we focus on improving the query efficiency of gradient-estimation and gradient-descent based black-box attacks empowered by AutoZOOM, and we consider the threat model in which the class prediction scores are known to an attacker. In this setting, it suffices to denote the target DNN as a classification function F that takes a d-dimensional scaled image x ∈ [0, 1]^d as its input and yields a vector F(x) of prediction scores over all image classes, such as the prediction probabilities for each class. We further consider applying an entry-wise monotonic transformation M to the output F(x) for black-box attacks, since a monotonic transformation preserves the ranking of the class predictions and can alleviate the problem of large score variation in F(x) (e.g., from probability to log probability).

Here we formulate black-box targeted attacks; the formulation can be easily adapted to untargeted attacks. Let x0 denote a natural image with ground-truth class label t0, and let x denote the adversarial example of x0 with target attack class label t ≠ t0. The problem of finding an adversarial example can be formulated as an optimization problem of the generic form

minimize_x  Dist(x, x0) + λ · Loss(x, M(F(x)), t)    subject to  x ∈ [0, 1]^d        (1)

where Dist(x, x0) measures the distortion between x and x0, Loss(·) is an attack objective reflecting the likelihood of predicting the target class t, λ > 0 is a regularization coefficient, and the constraint x ∈ [0, 1]^d confines the adversarial image to the valid image space. The distortion Dist(x, x0) is often evaluated by the Lp norm ‖δ‖_p = (Σ_i |δ_i|^p)^{1/p} for p ≥ 1, where δ = x − x0 is the adversarial perturbation to x0. The attack objective Loss can be the training loss of DNNs [Goodfellow, Shlens, and Szegedy2015] or some designed loss based on model predictions [Carlini and Wagner2017b].

In the white-box setting, an adversarial example is generated by using downstream optimizers such as ADAM [Kingma and Ba2015] to solve (1); this requires the gradient of the objective function with respect to the input, obtained via back-propagation through the DNN. However, in the black-box setting, acquiring this gradient is implausible, and one can only obtain function evaluations of the objective at queried inputs, which renders solving (1) a zeroth order optimization problem. Recently, zeroth order optimization approaches [Ghadimi and Lan2013, Nesterov and Spokoiny2017, Liu et al.2018] circumvent this challenge by approximating the true gradient via function evaluations. Specifically, in black-box attacks, the gradient estimate is used for both gradient computation and descent steps in the optimization process for solving (1).

3.2 Random Vector based Gradient Estimation

As a first attempt to enable gradient-free black-box attacks on DNNs, the authors in [Chen et al.2017] use the symmetric difference quotient method [Lax and Terrell2014] to evaluate the i-th component of the gradient of the attack objective f by

g_i ≈ ∂f(x)/∂x_i ≈ [ f(x + h·e_i) − f(x − h·e_i) ] / (2h)        (2)

using a small h. Here e_i denotes the i-th elementary basis vector. Albeit contributing to powerful black-box attacks and being applicable to large networks trained on ImageNet, the coordinate-wise gradient estimation in (2) must incur an enormous number of model queries and is hence not query-efficient. For example, an ImageNet input has d = 299 × 299 × 3 ≈ 270,000 dimensions, rendering coordinate-wise zeroth order optimization based on (2) query-inefficient.
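To make the query cost of coordinate-wise estimation concrete, the following minimal NumPy sketch implements the symmetric difference quotient in (2); the black-box objective f and the step size h are placeholders, not fixed choices from the paper.

```python
import numpy as np

def coordinatewise_grad(f, x, h=1e-4):
    """Estimate the full gradient of a black-box function f at x via the
    symmetric difference quotient (2), one coordinate at a time.
    Requires 2*d queries to f for a d-dimensional input x."""
    d = x.size
    grad = np.zeros(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0
        grad[i] = (f(x + h * e_i) - f(x - h * e_i)) / (2.0 * h)
    return grad

# Example: for a 299x299x3 ImageNet input (d = 268,203), a single full
# gradient estimate already costs 2 * 268,203 = 536,406 model queries.
```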

To improve query efficiency, we dispense with coordinate-wise estimation and instead propose a scaled random full gradient estimator of ∇f(x), defined as

g = b · [ f(x + β·u) − f(x) ] / β · u        (3)

where β > 0 is a smoothing parameter, u is a unit-length vector drawn uniformly at random from the unit Euclidean sphere, and b is a tunable scaling parameter that balances the bias and variance trade-off of the gradient estimation error. Note that with b = d, the gradient estimator in (3) becomes the one used in [Duchi et al.2015], while with b = 1 it becomes the one adopted in [Gao, Jiang, and Zhang2014]. We will provide a value of b that balances query efficiency and estimation error in the following analysis.

Averaged random gradient estimation. To effectively control the error in gradient estimation, we consider a more general gradient estimator in which the estimate is averaged over q random directions {u_j}, j = 1, …, q. That is,

ḡ = (1/q) · Σ_{j=1}^{q} g_j        (4)

where g_j is a gradient estimate defined in (3) with u = u_j. The use of multiple random directions can reduce the variance of ḡ in (4) for convex loss functions [Duchi et al.2015, Liu et al.2018].
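The following minimal NumPy sketch implements the estimator in (3) and its averaged version in (4); the black-box objective f and the parameters beta, b, and q are placeholders whose roles follow the analysis below.

```python
import numpy as np

def random_grad_estimate(f, x, beta, b):
    """One random gradient estimate following (3):
    g = b * (f(x + beta*u) - f(x)) / beta * u,
    with u drawn uniformly from the unit Euclidean sphere."""
    u = np.random.randn(x.size)
    u /= np.linalg.norm(u)                  # unit-length random direction
    return b * (f(x + beta * u) - f(x)) / beta * u

def averaged_random_grad(f, x, beta, b, q=1):
    """Average q independent estimates as in (4); costs q + 1 queries
    when f(x) is evaluated once and reused across the q directions."""
    fx = f(x)                               # shared base evaluation
    grads = []
    for _ in range(q):
        u = np.random.randn(x.size)
        u /= np.linalg.norm(u)
        grads.append(b * (f(x + beta * u) - fx) / beta * u)
    return np.mean(grads, axis=0)
```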

Below we establish an error analysis of the averaged random gradient estimator ḡ in (4) for studying the influence of the parameters b and q on estimation error and query efficiency.

Theorem 1.

Assume that f is differentiable and that its gradient ∇f is L-Lipschitz (a function h is L-Lipschitz if ‖h(x1) − h(x2)‖₂ ≤ L·‖x1 − x2‖₂ for any x1, x2; for DNNs with ReLU activations, L can be derived from the model weights [Szegedy et al.2014]). Then the mean squared estimation error of ḡ in (4) relative to the true gradient ∇f(x) is upper bounded by

(5)
Proof.

The proof is given in the supplementary file. ∎

Here we highlight the important implications of Theorem 1: (i) the error analysis holds even when f is non-convex; (ii) in DNNs, the true gradient ∇f can be viewed as the numerical gradient obtained via back-propagation; (iii) for any fixed q, selecting a small smoothing parameter β (as is done in AutoZOOM) can effectively reduce the last error term in (5), and we therefore focus on optimizing the first error term; (iv) the first error term in (5) exhibits the influence of b and q on the estimation error and is independent of β. We further elaborate on (iv) as follows. Fixing q and letting η denote the coefficient of the first error term in (5), the optimal b that minimizes η can be derived in closed form. For query efficiency, one would like to keep q small, which in turn constrains the corresponding optimal b and the achievable η when the dimension d is large. On the other hand, using a large number of random directions q yields a smaller error upper bound but is query-inefficient. We also note that by setting b = q, the coefficient η is bounded by a constant and is thus independent of the dimension d and the parameter q.

Adaptive random gradient estimation. Based on Theorem 1 and our error analysis, in AutoZOOM we set b = q in (3) and propose an adaptive strategy for selecting q. AutoZOOM uses q = 1 (i.e., the fewest possible model evaluations per gradient estimate) to first obtain rough gradient estimates for solving (1) until a successful adversarial image is found. After this initial attack success, it switches to more accurate gradient estimates with q > 1 to fine-tune the image quality. The trade-off between q (which is proportional to query counts) and distortion reduction is investigated in Section 4.

3.3 Attack Dimension Reduction via Autoencoder

Dimension-dependent convergence rate using gradient estimation. Different from first order convergence results, the convergence rate of zeroth order gradient descent methods carries an additional multiplicative dimension-dependent factor. In the convex loss setting the rate is O(√d/√T), where d is the dimension and T is the number of iterations [Nesterov and Spokoiny2017, Liu et al.2018, Gao, Jiang, and Zhang2014, Wang et al.2018]. The same convergence rate has also been found in the nonconvex setting [Ghadimi and Lan2013]. This dimension-dependent factor suggests that vanilla black-box attacks using gradient estimation can be query-inefficient when the (vectorized) image dimension d is large, due to the curse of dimensionality in convergence. This also motivates us to use an autoencoder to reduce the attack dimension and improve query efficiency in black-box attacks.

In AutoZOOM, we propose to perform random gradient estimation in a reduced dimension d' < d to improve query efficiency. Specifically, as illustrated in Figure 2, the additive perturbation to an image x0 is implemented through a “decoder” D that maps a d'-dimensional vector to the image space, such that x = x0 + D(δ'), where δ' has dimension d'. In other words, the adversarial perturbation to x0 is generated from a dimension-reduced space, with the aim of improving query efficiency via the reduced dimension-dependent factor in the convergence analysis. AutoZOOM provides two modes for the decoder D (a minimal sketch of the second mode is given after the list):
An autoencoder (AE) trained on unlabeled data that are different from the training data, in order to learn reconstruction from a dimension-reduced representation. The encoder in an AE compresses the data to a low-dimensional latent space, and the decoder reconstructs an example from its latent representation. The weights of the AE are learned to minimize the average reconstruction error. Note that training such an AE for black-box adversarial attacks is a one-time, entirely offline task (i.e., no model queries are needed).
A simple channel-wise bilinear image resizer (BiLIN) that scales a small image to a large image via bilinear extrapolation (see tf.image.resize_images, a TensorFlow example). No additional training is required for BiLIN.
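As a concrete example of the BiLIN mode, the snippet below upscales a low-dimensional perturbation channel-wise with bilinear interpolation; it uses tf.image.resize, the TensorFlow 2 counterpart of the tf.image.resize_images call mentioned above, and the shapes are illustrative.

```python
import tensorflow as tf

def bilin_decoder(delta_small, target_hw=(299, 299)):
    """BiLIN decoder: channel-wise bilinear upscaling of a low-dimensional
    perturbation (e.g., 32x32x3) to the attacked image size (e.g., 299x299x3).
    No trainable parameters are involved."""
    # delta_small: tensor of shape [batch, h', w', channels]
    return tf.image.resize(delta_small, target_hw, method="bilinear")

# Example: lift a 32x32x3 latent perturbation to 299x299x3 for Inception-v3.
delta_small = tf.random.uniform([1, 32, 32, 3], minval=-0.1, maxval=0.1)
delta_full = bilin_decoder(delta_small)   # shape [1, 299, 299, 3]
```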

Input: black-box DNN model F, original example x0, distortion measure Dist, attack objective Loss, monotonic transformation M, decoder D, initial regularization coefficient λ, query budget Q
while the query count does not exceed Q do
     1. Exploration: use the decoder D and apply the random gradient estimator in (4) with q = 1 to the downstream optimizer (e.g., ADAM) for solving (1), until an initial attack is found.
     2. Exploitation (post-success stage): continue to fine-tune the adversarial perturbation for solving (1) while setting q > 1 in (4).
end while
Output: Least distorted successful adversarial example
Algorithm 1 AutoZOOM for black-box attacks on DNNs

Why AE? Our proposal of AE is motivated by the insightful findings in [Goodfellow, Shlens, and Szegedy2015] that a successful adversarial perturbation is highly relevant to some human-imperceptible noise pattern resembling the shape of the target class, known as the “shadow”. Since a decoder in AE learns to reconstruct data from latent representations, it can also provide distributional guidance for mapping adversarial perturbations to generate these shadows.

We also note that for any reduced dimension d', the same parameter setting remains optimal in terms of minimizing the corresponding estimation error from Theorem 1, despite the fact that the gradient estimation errors at different reduced dimensions cannot be directly compared. In Section 4 we report the superior query efficiency of black-box attacks achieved with AE or BiLIN as the decoder, and we discuss the benefit of attack dimension reduction.

3.4 AutoZOOM Algorithm

Algorithm 1 summarizes the AutoZOOM framework for query-efficient black-box attacks on DNNs. AutoZOOM is a general acceleration tool that is compatible with any gradient-estimation based black-box adversarial attack obeying the attack formulation in (1), and it comes with theoretical estimation error guarantees and query-efficient parameter selection based on Theorem 1. The details of adjusting the regularization coefficient λ and the query parameter q based on run-time model evaluation results are discussed in Section 4. Our source code is publicly available at https://github.com/IBM/Autozoom-Attack.
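To make the control flow of Algorithm 1 concrete, below is a simplified NumPy sketch under several assumptions that are ours rather than the paper's: objective(x) returns the scalar value of (1) together with an adversarial-success flag using model queries only, decoder stands for the AE or BiLIN decoder D, ADAM is written out by hand, and all hyperparameter values (learning rate, beta, post-success q, query budget) are illustrative placeholders.

```python
import numpy as np

def autozoom_attack(objective, decoder, x0, reduced_dim,
                    query_budget=10000, lr=2e-3, beta=1e-3, q_post=4):
    """Simplified sketch of Algorithm 1. `objective(x)` is assumed to return
    (value, is_adv): the scalar attack objective Dist + lambda*Loss obtained
    from model queries, and whether x is already adversarial."""
    delta = np.zeros(reduced_dim)             # perturbation in the reduced space
    m, v, t = np.zeros(reduced_dim), np.zeros(reduced_dim), 0
    best_x, queries, success = None, 0, False

    def image(z):
        return np.clip(x0 + decoder(z), 0.0, 1.0)

    while queries < query_budget:
        q = 1 if not success else q_post      # exploration vs. exploitation
        b = q                                 # scaling choice b = q
        f0, is_adv = objective(image(delta)); queries += 1
        if is_adv:
            success = True
            x = image(delta)
            if best_x is None or np.sum((x - x0) ** 2) < np.sum((best_x - x0) ** 2):
                best_x = x                    # keep the least-distorted success
        g = np.zeros(reduced_dim)
        for _ in range(q):                    # averaged random estimator (4)
            u = np.random.randn(reduced_dim); u /= np.linalg.norm(u)
            fu, _ = objective(image(delta + beta * u)); queries += 1
            g += b * (fu - f0) / beta * u
        g /= q
        t += 1                                # hand-written ADAM update
        m = 0.9 * m + 0.1 * g
        v = 0.999 * v + 0.001 * g * g
        m_hat, v_hat = m / (1 - 0.9 ** t), v / (1 - 0.999 ** t)
        delta -= lr * m_hat / (np.sqrt(v_hat) + 1e-8)
    return best_x                             # None if no success within budget
```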

4 Performance Evaluation

This section presents the experiments for assessing the performance of AutoZOOM in accelerating black-box attacks on DNNs in terms of the number of queries required for an initial attack success and for a specific distortion level.

4.1 Distortion Measure and Attack Objective

As described in Section 3, AutoZOOM is a query-efficient gradient-free optimization framework for solving the black-box attack formulation in (1). In the following experiments, we demonstrate the utility of AutoZOOM by using the same attack formulation proposed in ZOO [Chen et al.2017], which uses the squared L2 norm as the distortion measure Dist and adopts the attack objective

Loss(x, M(F(x)), t) = max{ max_{j≠t} log [F(x)]_j − log [F(x)]_t , −κ }        (6)

where this hinge-style objective, with margin parameter κ ≥ 0, is designed for targeted black-box attacks on the DNN model F, and the monotonic transformation M(·) = log(·) is applied to the model output.
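For illustration, a minimal NumPy version of this hinge-style objective is sketched below; log_probs stands for the transformed model output log F(x), and treating κ as an argument with a default of 0 is our assumption rather than a setting prescribed by the paper.

```python
import numpy as np

def targeted_hinge_loss(log_probs, target, kappa=0.0):
    """Targeted attack objective in the spirit of (6): drive the target
    class's log-score above every other class's, clipped at -kappa."""
    other = np.delete(log_probs, target)
    return max(np.max(other) - log_probs[target], -kappa)

# Example with a 3-class output: the loss stays positive until class 2 leads.
print(targeted_hinge_loss(np.log([0.7, 0.2, 0.1]), target=2))
```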

4.2 Comparative Black-box Attack Methods

We compare AutoZOOM-AE (AutoZOOM with D = AE) and AutoZOOM-BiLIN (AutoZOOM with D = BiLIN) against two baselines: (i) the standard ZOO implementation (https://github.com/huanzhang12/ZOO-Attack) with bilinear scaling (same as BiLIN) for dimension reduction; (ii) ZOO+AE, which is ZOO equipped with the same AE. Note that all attacks generate adversarial perturbations from the same reduced attack dimension.

4.3 Experiment Setup, Evaluation, Datasets and AutoZOOM Implementation

We assess the performance of different attack methods on several representative benchmark datasets, including MNIST [LeCun et al.1998], CIFAR-10 [Krizhevsky2009] and ImageNet [Russakovsky et al.2015]. For MNIST and CIFAR-10, we use the same DNN image classification models (https://github.com/carlini/nn_robust_attacks) as in [Carlini and Wagner2017b]. For ImageNet, we use the Inception-v3 model [Szegedy et al.2016]. All experiments were conducted using the TensorFlow machine-learning library [Abadi et al.] on machines equipped with an Intel Xeon E5-2690v3 CPU and an Nvidia Tesla K80 GPU.

All attacks used ADAM [Kingma and Ba2015] to solve (1) with their estimated gradients and the same initial learning rate. On MNIST and CIFAR-10, all methods run 1,000 ADAM iterations. On ImageNet, ZOO and ZOO+AE run 20,000 iterations, whereas AutoZOOM-BiLIN and AutoZOOM-AE run 100,000 iterations. Note that due to different gradient estimation methods, the query counts (i.e., the number of model evaluations) per iteration of a black-box attack may vary. ZOO and ZOO+AE use the parallel gradient update of (2) with a batch of 128 pixels, yielding 256 query counts per iteration. AutoZOOM-BiLIN and AutoZOOM-AE use the averaged random full gradient estimator in (4), resulting in q + 1 query counts per iteration. For a fair comparison, query counts rather than iterations are used for performance assessment.

Query reduction ratio. We use the mean query count of ZOO with the smallest λ_init as the baseline for computing the query reduction ratios of the other methods and configurations.

TPR and initial success. We report the true positive rate (TPR), which measures the percentage of successful attacks satisfying a pre-defined threshold on the normalized (per-pixel) distortion, together with the query counts of their first successes. We also report the per-pixel distortions at initial success, where an initial success refers to the first query count at which a successful adversarial example is found.

Post-success fine-tuning. When implementing AutoZOOM as in Algorithm 1, we find that on MNIST and CIFAR-10, AutoZOOM without post-success fine-tuning (i.e., keeping q = 1) already yields distortion similar to ZOO. We note that ZOO can be viewed as performing coordinate-wise fine-tuning and is thus query-inefficient. On ImageNet, we investigate the effect of post-success fine-tuning on reducing distortion.

Autoencoder Training. In AutoZOOM-AE, we use convolutional autoencoders for attack dimension reduction, which are trained on unlabeled datasets that are different from the training dataset and the attacked natural examples. The implementation details are given in the supplementary material.

Dynamic switching of λ. To adjust the regularization coefficient λ in (1), all methods set its initial value λ_init as reported in Tables 1 and 2 for MNIST and CIFAR-10, and use a fixed initial value on ImageNet. Furthermore, to balance the distortion term Dist and the attack objective Loss in (1), we use a dynamic switching strategy to update λ during the optimization process: after every fixed number of iterations (a dataset-dependent period), λ is multiplied by 10 if the attack has not yet been successful; otherwise, it is divided by 2. At the instance of initial success, we also reset λ and the ADAM parameters to their initial/default values, as doing so empirically reduces the distortion for all attack methods.
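A minimal sketch of this dynamic switching rule is given below; the growth and shrink factors (multiply by 10, divide by 2) follow the description above, while the function name, interface, and the switching period being passed in as an argument are our assumptions.

```python
def update_lambda(lam, ever_succeeded, iteration, period):
    """Dynamic switching of the regularization coefficient lambda:
    every `period` iterations, grow lambda by 10x if no attack has
    succeeded yet, otherwise halve it to favor smaller distortion."""
    if iteration == 0 or iteration % period != 0:
        return lam
    return lam * 10.0 if not ever_succeeded else lam / 2.0
```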

| Method | λ_init | Attack success rate (ASR) | Mean query count (initial success) | Mean query count reduction ratio (initial success) | Mean per-pixel distortion (initial success) | True positive rate (TPR) | Mean query count with per-pixel distortion ≤ 0.004 |
|---|---|---|---|---|---|---|---|
| ZOO | 0.1 | 99.44% | 35,737.60 | 0.00% | 3.50 | 96.76% | 47,342.85 |
| ZOO | 1 | 99.44% | 16,533.30 | 53.74% | 3.74 | 97.09% | 31,322.44 |
| ZOO | 10 | 99.44% | 13,324.60 | 62.72% | 4.85 | 96.31% | 41,302.12 |
| ZOO+AE | 0.1 | 99.67% | 34,093.95 | 4.60% | 3.43 | 97.66% | 44,079.92 |
| ZOO+AE | 1 | 99.78% | 15,065.52 | 57.84% | 3.72 | 98.00% | 29,213.95 |
| ZOO+AE | 10 | 99.67% | 12,102.20 | 66.14% | 4.66 | 97.66% | 38,795.98 |
| AutoZOOM-BiLIN | 0.1 | 99.89% | 2,465.95 | 93.10% | 4.51 | 96.55% | 3,941.88 |
| AutoZOOM-BiLIN | 1 | 99.89% | 879.98 | 97.54% | 4.12 | 97.89% | 2,320.01 |
| AutoZOOM-BiLIN | 10 | 99.89% | 612.34 | 98.29% | 4.67 | 97.11% | 4,729.12 |
| AutoZOOM-AE | 0.1 | 100.00% | 2,428.24 | 93.21% | 4.54 | 96.67% | 3,861.30 |
| AutoZOOM-AE | 1 | 100.00% | 729.65 | 97.96% | 4.13 | 96.89% | 1,971.26 |
| AutoZOOM-AE | 10 | 100.00% | 510.38 | 98.57% | 4.67 | 97.22% | 4,855.01 |

Table 1: Performance evaluation of black-box targeted attacks on MNIST

| Method | λ_init | Attack success rate (ASR) | Mean query count (initial success) | Mean query count reduction ratio (initial success) | Mean per-pixel distortion (initial success) | True positive rate (TPR) | Mean query count with per-pixel distortion ≤ 0.0015 |
|---|---|---|---|---|---|---|---|
| ZOO | 0.1 | 97.00% | 25,538.43 | 0.00% | 5.42 | 100.00% | 25,568.33 |
| ZOO | 1 | 97.00% | 11,662.80 | 54.33% | 6.37 | 100.00% | 11,777.18 |
| ZOO | 10 | 97.00% | 10,015.08 | 60.78% | 8.03 | 100.00% | 10,784.54 |
| ZOO+AE | 0.1 | 99.33% | 19,670.96 | 22.98% | 4.96 | 100.00% | 20,219.42 |
| ZOO+AE | 1 | 99.00% | 5,793.25 | 77.32% | 6.83 | 99.89% | 5,773.24 |
| ZOO+AE | 10 | 99.00% | 4,892.80 | 80.84% | 8.74 | 99.78% | 5,378.30 |
| AutoZOOM-BiLIN | 0.1 | 99.67% | 2,049.28 | 91.98% | 1.01 | 98.77% | 2,112.52 |
| AutoZOOM-BiLIN | 1 | 99.67% | 813.01 | 96.82% | 8.25 | 99.22% | 1,005.92 |
| AutoZOOM-BiLIN | 10 | 99.33% | 623.96 | 97.56% | 9.09 | 98.99% | 835.27 |
| AutoZOOM-AE | 0.1 | 100.00% | 1,523.91 | 94.03% | 1.20 | 99.67% | 1,752.45 |
| AutoZOOM-AE | 1 | 100.00% | 332.43 | 98.70% | 1.01 | 99.56% | 345.62 |
| AutoZOOM-AE | 10 | 100.00% | 259.34 | 98.98% | 1.15 | 99.67% | 990.61 |

Table 2: Performance evaluation of black-box targeted attacks on CIFAR-10

4.4 Black-box Attacks on MNIST and CIFAR-10

For both MNIST and CIFAR-10, we randomly select 50 correctly classified images from their test sets and perform targeted attacks on these images. Since both datasets have 10 classes, each selected image is attacked 9 times, targeting every class but its true class. For all attacks, the ratio of the reduced attack-space dimension to the original one (i.e., d'/d) is 25% for MNIST and 6.25% for CIFAR-10.

Table 1 shows the performance evaluation on MNIST for various values of λ_init, the initial value of the regularization coefficient λ in (1). We use the performance of ZOO with λ_init = 0.1 as the baseline for comparison. For example, with λ_init = 0.1 and λ_init = 10, the mean query count required by AutoZOOM-AE to attain an initial success is reduced by 93.21% and 98.57%, respectively. One can also observe that allowing a larger λ_init generally leads to fewer mean query counts at the price of slightly increased distortion for the initial attack. The noticeably large difference in required attack query counts between AutoZOOM and ZOO/ZOO+AE validates the effectiveness of our proposed random full gradient estimator in (3), which dispenses with the coordinate-wise gradient estimation in ZOO while retaining comparable true positive rates, thereby greatly improving query efficiency.

For CIFAR-10, we report similar query efficiency improvements, as displayed in Table 2. In particular, comparing the two query-efficient black-box attack methods (AutoZOOM-BiLIN and AutoZOOM-AE), we find that AutoZOOM-AE is more query-efficient than AutoZOOM-BiLIN, but at the cost of an additional AE training step. AutoZOOM-AE achieves the highest attack success rates (ASRs) and mean query reduction ratios for all values of λ_init. In addition, their true positive rates (TPRs) are similar, but AutoZOOM-AE usually takes fewer query counts to reach the same distortion. We note that when λ_init = 10, AutoZOOM-AE has a higher TPR but also needs slightly more mean query counts than AutoZOOM-BiLIN to reach the same distortion. This suggests that some adversarial examples are difficult for a bilinear resizer to reduce their post-success distortions but can be handled by an AE.

| Method | Attack success rate (ASR) | Mean query count (initial success) | Mean query count reduction ratio (initial success) | Mean per-pixel distortion (initial success) | True positive rate (TPR) | Mean query count with per-pixel distortion ≤ 0.0002 |
|---|---|---|---|---|---|---|
| ZOO | 76.00% | 2,226,405.04 (2.22M) | 0.00% | 4.25 | 100.00% | 2,296,293.73 |
| ZOO+AE | 92.00% | 1,588,919.65 (1.58M) | 28.63% | 1.72 | 100.00% | 1,613,078.27 |
| AutoZOOM-BiLIN | 100.00% | 14,228.88 | 99.36% | 1.26 | 100.00% | 15,064.00 |
| AutoZOOM-AE | 100.00% | 13,525.00 | 99.39% | 1.36 | 100.00% | 14,914.92 |

Table 3: Performance evaluation of black-box targeted attacks on ImageNet
(a) Post-success distortion refinement
(b) Dimension reduction vs. query efficiency
Figure 3: (a) After an initial success, AutoZOOM can further decrease the distortion by setting q > 1 in (4), trading more query counts for smaller distortion in the converged stage; the refinement effect empirically saturates as q grows. (b) Attack dimension reduction is crucial to query-efficient black-box attacks. When compared to black-box attacks on the original dimension, dimension reduction through AutoZOOM-AE reduces roughly 35-40% of the query counts on MNIST and CIFAR-10 and at least 95% on ImageNet.

4.5 Black-box Attacks on ImageNet

We selected 50 correctly classified images from the ImageNet test set to perform targeted attacks with randomly selected target classes, setting the attack dimension reduction ratio d'/d to 1.15% (λ_init is set as described in Section 4.3). The results are summarized in Table 3. Note that compared to ZOO, AutoZOOM-AE significantly reduces the query count required to achieve an initial success, by 99.39% (or by 99.35% to reach the same distortion). This is a remarkable improvement: it amounts to saving more than 2.2 million model queries, given that the input dimension of ImageNet (≈270K) is much larger than that of MNIST and CIFAR-10.

Post-success distortion refinement. As described in Algorithm 1, adaptive random gradient estimation is integrated in AutoZOOM, offering a quick initial success in attack generation followed by a fine-tuning process that effectively reduces the distortion. This is achieved by adjusting the gradient averaging parameter q in (4) in the post-success stage. In general, averaging over more random directions (i.e., setting a larger q) tends to better reduce the variance of the gradient estimation error, but at the cost of more model queries. Figure 3 (a) shows the mean distortion against query counts for various choices of q in the post-success stage. The results suggest that a small q > 1 can further decrease the distortion in the converged phase when compared with the case of q = 1. Moreover, the refinement effect on distortion empirically saturates beyond a moderate q, implying a marginal gain from larger values. These findings demonstrate that AutoZOOM indeed strikes a balance between distortion and query efficiency in black-box attacks.

4.6 Dimension Reduction and Query Efficiency

In addition to the motivation from the convergence rate of zeroth-order optimization (Section 3.3), as a sanity check we corroborate the benefit of attack dimension reduction to query efficiency by comparing AutoZOOM (here with D = AE) against the same attack operated on the original (non-reduced) dimension. Tested on all three datasets under the aforementioned settings, Figure 3 (b) shows the mean query count to initial success and the mean query reduction ratio obtained by dimension reduction. Compared to attacking in the original dimension, attack dimension reduction through AutoZOOM reduces roughly 35-40% of the query counts on MNIST and CIFAR-10 and at least 95% on ImageNet. This result highlights the importance of dimension reduction for query-efficient black-box attacks. For example, without dimension reduction, the attack on the original ImageNet dimension cannot even succeed within the query budget.

4.7 Additional Remarks and Discussion

In addition to benchmarking on initial attack success, the query reduction ratio at a fixed distortion level can be computed directly from the last column of each table.
The attack gain of AutoZOOM-AE over AutoZOOM-BiLIN can sometimes be marginal, and we note that there is room for improving AutoZOOM-AE by exploring different AE models. Nonetheless, we advocate AutoZOOM-BiLIN as a practically ideal candidate for query-efficient black-box attacks when testing model robustness, due to its easy-to-mount nature and the absence of any additional training cost.
While learning effective low-dimensional representations of legitimate images remains a challenging task, black-box attacks using significantly fewer degrees of freedom (i.e., reduced dimensions), as demonstrated in this paper, are certainly plausible, leading to new implications for model robustness.

5 Conclusion

AutoZOOM is a generic attack acceleration framework that is compatible with any gradient-estimation based black-box attack having the general formulation in (1). It adopts a new and adaptive random full gradient estimation strategy to strike a balance between query counts and estimation errors, and it features a decoder (AE or BiLIN) for attack dimension reduction and algorithmic convergence acceleration. Compared to a state-of-the-art attack (ZOO), AutoZOOM consistently reduces the mean query counts when attacking black-box DNN image classifiers for MNIST, CIFAR-10 and ImageNet, attaining at least a 93% query reduction in finding initial successful adversarial examples (or reaching the same distortion) while maintaining a similar attack success rate. It can also efficiently fine-tune the image distortion to maintain high visual similarity to the original image. Consequently, AutoZOOM provides a novel and efficient means for assessing the robustness of deployed machine learning models.

Acknowledgements

Shin-Ming Cheng was supported in part by the Ministry of Science and Technology, Taiwan, under Grants MOST 107-2218-E-001-005 and MOST 107-2218-E-011-012. Cho-Jui Hsieh and Huan Zhang acknowledge the support by NSF IIS-1719097, Intel faculty award, Google Cloud and NVIDIA.

References

  • [Abadi et al.] Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. Tensorflow: A system for large-scale machine learning.
  • [Athalye, Carlini, and Wagner2018] Athalye, A.; Carlini, N.; and Wagner, D. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. ICML.
  • [Baluja and Fischer2018] Baluja, S., and Fischer, I. 2018.

    Adversarial transformation networks: Learning to generate adversarial examples.

    AAAI.
  • [Biggio and Roli2018] Biggio, B., and Roli, F. 2018.

    Wild patterns: Ten years after the rise of adversarial machine learning.

    Pattern Recognition 84:317–331.
  • [Biggio et al.2013] Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, 387–402.
  • [Brendel, Rauber, and Bethge2018] Brendel, W.; Rauber, J.; and Bethge, M. 2018. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. ICLR.
  • [Carlini and Wagner2017a] Carlini, N., and Wagner, D. 2017a. Adversarial examples are not easily detected: Bypassing ten detection methods. In

    ACM Workshop on Artificial Intelligence and Security

    , 3–14.
  • [Carlini and Wagner2017b] Carlini, N., and Wagner, D. 2017b. Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy, 39–57.
  • [Chen et al.2017] Chen, P.-Y.; Zhang, H.; Sharma, Y.; Yi, J.; and Hsieh, C.-J. 2017. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In ACM Workshop on Artificial Intelligence and Security, 15–26.
  • [Chen et al.2018] Chen, P.-Y.; Sharma, Y.; Zhang, H.; Yi, J.; and Hsieh, C.-J. 2018. EAD: elastic-net attacks to deep neural networks via adversarial examples. AAAI.
  • [Duchi et al.2015] Duchi, J. C.; Jordan, M. I.; Wainwright, M. J.; and Wibisono, A. 2015. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory 61(5):2788–2806.
  • [Gao, Jiang, and Zhang2014] Gao, X.; Jiang, B.; and Zhang, S. 2014. On the information-adaptive variants of the admm: an iteration complexity perspective. Optimization Online 12.
  • [Ghadimi and Lan2013] Ghadimi, S., and Lan, G. 2013. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization 23(4):2341–2368.
  • [Goodfellow, Shlens, and Szegedy2015] Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and harnessing adversarial examples. ICLR.
  • [Ilyas et al.2018] Ilyas, A.; Engstrom, L.; Athalye, A.; and Lin, J. 2018. Black-box adversarial attacks with limited queries and information. ICML.
  • [Ilyas2018] Ilyas, A. 2018. Circumventing the ensemble adversarial training defense. https://github.com/andrewilyas/ens-adv-train-attack.
  • [Kingma and Ba2015] Kingma, D., and Ba, J. 2015. Adam: A method for stochastic optimization. ICLR.
  • [Krizhevsky2009] Krizhevsky, A. 2009. Learning multiple layers of features from tiny images. Technical report, University of Toronto.
  • [Kurakin, Goodfellow, and Bengio2017] Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial machine learning at scale. ICLR.
  • [Lax and Terrell2014] Lax, P. D., and Terrell, M. S. 2014. Calculus with applications. Springer.
  • [LeCun et al.1998] LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11):2278–2324.
  • [Liu et al.2017] Liu, Y.; Chen, X.; Liu, C.; and Song, D. 2017. Delving into transferable adversarial examples and black-box attacks. ICLR.
  • [Liu et al.2018] Liu, S.; Chen, J.; Chen, P.-Y.; and Hero, A. O. 2018. Zeroth-order online alternating direction method of multipliers: Convergence analysis and applications. AISTATS.
  • [Lowd and Meek2005] Lowd, D., and Meek, C. 2005. Adversarial learning. In ACM SIGKDD international conference on Knowledge discovery in data mining, 641–647.
  • [Narodytska and Kasiviswanathan2016] Narodytska, N., and Kasiviswanathan, S. P. 2016. Simple black-box adversarial perturbations for deep networks. arXiv preprint arXiv:1612.06299.
  • [Nesterov and Spokoiny2017] Nesterov, Y., and Spokoiny, V. 2017. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics 17(2):527–566.
  • [Nitin Bhagoji et al.2018] Nitin Bhagoji, A.; He, W.; Li, B.; and Song, D. 2018. Practical black-box attacks on deep neural networks using efficient query mechanisms. In ECCV, 154–169.
  • [Papernot et al.2017] Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z. B.; and Swami, A. 2017. Practical black-box attacks against machine learning. In ACM Asia Conference on Computer and Communications Security, 506–519.
  • [Papernot, McDaniel, and Goodfellow2016] Papernot, N.; McDaniel, P.; and Goodfellow, I. 2016. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
  • [Russakovsky et al.2015] Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3):211–252.
  • [Su et al.2018] Su, D.; Zhang, H.; Chen, H.; Yi, J.; Chen, P.; and Gao, Y. 2018. Is robustness the cost of accuracy? - A comprehensive study on the robustness of 18 deep image classification models. In ECCV.
  • [Suya et al.2017] Suya, F.; Tian, Y.; Evans, D.; and Papotti, P. 2017. Query-limited black-box attacks to classifiers. NIPS Workshop.
  • [Szegedy et al.2014] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. ICLR.
  • [Szegedy et al.2016] Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In CVPR, 2818–2826.
  • [Tramèr et al.2018] Tramèr, F.; Kurakin, A.; Papernot, N.; Boneh, D.; and McDaniel, P. 2018. Ensemble adversarial training: Attacks and defenses. ICLR.
  • [Wang et al.2018] Wang, Y.; Du, S.; Balakrishnan, S.; and Singh, A. 2018. Stochastic zeroth-order optimization in high dimensions. AISTATS.

Supplementary Material

Appendix A More Background on Adversarial Attacks and Defenses

The research in generating adversarial examples to deceive machine-learning models, known as adversarial attacks, tends to evolve with the advance of machine-learning techniques and new publicly available datasets. In [Lowd and Meek2005], the authors studied adversarial attacks to linear classifiers with continuous or Boolean features. In [Biggio et al.2013]

, the authors proposed a gradient-based adversarial attack on kernel support vector machines (SVMs). More recently, gradient-based approaches are also used in adversarial attacks on image classifiers trained by DNNs

[Szegedy et al.2014, Goodfellow, Shlens, and Szegedy2015]. Due to space limitation, we focus on related work in adversarial attacks on DNNs. Interested readers may refer to the survey paper [Biggio and Roli2018] for more details.

| Method | Approach | Model output |
|---|---|---|
| [Narodytska and Kasiviswanathan2016] | local random search | score |
| [Papernot et al.2017] | substitute model | score |
| [Suya et al.2017] | acquisition via posterior | score |
| [Brendel, Rauber, and Bethge2018] | Gaussian perturbation | decision |
| [Ilyas et al.2018] | natural evolution strategy | score/decision |
| [Chen et al.2017] | coordinate-wise gradient estimation | score |
| [Nitin Bhagoji et al.2018] | coordinate-wise gradient estimation | score |
| AutoZOOM (this paper) | random (full) gradient estimation | score |

Table S1: Comparison of black-box attacks on DNNs

Gradient-based adversarial attacks on DNNs fall within the white-box setting, since acquiring the gradient with respect to the input requires knowing the weights of the target DNN. In principle, adversarial attacks can be formulated as an optimization problem of minimizing the adversarial perturbation while ensuring the attack objectives. In image classification, given a natural image, an untargeted attack aims to find a visually similar adversarial image resulting in a different class prediction, while a targeted attack aims to find an adversarial image leading to a specific class prediction. The visual similarity between a pair of adversarial and natural images is often measured by the Lp norm of their difference. Existing powerful white-box adversarial attacks based on the L∞, L2, or L1 norm include iterative fast gradient sign methods [Kurakin, Goodfellow, and Bengio2017], Carlini and Wagner’s (C&W) attack [Carlini and Wagner2017b], and elastic-net attacks to DNNs (EAD) [Chen et al.2018], among others.

Black-box adversarial attacks are practical threats to the deployed machine-learning services. Attackers can observe the input-output correspondences of any queried input, but the target model parameters are completely hidden. Therefore, gradient-based adversarial attacks are inapplicable to a black-box setting. As a first attempt, the authors in [Papernot et al.2017] proposed to train a substitute model using iterative model queries, perform white-box attacks on the substitute model, and leverage the transferability of adversarial examples [Papernot, McDaniel, and Goodfellow2016, Liu et al.2017] to attack the target model. However, training a representative surrogate for a DNN is challenging due to the complicated and nonlinear classification rules of DNNs and high dimensionality of the underlying dataset. The performance of black-box attacks can be severely degraded if the adversarial examples for the substitute model transfer poorly to the target model. To bridge this gap, the authors in [Chen et al.2017] proposed a black-box attack called ZOO that directly estimates the gradient of the attack objective by iteratively querying the target model. Although ZOO achieves a similar attack success rate and comparable visual quality as many white-box attack methods, it exploits the symmetric difference quotient method [Lax and Terrell2014] for coordinate-wise gradient estimation and value update, which requires excessive target model evaluations and is hence not query-efficient. The same gradient estimation technique is also used in the later work in [Nitin Bhagoji et al.2018]. Although acceleration techniques such as importance sampling, bilinear scaling and random feature grouping have been used in [Chen et al.2017, Nitin Bhagoji et al.2018], the coordinate-wise gradient estimation approach still forms a bottleneck for query efficiency.

Beyond optimization-based approaches, the authors in [Ilyas et al.2018] proposed to use a natural evolution strategy (NES) to enhance query efficiency. Although there is also a vector-wise gradient estimation step in the NES attack, we treat it as an independent and parallel work since its natural evolutionary step is outside the scope of black-box attacks using zeroth-order gradient descent. We also note that, different from NES, our AutoZOOM framework uses a query-efficient random gradient estimation strategy. In addition, AutoZOOM could be applied to further improve the query efficiency of NES, since NES does not take into account attack dimension reduction, which is the main focus of this paper. Under a more restricted setting, where only the decision (top-1 prediction class) is known to an attacker, the authors in [Brendel, Rauber, and Bethge2018] proposed a random-walk based attack around the decision boundary. Such a black-box attack dispenses with class prediction scores and hence requires additional model queries.

In this paper, we focus on improving the query efficiency of gradient-estimation and gradient-descent based black-box attacks and consider the threat model in which the class prediction scores are known to an attacker. For the reader’s reference, we compare existing black-box attacks on DNNs with AutoZOOM in Table S1. One unique feature of AutoZOOM is its use of a reduced attack dimension when mounting black-box attacks, which is an unlabeled-data-driven technique (an autoencoder) for attack acceleration and has not been studied thoroughly in existing attacks. While white-box attacks such as [Baluja and Fischer2018] have utilized autoencoders trained on the training data and the transparent logit representations of DNNs, we propose in this work to use autoencoders trained on unlabeled natural data to improve the query efficiency of black-box attacks.

There have been many methods proposed for defending against adversarial attacks on DNNs. However, new defenses are continuously weakened by follow-up attacks [Carlini and Wagner2017a, Athalye, Carlini, and Wagner2018]. For instance, model ensembles [Tramèr et al.2018] were shown to be effective against some black-box attacks, but they have recently been circumvented by advanced attack techniques [Ilyas2018]. In this paper, we focus on improving query efficiency in attacking black-box undefended DNNs.

Appendix B Proof of Theorem 1

Recall that the data dimension is d, and assume that f is differentiable with an L-Lipschitz gradient ∇f. Fix β > 0 and consider a smoothed version of f:

(S1)

Based on [Gao, Jiang, and Zhang2014, Lemma 4.1-a], we have the relation

(S2)

which then yields

(S3)

where we recall that has been defined in (3). Moreover, based on [Gao, Jiang, and Zhang2014, Lemma 4.1-b], we have

(S4)

Substituting (S3) into (S4), we obtain

This then implies that

(S5)

where

Once again, by applying [Gao, Jiang, and Zhang2014, Lemma 4.1-b], we can easily obtain that

(S6)

Now, let us consider the averaged random gradient estimator in (4),

Due to the properties of i.i.d. samples and (S5), we define

(S7)

Moreover, we have

(S8)
(S9)
(S10)

where we have used the fact that . The definition of in (S7) yields

(S11)

From (S6), we also obtain that for any ,

(S12)

Substituting (S11) and (S12) into (S10), we obtain

(S13)
(S14)

Finally, we bound the mean squared estimation error as

(S15)

which completes the proof.

Dataset: MNIST — Training MSE: 2.00
  Reduction ratio / image size / feature map size: 25% / 28×28×1 / 14×14×1
  Encoder: ConvReLU-16 → ConvReLU-16 → MaxPool → Conv-1
  Decoder: ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Conv-1

Dataset: CIFAR-10 — Training MSE: 5.00
  Reduction ratio / image size / feature map size: 6.25% / 32×32×3 / 8×8×3
  Encoder: ConvReLU-16 → ConvReLU-16 → MaxPool → ConvReLU-3 → MaxPool → Conv-3
  Decoder: ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Conv-3

Dataset: ImageNet — Training MSE: 1.02
  Reduction ratio / image size / feature map size: 1.15% / 299×299×3 / 32×32×3
  Encoder: Reshape-Bi-D → ConvReLU-16 → ConvReLU-16 → MaxPool → ConvReLU-3 → MaxPool → ConvReLU-3 → MaxPool → Conv-3
  Decoder: ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Reshape-Re-U → ConvReLU-16 → Reshape-Bi-U → Conv-3

Legend: ConvReLU-16: convolution (16 filters, 3×3 kernel) + ReLU activation. ConvReLU-3: convolution (3 filters, 3×3 kernel) + ReLU activation. Conv-3: convolution (3 filters, 3×3 kernel). Conv-1: convolution (1 filter, 3×3 kernel). Reshape-Bi-D: bilinear reshaping from 299×299×3 to 128×128×3. Reshape-Bi-U: bilinear reshaping from 128×128×16 to 299×299×16. Reshape-Re-U: upsampling by replicating pixels, with an appropriate channel depth.

Table S2: Architectures of Autoencoders in AutoZOOM

Appendix C Architectures of Convolutional Autoencoders in AutoZOOM

On MNIST, the convolutional autoencoder (CAE) is trained on 50,000 hand-written digits randomly selected from the MNIST8M dataset (http://leon.bottou.org/projects/infimnist). On CIFAR-10, the CAE is trained on 9,900 images selected from its test dataset; the remaining images are used in the black-box attacks. On ImageNet, all attacked natural images are drawn from 10 randomly selected image labels, and these labels are also used as the candidate attack targets. The CAE is trained on about 9,000 images from these classes.

Table S2 shows the architectures of all the autoencoders used in this work. Note that the autoencoders designed for ImageNet use bilinear scaling to transform the data from 299×299×3 down to 128×128×3, and back up from 128×128×16 to 299×299×16, which allows easy processing and handling by the autoencoder's internal convolutional layers.
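For reference, the following Keras sketch approximates the MNIST row of Table S2; UpSampling2D with its default nearest-neighbor interpolation stands in for the pixel-replicating Reshape-Re-U layer, and the padding and training settings are our assumptions rather than reported choices.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mnist_cae():
    """Convolutional autoencoder roughly following the MNIST row of Table S2:
    a 28x28x1 input is compressed to a 14x14x1 latent feature map."""
    encoder = keras.Sequential([
        keras.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),               # 28x28 -> 14x14
        layers.Conv2D(1, 3, padding="same"),  # 14x14x1 latent feature map
    ], name="encoder")
    decoder = keras.Sequential([
        keras.Input(shape=(14, 14, 1)),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.UpSampling2D(2),               # pixel replication, 14x14 -> 28x28
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.Conv2D(1, 3, padding="same"),
    ], name="decoder")
    autoencoder = keras.Sequential([encoder, decoder], name="cae")
    autoencoder.compile(optimizer="adam", loss="mse")  # minimize reconstruction error
    return encoder, decoder, autoencoder
```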

The normalized mean squared errors of the autoencoders trained for MNIST, CIFAR-10 and ImageNet are 0.0027, 0.0049 and 0.0151, respectively, which lie within a reasonable range of compression loss.

Appendix D More Adversarial Examples of Attacking Inception-v3 in the Black-box Setting

Figure S1 shows other adversarial examples of the Inception-v3 model in the black-box targeted attack setting.

(a) “French bulldog” to “traffic light”
(b) “purse” to “bagel”
(c) “bagel” to “grand piano”
(d) “traffic light” to “iPod”
Figure S1: Adversarial examples on ImageNet crafted by AutoZOOM when attacking the Inception-v3 model in the black-box setting with a randomly selected target class. Left: original natural images. Right: adversarial examples.

Appendix E Performance Evaluation of Black-box Untargeted Attacks

Table S3 shows the performance of black-box untargeted attacks on MNIST, CIFAR-10 and ImageNet using the ZOO and AutoZOOM-BiLIN attacks on the same sets of images as in Section 4. The Loss function is defined as

Loss(x, M(F(x)), t0) = max{ log [F(x)]_{t0} − max_{j≠t0} log [F(x)]_j , −κ }        (S16)

where t0 is the top-1 prediction label of the natural image x0. The margin κ and the averaging parameter q used for distortion fine-tuning in the post-attack phase are set separately for MNIST/CIFAR-10 and for ImageNet. Compared to Table 3, the number of model queries can be further reduced, since untargeted attacks only require the adversarial image to be classified as any class other than t0, rather than as a specific target class t.
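A minimal NumPy version of this untargeted objective, mirroring the targeted sketch in Section 4.1, is given below; treating the margin κ as an argument with a default of 0 is our assumption.

```python
import numpy as np

def untargeted_hinge_loss(log_probs, true_label, kappa=0.0):
    """Untargeted attack objective in the spirit of (S16): push the original
    top-1 class t0 below the best competing class, clipped at -kappa."""
    other = np.delete(log_probs, true_label)
    return max(log_probs[true_label] - np.max(other), -kappa)
```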

| Dataset | Method | Attack success rate (ASR) | Mean query count (initial success) | Mean query count reduction ratio (initial success) | Mean per-pixel distortion (initial success) | True positive rate (TPR) | Mean query count with per-pixel distortion ≤ threshold |
|---|---|---|---|---|---|---|---|
| MNIST | ZOO | 100.00% | 7856.64 | 0.00% | 3.79 | 100.00% | 12392.96 |
| MNIST | AutoZOOM-BiLIN | 100.00% | 98.82 | 98.74% | 4.21 | 100.00% | 692.94 |
| CIFAR-10 | ZOO | 100.00% | 3957.76 | 0.00% | 5.78 | 100.00% | 4644.60 |
| CIFAR-10 | AutoZOOM-BiLIN | 100.00% | 85.6 | 97.83% | 6.47 | 100.00% | 104.48 |
| ImageNet | ZOO | 94.00% | 271627.91 | 0.00% | 2.32 | 91.49% | 334901.58 |
| ImageNet | AutoZOOM-BiLIN | 100.00% | 1695.27 | 99.37% | 3.02 | 94.00% | 4639.11 |

Table S3: Performance evaluation of black-box untargeted attacks on different datasets. The per-pixel distortion thresholds are 0.004 and 0.0015 for MNIST and CIFAR-10, respectively.