1 Introduction
Neural networks are now well known to be vulnerable to adversarial examples: additive perturbations that, when applied to the input, change the network's output classification [9]. Work investigating this lack of robustness often takes the form of a back-and-forth between newly proposed adversarial attacks, methods for quickly and efficiently crafting adversarial examples, and corresponding defenses that modify the classifier at either training or test time to improve robustness. The most successful adversarial attacks use gradient-based optimization methods [9, 17], which require complete knowledge of the architecture and parameters of the target network; this assumption is referred to as the white-box attack setting. Conversely, the more realistic black-box setting requires an attacker to find an adversarial perturbation without such knowledge: information about the network can be obtained only by querying it, i.e., supplying an input and receiving the corresponding output.
In real-world scenarios, it is extremely improbable for an attacker to have unlimited bandwidth to query a target classifier. In evaluations of black-box attacks, this constraint is usually formalized via a query budget: a maximum number of queries allowed to the model per input, after which an attack is considered unsuccessful. Several recent papers have proposed attacks designed specifically for this query-limited context [12, 11, 4, 28, 18]; nevertheless, these papers typically consider query budgets on the order of 10,000 or 100,000. This leaves open the question of whether black-box attacks can successfully attack deep-network-based classifiers in severely query-limited settings, e.g., with a query budget of 100-200. In such a regime, it is natural for an attacker to use the entire query budget, so we ask the pertinent question: in a constrained query-limited setting, can one design query-efficient yet successful black-box adversarial attacks?
This work proposes a black-box attack method grounded in Bayesian optimization [13, 8], which has recently emerged as a state-of-the-art black-box optimization technique in settings where minimizing the number of queries is of paramount importance. A straightforward application of Bayesian optimization to the problem of finding adversarial examples is not feasible: the input dimension of even a small neural-network-based image classifier is orders of magnitude larger than Bayesian optimization's standard use case. Instead, we show that we can bridge this gap by performing Bayesian optimization in a reduced-dimension setting and upsampling to obtain the final perturbation. We explore several upsampling techniques and find that a relatively simple nearest-neighbor upsampling method reduces the dimension of the optimization problem enough that Bayesian optimization finds adversarial perturbations with more success than existing black-box attacks in query-constrained settings.
We compare the efficacy of our adversarial attack in a set of experiments attacking three of the most commonly used pretrained ImageNet [7] classifiers: ResNet-50 [10], Inception-v3 [27], and VGG16-bn [24]. Results from these experiments show that with very small query budgets (under 200 queries), the proposed method, BayesAttack, achieves success rates comparable to or exceeding existing methods, and does so with far smaller average and median query counts. Further experiments on the MNIST dataset compare how various upsampling techniques affect the attack accuracy of our method. Given these results, we argue that, despite being a simple approach (indeed, largely because it is such a simple and standard approach for black-box optimization), Bayesian optimization should be a standard baseline for any black-box adversarial attack task in the future, especially in the small-query-budget regime.

2 Related Work
Within the black-box setting, adversarial attacks can be further categorized by the exact nature of the information received from a query. The work most closely related to our approach are score-based attacks, where queries to the network return the entire output layer of the network, either as logits or probabilities. Within this category, existing approaches draw from a variety of optimization fields and techniques. One popular approach is to attack with zeroth-order methods via some form of derivative-free gradient estimation, as in the methods proposed in Ilyas et al. [12], which uses time-dependent and data-dependent priors to improve the estimate, and Ilyas et al. [11], which estimates the gradient using natural evolution strategies (NES). Other methods search for the best perturbation outside of this paradigm; Moon et al. [18] cast the problem of finding an adversarial perturbation as a discrete optimization problem and use local search methods to solve it. These works all search for adversarial perturbations within a search space with a hard constraint on perturbation size; other work [4, 28] incorporates a soft version of this constraint and performs coordinate descent to decrease the perturbation size while keeping the perturbed image misclassified. The latter of these methods incorporates an autoencoder-based upsampling method with which we compare in Section
5.3.1.

One may instead assume that only part of the information from the network's output layer is received as the result of a query. This can take the form of receiving only the outputs of the top predicted classes [11], but more often the more restrictive decision-based setting is considered, in which queries yield only the predicted class, with no probability information. The most successful work in this area is in [5], which reformulates the problem as a search for the direction of the nearest decision boundary and solves it with a random gradient-free method, and in [1] and [3], both of which use random walks along the decision boundary to perform an attack. The latter work significantly improves over the former in query efficiency, but the number of queries required to produce adversarial examples with small perturbations in this setting remains in the tens of thousands.
A separate class of transfer-based attacks trains a second, fully observable substitute network, attacks this network with white-box methods, and transfers these attacks to the original target network. These may fall into one of the preceding categories or exist outside the distinction: in Papernot et al. [20], the substitute model is built with score-based queries to the target network, whereas Liu et al. [16] train an ensemble of models without directly querying the target network at all. These methods come with their own drawbacks: they require training a substitute model, which may be costly or time-consuming, and overall attack success tends to be lower than that of gradient-based methods.
Finally, there has been some recent interest in leveraging Bayesian optimization to construct adversarial perturbations. Bayesian optimization has played a supporting role in several methods, including Tu et al. [28], where it solves a step of an alternating direction method of multipliers (ADMM) approach, and [6], which uses it to search within a set of procedural noise perturbations. On the other hand, prior work in which Bayesian optimization plays a central role experiments only on relatively low-dimensional problems, highlighting the main challenge of its application: Suya et al. [26] examine an attack on a spam email classifier with 57 input features, and Muñoz-González [19] attacks image classifiers but notably does not scale beyond MNIST. In contrast to these past works, the main contribution of this paper is to show that Bayesian optimization, combined with an upsampling procedure, presents a scalable, query-efficient approach for large-scale black-box adversarial attacks.
3 Problem Formulation
The following notation and definitions will be used throughout the remainder of the paper. Let $F$ be the target neural network. We assume that $F$ is a $K$-class image classifier that takes normalized inputs: each dimension of an input $x$ represents a single pixel and is bounded between $0$ and $1$; $y$ denotes the original label; and the corresponding output $F(x)$ is a $K$-dimensional vector representing a probability distribution over classes.
Rigorous evaluation of an adversarial attack requires careful definition of a threat model: a set of formal assumptions about the goals, knowledge, and capabilities of an attacker [2]. We assume that, given a correctly classified input image $x_0$, the goal of the attacker is to find a perturbation $\delta$ such that $x_0 + \delta$ is misclassified, i.e., $\arg\max_k F(x_0 + \delta)_k \neq y_0$. We operate in the score-based black-box setting: we have no knowledge of the internal workings of the network, and a query to the network yields the entire corresponding $K$-dimensional output. To enforce the notion that the adversarial perturbation should be small, we take the common approach of requiring that $\|\delta\|$ be smaller than a given threshold $\epsilon$ in some norm, where $\epsilon$ varies depending on the classifier. This work considers the $\ell_\infty$ norm, but our attack can easily be adapted to other norms. Finally, we denote the query budget by $t$; if an adversarial example is not found after $t$ queries to the target network, the attack fails.
As in most work, we pose the attack as a constrained optimization problem. We use an objective function suggested by Carlini and Wagner [2] and used in Tu et al. [28] and Chen et al. [4]:
(1)  $\max_{\delta} \; f(x_0 + \delta, y_0) \quad \text{subject to} \quad \|\delta\|_\infty \le \epsilon,$

where $f(x', y) = \max_{k \ne y} \log F(x')_k - \log F(x')_y$.

Most importantly, the input $x_0 + \delta$ is an adversarial example for $x_0$ if and only if $f(x_0 + \delta, y_0) > 0$.
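As an illustrative sketch (not the authors' code), this objective can be computed directly from a network's log-probability output; `log_probs` and the true-label index `y` are assumed inputs:

```python
import numpy as np

def cw_objective(log_probs, y):
    """Carlini-Wagner-style objective: the largest non-true-class
    log-probability minus the true-class log-probability.
    A positive value <=> the input is misclassified (adversarial)."""
    mask = np.ones_like(log_probs, dtype=bool)
    mask[y] = False
    return np.max(log_probs[mask]) - log_probs[y]
```

The sign test at the end of the section follows immediately: the objective crosses zero exactly when the predicted class changes away from `y`.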
We briefly note that the above threat model and objective function were chosen for simplicity and for ease of direct comparison with other black-box attacks, but the attack method we propose is compatible with many other threat models. For example, we may change the goals of the attacker or measure $\delta$ in the $\ell_1$ or $\ell_2$ norm instead of $\ell_\infty$, with appropriate modifications to the objective function and constraints in equation 1.
4 Model Framework
In this section, we present the proposed black-box attack method. We begin with a brief description of Bayesian optimization [13], followed by its application to generating black-box adversarial examples. Finally, we describe our method for attacking classifiers trained on high-dimensional inputs (e.g., ImageNet) in a query-efficient manner.
4.1 Bayesian Optimization
Bayesian optimization (BO) is a method for black-box optimization particularly suited to problems with low dimension and expensive queries. It consists of two main components: a Bayesian statistical model and an acquisition function. The statistical model, also referred to as the surrogate model, approximates the objective function: it provides a Bayesian posterior probability distribution describing potential values of the objective at any candidate point, and this posterior is updated each time we query the objective function at a new point. The most common surrogate models for Bayesian optimization are Gaussian processes (GPs) [21], which define a prior over functions that is updated as new information from queries becomes available. We model the objective function using a GP with a constant mean function and the Matern 5/2 kernel [23, 25] as the covariance function $K$, defined as

$K(x, x') = \theta_0 \left(1 + \sqrt{5 r^2} + \tfrac{5}{3} r^2\right) \exp\left(-\sqrt{5 r^2}\right), \qquad r^2 = \sum_{i=1}^{d} \frac{(x_i - x'_i)^2}{\theta_i^2},$

where $d$ is the dimension of the input and $\theta_0, \ldots, \theta_d$ are hyperparameters. We select hyperparameters that maximize the posterior of the observations under a prior
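To make the kernel concrete, the following is a minimal numpy sketch of the Matern 5/2 covariance between two points; the argument names (`theta0`, `lengthscales`) and their defaults are illustrative, not the paper's:

```python
import numpy as np

def matern52(x, xp, theta0=1.0, lengthscales=None):
    # Matern-5/2 kernel between two points x and x'.
    # `lengthscales` holds the per-dimension hyperparameters theta_d.
    x, xp = np.asarray(x, float), np.asarray(xp, float)
    if lengthscales is None:
        lengthscales = np.ones_like(x)
    r2 = np.sum(((x - xp) / lengthscales) ** 2)
    s = np.sqrt(5.0 * r2)
    return theta0 * (1.0 + s + (5.0 / 3.0) * r2) * np.exp(-s)
```

Note that the kernel equals `theta0` at zero distance and decays smoothly with distance, which is what makes it a reasonable smoothness prior for the attack objective.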
[23, 8]. The second component, the acquisition function $\mathcal{A}$, assigns to each candidate point a value representing the utility of querying the model there, given the surrogate model. We sample the objective function at $x_{n+1} = \arg\max_x \mathcal{A}(x \mid \mathcal{D}_n)$, where $\mathcal{D}_n = \{(x_i, v_i)\}_{i=1}^{n}$ comprises the samples drawn from the objective so far. Although this is itself a (nonconvex) optimization problem, in practice we use a standard approach and approximately optimize this objective with the L-BFGS algorithm. There are several popular choices of acquisition function; we use expected improvement (EI) [13], which is defined as
(2)  $EI_n(x) = \mathbb{E}_n\left[\max(f(x) - f_n^*, 0)\right]$

where $\mathbb{E}_n$ denotes the expectation taken over the posterior distribution given the first $n$ evaluations of $f$, and $f_n^*$ is the best value observed so far.
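Under a Gaussian posterior, EI has a well-known closed form, sketched below with Python's standard library only; `mu` and `sigma` are the posterior mean and standard deviation at a candidate point and `best` is the incumbent value (maximization convention):

```python
import math

def expected_improvement(mu, sigma, best):
    # Closed-form EI for a Gaussian posterior N(mu, sigma^2):
    # EI = (mu - best) * Phi(z) + sigma * phi(z),  z = (mu - best) / sigma.
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (mu - best) * Phi + sigma * phi
```

EI is always nonnegative and trades off exploitation (high `mu`) against exploration (high `sigma`), which is why it is a common default acquisition function.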
The Bayesian optimization framework, shown in Algorithm 2, runs these two steps iteratively for the given budget of function evaluations: it updates the posterior probability distribution on the objective function using all available data, then finds the next sampling point by maximizing the acquisition function over the current posterior distribution of the GP. The objective function is evaluated at the chosen point, and the whole process repeats.
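The loop above can be sketched end-to-end on a toy one-dimensional problem. This is a minimal illustration under our own simplifying assumptions, not the paper's implementation: it uses a fixed-lengthscale Matern 5/2 GP, maximizes EI over a candidate grid rather than with L-BFGS, and assumes a noise-free objective:

```python
import numpy as np
from math import erf

def matern52_1d(a, b, ls=0.3):
    # Matern-5/2 kernel matrix between 1-D point sets a (n,) and b (m,)
    r = np.abs(a[:, None] - b[None, :]) / ls
    s = np.sqrt(5.0) * r
    return (1.0 + s + (5.0 / 3.0) * r * r) * np.exp(-s)

def ei(mu, sd, best):
    # Vectorized closed-form expected improvement (maximization)
    sd = np.maximum(sd, 1e-9)
    z = (mu - best) / sd
    Phi = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
    phi = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sd * phi

def bayes_opt(f, budget=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=5)          # initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0.0, 1.0, 201)          # candidate set (grid instead of L-BFGS)
    for _ in range(budget - len(X)):
        K = matern52_1d(X, X) + 1e-8 * np.eye(len(X))   # jitter for stability
        Ks = matern52_1d(grid, X)
        mu = Ks @ np.linalg.solve(K, y)                  # GP posterior mean
        cov_term = np.einsum("ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
        sd = np.sqrt(np.clip(1.0 - cov_term, 0.0, None)) # k(x, x) = 1 here
        x_next = grid[np.argmax(ei(mu, sd, y.max()))]    # acquisition step
        X, y = np.append(X, x_next), np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# maximize a smooth toy objective whose optimum is at x = 0.7
x_best, y_best = bayes_opt(lambda x: -(x - 0.7) ** 2)
```

Within roughly twenty evaluations the loop concentrates its queries near the optimum, illustrating why BO is attractive when every query is expensive.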
In theory, we may apply Bayesian optimization directly to the optimization problem in equation 1 to obtain an adversarial example, stopping once we find a point where the objective function rises above 0. In practice, Bayesian optimization's speed and overall performance fall dramatically as the input dimension of the problem increases. This makes running Bayesian optimization directly over high-dimensional inputs such as ImageNet (input dimension $3 \times 224 \times 224 = 150{,}528$) practically infeasible; we therefore require a method for reducing the dimension of this optimization problem.
4.2 BayesAttack: Generating Adversarial Examples using Bayesian Optimization
Images tend to exhibit spatially local similarity, i.e., pixels that are close to each other tend to be similar. Ilyas et al. [12] showed that this similarity also extends to gradients and used it to reduce query complexity. Our method uses this data-dependent prior to reduce the search dimension of the perturbation. We show that adversarial perturbations also exhibit spatially local similarity, so we need not learn a perturbation at the full dimension of the image. Instead, we learn the perturbation in a much lower dimension and obtain the final adversarial perturbation by interpolating the learned, low-dimensional perturbation up to the original input dimension.
We define the objective function for running Bayesian optimization in the low dimension in Algorithm 1. Let $\Pi_\epsilon$ be the projection onto the $\ell_\infty$ ball of radius $\epsilon$ centered at the origin. Our method finds a low-dimensional perturbation $\delta'$ and upsamples it to obtain the adversarial perturbation $\delta$. Since the upsampled perturbation may not lie inside the ball of radius $\epsilon$ centered at the origin, we project it back to ensure $\|\delta\|_\infty$ remains bounded by $\epsilon$. With the perturbation in hand, we compute the objective function of the original optimization problem defined in equation 1.
We describe our complete framework in Algorithm 2, where $x_0$ and $y_0$ denote the original input image and label, respectively. The goal is to learn an adversarial perturbation in a much lower dimension $d' \ll D$. We begin with a small dataset $\mathcal{D} = \{(\delta'_i, v_i)\}$, where each $\delta'_i$ is a $d'$-dimensional vector sampled from a given distribution and $v_i$ is the corresponding objective value computed as in Algorithm 1. We then iteratively update the posterior distribution of the GP using all available data and query new perturbations, obtained by maximizing the acquisition function over the current posterior, until we find an adversarial perturbation or exhaust the query budget. The Bayesian optimization iterations run in the low dimension, but to query the model we upsample, project, and then add the perturbation to the original image, as shown in Algorithm 1, so that the perturbed image conforms to the model's input space. To generate a successful adversarial perturbation, it is necessary and sufficient to have $f(x_0 + \delta, y_0) > 0$, as described in Section 3. We call our attack successful with $t'$ queries to the model if the Bayesian optimization loop exits after $t' \le t$ iterations (line 12 in Algorithm 2); otherwise it is unsuccessful. Finally, we note that the final adversarial image can be obtained by upsampling the learned perturbation and adding it to the original image, as shown in Figure 1.
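A minimal single-channel numpy sketch of the Algorithm 1 objective follows, under our own illustrative names (`model` is a hypothetical callable returning log-probabilities): upsample the low-dimensional perturbation with nearest-neighbor interpolation, clip it onto the $\ell_\infty$ ball of radius `eps`, add it to the image, and evaluate the attack objective:

```python
import numpy as np

def nn_upsample(delta_small, out_h, out_w):
    # Nearest-neighbor upsampling of an (h, w) perturbation to (out_h, out_w).
    h, w = delta_small.shape
    rows = (np.arange(out_h) * h) // out_h
    cols = (np.arange(out_w) * w) // out_w
    return delta_small[rows][:, cols]

def perturb_and_score(model, x, y, delta_small, eps):
    # Upsample, project onto the l-inf ball (a coordinate-wise clip),
    # add to the image, clip to valid pixel range, and query the model.
    delta = np.clip(nn_upsample(delta_small, *x.shape), -eps, eps)
    log_probs = model(np.clip(x + delta, 0.0, 1.0))
    mask = np.ones_like(log_probs, dtype=bool)
    mask[y] = False
    return np.max(log_probs[mask]) - log_probs[y]  # > 0 <=> misclassified
```

For multi-channel inputs the same idea applies per channel; the single-channel version keeps the sketch short.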
In this work, we focus on $\ell_\infty$-norm perturbations, for which the projection $\Pi_\epsilon$ is a coordinate-wise clip:

(3)  $\Pi_\epsilon(\delta)_i = \min(\max(\delta_i, -\epsilon), \epsilon)$
where $\epsilon$ is the given perturbation bound. The upsampling method can be linear or nonlinear. In this work, we conduct experiments using nearest-neighbor upsampling; a variational autoencoder [14] or a vanilla autoencoder could also be trained to map the low-dimensional perturbation to the original input space. We compare these upsampling schemes in Section 5.3.1. The initial dataset used to form the GP prior can be drawn from a standard normal distribution or a uniform distribution, or chosen deterministically (e.g., with Sobol sequences).
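A small sketch of these initial-design choices (function name, defaults, and ranges are ours, not the paper's); the deterministic Sobol alternative is noted in a comment:

```python
import numpy as np

def init_design(n, d, scheme="normal", seed=0):
    # n initial design points in a d-dimensional latent space (illustrative).
    rng = np.random.default_rng(seed)
    if scheme == "normal":
        return rng.standard_normal((n, d))
    if scheme == "uniform":
        return rng.uniform(-1.0, 1.0, size=(n, d))
    # A deterministic low-discrepancy alternative (Sobol sequences) is
    # available via scipy.stats.qmc.Sobol if scipy is installed.
    raise ValueError(f"unknown scheme: {scheme}")
```

Seeding makes the initial design reproducible across runs, which matters when comparing attack variants.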
5 Experiments
Our experiments focus on the untargeted attack setting, where the goal is to perturb an image originally classified correctly by the model so as to cause misclassification. We primarily consider the performance of BayesAttack on ImageNet classifiers and compare it to other black-box attacks in terms of success rate over a given query budget. We also perform ablation studies on the MNIST dataset [15], examining different upsampling techniques and varying the latent dimension of the optimization problem.
We define success rate as the ratio of the number of images successfully perturbed within a given query budget to the total number of input images. In all experiments, images already misclassified by the target network are excluded from the test set; only images initially classified with the correct label are attacked. For each attack method and target network, we compute the average and median number of queries used, taken over the successfully perturbed images.
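The reported metrics can be computed as in the following sketch (our own helper, not the paper's code), where a failed attack is recorded as `None`:

```python
import numpy as np

def attack_metrics(queries_used, budget):
    # queries_used: per-image query counts for successful attacks,
    # or None when the attack failed within the budget.
    succ = [q for q in queries_used if q is not None and q <= budget]
    success_rate = len(succ) / len(queries_used)
    avg_q = float(np.mean(succ)) if succ else float("nan")
    med_q = float(np.median(succ)) if succ else float("nan")
    return success_rate, avg_q, med_q
```

Average and median are taken over successes only, matching the protocol described above.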
5.1 Empirical Protocols
We treat the latent dimension used for running the Bayesian optimization loop as a hyperparameter. For MNIST, we tune the latent dimension over a set of candidate values up to 784, the original input dimension. For ImageNet, we search over a set of candidate latent shapes: shapes whose first (channel) dimension is 1 add the same perturbation to all three channels, while shapes with a first dimension of 3 allow the perturbation to differ across channels. On ImageNet, we found that for ResNet-50 and VGG16-bn, per-channel perturbations work much better than a shared perturbation across channels, while for Inception-v3 both work equally well.
We initialize the GP with samples drawn from a standard normal distribution. For all experiments in the next section, we use expected improvement as the acquisition function. We also examined other acquisition functions (posterior mean, probability of improvement, upper confidence bound) and observed that our method works equally well with them. We tune hyperparameters independently on a small validation set, which is excluded from the final test set. We use the BoTorch package (https://botorch.org/) for our implementation.
5.2 Experiments on ImageNet
We compare the performance of the proposed method BayesAttack against NES [11], Bandits-TD [12], and Parsimonious [18], the current state of the art among score-based black-box attacks within this threat model. On ImageNet, we attack pretrained ResNet-50 [10], Inception-v3 [27], and VGG16-bn [24] models (available at https://pytorch.org/docs/stable/torchvision/models). We use 10,000 randomly selected images (normalized to [0, 1]) from the ImageNet validation set that were initially correctly classified.
We fix the $\ell_\infty$ perturbation bound $\epsilon$ and evaluate the performance of all methods at low query budgets. For NES and Bandits-TD, we use the implementation (https://github.com/MadryLab/blackbox-bandits) and hyperparameters provided by Ilyas et al. [12]; similarly, for Parsimonious, we use the implementation (https://github.com/snu-mllab/parsimonious-blackbox-attack) and hyperparameters given by Moon et al. [18].
Figure 2 compares the performance of the proposed method BayesAttack against the baseline methods in terms of success rate at different query budgets. BayesAttack consistently performs better than the baseline methods at the smallest query budgets, and even at somewhat larger budgets it achieves better success rates than Bandits-TD and NES on ResNet-50 and VGG16-bn. Finally, we note that at higher query budgets, both Parsimonious and Bandits-TD perform better than BayesAttack.
Classifier      Method          Success Rate    Average Query    Median Query
ResNet-50       NES
                Bandits-TD
                Parsimonious
                BayesAttack
Inception-v3    NES
                Bandits-TD
                Parsimonious
                BayesAttack
VGG16-bn        NES
                Bandits-TD
                Parsimonious
                BayesAttack
To compare success rate and average/median queries, we select a point on the plots shown in Figure 2. Table 1 compares all methods in terms of success rate and average and median query count at a fixed query budget. BayesAttack achieves a higher success rate with fewer average queries than the next-best method, Parsimonious. Thus, we argue that although the Bayesian optimization adversarial attack is to some extent a "standard" application of traditional Bayesian optimization methods, its performance relative to the existing state of the art makes it a compelling approach, particularly in the very low query setting.
5.3 Experiments on MNIST
For MNIST, we use the pretrained network from Carlini and Wagner [2], composed of convolutional, max-pooling, and fully-connected layers, which achieves high accuracy on the MNIST test set. We conduct untargeted adversarial attacks with a fixed $\ell_\infty$ perturbation bound $\epsilon$ on a set of randomly sampled images from the test set. All experiments on MNIST follow the same protocols.
5.3.1 Upsampling Methods
The proposed method requires an upsampling technique for mapping the perturbation learned in the latent dimension to the original input dimension. In this section, we examine different linear and nonlinear upsampling schemes and compare their performance on MNIST. The approaches we consider can be divided into two broad groups: encoder-decoder-based methods and interpolation methods. For interpolation-based methods, we consider nearest-neighbor, bilinear, and bicubic interpolation.
For encoder-decoder-based approaches, we train a variational autoencoder (VAE) [14, 22] by maximizing a variational lower bound on the log marginal likelihood; we also consider a simple autoencoder trained by minimizing the mean squared error between the generated and original images. For both approaches, we run the Bayesian optimization loop in the latent space and use the pretrained decoder (or generator) to map latent vectors into image space. With these approaches, rather than searching for an adversarial perturbation in the latent space, we learn the adversarial image directly using Bayesian optimization.
Figure 2(a) compares the performance of the different upsampling methods. Nearest-neighbor (NN) interpolation and the VAE-based decoder perform better than the rest of the upsampling schemes; moreover, NN interpolation matches the performance of the VAE-based method without the large training dataset required to train an accurate VAE decoder.
5.3.2 Latent Dimension Sensitivity Analysis
We perform a sensitivity analysis on the latent-dimension hyperparameter used for running the Bayesian optimization, varying it over a range of values up to the original input dimension. Figure 2(b) shows the performance of the nearest-neighbor interpolation method for different latent dimensions. We observe that lower latent dimensions achieve better success rates than the original input dimension for MNIST; this could be because, as the search dimension increases, Bayesian optimization needs more queries to find a successful perturbation. We also note that at the very lowest latent dimension, BayesAttack achieves lower success rates, which may indicate that it is hard to find adversarial perturbations in such a low dimension.
6 Conclusions
We considered the problem of black-box adversarial attacks in settings with constrained query budgets and employed a Bayesian-optimization-based method to construct a query-efficient attack strategy. The proposed method searches for an adversarial perturbation in a low-dimensional latent space using Bayesian optimization and then maps the perturbation to the original input space using nearest-neighbor upsampling. We demonstrated the efficacy of our method in attacking multiple deep learning architectures on high-dimensional inputs. Our work opens avenues for applying Bayesian optimization to adversarial attacks in high-dimensional settings.
References
 Brendel et al. [2017] W. Brendel, J. Rauber, and M. Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017.
 Carlini and Wagner [2017] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57, May 2017. doi: 10.1109/SP.2017.49.
 Chen et al. [2019] J. Chen, M. I. Jordan, and M. J. Wainwright. HopSkipJumpAttack: A query-efficient decision-based attack. arXiv preprint arXiv:1904.02144, 2019.

 Chen et al. [2017] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, and C.-J. Hsieh. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec '17, pages 15–26, New York, NY, USA, 2017. ACM. ISBN 978-1-4503-5202-4. doi: 10.1145/3128572.3140448.
 Cheng et al. [2018] M. Cheng, T. Le, P.-Y. Chen, J. Yi, H. Zhang, and C.-J. Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. arXiv preprint arXiv:1807.04457, 2018.
 Co et al. [2018] K. T. Co, L. Muñoz-González, and E. C. Lupu. Procedural noise adversarial examples for black-box attacks on deep neural networks. arXiv preprint arXiv:1810.00470, 2018.
 Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.J. Li, K. Li, and L. FeiFei. ImageNet: A LargeScale Hierarchical Image Database. In CVPR09, 2009.
 Frazier [2018] P. I. Frazier. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811, 2018.
 Goodfellow et al. [2014] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

 He et al. [2015] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
 Ilyas et al. [2018] A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Black-box adversarial attacks with limited queries and information. In J. Dy and A. Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2137–2146, Stockholmsmässan, Stockholm, Sweden, 10–15 Jul 2018. PMLR.
 Ilyas et al. [2019] A. Ilyas, L. Engstrom, and A. Madry. Prior convictions: Black-box adversarial attacks with bandits and priors. In International Conference on Learning Representations, 2019.
 Jones et al. [1998] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. J. of Global Optimization, 13(4):455–492, Dec. 1998. ISSN 0925-5001. doi: 10.1023/A:1008306431147.
 Kingma and Welling [2014] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16, 2014, Conference Track Proceedings, 2014.
 Lecun et al. [1998] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324, 1998.
 Liu et al. [2016] Y. Liu, X. Chen, C. Liu, and D. Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016.
 Madry et al. [2017] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

 Moon et al. [2019] S. Moon, G. An, and H. O. Song. Parsimonious black-box adversarial attacks via efficient combinatorial optimization. In K. Chaudhuri and R. Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 4636–4645, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
 Muñoz-González [2017] L. Muñoz-González. Bayesian Optimization for Black-Box Evasion of Machine Learning Systems. PhD thesis, Imperial College London, 2017.
 Papernot et al. [2016] N. Papernot, P. D. McDaniel, I. J. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. CoRR, abs/1602.02697, 2016.
 Rasmussen and Williams [2005] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005. ISBN 026218253X.
 Rezende et al. [2014] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
 Shahriari et al. [2016] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148–175, 2016.
 Simonyan and Zisserman [2014] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. arXiv:1409.1556.
 Snoek et al. [2012] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2951–2959. Curran Associates, Inc., 2012.
 Suya et al. [2017] F. Suya, Y. Tian, D. Evans, and P. Papotti. Query-limited black-box attacks to classifiers. arXiv preprint arXiv:1712.08713, 2017.
 Szegedy et al. [2015] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818–2826, 2016.
 Tu et al. [2019] C. Tu, P. Ting, P. Chen, S. Liu, H. Zhang, J. Yi, C. Hsieh, and S. Cheng. AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, Honolulu, Hawaii, USA, January 27–February 1, 2019, pages 742–749, 2019.