Automated Decision-based Adversarial Attacks

05/09/2021
by Qi-An Fu, et al.

Deep learning models are vulnerable to adversarial examples, which fool a target classifier by adding imperceptible perturbations to natural examples. In this work, we consider the practical and challenging decision-based black-box adversarial setting, where the attacker can only obtain the final classification labels by querying the target model, without access to the model's internals. Under this setting, existing methods often rely on heuristics and exhibit unsatisfactory performance. To better understand the rationale behind these heuristics and the limitations of existing methods, we propose to automatically discover decision-based adversarial attack algorithms. We construct a search space using basic mathematical operations as building blocks and develop a random search algorithm that explores this space efficiently by incorporating several pruning techniques and intuitive priors inspired by program synthesis. Although we use a small and fast model to evaluate attack algorithms during the search, extensive experiments on the CIFAR-10 and ImageNet datasets demonstrate that the discovered algorithms are simple yet query-efficient when transferred to larger normal and defensive models, consistently achieving performance comparable to or better than state-of-the-art decision-based attack methods.
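To make the decision-based setting concrete, the sketch below shows the general recipe such attacks follow: the attacker sees only the predicted label for each query and uses random search to shrink the perturbation while staying misclassified. This is a minimal illustration of the attack family (in the spirit of boundary-style attacks), not the paper's discovered algorithms; the toy linear classifier, `query_label` oracle, and step sizes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the black-box target: a linear classifier whose weights
# the attacker never inspects -- only the argmax label is ever returned.
W = rng.normal(size=(10, 32))  # 10 classes, 32-dimensional inputs

def query_label(x):
    """Decision-based oracle: returns only the final classification label."""
    return int(np.argmax(W @ x))

def decision_attack(x_nat, x_adv_init, n_queries=2000, step=0.2, noise_scale=0.05):
    """Random-search decision-based attack sketch: keep a misclassified point
    and greedily move it toward the natural example while it stays adversarial."""
    true_label = query_label(x_nat)
    x_adv = x_adv_init.copy()
    assert query_label(x_adv) != true_label, "init must already be adversarial"
    for _ in range(n_queries):
        # Propose: contract toward x_nat plus a small random exploration move.
        candidate = (x_adv + step * (x_nat - x_adv)
                     + noise_scale * rng.normal(size=x_nat.shape))
        # Accept only if the proposal is still misclassified AND strictly
        # closer to the natural example; each acceptance costs one query.
        if (query_label(candidate) != true_label
                and np.linalg.norm(candidate - x_nat) < np.linalg.norm(x_adv - x_nat)):
            x_adv = candidate
    return x_adv
```

Because candidates are accepted only when they remain misclassified and reduce the distance to the natural example, the perturbation norm is non-increasing over the run; query efficiency is exactly what the paper's search optimizes for, since every oracle call is charged to the attacker's budget.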


