Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization

05/16/2019
by Seungyong Moon, et al.

Solving for adversarial examples with projected gradient descent has been demonstrated to be highly effective in fooling neural-network-based classifiers. However, in the black-box setting, the attacker has only query access to the network, and finding a successful adversarial example becomes much more difficult. Recent methods therefore aim to estimate the true gradient signal from input queries, but at the cost of an excessive number of queries. We propose an efficient discrete surrogate of the optimization problem that does not require gradient estimation and is consequently free of the first-order update hyperparameters that would otherwise need tuning. Our experiments on CIFAR-10 and ImageNet show state-of-the-art black-box attack performance with a significant reduction in the number of required queries compared to a number of recently proposed methods. The source code is available at https://github.com/snu-mllab/parsimonious-blackbox-attack.
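As we understand the abstract, the discrete surrogate restricts the perturbation to extreme values (roughly, +eps or -eps per region) and searches over that discrete set using only model queries, with no gradient estimate. Below is a minimal Python sketch of that idea, not the authors' implementation: query_loss is a hypothetical black-box oracle returning the attack objective from model queries, and the block size, eps, and query budget are illustrative placeholders. The paper's actual algorithm is more refined, so this is only meant to convey the gradient-free, hyperparameter-light flavor.

    import numpy as np

    def perturb(x, signs, eps, block):
        # Expand block-level signs to pixel resolution and clip to the valid image range [0, 1].
        return np.clip(x + eps * np.kron(signs, np.ones((block, block, 1))), 0.0, 1.0)

    def greedy_blockwise_attack(x, query_loss, eps=8 / 255, block=4, max_queries=1000):
        """Greedy local search over a {-eps, +eps} block-wise perturbation.

        A sketch under stated assumptions: query_loss(x_adv) is a hypothetical
        black-box oracle that returns the attack objective (e.g., a margin loss)
        computed from model queries alone.
        """
        h, w, c = x.shape
        # Start with every block set to +eps.
        signs = np.ones((h // block, w // block, c))
        best = query_loss(perturb(x, signs, eps, block))
        queries = 1

        improved = True
        while improved and queries < max_queries:
            improved = False
            for i in range(signs.shape[0]):
                for j in range(signs.shape[1]):
                    for k in range(c):
                        if queries >= max_queries:
                            return perturb(x, signs, eps, block)
                        # Tentatively flip one block and keep the flip only if the loss improves.
                        signs[i, j, k] *= -1
                        loss = query_loss(perturb(x, signs, eps, block))
                        queries += 1
                        if loss > best:
                            best = loss
                            improved = True
                        else:
                            signs[i, j, k] *= -1  # revert the flip
        return perturb(x, signs, eps, block)

Because each candidate update only flips a block between the two extremes of the l-infinity ball, there is no step size, momentum, or gradient-estimation variance to tune; the only knobs in this sketch are the block size and the query budget.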

Related research

05/28/2018 · GenAttack: Practical Black-box Attacks with Gradient-Free Optimization
Deep neural networks (DNNs) are vulnerable to adversarial examples, even...

09/15/2020 · Switching Gradient Directions for Query-Efficient Black-Box Adversarial Attacks
We propose a simple and highly query-efficient black-box adversarial att...

09/02/2020 · MetaSimulator: Simulating Unknown Target Models for Query-Efficient Black-box Attacks
Many adversarial attacks have been proposed to investigate the security ...

11/25/2020 · SurFree: a fast surrogate-free black-box attack
Machine learning classifiers are critically prone to evasion attacks. Ad...

11/03/2022 · Data-free Defense of Black Box Models Against Adversarial Attacks
Several companies often safeguard their trained deep models (i.e. detail...

09/16/2022 · A Large-scale Multiple-objective Method for Black-box Attack against Object Detection
Recent studies have shown that detectors based on deep models are vulner...

10/21/2020 · Learning Black-Box Attackers with Transferable Priors and Query Feedback
This paper addresses the challenging black-box adversarial attack proble...