EvoBA: An Evolution Strategy as a Strong Baseline for Black-Box Adversarial Attacks

07/12/2021
by Andrei Ilie et al.

Recent work has shown how easily white-box adversarial attacks can be applied to state-of-the-art image classifiers. However, real-life scenarios more closely resemble black-box adversarial conditions: the attacker lacks transparency into the model and typically faces hard, natural constraints on the query budget. We propose EvoBA, a black-box adversarial attack based on a surprisingly simple evolutionary search strategy. EvoBA is query-efficient, minimizes L_0 adversarial perturbations, and does not require any form of training. EvoBA demonstrates both efficiency and efficacy, achieving results in line with much more complex state-of-the-art black-box attacks such as AutoZOOM. It is more query-efficient than SimBA, a simple and powerful baseline black-box attack, while having a similar level of complexity. We therefore propose it both as a new strong baseline for black-box adversarial attacks and as a fast, general tool for gaining empirical insight into the robustness of image classifiers with respect to L_0 adversarial perturbations. Fast and reliable L_2 black-box attacks, such as SimBA, and L_∞ black-box attacks, such as DeepSearch, already exist. We propose EvoBA as a query-efficient L_0 black-box adversarial attack which, together with the aforementioned methods, can serve as a generic tool for assessing the empirical robustness of image classifiers. The main advantages of such methods are that they run fast, are query-efficient, and can easily be integrated into image classifier development pipelines. While our attack minimizes the L_0 adversarial perturbation, we also report L_2 distances and observe that we compare favorably to AutoZOOM, the state-of-the-art L_2 black-box attack, and to SimBA, the strong L_2 baseline.
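To make the general idea concrete, below is a minimal sketch of an evolutionary L_0 black-box attack in the spirit of the abstract. The `classify` interface, the parameter names (`max_queries`, `offspring`, `pixels_per_step`), and the (1 + λ) selection scheme are illustrative assumptions, not the authors' exact EvoBA algorithm: each generation mutates a few random pixels (keeping the L_0 perturbation small) and keeps the offspring that most lowers the true-class probability, using only query access to the model.

```python
import numpy as np

def evo_attack(classify, image, true_label, max_queries=10_000,
               offspring=20, pixels_per_step=1, seed=0):
    """Sketch of a (1 + lambda) evolutionary L_0 black-box attack.

    `classify` is assumed to map an image of shape (H, W, C) with
    values in [0, 1] to a vector of class probabilities. This is an
    illustration of the general idea, not the exact EvoBA algorithm.
    """
    rng = np.random.default_rng(seed)
    h, w, c = image.shape
    parent = image.copy()
    parent_score = classify(parent)[true_label]  # true-class probability
    queries = 1

    while queries < max_queries:
        best_child, best_score = None, parent_score
        for _ in range(offspring):
            child = parent.copy()
            # Mutate a handful of random pixels, keeping the L_0 norm small.
            for _ in range(pixels_per_step):
                y, x = rng.integers(h), rng.integers(w)
                child[y, x] = rng.uniform(0.0, 1.0, size=c)
            probs = classify(child)
            queries += 1
            if np.argmax(probs) != true_label:
                return child, queries           # misclassified: success
            if probs[true_label] < best_score:  # fitter = lower true-class prob
                best_child, best_score = child, probs[true_label]
        if best_child is not None:              # keep the fittest offspring
            parent, parent_score = best_child, best_score
    return None, queries                        # query budget exhausted
```

Because selection only needs the true-class probability, a sketch like this works in the score-based black-box setting and needs no gradients or surrogate training; the number of mutated pixels directly bounds the L_0 distance added per accepted generation.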


Related research

01/29/2021 · You Only Query Once: Effective Black Box Adversarial Attacks with Minimal Repeated Queries
Researchers have repeatedly shown that it is possible to craft adversari...

05/11/2020 · Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data
Adversarial black-box attacks aim to craft adversarial perturbations by ...

05/17/2019 · Simple Black-box Adversarial Attacks
We propose an intriguingly simple method for the construction of adversa...

04/05/2023 · A Certified Radius-Guided Attack Framework to Image Segmentation Models
Image segmentation is an important problem in many safety-critical appli...

04/24/2020 · One Sparse Perturbation to Fool them All, almost Always!
Constructing adversarial perturbations for deep neural networks is an im...

11/08/2021 · Geometrically Adaptive Dictionary Attack on Face Recognition
CNN-based face recognition models have brought remarkable performance im...

09/09/2021 · Energy Attack: On Transferring Adversarial Examples
In this work we propose Energy Attack, a transfer-based black-box L_∞-ad...
