AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

07/15/2020
by   Hadi M. Dolatabadi, et al.

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial examples. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We show that the proposed method generates adversaries that closely follow the clean data distribution, a property that makes their detection less likely. Moreover, our experimental results show that the proposed approach is competitive with existing attack methods on defended classifiers, outperforming them in both the number of queries and the attack success rate. The code is available at https://github.com/hmdolatabadi/AdvFlow.
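The core idea sketched in the abstract can be illustrated in a few lines of code. The following is a minimal, hypothetical sketch and not the authors' implementation: the flow is a single untrained affine coupling layer standing in for a learned normalizing flow, and `loss_fn` stands in for black-box queries to the target classifier. All names and parameters here are illustrative assumptions.

```python
import numpy as np

class AffineCoupling:
    """Toy affine coupling layer: split z in half, transform one half
    conditioned on the other. Invertible, with a tractable Jacobian,
    which is the defining property of normalizing flows."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        half = dim // 2
        # Hypothetical fixed weights standing in for learned networks.
        self.W_s = 0.1 * rng.standard_normal((half, half))
        self.W_t = 0.1 * rng.standard_normal((half, half))

    def forward(self, z):
        z1, z2 = np.split(z, 2, axis=-1)
        s = np.tanh(z1 @ self.W_s)        # log-scale, bounded for stability
        t = z1 @ self.W_t                 # translation
        return np.concatenate([z1, z2 * np.exp(s) + t], axis=-1)

    def inverse(self, y):
        y1, y2 = np.split(y, 2, axis=-1)
        s = np.tanh(y1 @ self.W_s)
        t = y1 @ self.W_t
        return np.concatenate([y1, (y2 - t) * np.exp(-s)], axis=-1)

def attack_step(x, loss_fn, flow, n_samples=64, eps=8 / 255, seed=1):
    """One black-box step: sample latent codes, map them through the flow
    to perturbations around the clean image x, clip to the L-inf ball,
    and keep the candidate that maximizes the (queried) classifier loss."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, x.size))
    delta = flow.forward(z)
    delta = np.clip(delta, -eps, eps)            # keep perturbation small
    candidates = np.clip(x + delta, 0.0, 1.0)    # stay in valid pixel range
    losses = np.array([loss_fn(c) for c in candidates])
    return candidates[np.argmax(losses)]
```

Because the flow is invertible, every perturbation it emits corresponds to an exact latent code, which is what lets the density of adversarial examples around the target image be modeled and searched; a real attack would additionally train the flow and update the latent distribution from query feedback.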


Related research:

- 07/06/2020, Black-box Adversarial Example Generation with Normalizing Flows: "Deep neural network classifiers suffer from adversarial vulnerability: w..."
- 11/02/2022, Improving transferability of 3D adversarial attacks with scale and shear transformations: "Previous work has shown that 3D point cloud classifiers can be vulnerabl..."
- 12/26/2020, Sparse Adversarial Attack to Object Detection: "Adversarial examples have gained tons of attention in recent years. Many..."
- 06/10/2021, InFlow: Robust outlier detection utilizing Normalizing Flows: "Normalizing flows are prominent deep generative models that provide trac..."
- 06/14/2021, PopSkipJump: Decision-Based Attack for Probabilistic Classifiers: "Most current classifiers are vulnerable to adversarial examples, small i..."
- 10/22/2020, An Efficient Adversarial Attack for Tree Ensembles: "We study the problem of efficient adversarial attacks on tree based ense..."
- 03/10/2020, Using an ensemble color space model to tackle adversarial examples: "Minute pixel changes in an image drastically change the prediction that ..."
