AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows

07/15/2020 ∙ by Hadi M. Dolatabadi, et al.

Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We show that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely. Moreover, our experimental results show that the proposed approach performs competitively with existing attack methods on defended classifiers, outperforming them in both the number of queries and the attack success rate. The code is available at https://github.com/hmdolatabadi/AdvFlow.
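To make the core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation) of how a single affine coupling layer, the building block of flows such as RealNVP, can map Gaussian base noise into structured, bounded perturbations around a clean image. The image size, the random weight matrices `W_s`/`W_t`, and the L-infinity budget `eps` are all illustrative assumptions; in AdvFlow the flow would be trained and its latent input searched over via black-box queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a flattened 8x8 grayscale "image" with pixels in [0, 1].
d = 64
x_clean = rng.uniform(0.0, 1.0, size=d)
eps = 8 / 255  # assumed L-infinity perturbation budget

# One affine coupling layer: the first half of the input passes through
# unchanged and parameterizes a scale/shift of the second half.
# Here the scale/shift functions are tiny fixed random linear maps.
W_s = 0.1 * rng.standard_normal((d // 2, d // 2))
W_t = 0.1 * rng.standard_normal((d // 2, d // 2))

def coupling_forward(z):
    """Map base noise z to a structured perturbation via affine coupling."""
    z1, z2 = z[: d // 2], z[d // 2 :]
    s = np.tanh(W_s @ z1)  # log-scale, bounded for numerical stability
    t = W_t @ z1           # shift
    return np.concatenate([z1, z2 * np.exp(s) + t])

# Sample a small batch of candidate perturbations from the base density.
z = rng.standard_normal((16, d))
perturb = np.array([coupling_forward(zi) for zi in z])

# Squash into the epsilon ball and form candidate adversarial examples.
perturb = eps * np.tanh(perturb)
x_adv = np.clip(x_clean + perturb, 0.0, 1.0)
```

Because the flow is an invertible map from a simple base density, every candidate `x_adv` has a tractable likelihood under the model, which is what lets the method keep its adversaries close to the clean data distribution.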



Code Repositories

AdvFlow

The official repository of the paper: "AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows".

