Making targeted black-box evasion attacks effective and efficient

06/08/2019
by Mika Juuti, et al.

We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting. We formalize the problem setting and systematically evaluate what benefits the adversary can gain by using substitute models. We show that there is an exploration-exploitation tradeoff: query efficiency comes at the cost of effectiveness. We present two new attack strategies that use substitute models and show that they are as effective as previous query-only techniques while requiring significantly fewer queries, by up to three orders of magnitude. We also show that an agile adversary capable of switching between different attack techniques can achieve Pareto-optimal efficiency. We demonstrate our attack against Google Cloud Vision, showing that black-box attacks against real-world prediction APIs are significantly easier than previously thought (requiring approximately 500 queries instead of the approximately 20,000 reported in previous work).
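The core idea of substitute-model attacks described above, crafting the adversarial example locally and spending black-box queries only to verify success on the target, can be sketched roughly as follows. This is an illustrative sketch under assumed details, not the authors' actual algorithm: `substitute`, `query_target`, and all hyperparameters are hypothetical placeholders for a locally trained model and the remote prediction API.

```python
# Minimal sketch of a targeted transfer attack with query-based verification.
# Assumptions: `substitute` is a local differentiable classifier, and
# `query_target(x)` returns the black-box API's predicted class (one query).
import torch
import torch.nn.functional as F

def targeted_transfer_attack(substitute, query_target, x, target_class,
                             eps=0.05, alpha=0.005, steps=200,
                             check_every=10, query_budget=500):
    """PGD-style targeted attack on the substitute model; the black-box
    target is queried only every `check_every` steps to confirm success."""
    x_adv = x.clone().detach()
    queries = 0
    for step in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(substitute(x_adv), target_class)
        grad, = torch.autograd.grad(loss, x_adv)
        # Targeted step: descend the substitute's loss toward the target class.
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the L-infinity ball around the original input.
        x_adv = torch.clamp(x + torch.clamp(x_adv - x, -eps, eps), 0.0, 1.0)
        # Spend a single query on the target API to check for success.
        if step % check_every == 0 and queries < query_budget:
            queries += 1
            if query_target(x_adv) == int(target_class):
                return x_adv, queries  # targeted evasion succeeded
    return x_adv, queries  # step limit or query budget exhausted
```

Querying the target only periodically, rather than estimating its gradients through thousands of queries, is what keeps the query count in the hundreds rather than the tens of thousands reported for query-only attacks.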
