Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries

08/19/2019
by   Fnu Suya, et al.

In a black-box setting, the adversary only has API access to the target model and each query is expensive. Prior work on black-box adversarial examples follows one of two main strategies: (1) transfer attacks use white-box attacks on local models to find candidate adversarial examples that transfer to the target model, and (2) optimization-based attacks use queries to the target model and apply optimization techniques to search for adversarial examples. We propose hybrid attacks that combine both strategies, using candidate adversarial examples from local models as starting points for optimization-based attacks and using labels learned in optimization-based attacks to tune local models for finding transfer candidates. We empirically demonstrate on the MNIST, CIFAR10, and ImageNet datasets that our hybrid attack strategy reduces cost and improves success rates, and in combination with our seed prioritization strategy, enables batch attacks that can efficiently find adversarial examples with only a handful of queries.
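The two-stage idea in the abstract can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's implementation: two hand-picked linear classifiers stand in for the local (white-box) and target (black-box) models, stage 1 takes one FGSM-style transfer step on the local model, and stage 2 falls back to a label-only random search if the candidate does not transfer. A small `prioritize_seeds` helper (also hypothetical) sketches the seed-prioritization idea of attacking low-margin seeds first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two models: similar but not identical
# linear classifiers, so transfer candidates usually (not always) work.
w_local = np.array([1.0, -1.0, 0.5])    # local model: full white-box access
w_target = np.array([0.9, -1.1, 0.6])   # target model: label queries only

def local_grad(x, y):
    # White-box gradient of the margin y * (w_local @ x) w.r.t. x.
    return y * w_local

def query_target(x):
    # Black-box query: the target reveals only its predicted label (+1/-1).
    return 1 if w_target @ x >= 0 else -1

def hybrid_attack(x, y, eps=1.0, steps=200, sigma=0.5):
    """Hybrid strategy sketch: seed the query-based search with a transfer
    candidate crafted on the local model, instead of starting from x."""
    queries = 0
    # Stage 1 (transfer): one FGSM-style step against the local model.
    candidate = x - eps * np.sign(local_grad(x, y))
    queries += 1
    if query_target(candidate) != y:
        return candidate, queries  # candidate transferred directly
    # Stage 2 (optimization): label-only random search around the candidate.
    for _ in range(steps):
        perturbed = candidate + sigma * rng.standard_normal(x.shape)
        queries += 1
        if query_target(perturbed) != y:
            return perturbed, queries
    return None, queries  # attack failed within the query budget

def prioritize_seeds(seeds, labels):
    # Seed prioritization sketch: attack the seeds with the smallest
    # local-model margin first, as they likely need the fewest queries.
    margins = [y * (w_local @ x) for x, y in zip(seeds, labels)]
    order = np.argsort(margins)
    return [seeds[i] for i in order], [labels[i] for i in order]
```

In this toy setup, a seed like `x = [1, 0, 0]` with label `+1` transfers on the very first candidate, so the target is queried only once; when the models diverge more, stage 2 spends additional queries refining the transfer candidate rather than searching from scratch.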
