Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes

07/13/2020
by Satya Narayan Shukla, et al.

We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples for deep learning models using only the output labels (hard labels) returned for queried inputs. We use Bayesian optimization (BO) to develop query-efficient adversarial attacks tailored to low query budgets. The well-known difficulties BO faces in high dimensions are avoided by searching for adversarial examples in a structured low-dimensional subspace. Our proposed approach matches or exceeds the performance of state-of-the-art black-box adversarial attacks that require orders of magnitude more queries.
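To make the idea concrete, the sketch below runs Bayesian optimization over a small perturbation grid that is upsampled to the full image, querying the victim model through a hard-label oracle. It is illustrative only, not the authors' exact algorithm: query_label is a hypothetical helper returning just the predicted class id, gp_minimize from scikit-optimize stands in for the paper's BO routine, and the grid size, budget, and 0/1 objective are assumptions.

# Minimal sketch (not the authors' exact algorithm): Bayesian optimization
# over a low-dimensional perturbation grid, queried through a hard-label
# oracle. `query_label(x)` is a hypothetical helper returning only the
# victim model's predicted class id for input x. Constants are assumptions.
import numpy as np
from skopt import gp_minimize  # generic BO routine from scikit-optimize

IMG_SHAPE = (32, 32, 3)  # e.g. a CIFAR-10 input with pixels in [0, 1]
LATENT = 3               # the perturbation is searched on a 3x3x3 grid
EPS = 0.05               # L_inf perturbation budget after upsampling
N_QUERIES = 100          # low query budget: one model query per BO step

def upsample(z):
    """Nearest-neighbour upsample the low-dim perturbation to image size."""
    z = np.asarray(z).reshape(LATENT, LATENT, IMG_SHAPE[2])
    fy = -(-IMG_SHAPE[0] // LATENT)  # ceiling division
    fx = -(-IMG_SHAPE[1] // LATENT)
    big = np.repeat(np.repeat(z, fy, axis=0), fx, axis=1)
    return big[:IMG_SHAPE[0], :IMG_SHAPE[1], :]

def attack(x, y_true, query_label):
    """Search the structured subspace with BO; each objective call = 1 query."""
    def objective(z):
        x_adv = np.clip(x + EPS * upsample(z), 0.0, 1.0)
        # Hard-label feedback only: 0.0 once the predicted class flips.
        return 0.0 if query_label(x_adv) != y_true else 1.0

    res = gp_minimize(
        objective,
        dimensions=[(-1.0, 1.0)] * (LATENT * LATENT * IMG_SHAPE[2]),
        n_calls=N_QUERIES,
        n_initial_points=10,
        random_state=0,
        callback=[lambda r: r.fun == 0.0],  # stop once the attack succeeds
    )
    if res.fun == 0.0:
        return np.clip(x + EPS * upsample(res.x), 0.0, 1.0)
    return None  # no adversarial example found within the query budget

The 0/1 objective above is the crudest possible hard-label signal, and the paper's actual surrogate objective and BO formulation may differ; what the sketch illustrates is the structure of the approach, optimizing in a low-dimensional subspace and upsampling to the input.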


Related Research

09/30/2019 · Black-box Adversarial Attacks with Bayesian Optimization
We focus on the problem of black-box adversarial attacks, where the aim ...

08/25/2019 · Adversarial Edit Attacks for Tree Data
Many machine learning models can be attacked with adversarial examples, ...

05/19/2020 · Adversarial Attacks for Embodied Agents
Adversarial attacks are valuable for providing insights into the blind-s...

06/14/2019 · Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
Many optimization methods for generating black-box adversarial examples ...

09/30/2021 · Mitigating Black-Box Adversarial Attacks via Output Noise Perturbation
In black-box adversarial attacks, adversaries query the deep neural netw...

01/31/2022 · MEGA: Model Stealing via Collaborative Generator-Substitute Networks
Deep machine learning models are increasingly deployed in the wild for pr...

06/25/2018 · Exploring Adversarial Examples: Patterns of One-Pixel Attacks
Failure cases of black-box deep learning, e.g. adversarial examples, mig...
