Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors

07/20/2018
by   Andrew Ilyas, et al.

We introduce a framework that unifies existing work on black-box adversarial example generation. We demonstrate that the current state of the art in the field is optimal in a certain natural sense. Despite this optimality, we show how to improve black-box attacks by bringing a new element into the problem: ambient priors for the gradient. We identify two such priors, and give an algorithm based on bandit optimization that allows for seamless integration of these and other priors. Our framework leads to methods that are two to three times more query-efficient and have failure rates two to three times smaller than the state-of-the-art approaches.
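The core idea of the bandit approach can be sketched in a few lines: instead of re-estimating the gradient from scratch at every step, the attacker maintains a running latent vector (the prior) and spends each pair of queries refining it via antithetic finite differences. The snippet below is a minimal illustration of that idea, not the authors' implementation; the linear toy loss, the function names, and all hyperparameter values are assumptions chosen so the estimator can be sanity-checked against a known gradient.

```python
import numpy as np

# Toy stand-in for the black-box adversarial loss: in a real attack we could
# only query the model for loss values, never gradients. A linear loss makes
# the true gradient known everywhere, so the estimate can be checked.
TRUE_GRAD = np.array([0.5, -1.0, 2.0])

def loss(x):
    # Each call models one query to the black-box classifier.
    return TRUE_GRAD @ x

def bandit_gradient_estimate(x, steps=2000, lr=0.01, eps=0.1, delta=0.01):
    """Bandit-style gradient estimation with a running prior.

    v is the latent gradient estimate carried across rounds; each round
    spends two queries on an antithetic pair and nudges v toward the true
    gradient (a sketch of the framework described in the abstract).
    """
    v = np.zeros_like(x)
    for _ in range(steps):
        u = np.random.randn(*x.shape)            # random exploration direction
        q_plus = loss(x + delta * (v + eps * u))  # two finite-difference
        q_minus = loss(x + delta * (v - eps * u))  # queries per round
        g = ((q_plus - q_minus) / (2 * eps * delta)) * u  # unbiased estimate
        v += lr * g                              # update the prior
    return v

np.random.seed(0)
v = bandit_gradient_estimate(np.zeros(3))
# After enough rounds the sign pattern of v should match the true gradient,
# which is all a sign-based attack step (e.g. an l-infinity PGD step) needs.
print(np.sign(v) == np.sign(TRUE_GRAD))
```

Because `E[(g·u)u] = g` for standard Gaussian `u`, the accumulated prior `v` points in the direction of the true gradient in expectation, which is why only the sign agreement is checked rather than the magnitude.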

Related research:

- 02/19/2019  There are No Bit Parts for Sign Bits in Black-Box Attacks
  Machine learning models are vulnerable to adversarial examples. In this ...
- 09/30/2019  Black-box Adversarial Attacks with Bayesian Optimization
  We focus on the problem of black-box adversarial attacks, where the aim ...
- 07/13/2019  Distributed Black-Box Optimization via Error Correcting Codes
  We introduce a novel distributed derivative-free optimization framework ...
- 03/16/2022  Attacking deep networks with surrogate-based adversarial black-box methods is easy
  A recent line of work on black-box adversarial attacks has revived the u...
- 08/24/2022  Attacking Neural Binary Function Detection
  Binary analyses based on deep neural networks (DNNs), or neural binary a...
- 04/04/2017  Learning Approximately Objective Priors
  Informative Bayesian priors are often difficult to elicit, and when this...
- 08/31/2021  Half-Space and Box Constraints as NUV Priors: First Results
  Normals with unknown variance (NUV) can represent many useful priors and...
