An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks

12/03/2020
by Giuseppe Ughi, et al.

We perform a comprehensive study of the performance of derivative-free optimization (DFO) algorithms for generating targeted black-box adversarial attacks on Deep Neural Network (DNN) classifiers, assuming the perturbation energy is bounded by an ℓ_∞ constraint and the number of queries to the network is limited. This paper considers four pre-existing state-of-the-art DFO-based algorithms and introduces a new algorithm built on BOBYQA, a model-based DFO method. We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify given a maximum number of queries to the DNN. The experiments show how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack: algorithms that limit the search for adversarial examples to the vertices of the ℓ_∞ constraint work particularly well in the absence of structural defenses, while the presented BOBYQA-based algorithm works better for especially small perturbation energies. This variance in performance highlights the importance of comparing new algorithms to the state-of-the-art in a variety of settings, and of testing adversarial defenses against as wide a range of algorithms as possible.
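The attack setting the abstract describes — minimizing a targeted attack loss using only function evaluations of the network, with the perturbation restricted to vertices of the ℓ_∞ ball — can be sketched as below. This is a minimal illustrative coordinate-flip search, not any of the paper's actual algorithms; the name `linf_vertex_attack` and the toy greedy update rule are assumptions for the sake of the sketch.

```python
import numpy as np

def linf_vertex_attack(loss, x, eps, max_queries, rng=None):
    """Query-limited black-box search restricted to vertices of the
    l_inf ball around x: every coordinate of the perturbation is
    exactly +eps or -eps. `loss` is the scalar targeted attack loss
    to minimize; only its values are used (derivative-free).
    Illustrative sketch, not one of the paper's benchmarked methods."""
    rng = np.random.default_rng(rng)
    # Start at a random vertex of the l_inf ball.
    sign = rng.choice([-1.0, 1.0], size=x.shape)
    best = loss(x + eps * sign)
    queries = 1
    while queries < max_queries:
        i = rng.integers(sign.size)     # pick one coordinate at random
        sign.flat[i] *= -1.0            # move to the neighboring vertex
        trial = loss(x + eps * sign)
        queries += 1
        if trial < best:
            best = trial                # keep the improving flip
        else:
            sign.flat[i] *= -1.0        # otherwise revert it
    return x + eps * sign, best, queries
```

In a real attack, `loss` would wrap a forward pass of the DNN (e.g. the negative logit of the target class), so each evaluation counts as one query against the budget.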

