Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

12/12/2017
by   Wieland Brendel, et al.

Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it has been unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications, because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which is available in most real-world scenarios. In many such cases one currently needs to resort to transfer-based attacks, which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks that rely solely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks on standard computer vision tasks like ImageNet. We apply the attack to two black-box algorithms from Clarifai.com. The Boundary Attack in particular, and the class of decision-based attacks in general, open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. An implementation of the attack is available as part of Foolbox at https://github.com/bethgelab/foolbox .
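The core idea described above (start from a large adversarial perturbation, then walk along the decision boundary toward the original input while staying adversarial) can be sketched in a few lines. This is a simplified illustration, not the reference implementation from Foolbox: the step-size scheme is fixed rather than adaptive, and `is_adversarial` stands in for a single query to the black-box model's final decision.

```python
import numpy as np

def boundary_attack(is_adversarial, original, starting_point,
                    steps=500, spherical_step=0.1, source_step=0.01, seed=0):
    """Minimal sketch of a decision-based Boundary Attack.

    is_adversarial(x) -> bool is the ONLY model feedback used: it returns
    True iff the model's final decision on x is still adversarial.
    starting_point must already be adversarial (e.g. large random noise).
    """
    rng = np.random.default_rng(seed)
    x = starting_point.astype(float)
    assert is_adversarial(x), "starting point must be adversarial"
    for _ in range(steps):
        diff = original - x
        # Random direction, rescaled relative to the current distance
        # to the original input.
        perturbation = rng.normal(size=x.shape)
        perturbation *= spherical_step * np.linalg.norm(diff) / (
            np.linalg.norm(perturbation) + 1e-12)
        # Random step plus a small step toward the original input.
        candidate = x + perturbation + source_step * diff
        # Accept the move only if it stays adversarial AND reduces
        # the distance to the original input.
        if (is_adversarial(candidate)
                and np.linalg.norm(original - candidate) < np.linalg.norm(diff)):
            x = candidate
    return x
```

As a toy usage example, take an "image" of all ones, an original of all zeros, and a decision rule that counts an input as adversarial whenever its mean exceeds 0.5: the attack shrinks the perturbation toward the original while the decision stays adversarial throughout.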


Related research

11/15/2022 · Universal Distributional Decision-based Black-box Adversarial Attack with Reinforcement Learning
The vulnerability of the high-performance machine learning models implie...

07/01/2019 · Accurate, reliable and fast robustness evaluation
Throughout the past five years, the susceptibility of neural networks to...

04/21/2022 · Robustness of Machine Learning Models Beyond Adversarial Attacks
Correctly quantifying the robustness of machine learning models is a cen...

09/15/2020 · Decision-based Universal Adversarial Attack
A single perturbation can pose the most natural images to be misclassifi...

07/20/2022 · Illusionary Attacks on Sequential Decision Makers and Countermeasures
Autonomous intelligent agents deployed to the real-world need to be robu...

07/10/2018 · Fooling the classifier: Ligand antagonism and adversarial examples
Machine learning algorithms are sensitive to so-called adversarial pertu...

01/15/2021 · Black-box Adversarial Attacks in Autonomous Vehicle Technology
Despite the high quality performance of the deep neural network in real-...
