Localized Uncertainty Attacks

06/17/2021
by   Ousmane Amadou Dia, et al.

The susceptibility of deep learning models to adversarial perturbations has renewed interest in adversarial examples, resulting in a number of attacks. However, most of these attacks fail to cover a large spectrum of adversarial perturbations that are imperceptible to humans. In this paper, we present localized uncertainty attacks, a novel class of threat models against deterministic and stochastic classifiers. Under this threat model, we create adversarial examples by perturbing only the regions of an input where the classifier is uncertain. To find such regions, we use the predictive uncertainty of the classifier when it is stochastic, or we learn a surrogate model to amortize the uncertainty when it is deterministic. Unlike ℓ_p-ball or functional attacks, which perturb inputs indiscriminately, our targeted changes can be less perceptible. Evaluated under this threat model, the attacks still produce strong adversarial examples, with the examples retaining a greater degree of similarity to the inputs.
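The core idea, confining the perturbation to an uncertainty-derived mask rather than the whole input, can be sketched as follows. This is an illustrative approximation and not the paper's algorithm: it uses a PGD-style update, estimates spatial uncertainty from the input gradient of the predictive entropy under Monte Carlo sampling of a stochastic classifier (e.g. MC dropout), and the function names, thresholds, step sizes, and sample counts are all hypothetical choices.

```python
# Illustrative sketch only: a PGD-style attack restricted to input regions
# flagged as "uncertain" by a stochastic classifier. All hyperparameters are
# assumptions, not values from the paper.
import torch
import torch.nn.functional as F


def uncertainty_region_mask(model, x, n_samples=10, keep_fraction=0.25):
    """Binary mask over the pixels where the entropy gradient is largest."""
    x = x.clone().requires_grad_(True)
    model.train()  # keep dropout layers stochastic for MC sampling
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)
    # Predictive entropy, summed over the batch so autograd yields one scalar.
    entropy = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1).sum()
    grad = torch.autograd.grad(entropy, x)[0].abs().sum(dim=1, keepdim=True)
    # Keep the top fraction of pixels as the "uncertain" region.
    k = int(keep_fraction * grad[0].numel())
    thresh = grad.flatten(1).topk(k, dim=1).values[:, -1].view(-1, 1, 1, 1)
    return (grad >= thresh).float()  # [B, 1, H, W], broadcast over channels


def localized_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD confined to the uncertain region; pixels outside it are untouched."""
    mask = uncertainty_region_mask(model, x)
    model.eval()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign() * mask      # localized update
        x_adv = x + (x_adv - x).clamp(-eps, eps)        # project to eps-ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

For a deterministic classifier, the paper instead learns a surrogate model to amortize the uncertainty; in the sketch above, that would amount to swapping `uncertainty_region_mask` for a mask produced by such a surrogate.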


