SparseFool: a few pixels make a big difference

11/06/2018
by Apostolos Modas, et al.

Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data. Although most attacks typically change the values of many of an image's pixels, it has been shown that deep networks are also vulnerable to sparse alterations of the input. However, no efficient method has been proposed to compute such sparse perturbations. In this paper, we exploit the low mean curvature of the decision boundary and propose SparseFool, a geometry-inspired sparse attack that controls the sparsity of the perturbations. Extensive evaluations show that our approach outperforms related methods and scales to high-dimensional data. We further analyze the transferability and the visual effects of the perturbations, and show the existence of shared semantic information across images and networks. Finally, we show that adversarial training using ℓ_∞ perturbations can only slightly improve the robustness against sparse additive perturbations.
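
The core idea lends itself to a short sketch. Below is a minimal, illustrative PyTorch version, not the authors' reference implementation: it simplifies SparseFool by linearizing the decision boundary at the current iterate (the paper instead estimates the boundary normal via a DeepFool step), and the function names, the lam = 1.05 overshoot, and the [0, 1] pixel box are assumptions made here for illustration.

import torch

def linear_solver(x, w, margin, lam=1.05, lo=0.0, hi=1.0):
    # Greedy L1-minimal step toward the halfspace
    # {z : margin + w . (z - x) >= 0}, respecting the [lo, hi] pixel box.
    # Coordinates are perturbed one at a time, largest |w_j| first, so as
    # few pixels as possible are modified.
    z, w = x.clone().flatten(), w.flatten()
    need = lam * (-margin)                        # linearized gap to close (> 0)
    for j in w.abs().argsort(descending=True):
        if need <= 0 or w[j].item() == 0:
            break
        new = (z[j] + need / w[j]).clamp(lo, hi)  # clip to the valid pixel range
        need -= (w[j] * (new - z[j])).item()
        z[j] = new
    return z.view_as(x)

def sparsefool_sketch(model, x, label, max_iter=20, lam=1.05):
    # Repeatedly linearize the decision boundary at the current iterate
    # (the paper's low-mean-curvature observation makes this a reasonable
    # local approximation) and take a sparse step across it.
    x_adv = x.clone().detach()
    for _ in range(max_iter):
        x_in = x_adv.clone().requires_grad_(True)
        logits = model(x_in)
        if logits.argmax(dim=1).item() != label:
            break                                 # misclassified: attack done
        runner_up = logits.topk(2, dim=1).indices[0, 1]
        margin = logits[0, runner_up] - logits[0, label]  # < 0 while correct
        w = torch.autograd.grad(margin, x_in)[0]          # boundary normal estimate
        x_adv = linear_solver(x_adv, w, margin.item(), lam)
    return x_adv

Because each outer iteration typically touches only the few coordinates with the largest gradient magnitude, the resulting perturbation stays sparse; the lam overshoot pushes the iterate slightly past the linearized boundary to compensate for its residual curvature.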

