SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations

10/08/2019
by Ali Dabouei, et al.

Deep neural networks are susceptible to adversarial manipulations in the input domain. The extent of this vulnerability has been explored intensively for ℓ_p-bounded and ℓ_p-minimal adversarial perturbations. However, the vulnerability of DNNs to adversarial perturbations with specific statistical properties or frequency-domain characteristics has not been sufficiently explored. In this paper, we study the smoothness of perturbations and propose SmoothFool, a general and computationally efficient framework for computing smooth adversarial perturbations. Through extensive experiments, we validate the efficacy of the proposed method in both white-box and black-box attack scenarios. In particular, we demonstrate that: (i) extremely smooth adversarial perturbations exist for well-established and widely used network architectures; (ii) smoothness significantly enhances the robustness of perturbations against state-of-the-art defense mechanisms; (iii) smoothness improves the transferability of adversarial perturbations across both data points and network architectures; and (iv) class categories exhibit a variable range of susceptibility to smooth perturbations. Our results suggest that smooth adversarial perturbations can play a significant role in exploring the extent of DNNs' vulnerability to adversarial examples.
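To make the core idea concrete, the sketch below restricts a gradient-based adversarial search to low-frequency directions by low-pass filtering each update with a Gaussian kernel, so the accumulated perturbation stays smooth. This is only a minimal illustration of the concept, not the authors' SmoothFool algorithm: the Gaussian kernel, the gradient-ascent update, and all names and hyperparameters (`gaussian_kernel`, `smooth_attack`, `kernel_size`, `sigma`, `step_size`) are assumptions made for this example.

```python
# Hypothetical sketch: smooth adversarial perturbations via low-pass-filtered
# gradient ascent. NOT the authors' SmoothFool implementation.
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 11, sigma: float = 3.0) -> torch.Tensor:
    """Build a normalized 2D Gaussian low-pass kernel of shape (1, 1, size, size)."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
    g1d = torch.exp(-ax.pow(2) / (2.0 * sigma ** 2))
    g2d = torch.outer(g1d, g1d)
    return (g2d / g2d.sum()).view(1, 1, size, size)

def smooth_attack(model, x, label, steps=100, step_size=0.02,
                  kernel_size=11, sigma=3.0):
    """Search for a smooth perturbation r such that model(x + r) is misclassified.

    x: a single image of shape (1, C, H, W); label: the true class index.
    Returns the perturbation found after at most `steps` iterations.
    """
    c = x.size(1)
    # One Gaussian filter per channel, applied depthwise (groups=c).
    kernel = gaussian_kernel(kernel_size, sigma).to(x.device).expand(c, 1, -1, -1)
    target = torch.tensor([label], device=x.device)
    r = torch.zeros_like(x)
    for _ in range(steps):
        x_adv = (x + r).detach().requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() != label:
            break  # crossed the decision boundary with a smooth perturbation
        loss = F.cross_entropy(logits, target)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Low-pass filter the ascent direction; r remains a sum of smoothed
        # gradients and therefore stays low-frequency (smooth).
        g = F.conv2d(grad, kernel, padding=kernel_size // 2, groups=c)
        r = r + step_size * g / (g.flatten().norm() + 1e-12)
    return r.detach()
```

The model is assumed to be in eval mode and x in its expected input range; larger `sigma` values trade attack strength for smoother, lower-frequency perturbations.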



Related research

06/28/2020
Geometry-Inspired Top-k Adversarial Perturbations
State-of-the-art deep learning models are untrustworthy due to their vul...

06/15/2020
Efficient Black-Box Adversarial Attack Guided by the Distribution of Adversarial Perturbations
This work studied the score-based black-box adversarial attack problem, ...

11/08/2018
A Geometric Perspective on the Transferability of Adversarial Directions
State-of-the-art machine learning models frequently misclassify inputs t...

09/07/2020
Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks
Convolutional and recurrent neural networks have been widely employed to...

10/14/2020
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality
Representational sparsity is known to affect robustness to input perturb...

12/02/2020
From a Fourier-Domain Perspective on Adversarial Examples to a Wiener Filter Defense for Semantic Segmentation
Despite recent advancements, deep neural networks are not robust against...

03/21/2018
Adversarial Defense based on Structure-to-Signal Autoencoders
Adversarial attack methods have demonstrated the fragility of deep neura...
