On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions

09/11/2018
by Yusuke Tsuzuku, et al.

Data-agnostic, quasi-imperceptible perturbations of inputs can severely degrade the recognition accuracy of deep convolutional networks. This indicates a structural instability in their predictions and poses a potential security threat. However, it has remained unknown whether such harmful perturbations share common directions and, if so, how to characterize them, which makes it difficult to address both the security threat and the performance degradation. Our primary finding is that convolutional networks are sensitive to the directions of Fourier basis functions. We derived this property by specializing a hypothesized cause of the sensitivity, the linearity of neural networks, to convolutional networks, and validated it empirically. As a by-product of the analysis, we propose a fast algorithm that creates shift-invariant universal adversarial perturbations and is applicable in black-box settings.
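Concretely, a perturbation aligned with a single Fourier basis direction can be generated without any access to the model. The NumPy sketch below (the function name and the L-infinity budget `eps` are illustrative choices, not taken from the paper) builds the real part of a 2D Fourier basis function for a chosen spatial frequency (k1, k2) and adds it to an image:

```python
import numpy as np

def fourier_basis_perturbation(h, w, k1, k2, eps=8.0):
    # Real part of the 2D Fourier basis function with frequency (k1, k2),
    # rescaled so its L-infinity norm equals eps.
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    basis = np.cos(2.0 * np.pi * (k1 * ys / h + k2 * xs / w))
    return eps * basis / np.max(np.abs(basis))

# Adding the same perturbation to every input yields a universal attack
# direction; a circular shift of the input only changes the phase of the
# basis function, so the direction itself is shift-invariant.
delta = fourier_basis_perturbation(32, 32, k1=4, k2=7, eps=8.0)
image = np.random.RandomState(0).randint(0, 256, size=(32, 32, 3)).astype(np.float32)
perturbed = np.clip(image + delta[..., None], 0.0, 255.0)
```

Since the direction is fixed in advance, sweeping over a small set of frequencies (k1, k2) and measuring the accuracy drop is one simple way to probe a black-box model's Fourier sensitivity.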


Related research

06/08/2019 - Sensitivity of Deep Convolutional Networks to Gabor Noise
05/26/2017 - Analysis of universal adversarial perturbations
03/23/2017 - On the Robustness of Convolutional Neural Networks to Internal Architecture and Weight Perturbations
11/11/2019 - GraphDefense: Towards Robust Graph Convolutional Networks
03/03/2019 - A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
01/31/2023 - Fourier Sensitivity and Regularization of Computer Vision Models
07/11/2018 - With Friends Like These, Who Needs Adversaries?
