Sensitivity of Deep Convolutional Networks to Gabor Noise

06/08/2019
by CK, et al.

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but the phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
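The abstract does not include code, but the idea is concrete enough to sketch. Below is a minimal NumPy illustration, assuming the standard sparse-convolution formulation of Gabor noise (a sum of randomly placed, randomly signed copies of one Gabor kernel); all parameter values (sigma, freq, theta, num_kernels, eps) and function names are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    # Gaussian envelope modulated by an oriented cosine carrier.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-np.pi * sigma ** 2 * (x ** 2 + y ** 2))
    carrier = np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))
    return envelope * carrier

def gabor_noise(h, w, sigma=0.05, freq=0.1, theta=np.pi / 4,
                num_kernels=200, kernel_size=23, seed=0):
    # Sparse-convolution noise: accumulate randomly placed, randomly
    # signed copies of a single (anisotropic) Gabor kernel on a padded
    # canvas, then crop back to the image size.
    rng = np.random.default_rng(seed)
    kernel = gabor_kernel(kernel_size, sigma, freq, theta)
    half = kernel_size // 2
    canvas = np.zeros((h + 2 * half, w + 2 * half))
    for _ in range(num_kernels):
        cy = rng.integers(0, h)
        cx = rng.integers(0, w)
        weight = rng.choice([-1.0, 1.0])
        canvas[cy:cy + kernel_size, cx:cx + kernel_size] += weight * kernel
    return canvas[half:half + h, half:half + w]

def apply_perturbation(image, eps=8 / 255, **noise_kwargs):
    # Scale the noise to an L-infinity budget `eps`, add it to every
    # channel, and clip back to the valid pixel range.
    h, w = image.shape[:2]
    noise = gabor_noise(h, w, **noise_kwargs)
    noise = eps * noise / (np.abs(noise).max() + 1e-12)
    return np.clip(image + noise[..., None], 0.0, 1.0)

# Example: perturb a random "image" in [0, 1]; in practice one would feed
# the perturbed inputs to a DCN and measure how often predictions change.
image = np.random.default_rng(1).random((224, 224, 3))
adversarial = apply_perturbation(image, eps=8 / 255, theta=np.pi / 3)
```

Because the same noise pattern is added to every input, an input-agnostic attack in this setting reduces to a low-dimensional search over the Gabor parameters for the pattern that maximizes the model's error rate across a dataset.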

Related Research

09/11/2018 · On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions
Data-agnostic quasi-imperceptible perturbations on inputs can severely d...

07/11/2018 · With Friends Like These, Who Needs Adversaries?
The vulnerability of deep image classification networks to adversarial a...

11/18/2021 · Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
Universal Adversarial Perturbations are image-agnostic and model-indepen...

09/11/2017 · Art of singular vectors and universal adversarial perturbations
Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has ...

11/18/2020 · Adversarial Turing Patterns from Cellular Automata
State-of-the-art deep classifiers are intriguingly vulnerable to univers...

01/09/2018 · Adversarial Spheres
State of the art computer vision models have been shown to be vulnerable...

06/20/2021 · Attack to Fool and Explain Deep Networks
Deep visual models are susceptible to adversarial perturbations to input...
