With Friends Like These, Who Needs Adversaries?

07/11/2018
by Saumya Jetley, et al.

The vulnerability of deep image classification networks to adversarial attack is now well known, but less well understood. Via a novel experimental analysis, we illustrate some facts about deep convolutional networks (DCNs) that shed new light on their behaviour and its connection to the problem of adversaries, with two key results. The first is a straightforward explanation of the existence of universal adversarial perturbations and of their association with specific class identities, obtained by analysing the properties of the networks' logit responses as functions of 1D movements along specific image-space directions. The second is a clear demonstration of the tight coupling, within the spaces spanned by these directions, between classification performance and vulnerability to adversarial attack. Prior work has noted the importance of low-dimensional subspaces to adversarial vulnerability; we illustrate that these same subspaces likewise capture the networks' notion of saliency. In all, we provide a digestible perspective from which to understand previously reported results that have appeared disjoint or contradictory, with implications for efforts to construct neural networks that are both accurate and robust to adversarial attack.
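The 1D analysis described above can be sketched in a few lines: evaluate a classifier's logits along a line x0 + t·d in image space and inspect how each class's logit varies with t. The snippet below is a minimal illustration only, not the paper's method: it substitutes a toy linear model for a real DCN's logit function, and all names (`probe_direction`, `W`, `b`, etc.) are hypothetical.

```python
import numpy as np

# Toy stand-in for a classifier's logit function: a linear map W x + b.
# (The paper probes real DCN logits; a linear model is the simplest case
# where logits vary along any image-space direction.)
rng = np.random.default_rng(0)
num_classes, dim = 3, 16
W = rng.standard_normal((num_classes, dim))
b = rng.standard_normal(num_classes)

def logits(x):
    return W @ x + b

def probe_direction(x0, d, ts):
    """Evaluate logits along the 1D line x0 + t * d in input space."""
    d = d / np.linalg.norm(d)            # unit-norm direction
    return np.stack([logits(x0 + t * d) for t in ts])

x0 = rng.standard_normal(dim)            # base "image"
d = rng.standard_normal(dim)             # candidate perturbation direction
ts = np.linspace(-5.0, 5.0, 101)
responses = probe_direction(x0, d, ts)   # shape (101, num_classes)

# If one class's logit grows steadily along d, then pushing *any* input
# along d favours that class -- the kind of behaviour the paper links to
# universal, class-specific adversarial perturbations.
dominant = responses.argmax(axis=1)
```

For a real network one would replace `logits` with a forward pass up to the pre-softmax layer; the probing loop itself is unchanged.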

Related research:

- Sensitivity of Deep Convolutional Networks to Gabor Noise (06/08/2019)
- Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation (11/18/2021)
- Detecting Adversarial Perturbations with Saliency (03/23/2018)
- Comparative Evaluation of Recent Universal Adversarial Perturbations in Image Classification (06/20/2023)
- FDA: Feature Disruptive Attack (09/10/2019)
- On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions (09/11/2018)
- Back to the Coordinated Attack Problem (03/19/2021)
