Adversarial Example Decomposition

12/04/2018
by Horace He, et al.

Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness arises from three sources of bias: architecture, dataset, and random initialization. We show that one can decompose adversarial examples into an architecture-dependent component, a data-dependent component, and a noise-dependent component, and that these components behave intuitively. For example, noise-dependent components transfer poorly to all other models, while architecture-dependent components transfer better to retrained models with the same architecture. In addition, we demonstrate that these components can be recombined to improve transferability without sacrificing efficacy on the original model.
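To make the decomposition idea concrete, here is a minimal PyTorch sketch of one plausible way to separate a noise-dependent component from the shared (architecture- and data-dependent) part: generate single-step FGSM perturbations from several re-initializations of the same architecture, then average over the random seeds so that seed-specific noise cancels while the shared component survives. This is an illustration of the idea, not the authors' procedure; the make_model and fgsm_perturbation helpers are hypothetical, and the untrained toy models stand in for the trained networks a real experiment would use.

```python
# Illustrative sketch only: untrained toy models stand in for the
# trained networks a real experiment would use.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_model(seed, width=32):
    """Hypothetical helper: same architecture, different random init."""
    torch.manual_seed(seed)  # the seed is the only "noise" source here
    return nn.Sequential(nn.Linear(10, width), nn.ReLU(), nn.Linear(width, 2))

def fgsm_perturbation(model, x, y, eps=0.1):
    """Single-step FGSM perturbation of input x with label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return eps * x.grad.sign()

# Toy input/label; the paper's setting would use real images.
x = torch.randn(1, 10)
y = torch.tensor([0])

# Perturbations from several re-initializations of one architecture.
deltas = [fgsm_perturbation(make_model(seed), x, y) for seed in range(8)]

# Averaging over seeds suppresses the noise-dependent component;
# each residual approximates one model's noise-dependent part.
shared = torch.stack(deltas).mean(dim=0)    # architecture/data-dependent part
noise_parts = [d - shared for d in deltas]  # per-seed noise-dependent parts

print("norm of shared component:", shared.norm().item())
print("mean norm of noise parts:",
      torch.stack([n.norm() for n in noise_parts]).mean().item())
```

Recombining shared and noise_parts under different weights would then be one way to probe the abstract's claim that components can be recombined to improve transferability without sacrificing efficacy on the source model.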

Related research

02/16/2020 - Blind Adversarial Network Perturbations
Deep Neural Networks (DNNs) are commonly used for various traffic analys...

06/09/2022 - Early Transferability of Adversarial Examples in Deep Neural Networks
This paper will describe and analyze a new phenomenon that was not known...

09/21/2020 - Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations
We study the effect of adversarial perturbations of images on the estima...

02/27/2018 - Understanding and Enhancing the Transferability of Adversarial Examples
State-of-the-art deep neural networks are known to be vulnerable to adve...

06/15/2020 - Efficient Black-Box Adversarial Attack Guided by the Distribution of Adversarial Perturbations
This work studied the score-based black-box adversarial attack problem, ...

01/09/2018 - Adversarial Spheres
State of the art computer vision models have been shown to be vulnerable...

08/14/2022 - Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
A powerful category of data poisoning attacks modify a subset of trainin...
