Adversarial Attacks are Reversible with Natural Supervision

03/26/2021
by Chengzhi Mao, et al.

We find that images contain intrinsic structure that enables the reversal of many adversarial attacks. Attack vectors not only cause image classifiers to fail, but also collaterally disrupt incidental structure in the image. We demonstrate that modifying the attacked image to restore this natural structure reverses many types of attacks, providing a defense. Experiments demonstrate significantly improved robustness for several state-of-the-art models across the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. The defense remains effective even when the attacker is aware of the defense mechanism. Since it is deployed at inference rather than training time, it is compatible with pre-trained networks as well as most other defenses. Our results suggest that deep networks are vulnerable to adversarial examples partly because their representations do not enforce the natural structure of images.
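The mechanism described in the abstract lends itself to a short sketch: at test time, search for a small "reverse" perturbation that lowers a self-supervised loss measuring natural image structure, then classify the corrected image. Below is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: `ssl_loss` is a hypothetical stand-in for the paper's contrastive natural-supervision objective, and the step size, budget, and iteration count are illustrative defaults.

```python
import torch

def reverse_attack(x_adv, ssl_loss, eps=8/255, step=2/255, iters=5):
    """Search for a small 'reverse vector' r that restores natural image
    structure, as measured by a differentiable self-supervised loss.

    x_adv:    batch of (possibly attacked) images with values in [0, 1]
    ssl_loss: callable mapping an image batch to a scalar loss that is
              low for natural images (hypothetical stand-in for the
              paper's contrastive objective)
    """
    x_adv = x_adv.detach()
    r = torch.zeros_like(x_adv, requires_grad=True)  # reverse vector
    for _ in range(iters):
        loss = ssl_loss(x_adv + r)           # how 'unnatural' does x_adv + r look?
        loss.backward()
        with torch.no_grad():
            r -= step * r.grad.sign()        # signed gradient step down the loss
            r.clamp_(-eps, eps)              # keep the correction inside an L-inf ball
            r.grad.zero_()
    return (x_adv + r).clamp(0, 1).detach()  # purified image for the classifier
```

Because the correction runs purely at inference, a pre-trained classifier can simply be called on `reverse_attack(x, ssl_loss)` for every input, clean or attacked, which is what makes the defense composable with other training-time defenses.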


Related research

04/26/2020 - Harnessing adversarial examples with a surprisingly simple defense
I introduce a very simple method to defend against adversarial examples....

07/14/2020 - Multitask Learning Strengthens Adversarial Robustness
Although deep networks achieve strong accuracy on a range of computer vi...

12/21/2021 - Improving Robustness with Image Filtering
Adversarial robustness is one of the most challenging problems in Deep L...

05/28/2019 - ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
Deep neural networks are vulnerable to adversarial attacks. The literatu...

05/27/2020 - Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models
The vulnerability of deep networks to adversarial attacks is a central p...

09/06/2020 - Detection Defense Against Adversarial Attacks with Saliency Map
It is well established that neural networks are vulnerable to adversaria...

02/14/2021 - Perceptually Constrained Adversarial Attacks
Motivated by previous observations that the usually applied L_p norms (p...
