MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples

11/22/2017
by Nicholas Carlini, et al.

MagNet and "Efficient Defenses..." were recently proposed as defenses against adversarial examples. We find that we can construct adversarial examples that defeat these two defenses with only a slight increase in distortion.
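
The underlying idea of such adaptive attacks is that any defense built from differentiable (or approximable) components can be folded directly into the attacker's objective. The sketch below is a hypothetical PGD-style illustration of attacking a classifier jointly with a MagNet-style reconstruction detector; it is not the authors' exact attack, and `classifier`, `autoencoder`, and all hyperparameters here are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a PGD-style attack that jointly (1) pushes the
# classifier toward a wrong label and (2) keeps the reconstruction error of a
# MagNet-style autoencoder detector low, so the example is not flagged.
import torch
import torch.nn as nn

def joint_pgd_attack(classifier, autoencoder, x, y_true,
                     eps=0.3, alpha=0.01, steps=100, detector_weight=1.0):
    """Craft an adversarial example against classifier + reconstruction detector."""
    x_adv = x.clone().detach().requires_grad_(True)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        logits = classifier(x_adv)
        # Maximize the classification loss (untargeted attack) ...
        cls_loss = ce(logits, y_true)
        # ... while minimizing the detector's reconstruction error, so the
        # perturbed input still looks "clean" to the autoencoder detector.
        recon_err = ((autoencoder(x_adv) - x_adv) ** 2).mean()
        loss = cls_loss - detector_weight * recon_err
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # stay in eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # valid pixel range
        x_adv.requires_grad_(True)
    return x_adv.detach()
```

In a sketch like this, the "slight increase in distortion" reported in the abstract corresponds to how much larger the perturbation budget (eps, or an L2 budget) must be before the joint attack succeeds compared to attacking an undefended classifier.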

Related research

Are adversarial examples inevitable? (09/06/2018)
A wide range of defenses have been proposed to harden neural networks ag...

Harnessing adversarial examples with a surprisingly simple defense (04/26/2020)
I introduce a very simple method to defend against adversarial examples....

On Adaptive Attacks to Adversarial Example Defenses (02/19/2020)
Adaptive attacks have (rightfully) become the de facto standard for eval...

Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses (10/23/2018)
It has been shown that adversaries can craft example inputs to neural ne...

A survey on Adversarial Attacks and Defenses in Text (02/12/2019)
Deep neural networks (DNNs) have shown an inherent vulnerability to adve...

Adversarial Examples: Attacks and Defenses for Deep Learning (12/19/2017)
With rapid progress and great successes in a wide spectrum of applicatio...
