Harnessing adversarial examples with a surprisingly simple defense

04/26/2020
by Ali Borji, et al.

I introduce a very simple method for defending against adversarial examples. The basic idea is to raise the slope of the ReLU function at test time. Experiments on the MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed defense against a number of strong attacks, in both untargeted and targeted settings. While perhaps not as effective as state-of-the-art adversarial defenses, this approach can provide insights for understanding and mitigating adversarial attacks, and it can be used in conjunction with other defenses.
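To make the idea concrete, here is a minimal PyTorch sketch of the test-time slope change. The `SlopedReLU` module, the `raise_relu_slope` helper, and the slope value `alpha=2.0` are illustrative assumptions of mine, not the paper's exact implementation; the paper's chosen slope and architecture details may differ.

```python
import torch
import torch.nn as nn


class SlopedReLU(nn.Module):
    """ReLU with an adjustable positive-side slope: f(x) = max(0, alpha * x).

    alpha = 1 recovers the standard ReLU used during training;
    alpha > 1 steepens the activation at test time.
    """

    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.alpha * x)


def raise_relu_slope(module: nn.Module, alpha: float) -> nn.Module:
    """Recursively replace every nn.ReLU in a trained model with SlopedReLU(alpha)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, SlopedReLU(alpha))
        else:
            raise_relu_slope(child, alpha)
    return module


# Usage on a toy MNIST-sized classifier (a stand-in for the paper's models):
# train or load the model with ordinary ReLUs, then steepen them only for inference.
trained_model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
raise_relu_slope(trained_model, alpha=2.0)  # alpha=2.0 is an arbitrary illustrative choice
trained_model.eval()
```

Note that for alpha > 0, max(0, alpha * x) = alpha * max(0, x), so the change amounts to rescaling each ReLU's positive activations at test time: the trained weights are untouched, but the network the attacker crafted perturbations against no longer matches the one being evaluated.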
