Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning

09/30/2020
by Guneet S. Dhillon, et al.

Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense to adversarial examples that was attacked and found to be broken by the "Obfuscated Gradients" paper (Athalye et al., 2018). We discover a flaw in the re-implementation that artificially weakens SAP. When SAP is applied properly, the proposed attack is not effective. However, we show that a new use of the BPDA attack technique can still reduce the accuracy of SAP to 0.1%.
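For readers unfamiliar with the two techniques at play, the following is a minimal sketch, not the authors' code, of SAP's magnitude-proportional sampling rule (Dhillon et al., 2018) combined with a BPDA-style straight-through backward pass (Athalye et al., 2018). It assumes a PyTorch autograd interface; the class name SAPWithBPDA and the num_samples parameter are illustrative.

import torch

class SAPWithBPDA(torch.autograd.Function):
    """Sketch: Stochastic Activation Pruning in the forward pass, with a
    BPDA backward pass that approximates the pruning by the identity."""

    @staticmethod
    def forward(ctx, activations, num_samples):
        a = activations.flatten()
        # Sample indices with replacement, with p_i proportional to |a_i|.
        probs = a.abs() / a.abs().sum()
        idx = torch.multinomial(probs, num_samples, replacement=True)
        keep = torch.zeros_like(a, dtype=torch.bool)
        keep[idx] = True
        # An activation survives with probability 1 - (1 - p_i)^num_samples;
        # rescaling survivors by the inverse keeps the output unbiased.
        keep_prob = (1.0 - (1.0 - probs) ** num_samples).clamp_min(1e-12)
        out = torch.where(keep, a / keep_prob, torch.zeros_like(a))
        return out.view_as(activations)

    @staticmethod
    def backward(ctx, grad_output):
        # BPDA: let gradients flow straight through, as if no pruning occurred.
        return grad_output, None

# Illustrative use inside a network's forward pass:
# pruned = SAPWithBPDA.apply(layer_output, 256)

Attacking the defended model then amounts to running a standard gradient-based attack through SAPWithBPDA.apply, since the backward pass treats the stochastic pruning as the identity.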


