
Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning

09/30/2020
by   Guneet S. Dhillon, et al.

Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense against adversarial examples that was attacked and reported broken by the "Obfuscated Gradients" paper (Athalye et al., 2018). We discover a flaw in the re-implementation that artificially weakens SAP. When SAP is applied properly, the proposed attack is not effective. However, we show that a new use of the BPDA attack technique can still reduce the accuracy of SAP to 0.1%.
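For context, SAP (Dhillon et al., 2018) randomizes a network's forward pass: at each layer it samples activations with probability proportional to their magnitude, zeroes the rest, and rescales the survivors so the layer output is unbiased in expectation. The following is a minimal NumPy sketch of that sampling step; the function name `sap` and the interface are illustrative, not taken from either paper's released code.

```python
import numpy as np

def sap(h, k, rng=None):
    """Sketch of Stochastic Activation Pruning on one activation vector.

    Draws k samples (with replacement) from the activations, with
    probability proportional to |h_i|; activations never sampled are
    zeroed, and survivors are rescaled by the inverse of their keep
    probability so the output equals h in expectation.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = np.asarray(h, dtype=float)
    mag = np.abs(h)
    p = mag / mag.sum()                      # magnitude-proportional sampling distribution
    sampled = rng.choice(h.size, size=k, replace=True, p=p)
    keep = np.zeros(h.size, dtype=bool)
    keep[sampled] = True
    scale = np.zeros_like(h)
    # P(index i kept at least once in k draws) = 1 - (1 - p_i)^k
    scale[keep] = 1.0 / (1.0 - (1.0 - p[keep]) ** k)
    return h * keep * scale
```

Because this forward pass is stochastic and non-differentiable at the pruning mask, the BPDA attack discussed above approximates it on the backward pass (e.g., treating the pruning step as the identity when computing gradients) rather than differentiating through it directly.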


11/18/2021

A Review of Adversarial Attack and Defense for Classification Methods

Despite the efficiency and scalability of machine learning systems, rece...
03/05/2018

Stochastic Activation Pruning for Robust Adversarial Defense

Neural networks are known to be vulnerable to adversarial examples. Care...
03/16/2018

Adversarial Logit Pairing

In this paper, we develop improved techniques for defending against adve...
09/10/2019

Toward Finding The Global Optimal of Adversarial Examples

Current machine learning models are vulnerable to adversarial examples (...
12/21/2021

A Theoretical View of Linear Backpropagation and Its Convergence

Backpropagation is widely used for calculating gradients in deep neural ...
07/01/2022

Studying the impact of magnitude pruning on contrastive learning methods

We study the impact of different pruning techniques on the representatio...
08/02/2019

On the modes of convergence of Stochastic Optimistic Mirror Descent (OMD) for saddle point problems

In this article, we study the convergence of Mirror Descent (MD) and Opt...