Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning

by Guneet S. Dhillon, et al.

Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense against adversarial examples that was attacked and reported broken by the "Obfuscated Gradients" paper (Athalye et al., 2018). We identify a flaw in that paper's re-implementation of SAP that artificially weakens the defense. When SAP is applied properly, the proposed attack is not effective. However, we show that a new use of the BPDA attack technique can still reduce the accuracy of SAP to 0.1%.
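For context, SAP prunes a layer's activations stochastically at inference time: indices are sampled with replacement with probability proportional to activation magnitude, unsampled activations are zeroed, and the survivors are rescaled so the layer output is unbiased in expectation. The following is a minimal NumPy sketch of that mechanism as described in the SAP paper; the function name and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def sap_prune(a, r, rng):
    """Illustrative Stochastic Activation Pruning step for one layer.

    a   : 1-D array of activations
    r   : number of samples to draw (with replacement)
    rng : numpy Generator
    """
    # Sampling probability proportional to activation magnitude.
    p = np.abs(a) / np.sum(np.abs(a))
    # Draw r indices with replacement; only sampled units survive.
    idx = rng.choice(a.size, size=r, replace=True, p=p)
    keep = np.zeros(a.size, dtype=bool)
    keep[idx] = True
    # Probability that unit i is sampled at least once in r draws;
    # dividing by it makes the pruned layer unbiased in expectation.
    q = 1.0 - (1.0 - p) ** r
    return np.where(keep, a / q, 0.0)
```

Averaging many pruned samples recovers the original activations, which is what makes the defense a stochastic but unbiased transformation of the network rather than a change to its decision function.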

