Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. The existence of adversarial examples in trained neural networks reflects the fact that expected risk alone does not capture the model's performance against worst-case inputs. We motivate the use of adversarial risk as an objective, although it cannot easily be computed exactly. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective for the true adversarial risk. This suggests that a model may appear robust merely because it has optimized this surrogate rather than the true adversarial risk. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to reduce the accuracy of several recently proposed defenses to near zero. We hope that our formulations and results will help researchers to develop more powerful defenses.
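The gradient-free attack idea can be illustrated with a minimal sketch based on SPSA (simultaneous perturbation stochastic approximation), one family of techniques the abstract alludes to: estimate the loss gradient from black-box loss queries alone, then take signed ascent steps projected onto an L-infinity ball. The function names, hyperparameters, and toy linear loss below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spsa_gradient(loss_fn, x, rng, delta=0.01, n_samples=32):
    """Gradient-free estimate of d loss / d x using only loss queries."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        # Random Rademacher (+/-1) perturbation direction.
        v = rng.choice([-1.0, 1.0], size=x.shape)
        # Two-sided finite difference along v.
        grad += (loss_fn(x + delta * v) - loss_fn(x - delta * v)) / (2 * delta) * v
    return grad / n_samples

def spsa_attack(loss_fn, x, eps=0.3, step=0.05, iters=50, seed=0):
    """Maximize loss_fn within an L-infinity ball of radius eps around x."""
    rng = np.random.default_rng(seed)
    x0, x_adv = x.copy(), x.copy()
    for _ in range(iters):
        g = spsa_gradient(loss_fn, x_adv, rng)
        # Signed-gradient ascent step, clipped back into the eps-ball.
        x_adv = np.clip(x_adv + step * np.sign(g), x0 - eps, x0 + eps)
    return x_adv

# Toy example: a linear "loss" whose gradient the attack never sees directly.
w = np.array([1.0, -2.0, 0.5])
loss = lambda x: float(w @ x)
x_adv = spsa_attack(loss, np.zeros(3), eps=0.3)
```

Because the estimator needs only loss values, it sidesteps masked or obfuscated gradients, which is why such attacks can expose defenses that only optimized the surrogate objective.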