
Adversarial Robustness Curves

by Christina Göpfert et al.

The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems. This uncertainty has, in turn, led to considerable research effort in understanding adversarial robustness. In this work, we take first steps towards separating robustness analysis from the choice of robustness threshold and norm. We propose robustness curves as a more general view of the robustness behavior of a model and investigate under which circumstances they can qualitatively depend on the chosen norm.
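The idea of a robustness curve can be illustrated empirically: for each perturbation budget, record the fraction of test points whose prediction an adversary with that budget could flip, rather than reporting robustness at a single fixed threshold. The sketch below is an illustrative assumption, not the paper's implementation; the function names, the toy linear classifier, and the closed-form L2 margin formula |w·x + b| / ||w||₂ (which holds only for linear models) are all hypothetical choices made for the example.

```python
import numpy as np

def robustness_curve(margins, epsilons):
    """Empirical robustness curve: for each threshold eps, the fraction
    of points whose distance to the nearest adversarial example
    (under the chosen norm) is at most eps."""
    margins = np.asarray(margins)
    return np.array([(margins <= eps).mean() for eps in epsilons])

# Toy example: a linear classifier f(x) = sign(w.x + b), for which the
# L2 distance from x to the decision boundary is |w.x + b| / ||w||_2.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
w, b = np.array([1.0, -1.0]), 0.1
margins = np.abs(X @ w + b) / np.linalg.norm(w)

eps_grid = np.linspace(0.0, 2.0, 50)
curve = robustness_curve(margins, eps_grid)
# curve[i] is the fraction of points an adversary with L2 budget
# eps_grid[i] can flip; the curve is nondecreasing in eps.
```

Plotting `curve` against `eps_grid` gives the robustness curve for the L2 norm; repeating the computation with margins measured in a different norm shows how the curve's shape can depend on that choice.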



