Adversarial Robustness Curves

07/31/2019
by Christina Göpfert, et al.

The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems. This uncertainty has, in turn, led to considerable research effort in understanding adversarial robustness. In this work, we take first steps towards separating robustness analysis from the choice of robustness threshold and norm. We propose robustness curves as a more general view of the robustness behavior of a model and investigate under which circumstances they can qualitatively depend on the chosen norm.
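To make the idea concrete, a robustness curve can be read as: for each perturbation budget eps, the fraction of inputs whose predicted label can be flipped by some perturbation of norm at most eps. The sketch below is not the paper's code; it is a minimal illustration for the special case of a linear classifier, where the distance from a point to the decision boundary under an l_p attack has the closed form |w·x + b| / ||w||_q, with q the dual exponent of p. The function names (`margin_distances`, `robustness_curve`) are illustrative.

```python
import numpy as np

def margin_distances(X, w, b, p=2.0):
    """l_p distance of each row of X to the hyperplane w.x + b = 0.

    For a linear decision boundary this is |w.x + b| / ||w||_q,
    where q is the dual exponent of p (1/p + 1/q = 1).
    """
    if np.isinf(p):
        q = 1.0
    elif p == 1.0:
        q = np.inf
    else:
        q = p / (p - 1.0)

    if q == 1.0:
        dual = np.sum(np.abs(w))
    elif np.isinf(q):
        dual = np.max(np.abs(w))
    else:
        dual = np.sum(np.abs(w) ** q) ** (1.0 / q)

    return np.abs(X @ w + b) / dual

def robustness_curve(X, y, w, b, p=2.0, eps_grid=None):
    """Fraction of points flippable by an l_p perturbation of norm <= eps.

    Points the classifier already misclassifies count as flipped at eps = 0.
    Returns (eps_grid, fraction_flipped).
    """
    d = margin_distances(X, w, b, p)
    correct = np.sign(X @ w + b) == y
    d = np.where(correct, d, 0.0)  # misclassified points: distance 0
    if eps_grid is None:
        eps_grid = np.linspace(0.0, d.max(), 50)
    return eps_grid, np.array([(d <= e).mean() for e in eps_grid])
```

Evaluating the curve at a few budgets shows the intended behavior: the curve starts at the test error (eps = 0) and increases monotonically to 1 as the budget exceeds every point's margin. Comparing curves computed with different `p` on the same model is exactly the kind of norm-dependence question the abstract raises.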


Related research

- "Adversarial examples and where to find them" (04/22/2020): Adversarial robustness of trained models has attracted considerable atte...
- "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" (03/25/2019): Adversarial examples are malicious inputs crafted to cause a model to mi...
- "Generalised Lipschitz Regularisation Equals Distributional Robustness" (02/11/2020): The problem of adversarial examples has highlighted the need for a theor...
- "Achieving Adversarial Robustness Requires An Active Teacher" (12/14/2020): A new understanding of adversarial examples and adversarial robustness i...
- "Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations" (11/15/2020): Top-k predictions are used in many real-world applications such as machi...
- "Interpreting and Evaluating Neural Network Robustness" (05/10/2019): Recently, adversarial deception becomes one of the most considerable thr...
- "Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness" (05/15/2020): As a certified defensive technique, randomized smoothing has received co...
