Adversarial Robustness Curves

07/31/2019
by   Christina Göpfert, et al.

The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems. This uncertainty has, in turn, led to considerable research effort in understanding adversarial robustness. In this work, we take first steps towards separating robustness analysis from the choice of robustness threshold and norm. We propose robustness curves as a more general view of the robustness behavior of a model and investigate under which circumstances they can qualitatively depend on the chosen norm.


1 Introduction

Robustness of machine learning models has recently attracted massive research interest. This interest is particularly pronounced in the context of deep learning. On the one hand, this is due to the massive success and widespread deployment of deep learning. On the other hand, it is due to an intriguing property that can be demonstrated for deep learning (although it is not unique to this setting): deep learning can produce models that achieve or surpass human-level performance in a wide variety of tasks, yet completely disagree with human judgment after the application of imperceptible perturbations [Szegedy2014Intriguing]. The ability of a classifier to maintain its performance under such changes to the input data is commonly referred to as robustness to adversarial perturbations.

In order to better understand adversarial robustness, recent years have seen the development of a host of methods that produce adversarial examples, in the white-box and black-box settings, with specific or arbitrary target labels, and under varying additional constraints [Goodfellow2014Explaining, Kurakin2016Adversarial, Kurakin2016Adversariala, Papernot2016Practical, Su2017One]. There has also been a push towards training regimes that produce adversarially robust networks, such as data augmentation with adversarial examples or distillation [Papernot2016Distillation, Gu2014Towards, Huang2015Learning, Bastani2016Measuring]. The challenge faced by such approaches is that robustness is difficult to measure and quantify: even if a model is shown to be robust against current state-of-the-art attacks, this does not exclude the possibility that newly devised attacks may be successful [Carlini2017Towards]. The complexity of deep learning models and the counter-intuitive nature of some phenomena surrounding adversarial examples further make it challenging to understand the impact of robust training or the properties that determine whether a model is robust or non-robust. Recent work has highlighted settings where no model can be simultaneously accurate and robust [Tsipras2019Robustness], or where finding a model that is simultaneously robust and accurate requires optimizing over a different hypothesis class than finding one that is merely accurate [Nakkiran2019Adversarial]. These examples rely on linear models, as they are easy for humans to understand. They analyze robustness properties for a fixed choice of norm and, typically, a fixed, disadvantageously chosen perturbation size (dependent on the model). This raises the question: how do the presented results depend on the choice of norm, the choice of perturbation size, and the choice of linear classifiers as a hypothesis class?

In this contribution, we:

  • propose robustness curves as a way of better representing adversarial robustness in place of “point-wise” measures,

  • show that linear classifiers are not sufficient to illustrate all interesting robustness phenomena, and

  • investigate how robustness curves may depend on the choice of norm.

2 Definitions

In the following, we assume data $(x, y) \in \mathbb{R}^d \times \{-1, 1\}$ are generated i.i.d. according to a distribution $P$ with marginal $P_x$. Let $f : \mathbb{R}^d \to \{-1, 1\}$ denote some classifier and let $x \in \mathbb{R}^d$. The standard loss of $f$ on $P$ is

$$L(f, P) = P(\{(x, y) : f(x) \neq y\}). \tag{1}$$

Let $\|\cdot\|$ be some norm on $\mathbb{R}^d$, let $\varepsilon > 0$, and let

$$B_\varepsilon^{\|\cdot\|}(x) = \{x' \in \mathbb{R}^d : \|x - x'\| \leq \varepsilon\}. \tag{2}$$

Following [Tsipras2019Robustness], we define the $\varepsilon$-adversarial loss of $f$ regarding $P$ and $\|\cdot\|$ as

$$L_\varepsilon^{\|\cdot\|}(f, P) = P(\{(x, y) : \exists\, x' \in B_\varepsilon^{\|\cdot\|}(x) \text{ with } f(x') \neq y\}). \tag{3}$$

We have $L_\varepsilon^{\|\cdot\|}(f, P) \geq L(f, P)$. Alternatively, we can exclude from this definition any points that are initially misclassified by the model, and instead consider as adversarial examples all points where the model changes its behavior under small perturbations. Then the $\varepsilon$-margin loss is defined as

$$M_\varepsilon^{\|\cdot\|}(f, P) = P(\{(x, y) : \exists\, x' \in B_\varepsilon^{\|\cdot\|}(x) \text{ with } f(x') \neq f(x)\}), \tag{4}$$

i.e. $M_\varepsilon^{\|\cdot\|}(f, P)$ is the weight of all points within an $\varepsilon$-margin of a decision boundary. We have $L_\varepsilon^{\|\cdot\|}(f, P) \leq L(f, P) + M_\varepsilon^{\|\cdot\|}(f, P)$.

There are two somewhat arbitrary choices in the definitions in Equations 3 and 4: the choice of $\varepsilon$ and the choice of the norm $\|\cdot\|$. The aim of this contribution is to investigate how $\varepsilon$ and $\|\cdot\|$ impact the adversarial robustness.
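In practice, these quantities are estimated on a finite sample. The following is a minimal sketch of such estimators, assuming a hypothetical helper dist_to_boundary that returns, for each point, the smallest perturbation size (in the chosen norm) that changes the model's prediction; this is available in closed form for linear models (see Section 4) and otherwise typically only approximated, e.g. by attacks.

```python
import numpy as np

def empirical_losses(f, dist_to_boundary, X, y, eps):
    """Empirical estimates of the eps-adversarial loss (Eq. 3) and the
    eps-margin loss (Eq. 4) on a finite sample (X, y) with labels in {-1, +1}."""
    preds = f(X)                      # model predictions
    d = dist_to_boundary(X)           # per-point distance to the decision boundary
    flippable = d <= eps              # prediction can be changed within the eps-ball
    adv_loss = np.mean((preds != y) | flippable)   # Eq. 3: already wrong, or flippable
    margin_loss = np.mean(flippable)               # Eq. 4: within eps of the boundary
    return adv_loss, margin_loss
```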

3 Robustness Curves

As a first step towards understanding robustness globally, instead of for an isolated perturbation size $\varepsilon$, we propose to view robustness as a function of $\varepsilon$. This yields an easy-to-understand visual representation of adversarial robustness in the form of a robustness curve.

Definition 1

The robustness curve of a classifier $f$, given a norm $\|\cdot\|$ and underlying distribution $P$, is the curve defined by

$$r_{f,P}^{\|\cdot\|} : [0, \infty) \to [0, 1], \tag{5}$$
$$\varepsilon \mapsto L_\varepsilon^{\|\cdot\|}(f, P). \tag{6}$$

The margin curve of $f$ given $\|\cdot\|$ and $P$ is the curve defined by

$$m_{f,P}^{\|\cdot\|} : [0, \infty) \to [0, 1], \tag{7}$$
$$\varepsilon \mapsto M_\varepsilon^{\|\cdot\|}(f, P). \tag{8}$$

Commonly chosen norms for the investigation of adversarial robustness are the $\ell_1$ norm (denoted by $\|\cdot\|_1$), the $\ell_2$ norm (denoted by $\|\cdot\|_2$), and the $\ell_\infty$ norm (denoted by $\|\cdot\|_\infty$). In the following, we will investigate robustness curves for these three choices of $\|\cdot\|$.
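On a finite sample, the empirical robustness curve is a step function and can be traced directly by sorting per-point distances to the decision boundary. A minimal sketch, again assuming a hypothetical distance helper as above:

```python
import numpy as np

def empirical_robustness_curve(preds, dists, y):
    """Trace the empirical robustness curve as a step function.

    preds : predictions of the classifier on the sample
    dists : per-point distance to the decision boundary in the chosen norm
    y     : true labels in {-1, +1}
    Returns eps values and the corresponding empirical eps-adversarial loss;
    plot e.g. with matplotlib's plt.step(eps, loss, where='post').
    """
    n = len(y)
    misclassified = preds != y
    base = np.mean(misclassified)              # these points count for every eps >= 0
    d_correct = np.sort(dists[~misclassified]) # correctly classified points join once
    eps = np.concatenate(([0.0], d_correct))   # eps reaches their boundary distance
    loss = base + np.arange(len(d_correct) + 1) / n
    return eps, loss
```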

Tsipras2019Robustness propose a distribution $P_1$ over $(x, y) \in \mathbb{R}^{d+1} \times \{-1, +1\}$ where $y \sim \mathrm{unif}\{-1, +1\}$ and

$$x_1 = \begin{cases} +y & \text{with probability } p, \\ -y & \text{with probability } 1 - p, \end{cases} \qquad x_2, \ldots, x_{d+1} \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\eta y, 1). \tag{9}$$

For this distribution, they show that the linear classifier $f_1(x) = \operatorname{sign}(w_1^\top x)$ with $w_1 = (0, 1/d, \ldots, 1/d)$, which averages the weakly correlated features, has high accuracy, but low $\varepsilon$-robustness in the $\ell_\infty$ norm for $\varepsilon \geq 2\eta$, while the classifier $f_2(x) = \operatorname{sign}(w_2^\top x)$ with $w_2 = (1, 0, \ldots, 0)$, which relies only on the first feature, has high $\varepsilon$-robustness for $\varepsilon < 1$, but low accuracy. Nakkiran2019Adversarial proposes a distribution $P_2$ where $y \sim \mathrm{unif}\{-1, +1\}$ and

(10)

where the linear classifier $f_3$ has high accuracy, but low $\varepsilon$-robustness in the $\ell_\infty$ norm. Figure 1 shows margin curves and robustness curves for $f_1$ and $P_1$, $f_2$ and $P_1$, and $f_3$ and $P_2$.

Figure 1: Margin curves and robustness curves for several examples of distributions and linear models from the literature. Row a) shows curves for classifier $f_1$ and distribution $P_1$. Row b) shows curves for classifier $f_2$ and distribution $P_1$. In this case, all three curves are identical and thus appear as one. Row c) shows curves for classifier $f_3$ and distribution $P_2$.
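For illustration, the distribution in Equation 9 is straightforward to simulate. The sketch below follows the construction as reconstructed above; the parameter values are illustrative choices, not necessarily those used to produce Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_P1(n, d=100, p=0.7, eta=0.4):
    """Sample n points from a Tsipras-et-al.-style distribution (cf. Eq. 9):
    y uniform on {-1, +1}; x_1 = y with prob. p, else -y; the remaining d
    coordinates are N(eta * y, 1). Parameter values here are illustrative."""
    y = rng.choice([-1, 1], size=n)
    x1 = np.where(rng.random(n) < p, y, -y).astype(float)
    rest = rng.normal(loc=eta * y[:, None], scale=1.0, size=(n, d))
    return np.column_stack([x1, rest]), y

# The two linear classifiers discussed above: f1 averages the weakly
# correlated features, f2 relies only on the first feature.
d = 100
w1 = np.concatenate(([0.0], np.full(d, 1.0 / d)))
w2 = np.concatenate(([1.0], np.zeros(d)))
f1 = lambda X: np.sign(X @ w1)
f2 = lambda X: np.sign(X @ w2)

X, y = sample_P1(10000, d=d)
print("accuracy f1:", np.mean(f1(X) == y), "accuracy f2:", np.mean(f2(X) == y))
```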

4 The impact of $\|\cdot\|$

The curves shown in Figure 1 seem to behave similarly for each norm. Is this always the case? Indeed, if $f(x) = \operatorname{sign}(\langle w, x \rangle + b)$ is a linear classifier parameterized by normal vector $w$ and offset $b$, denote by

$$d_{\|\cdot\|}(x, H) = \min_{x' \in H} \|x - x'\| \tag{11}$$

the shortest distance between $x$ and the separating hyperplane $H = \{x' : \langle w, x' \rangle + b = 0\}$ in norm $\|\cdot\|$. Then a series of algebraic manipulations yield

$$d_{\|\cdot\|_1}(x, H) = \frac{|\langle w, x \rangle + b|}{\|w\|_\infty}, \tag{12}$$
$$d_{\|\cdot\|_2}(x, H) = \frac{|\langle w, x \rangle + b|}{\|w\|_2}, \tag{13}$$
$$d_{\|\cdot\|_\infty}(x, H) = \frac{|\langle w, x \rangle + b|}{\|w\|_1}. \tag{14}$$

In particular, there exist constants $c_1$ and $c_\infty$ depending on $w$ such that for all $x$,

$$d_{\|\cdot\|_1}(x, H) = c_1 \, d_{\|\cdot\|_2}(x, H) \quad \text{and} \quad d_{\|\cdot\|_\infty}(x, H) = c_\infty \, d_{\|\cdot\|_2}(x, H). \tag{15}$$
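The closed-form distances in Equations 12-14 are instances of the dual-norm formula for the distance from a point to a hyperplane; a minimal sketch:

```python
import numpy as np

def linear_boundary_distances(w, b, X):
    """Distances from each row of X to the hyperplane {x : <w, x> + b = 0}
    in the l1, l2 and l_inf norms: |<w, x> + b| divided by the dual norm of w
    (Eqs. 12-14)."""
    margin = np.abs(X @ w + b)
    return {
        "l1":   margin / np.max(np.abs(w)),   # dual of l1 is l_inf
        "l2":   margin / np.linalg.norm(w),   # l2 is self-dual
        "linf": margin / np.sum(np.abs(w)),   # dual of l_inf is l1
    }
```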

This implies the following Theorem:

Theorem 1

For any linear classifier $f$ with separating hyperplane $H$, there exist constants $c_1, c_\infty > 0$ such that for any $\varepsilon \geq 0$,

$$r_{f,P}^{\|\cdot\|_1}(\varepsilon) = r_{f,P}^{\|\cdot\|_2}(\varepsilon / c_1) \quad \text{and} \quad r_{f,P}^{\|\cdot\|_\infty}(\varepsilon) = r_{f,P}^{\|\cdot\|_2}(\varepsilon / c_\infty), \tag{16}$$

and analogously for the margin curves. This follows from Equation 15, since $d_{\|\cdot\|_1}(x, H) \leq \varepsilon$ if and only if $d_{\|\cdot\|_2}(x, H) \leq \varepsilon / c_1$, and likewise for $\|\cdot\|_\infty$.

As a consequence, for linear classifiers, dependence of robustness curves on the choice of norm is purely a matter of compression and elongation.
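A quick numerical sanity check of this rescaling property for a random linear classifier, reusing the distance formulas sketched above (an illustrative sketch, not part of the original experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
w, b = rng.normal(size=5), 0.3
X = rng.normal(size=(10000, 5))
margin = np.abs(X @ w + b)

d1 = margin / np.max(np.abs(w))      # l1 distance to the boundary
d2 = margin / np.linalg.norm(w)      # l2 distance
dinf = margin / np.sum(np.abs(w))    # l_inf distance

c1 = np.linalg.norm(w) / np.max(np.abs(w))    # d1 = c1 * d2   (Eq. 15)
cinf = np.linalg.norm(w) / np.sum(np.abs(w))  # dinf = cinf * d2

eps = 0.5
# The fraction of points within eps of the boundary agrees after rescaling eps:
assert np.isclose(np.mean(d1 <= eps), np.mean(d2 <= eps / c1))
assert np.isclose(np.mean(dinf <= eps), np.mean(d2 <= eps / cinf))
```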

What can we say about classifiers with more complex decision boundaries? For all $x \in \mathbb{R}^d$, we have

$$\|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq \sqrt{d}\, \|x\|_2 \leq d\, \|x\|_\infty. \tag{17}$$

These inequalities are tight, i.e. for each inequality there exists some $x \neq 0$ such that equality holds. It follows that, for any $x$ and $\varepsilon > 0$,

$$B_\varepsilon^{\|\cdot\|_1}(x) \subseteq B_\varepsilon^{\|\cdot\|_2}(x) \subseteq B_\varepsilon^{\|\cdot\|_\infty}(x), \tag{18}$$

and so

$$L_\varepsilon^{\|\cdot\|_1}(f, P) \leq L_\varepsilon^{\|\cdot\|_2}(f, P) \leq L_\varepsilon^{\|\cdot\|_\infty}(f, P), \tag{19}$$
$$M_\varepsilon^{\|\cdot\|_1}(f, P) \leq M_\varepsilon^{\|\cdot\|_2}(f, P) \leq M_\varepsilon^{\|\cdot\|_\infty}(f, P). \tag{20}$$

In particular, the robustness curve for the $\ell_\infty$-norm is always an upper bound for the robustness curve for any other $\ell_p$-norm (since $\|x\|_\infty \leq \|x\|_p$ for all $x$ and $p \geq 1$). Thus, for linear classifiers as well as classifiers with more complicated decision boundaries, in order to show that a model is adversarially robust for any fixed $\ell_p$-norm, it is sufficient to show that it exhibits the desired robustness behavior for the $\ell_\infty$-norm. On the other hand, in order to show that a model is not adversarially robust, showing this for the $\ell_\infty$ norm does not necessarily imply the same qualities in another norm, as the robustness curves may be strongly separated in high-dimensional spaces, both for linear and non-linear models.

Contrary to linear models, for more complicated decision boundaries, robustness curves may also exhibit qualitatively different behavior. This is illustrated in Figure 2. The decision boundary in each case is given by a quadratic model in 2-dimensional space. In the first example, we construct a finite set of points, all at the same distance from the decision boundary in one of the three norms, but at varying distances in the other two. For any distribution concentrated on a set of such points, the robustness curve for the first norm jumps from zero to one at a single threshold value, while the robustness curves for the other two norms are step functions, with the height of the steps determined by the distribution across the points and the width determined by the variation in the respective distances from the decision boundary. The robustness curves in this example also exhibit, at some points, the maximal possible separation permitted by the inequalities in Equation 17 (note that $d = 2$ here), while touching at other points. In the second example, we show a continuous version of the same phenomenon, with points inside and outside the parabola distributed at constant distance from the decision boundary in one norm, but with varying distances in the other two. As a result, the robustness curves for different norms are qualitatively different. The third example, on the other hand, shows a setting where the robustness curves for the three norms are both quantitatively and qualitatively similar.

Figure 2: Margin curves and robustness curves for a fixed quadratic classifier and three different underlying distributions, illustrating varying behavior of the robustness curves for different norms. In rows a) and b), the robustness curves are qualitatively different, while they are almost identical in row c). Note that in these examples, robustness curves and margin curves are nearly identical, as the standard loss of the classifier is zero or close to zero in all cases.
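The qualitative gap between norms in these examples can be reproduced with a simple numerical experiment. The sketch below uses the parabola $x_2 = x_1^2$ as a stand-in for the quadratic boundary (the exact boundary and distributions used in Figure 2 are assumptions here) and estimates a point's distance to it in the three norms by minimizing over a dense discretization of the boundary.

```python
import numpy as np

# Stand-in quadratic decision boundary: the parabola x2 = x1**2.
t = np.linspace(-5.0, 5.0, 20001)
boundary = np.column_stack([t, t ** 2])

def dists_to_boundary(x):
    """Approximate l1/l2/l_inf distances from a 2-d point x to the parabola,
    by minimizing over the discretized boundary points."""
    diff = boundary - x
    return {
        "l1":   np.abs(diff).sum(axis=1).min(),
        "l2":   np.linalg.norm(diff, axis=1).min(),
        "linf": np.abs(diff).max(axis=1).min(),
    }

# The same point can lie at noticeably different distances depending on the norm,
# which is what drives the qualitatively different robustness curves.
print(dists_to_boundary(np.array([2.0, 0.0])))
print(dists_to_boundary(np.array([0.0, 3.0])))
```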

These examples drive home two points:

  • The robustness properties of a classifier may depend both quantitatively and qualitatively on the norm chosen to measure said robustness. When investigating robustness, it is therefore imperative to consider which norm, or, more broadly, which concept of closeness best represents the type of perturbation to guard against.

  • Linear classifiers are not a sufficient tool for understanding adversarial robustness in general, as they in effect neutralize a degree of freedom given by the choice of norm.

5 Discussion

We have proposed robustness curves as a more general perspective on the robustness properties of a classifier and have discussed how these curves can or cannot be affected by the choice of norm. Robustness curves are a tool for a more principled investigation of adversarial robustness, while their dependence on a chosen norm underscores the necessity of basing robustness analyses on a clear problem definition that specifies what kind of perturbations a model should be robust to. We note that the use of norms in current research is frequently meant only as an approximation of a "human perception distance" [HiddenInPlainSight]. A human's ability to detect a perturbation depends on the point the perturbation is applied to, meaning that human perception distance is not a homogeneous metric, and thus not induced by a norm. In this sense, where adversarial robustness is meant to describe how faithfully the behavior of a model matches that of a human, the adversarial loss in Equation 3 can only be seen as a starting point of analysis. Nonetheless, since perturbations with small $\ell_p$-norm are frequently imperceptible to humans, adversarial robustness regarding some $\ell_p$-norm is a reasonable lower bound for adversarial robustness in human perception distance. In future work, we would like to investigate how robustness curves can be estimated for deep networks and extend the definition to robustness against targeted attacks.